├── README.md
├── config.yml
├── data
│   ├── 2D_data
│   │   ├── README.md
│   │   └── read_db.py
│   ├── MOF_data
│   │   ├── README.md
│   │   └── process.py
│   ├── README.md
│   ├── bulk_data
│   │   ├── README.md
│   │   ├── get_MP.py
│   │   └── mp-ids-46744.csv
│   ├── pt_data
│   │   ├── README.md
│   │   └── pt_data.tar.gz
│   ├── surface_data
│   │   └── ase_cathub.py
│   └── test_data
│       ├── README.md
│       └── test_data.tar.gz
├── main.py
├── matdeeplearn
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-37.pyc
│   │   ├── config.cpython-37.pyc
│   │   ├── models.cpython-37.pyc
│   │   ├── process.cpython-37.pyc
│   │   ├── process_HEA.cpython-37.pyc
│   │   └── training.cpython-37.pyc
│   ├── models
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   │   ├── MLP.cpython-37.pyc
│   │   │   ├── __init__.cpython-37.pyc
│   │   │   ├── cgcnn.cpython-37.pyc
│   │   │   ├── cgcnn_nmr.cpython-37.pyc
│   │   │   ├── cgcnn_test.cpython-37.pyc
│   │   │   ├── cnnet.cpython-37.pyc
│   │   │   ├── descriptor_nn.cpython-37.pyc
│   │   │   ├── gcn.cpython-37.pyc
│   │   │   ├── megnet.cpython-37.pyc
│   │   │   ├── mpnn.cpython-37.pyc
│   │   │   ├── schnet.cpython-37.pyc
│   │   │   ├── test_cgcnn2.cpython-37.pyc
│   │   │   ├── test_dosgnn.cpython-37.pyc
│   │   │   ├── test_forces.cpython-37.pyc
│   │   │   ├── test_matgnn.cpython-37.pyc
│   │   │   ├── test_misc.cpython-37.pyc
│   │   │   ├── testing.cpython-37.pyc
│   │   │   └── utils.cpython-37.pyc
│   │   ├── cgcnn.py
│   │   ├── descriptor_nn.py
│   │   ├── gcn.py
│   │   ├── megnet.py
│   │   ├── mpnn.py
│   │   ├── schnet.py
│   │   └── utils.py
│   ├── process
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   │   ├── __init__.cpython-37.pyc
│   │   │   └── process.cpython-37.pyc
│   │   ├── dictionary_blank.json
│   │   ├── dictionary_default.json
│   │   └── process.py
│   └── training
│       ├── __init__.py
│       ├── __pycache__
│       │   ├── __init__.cpython-37.pyc
│       │   └── training.cpython-37.pyc
│       └── training.py
├── old
│   ├── MatDeepLearn_v0.1.tar
│   └── README.md
└── requirements.txt
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # MatDeepLearn
4 |
5 | MatDeepLearn is a platform for testing and using graph neural networks (GNNs) and other machine learning (ML) models for materials chemistry applications. MatDeepLearn takes in data in the form of atomic structures and their target properties, processes the data into graphs, trains the ML model of choice (optionally with hyperparameter optimization), and provides predictions on unseen data. It allows different GNNs to be benchmarked on diverse datasets drawn from materials repositories, in addition to supporting conventional training and prediction tasks. This package makes use of the [Pytorch-Geometric](https://github.com/rusty1s/pytorch_geometric) library, which provides powerful tools for GNN development and many prebuilt models readily available for use.
6 |
7 | MatDeepLearn is currently under active development with more features to be added soon. Please contact the developer(s) for bug fixes and feature requests.
8 |
9 | ## Table of contents
10 |
11 | - Installation
12 | - Usage
13 | - FAQ
14 | - Roadmap
15 | - License
16 | - Acknowledgements
17 |
18 |
19 | ## Installation
20 |
21 |
22 | ### Prerequisites
23 |
24 | Prerequisites are listed in requirements.txt. You will need two key packages: (1) PyTorch and (2) PyTorch-Geometric. You may want to create a virtual environment first, for example with Conda.
25 |
26 | 1. **Pytorch**: The package has been tested on Pytorch 1.9. To install, for example:
27 | ```bash
28 | pip install torch==1.9.0
29 | ```
30 | 2. **Pytorch-Geometric:** The package has been tested on Pytorch-Geometric 2.0.1. To install, [follow their instructions](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html), for example:
31 | ```bash
32 | pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
33 | pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
34 | pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
35 | pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
36 | pip install torch-geometric
37 | ```
38 | where ${CUDA} and ${TORCH} should be replaced by your specific CUDA version (cpu, cu92, cu101, cu102, cu110, cu111) and PyTorch version (1.7.0, 1.8.0, 1.9.0), respectively; for example, torch-1.9.0+cu111 for PyTorch 1.9.0 with CUDA 11.1.
39 |
40 | 3. **Remaining requirements:** The remainder may be installed by:
41 | ```bash
42 | git clone https://github.com/vxfung/MatDeepLearn
43 | cd MatDeepLearn
44 | pip install -r requirements.txt
45 | ```
46 |
47 | ## Usage
48 |
49 | ### Running your first calculation
50 |
51 | This example provides instructions for a bare-minimum calculation. We will run the example on a small dataset (a subset of the Pt dataset containing ~1000 entries). This is just a toy example to test that the package is installed and working. Procedure:
52 |
53 | 1. Go to MatDeepLearn/data/test_data/ and type
54 | ```bash
55 | tar -xvf test_data.tar.gz
56 | ```
57 | to unpack a test dataset of Pt clusters.
58 |
59 | 2. Go back to the MatDeepLearn root directory and type
60 | ```bash
61 | python main.py --data_path=data/test_data/test_data
62 | ```
63 | where default settings will be used and configurations will be read from the provided config.yml.
64 |
65 | 3. The program will begin training; on a regular CPU this should take ~10-20 s per epoch. GPUs are recommended and can provide a roughly 5-20x speedup, which is needed for the larger datasets. By default, the program provides two outputs: (1) "my_model.pth", a saved model which can be used for predictions on new structures, and (2) "my_train_job_XXX_outputs.csv", where XXX is train, val or test; these files contain the structure ids, targets and predicted values from the last epoch for the train, validation and test sets. (A quick way to inspect these files is shown below.)
66 |
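As a quick sanity check, the test-split output file can be inspected with a few lines of Python. This is only a sketch: the filename used here is hypothetical, and the assumption that the columns are ordered as structure id, target, prediction should be checked against the actual file written by your job.

```python
import pandas as pd

# Hypothetical filename; use the *_test_outputs.csv written by your own job.
df = pd.read_csv("my_train_job_test_outputs.csv")

# Assumed column order: structure id, target, prediction (verify against the file).
targets = df.iloc[:, 1].astype(float)
preds = df.iloc[:, 2].astype(float)
print("Test MAE:", (targets - preds).abs().mean())
```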
67 | ### The configuration file
68 |
69 | The configuration file is provided in .yml format and encodes all the settings used. By default it should be in the same directory as main.py; an alternative location can be specified with --config_path on the command line.
70 |
71 | There are four categories or sections: 1. Job, 2. Processing, 3. Training, 4. Models
72 |
73 | 1. **Job:** This section encodes the settings specific to the type of job to run. Currently supported are: Training, Predict, Repeat, CV, Hyperparameter, Ensemble, Analysis. The program will only read the section for the current job, which is selected by --run_mode on the command line, e.g. --run_mode=Training. Other settings that can be changed on the command line are: --job_name, --model, --seed, --parallel.
74 |
75 | 2. **Processing:** This section encodes the settings specific to the processing of structures into graphs or other features. Primary settings are "graph_max_radius", "graph_max_neighbors" and "graph_edge_length", which control the radius cutoff for edges, the maximum number of neighbors per node, and the length of the edge feature vector from a basis expansion, respectively. Prior to this, the directory containing the structure files must be specified by "data_path" in the file or --data_path on the command line.
76 |
77 | 3. **Training:** This section encodes the settings specific to training. Primary settings are "loss", "train_ratio", "val_ratio" and "test_ratio". These can also be specified on the command line by --train_ratio, --val_ratio, --test_ratio.
78 |
79 | 4. **Models:** This section encodes the settings specific to the model used, i.e. its hyperparameters. Example hyperparameters are provided in the example config.yml. Only the settings for the model selected in the Job section will be used. Model settings that can be changed on the command line are: --epochs, --batch_size, and --lr (a combined example is shown below).
80 |
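For reference, a single command can combine several of these command-line overrides without editing config.yml. All of the flags below are defined in main.py; the values are purely illustrative:

```bash
python main.py --data_path=data/test_data/test_data --run_mode=Training \
  --model=CGCNN_demo --job_name=my_training_job --seed=42 \
  --epochs=100 --batch_size=50 --lr=0.001
```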
81 |
82 | ### Training and prediction on an unseen dataset
83 |
84 | This example provides instructions for a conventional ML task of training on an existing dataset, and using a trained model to provide predictions on an unseen dataset for screening. This assumes the model used is already sufficiently good at the task to be performed (with suitable model hyperparameters, etc.). The default hyperparameters can do a reasonably good job for testing purposes; for hyperparameter optimization refer to the next section.
85 |
86 | 1. To run, MatDeepLearn requires:
87 | - A configuration file, config.yml, as described in the previous section.
88 | - A dataset directory containing structure files, a csv file containing structure ids and target properties (default: targets.csv), and optionally a json file containing elemental properties (default: atom_dict.json). Five example datasets are provided with all requisite files needed. Structure files can take any format supported by the Atomic Simulation Environment [(ASE)](https://wiki.fysik.dtu.dk/ase/), such as .cif, .xyz, POSCAR, and ASE's own .json format. (A minimal dataset-preparation sketch is shown at the end of this section.)
89 |
90 | 2. It is then necessary to first train the ML model on an existing dataset with available target properties. A general example for training is:
91 |
92 | ```bash
93 | python main.py --data_path='XXX' --job_name="my_training_job" --run_mode='Training' --model='CGCNN_demo' --save_model='True' --model_path='my_trained_model.pth'
94 | ```
95 | where "data_path" points to the path of the training dataset, "model" selects the model to use, and "run_mode" specifies training. Once finished, a "my_trained_model.pth" should be saved.
96 |
97 | 3. Run the prediction on an unseen dataset by:
98 |
99 | ```bash
100 | python main.py --data_path='YYY' --job_name="my_prediction_job" --run_mode='Predict' --model_path='my_trained_model.pth'
101 | ```
102 | where the "data_path" and "run_mode" are now updated, and the model path is specified. The predictions will then be saved to my_prediction_job_predicted_outputs.csv for analysis.
103 |
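For reference, a new dataset directory can be assembled with a short script in the spirit of the dataset scripts under data/ (e.g. 2D_data/read_db.py). The sketch below is only illustrative: the structures and target values are placeholders, and "my_dataset" is a hypothetical directory name.

```python
import csv
import os

import ase.io
from ase.build import bulk  # placeholder structure source, for illustration only

os.makedirs("my_dataset", exist_ok=True)

# Placeholder structures and made-up target values; in practice these come from your own data.
structures = [bulk("Cu"), bulk("Pt"), bulk("Al")]
targets = [-3.5, -5.8, -3.4]

rows = []
for i, atoms in enumerate(structures):
    # One structure file per entry, here in ASE's .json format
    ase.io.write(os.path.join("my_dataset", f"{i}.json"), atoms)
    rows.append([i, targets[i]])

# targets.csv: structure id in the first column, target property in the second
with open(os.path.join("my_dataset", "targets.csv"), "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The resulting directory can then be passed to main.py with --data_path=my_dataset.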
104 | ### Hyperparameter optimization
105 |
106 | This example provides instructions for hyperparameter optimization.
107 |
108 | 1. Similar to regular training, ensure the dataset is available with requisite files in the directory.
109 |
110 | 2. To run hyperparameter optimization, one must first define the hyperparameter search space. MatDeepLearn uses [RayTune](https://docs.ray.io/en/master/tune/index.html) for distributed optimization, and the search space is defined with their provided methods. The choice of search space will depend on many factors, including the available computational resources and the focus of the study; we provide some examples for the existing models in main.py (one of them is reproduced after this list).
111 |
112 | 3. Assuming the search space is defined, we run hyperparameter optimization with:
113 | ```bash
114 | python main.py --data_path=data/test_data --model='CGCNN_demo' --job_name="my_hyperparameter_job" --run_mode='Hyperparameter'
115 | ```
116 | This sets the run mode to hyperparameter optimization, with a set number of trials and concurrency. The concurrency sets the number of trials performed in parallel; it should be equal to or higher than the number of available devices to avoid bottlenecking. The program should automatically detect the number of GPUs and run on each device accordingly. Finally, an output will be written, named after the job with the suffix "_optimized_hyperparameters.json", containing the hyperparameters of the model with the lowest validation error. Raw results are saved in a directory called "ray_results".
117 |
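For reference, the search spaces in main.py are plain dictionaries built from RayTune sampling functions. The example below reproduces the CGCNN_demo search space from main.py; the variable names here are illustrative and the ranges can be adjusted freely:

```python
from ray import tune

# Search space for the CGCNN_demo model, mirroring the hyper_args dict in main.py
dims = [x * 10 for x in range(1, 20)]
hyper_args_cgcnn = {
    "dim1": tune.choice(dims),
    "dim2": tune.choice(dims),
    "gnn_count": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
    "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
    "pool": tune.choice(
        ["global_mean_pool", "global_add_pool", "global_max_pool", "set2set"]
    ),
    "lr": tune.loguniform(1e-4, 0.05),
    "batch_size": tune.choice(dims),
}
```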
118 | ### Ensemble
119 |
120 | Ensemble functionality is provided in the form of an ensemble of models whose predictions are averaged to give the final output. "ensemble_list" in config.yml controls the list of individual models, given as a comma-separated string.
121 |
122 | ```bash
123 | python main.py --data_path=data/test_data --run_mode=Ensemble
124 | ```
125 |
126 | ### Repeat trials
127 |
128 | Sometimes it is desirable to obtain performance averaged over many trials. Specify repeat_trials in the config.yml for how many trials to run.
129 |
130 | ```bash
131 | python main.py --data_path=data/test_data --run_mode=Repeat
132 | ```
133 |
134 | ### Cross validation
135 |
136 | Specify cv_folds in config.yml to set the number of folds used in cross validation.
137 |
138 | ```bash
139 | python main.py --data_path=data/test_data --run_mode=CV
140 | ```
141 |
142 | ### Analysis
143 |
144 | This mode allows the visualization of graph-wide features with t-SNE.
145 |
146 | ```bash
147 | python main.py --data_path=data/test_data --run_mode=Analysis --model_path=XXX
148 | ```
149 |
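Per the notes in main.py, the analysis output is a csv whose columns are the structure id, the target value y, and the two t-SNE components. A minimal plotting sketch, assuming that column order and a hypothetical output filename:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical filename; use the csv written by your Analysis run.
# Assumed column order (from the comments in main.py): id, y, tsne 1, tsne 2.
df = pd.read_csv("my_job_tsne_output.csv", header=None,
                 names=["id", "y", "tsne1", "tsne2"])

plt.scatter(df["tsne1"], df["tsne2"], c=df["y"], s=5, cmap="viridis")
plt.colorbar(label="target value")
plt.xlabel("t-SNE 1")
plt.ylabel("t-SNE 2")
plt.savefig("tsne_plot.png", dpi=300, bbox_inches="tight")
```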
150 | ## FAQ
151 |
152 |
153 |
154 | ## Roadmap
155 |
156 | TBA
157 |
158 |
159 | ## License
160 |
161 | Distributed under the MIT License.
162 |
163 |
164 | ## Acknowledgements
165 |
166 | Contributors: Victor Fung, Eric Juarez, [Jiaxin Zhang](https://github.com/jxzhangjhu), Bobby Sumpter
167 |
168 | ## Contact
169 |
170 | Code is maintained by:
171 |
172 | [Victor Fung](https://www.ornl.gov/staff-profile/victor-fung), fungv (at) ornl.gov
173 |
174 |
--------------------------------------------------------------------------------
/config.yml:
--------------------------------------------------------------------------------
1 | Job:
2 | run_mode: "Training"
3 | #{Training, Predict, Repeat, CV, Hyperparameter, Ensemble, Analysis}
4 | Training:
5 | job_name: "my_train_job"
6 | reprocess: "False"
7 | model: CGCNN_demo
8 | load_model: "False"
9 | save_model: "True"
10 | model_path: "my_model.pth"
11 | write_output: "True"
12 | parallel: "True"
13 |     #seed=0 means random initialization
14 | seed: 0
15 | Predict:
16 | job_name: "my_predict_job"
17 | reprocess: "False"
18 | model_path: "my_model.pth"
19 | write_output: "True"
20 | seed: 0
21 | Repeat:
22 | job_name: "my_repeat_job"
23 | reprocess: "False"
24 | model: CGCNN_demo
25 | model_path: "my_model.pth"
26 | write_output: "False"
27 | parallel: "True"
28 | seed: 0
29 | ###specific options
30 | #number of repeat trials
31 | repeat_trials: 5
32 | CV:
33 | job_name: "my_CV_job"
34 | reprocess: "False"
35 | model: CGCNN_demo
36 | write_output: "True"
37 | parallel: "True"
38 | seed: 0
39 | ###specific options
40 | #number of folds for n-fold CV
41 | cv_folds: 5
42 | Hyperparameter:
43 | job_name: "my_hyperparameter_job"
44 | reprocess: "False"
45 | model: CGCNN_demo
46 | seed: 0
47 | ###specific options
48 | hyper_trials: 10
49 | #number of concurrent trials (can be greater than number of GPUs)
50 | hyper_concurrency: 8
51 | #frequency of checkpointing and update (default: 1)
52 | hyper_iter: 1
53 | #resume a previous hyperparameter optimization run
54 | hyper_resume: "True"
55 | #Verbosity of ray tune output; available: (1, 2, 3)
56 | hyper_verbosity: 1
57 | #Delete processed datasets
58 | hyper_delete_processed: "True"
59 | Ensemble:
60 | job_name: "my_ensemble_job"
61 | reprocess: "False"
62 | save_model: "False"
63 | model_path: "my_model.pth"
64 | write_output: "Partial"
65 | parallel: "True"
66 | seed: 0
67 | ###specific options
68 | #List of models to use: (Example: "CGCNN_demo,MPNN_demo,SchNet_demo,MEGNet_demo" or "CGCNN_demo,CGCNN_demo,CGCNN_demo,CGCNN_demo")
69 | ensemble_list: "CGCNN_demo,CGCNN_demo,CGCNN_demo,CGCNN_demo,CGCNN_demo"
70 | Analysis:
71 | job_name: "my_job"
72 | reprocess: "False"
73 | model: CGCNN_demo
74 | model_path: "my_model.pth"
75 | write_output: "True"
76 | seed: 0
77 |
78 | Processing:
79 |   #Whether to use "inmemory" or "large" format for the pytorch-geometric dataset. Recommend inmemory unless the dataset is too large
80 | dataset_type: "inmemory"
81 | #Path to data files
82 | data_path: "/data"
83 | #Path to target file within data_path
84 | target_path: "targets.csv"
85 |   #Method of obtaining atom dictionary; available: (provided, default, blank, generated)
86 | dictionary_source: "default"
87 | #Path to atom dictionary file within data_path
88 | dictionary_path: "atom_dict.json"
89 | #Format of data files (limit to those supported by ASE)
90 | data_format: "json"
91 | #Print out processing info
92 | verbose: "True"
93 | #graph specific settings
94 | graph_max_radius : 8.0
95 | graph_max_neighbors : 12
96 | voronoi: "False"
97 | edge_features: "True"
98 | graph_edge_length : 50
99 | #SM specific settings
100 | SM_descriptor: "False"
101 | #SOAP specific settings
102 | SOAP_descriptor: "False"
103 | SOAP_rcut : 8.0
104 | SOAP_nmax : 6
105 | SOAP_lmax : 4
106 | SOAP_sigma : 0.3
107 |
108 | Training:
109 | #Index of target column in targets.csv
110 | target_index: 0
111 | #Loss functions (from pytorch) examples: l1_loss, mse_loss, binary_cross_entropy
112 | loss: "l1_loss"
113 | #Ratios for train/val/test split out of a total of 1
114 | train_ratio: 0.8
115 | val_ratio: 0.05
116 | test_ratio: 0.15
117 | #Training print out frequency (print per n number of epochs)
118 | verbosity: 5
119 |
120 | Models:
121 | CGCNN_demo:
122 | model: CGCNN
123 | dim1: 100
124 | dim2: 150
125 | pre_fc_count: 1
126 | gc_count: 4
127 | post_fc_count: 3
128 | pool: "global_mean_pool"
129 | pool_order: "early"
130 | batch_norm: "True"
131 | batch_track_stats: "True"
132 | act: "relu"
133 | dropout_rate: 0.0
134 | epochs: 250
135 | lr: 0.002
136 | batch_size: 100
137 | optimizer: "AdamW"
138 | optimizer_args: {}
139 | scheduler: "ReduceLROnPlateau"
140 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
141 | MPNN_demo:
142 | model: MPNN
143 | dim1: 100
144 | dim2: 100
145 | dim3: 100
146 | pre_fc_count: 1
147 | gc_count: 4
148 | post_fc_count: 3
149 | pool: "global_mean_pool"
150 | pool_order: "early"
151 | batch_norm: "True"
152 | batch_track_stats: "True"
153 | act: "relu"
154 | dropout_rate: 0.0
155 | epochs: 250
156 | lr: 0.001
157 | batch_size: 100
158 | optimizer: "AdamW"
159 | optimizer_args: {}
160 | scheduler: "ReduceLROnPlateau"
161 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
162 | SchNet_demo:
163 | model: SchNet
164 | dim1: 100
165 | dim2: 100
166 | dim3: 150
167 | cutoff: 8
168 | pre_fc_count: 1
169 | gc_count: 4
170 | post_fc_count: 3
171 | pool: "global_mean_pool"
172 | pool_order: "early"
173 | batch_norm: "True"
174 | batch_track_stats: "True"
175 | act: "relu"
176 | dropout_rate: 0.0
177 | epochs: 250
178 | lr: 0.0005
179 | batch_size: 100
180 | optimizer: "AdamW"
181 | optimizer_args: {}
182 | scheduler: "ReduceLROnPlateau"
183 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
184 | MEGNet_demo:
185 | model: MEGNet
186 | dim1: 100
187 | dim2: 100
188 | dim3: 100
189 | pre_fc_count: 1
190 | gc_count: 4
191 | gc_fc_count: 1
192 | post_fc_count: 3
193 | pool: "global_mean_pool"
194 | pool_order: "early"
195 | batch_norm: "True"
196 | batch_track_stats: "True"
197 | act: "relu"
198 | dropout_rate: 0.0
199 | epochs: 250
200 | lr: 0.0005
201 | batch_size: 100
202 | optimizer: "AdamW"
203 | optimizer_args: {}
204 | scheduler: "ReduceLROnPlateau"
205 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
206 | GCN_demo:
207 | model: GCN
208 | dim1: 100
209 | dim2: 150
210 | pre_fc_count: 1
211 | gc_count: 4
212 | post_fc_count: 3
213 | pool: "global_mean_pool"
214 | pool_order: "early"
215 | batch_norm: "True"
216 | batch_track_stats: "True"
217 | act: "relu"
218 | dropout_rate: 0.0
219 | epochs: 250
220 | lr: 0.002
221 | batch_size: 100
222 | optimizer: "AdamW"
223 | optimizer_args: {}
224 | scheduler: "ReduceLROnPlateau"
225 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
226 | SM_demo:
227 | model: SM
228 | dim1: 100
229 | fc_count: 2
230 | epochs: 200
231 | lr: 0.002
232 | batch_size: 100
233 | optimizer: "AdamW"
234 | optimizer_args: {}
235 | scheduler: "ReduceLROnPlateau"
236 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
237 | SOAP_demo:
238 | model: SOAP
239 | dim1: 100
240 | fc_count: 2
241 | epochs: 200
242 | lr: 0.002
243 | batch_size: 100
244 | optimizer: "AdamW"
245 | optimizer_args: {}
246 | scheduler: "ReduceLROnPlateau"
247 | scheduler_args: {"mode":"min", "factor":0.8, "patience":10, "min_lr":0.00001, "threshold":0.0002}
248 |
249 |
--------------------------------------------------------------------------------
/data/2D_data/README.md:
--------------------------------------------------------------------------------
1 | 1. Download the database file (a .db file) from https://cmr.fysik.dtu.dk/c2db/c2db.html.
2 |
3 | 2. Run the Python script to process it into structure files with "python read_db.py".
--------------------------------------------------------------------------------
/data/2D_data/read_db.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import os
3 | import csv
4 | import ase.db
5 | import ase.io
6 |
7 | if not os.path.exists('2D_data'):
8 | os.mkdir('2D_data')
9 |
10 | # Connect to database
11 | db = ase.db.connect('c2db.db')
12 |
13 | rows = db.select(selection='workfunction')
14 | count=0
15 | out=[]
16 | for row in rows:
17 | atoms=row.toatoms()
18 | out.append(row.workfunction)
19 | ase.io.write('2D_data/'+str(count)+'.json', atoms)
20 | count=count+1
21 | print(count)
22 |
23 | with open('2D_data/targets.csv', 'w') as f:
24 | for i in range(0, count):
25 | f.write(str(i)+','+str(out[i]) + '\n')
--------------------------------------------------------------------------------
/data/MOF_data/README.md:
--------------------------------------------------------------------------------
1 | 1. Download the database files from https://github.com/arosen93/QMOF.
2 |
3 | 2. Run the Python script to process them into structure files with "python process.py".
--------------------------------------------------------------------------------
/data/MOF_data/process.py:
--------------------------------------------------------------------------------
1 | import os
2 | import csv
3 | import json
4 | import pandas as pd
5 | from pymatgen.core import Structure
6 | from pymatgen.io.ase import AseAtomsAdaptor
7 | from ase.io import write
8 |
9 | # Read in QMOF data
10 | with open("qmof.json") as f:
11 | qmof = json.load(f)
12 | qmof_df = pd.json_normalize(qmof).set_index("qmof_id")
13 |
14 | with open("qmof_structure_data.json") as f:
15 | qmof_struct_data = json.load(f)
16 |
17 | # Make MOF_data folder
18 | if not os.path.exists("MOF_data"):
19 | os.mkdir("MOF_data")
20 |
21 | # Write out data
22 | targets = []
23 | for entry in qmof_struct_data:
24 | qmof_id = entry["qmof_id"]
25 | print(f"Writing {qmof_id}")
26 | mof = AseAtomsAdaptor().get_atoms(Structure.from_dict(entry["structure"]))
27 | write(os.path.join("MOF_data", f"{qmof_id}.json"), mof)
28 | targets.append([qmof_id, qmof_df.loc[qmof_id]["outputs.pbe.bandgap"]])
29 |
30 | with open(os.path.join("MOF_data", "targets.csv"), "w", newline="") as f:
31 | wr = csv.writer(f)
32 | wr.writerows(targets)
33 |
34 | print(len(targets))
35 |
--------------------------------------------------------------------------------
/data/README.md:
--------------------------------------------------------------------------------
1 |
2 | Five datasets were used in benchmarking in https://chemrxiv.org/articles/preprint/Benchmarking_Graph_Neural_Networks_for_Materials_Chemistry/13615421, and were chosen to reflect a variety of different materials and properties found in computational studies. They are:
3 |
4 | 1. A 3D bulk crystal structure dataset obtained from [Materials Project](https://materialsproject.org/open), for prediction of formation energies (bulk_dataset).
5 |
6 | 2. A 3D porous material dataset of MOFs obtained from Rosen et al. from the [QMOF database](https://github.com/arosen93/QMOF), for prediction of band gaps (MOF_dataset).
7 |
8 | 3. A metal alloy surface dataset obtained from Mamun et al., for prediction of adsorption energies from [CatHub](https://www.catalysis-hub.org/) (surface_dataset).
9 |
10 | 4. A 2D material dataset obtained from Haastrup et al., for prediction of workfunctions from [C2DB](https://cmr.fysik.dtu.dk/c2db/c2db.html) (2D_dataset).
11 |
12 | 5. A 0D sub-nanometer Pt cluster dataset from [Fung et al.](https://pubs.acs.org/doi/abs/10.1021/acs.jpcc.6b11968), for prediction of total energies, provided here in full (pt_dataset).
13 |
14 | Datasets include the individual structure files, each encoded as an ASE .json file, a dictionary of elemental properties in atom_dict.json, and the prediction targets in targets.csv. Unzip a dataset via the command "tar -xvf XXX.tar.gz" before use. We cannot redistribute all of the referenced datasets here; please follow the links for instructions on downloading them. We will include additional datasets of our own in the near future.
15 |
--------------------------------------------------------------------------------
/data/bulk_data/README.md:
--------------------------------------------------------------------------------
1 | 1. Get API key from https://materialsproject.org/open
2 |
3 | 2. Run script "python get_MP.py"
--------------------------------------------------------------------------------
/data/bulk_data/get_MP.py:
--------------------------------------------------------------------------------
1 | from pymatgen import MPRester
2 | from pymatgen.io.cif import CifWriter
3 | import csv
4 | import sys
5 | import os
6 |
7 | print('start')
8 |
9 | ###need 2 files: csv containing materials project IDs and atom_init.json (generic)
10 |
11 | ###get API key from materials project login dashboard online
12 | API_KEY='YOUR_API_KEY'  #replace with your own API key from the Materials Project dashboard
13 | mpr = MPRester(API_KEY)
14 |
15 | ###open file containing IDs
16 | f=open('mp-ids-46744.csv')
17 | csvfile = csv.reader(f, delimiter=',')
18 | materials_ids = list(csvfile)
19 | f.close()
20 |
21 | print(len(materials_ids))
22 |
23 | if not os.path.exists('bulk_data'):
24 | os.mkdir('bulk_data')
25 |
26 | retries=5
27 | count=0
28 | write_prop=open('bulk_data/targets.csv', 'w')
29 | out=[]
30 | for material_id in materials_ids:
31 |
32 | #print(material_id[0])
33 | try_count=0
34 | success=0
35 |     while try_count < retries and success==0:
--------------------------------------------------------------------------------
/data/surface_data/ase_cathub.py:
--------------------------------------------------------------------------------
93 | equations=['0.5H2(g) + * -> H*','H2O(g) - H2(g) + * -> O*','H2O(g) - 0.5H2(g) + * -> OH*','H2O(g) + * -> H2O*','CH4(g) - 2.0H2(g) + * -> C*','CH4(g) - 1.5H2(g) + * -> CH*','CH4(g) - H2(g) + * -> CH2*','CH4(g) - 0.5H2(g) + * -> CH3*','0.5N2(g) + * -> N*',
94 | '0.5H2(g) + 0.5N2(g) + * -> NH*','H2S(g) - H2(g) + * -> S*','H2S(g) - 0.5H2(g) + * -> SH*']
95 | key_name=['Hstar','Ostar', 'OHstar', 'H2Ostar', 'Cstar', 'CHstar', 'CH2star', 'CH3star', 'Nstar', 'NHstar', 'Sstar', 'SHstar']
96 |
97 | os.system('mkdir ads_data')
98 |
99 | count=0
100 | energies=[]
101 | for i in range(0, len(equations)):
102 | for j in range(0, len(reactions)):
103 | try:
104 | if reactions[j]['Equation'].find(equations[i]) != -1:
105 | if reactions[j]['sites'].find('top|') != -1:
106 | ase.io.write('ads_data/'+str(count)+'.json', reactions[j]['reactionSystems'][key_name[i]])
107 | count=count+1
108 | energies.append(reactions[j]['reactionEnergy'])
109 | elif reactions[j]['sites'].find('bridge|') != -1:
110 | ase.io.write('ads_data/'+str(count)+'.json', reactions[j]['reactionSystems'][key_name[i]])
111 | count=count+1
112 | energies.append(reactions[j]['reactionEnergy'])
113 | elif reactions[j]['sites'].find('hollow|') != -1:
114 | ase.io.write('ads_data/'+str(count)+'.json', reactions[j]['reactionSystems'][key_name[i]])
115 | count=count+1
116 | energies.append(reactions[j]['reactionEnergy'])
117 | except Exception as e:
118 | print(e)
119 |
120 | print(count)
121 |
122 | with open('ads_data/targets.csv', 'w') as f:
123 | for i in range(0,len(energies)):
124 | f.write(str(i)+','+str(energies[i]) + '\n')
125 |
126 |
--------------------------------------------------------------------------------
/data/test_data/README.md:
--------------------------------------------------------------------------------
1 | 1. Unzip the tar file with: "tar -xvf test_data.tar.gz"
--------------------------------------------------------------------------------
/data/test_data/test_data.tar.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/data/test_data/test_data.tar.gz
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import argparse
3 | import time
4 | import csv
5 | import sys
6 | import json
7 | import random
8 | import numpy as np
9 | import pprint
10 | import yaml
11 |
12 | import torch
13 | import torch.multiprocessing as mp
14 |
15 | import ray
16 | from ray import tune
17 |
18 | from matdeeplearn import models, process, training
19 |
20 | ################################################################################
21 | #
22 | ################################################################################
23 | # MatDeepLearn code
24 | ################################################################################
25 | #
26 | ################################################################################
27 | def main():
28 | start_time = time.time()
29 | print("Starting...")
30 | print(
31 | "GPU is available:",
32 | torch.cuda.is_available(),
33 | ", Quantity: ",
34 | torch.cuda.device_count(),
35 | )
36 |
37 | parser = argparse.ArgumentParser(description="MatDeepLearn inputs")
38 | ###Job arguments
39 | parser.add_argument(
40 | "--config_path",
41 | default="config.yml",
42 | type=str,
43 | help="Location of config file (default: config.yml)",
44 | )
45 | parser.add_argument(
46 | "--run_mode",
47 | default=None,
48 | type=str,
49 | help="run modes: Training, Predict, Repeat, CV, Hyperparameter, Ensemble, Analysis",
50 | )
51 | parser.add_argument(
52 | "--job_name",
53 | default=None,
54 | type=str,
55 | help="name of your job and output files/folders",
56 | )
57 | parser.add_argument(
58 | "--model",
59 | default=None,
60 | type=str,
61 | help="CGCNN_demo, MPNN_demo, SchNet_demo, MEGNet_demo, GCN_demo, SOAP_demo, SM_demo",
62 | )
63 | parser.add_argument(
64 | "--seed",
65 | default=None,
66 | type=int,
67 | help="seed for data split, 0=random",
68 | )
69 | parser.add_argument(
70 | "--model_path",
71 | default=None,
72 | type=str,
73 | help="path of the model .pth file",
74 | )
75 | parser.add_argument(
76 | "--save_model",
77 | default=None,
78 | type=str,
79 | help="Save model",
80 | )
81 | parser.add_argument(
82 | "--load_model",
83 | default=None,
84 | type=str,
85 | help="Load model",
86 | )
87 | parser.add_argument(
88 | "--write_output",
89 | default=None,
90 | type=str,
91 | help="Write outputs to csv",
92 | )
93 | parser.add_argument(
94 | "--parallel",
95 | default=None,
96 | type=str,
97 | help="Use parallel mode (ddp) if available",
98 | )
99 | parser.add_argument(
100 | "--reprocess",
101 | default=None,
102 | type=str,
103 | help="Reprocess data since last run",
104 | )
105 | ###Processing arguments
106 | parser.add_argument(
107 | "--data_path",
108 | default=None,
109 | type=str,
110 | help="Location of data containing structures (json or any other valid format) and accompanying files",
111 | )
112 | parser.add_argument("--format", default=None, type=str, help="format of input data")
113 | ###Training arguments
114 | parser.add_argument("--train_ratio", default=None, type=float, help="train ratio")
115 | parser.add_argument(
116 | "--val_ratio", default=None, type=float, help="validation ratio"
117 | )
118 | parser.add_argument("--test_ratio", default=None, type=float, help="test ratio")
119 | parser.add_argument(
120 | "--verbosity", default=None, type=int, help="prints errors every x epochs"
121 | )
122 | parser.add_argument(
123 | "--target_index",
124 | default=None,
125 | type=int,
126 | help="which column to use as target property in the target file",
127 | )
128 | ###Model arguments
129 | parser.add_argument(
130 | "--epochs",
131 | default=None,
132 | type=int,
133 | help="number of total epochs to run",
134 | )
135 | parser.add_argument("--batch_size", default=None, type=int, help="batch size")
136 | parser.add_argument("--lr", default=None, type=float, help="learning rate")
137 |
138 | ##Get arguments from command line
139 | args = parser.parse_args(sys.argv[1:])
140 |
141 | ##Open provided config file
142 | assert os.path.exists(args.config_path), (
143 | "Config file not found in " + args.config_path
144 | )
145 | with open(args.config_path, "r") as ymlfile:
146 | config = yaml.load(ymlfile, Loader=yaml.FullLoader)
147 |
148 | ##Update config values from command line
149 | if args.run_mode != None:
150 | config["Job"]["run_mode"] = args.run_mode
151 | run_mode = config["Job"].get("run_mode")
152 | config["Job"] = config["Job"].get(run_mode)
153 | if config["Job"] == None:
154 | print("Invalid run mode")
155 | sys.exit()
156 |
157 | if args.job_name != None:
158 | config["Job"]["job_name"] = args.job_name
159 | if args.model != None:
160 | config["Job"]["model"] = args.model
161 | if args.seed != None:
162 | config["Job"]["seed"] = args.seed
163 | if args.model_path != None:
164 | config["Job"]["model_path"] = args.model_path
165 | if args.load_model != None:
166 | config["Job"]["load_model"] = args.load_model
167 | if args.save_model != None:
168 | config["Job"]["save_model"] = args.save_model
169 | if args.write_output != None:
170 | config["Job"]["write_output"] = args.write_output
171 | if args.parallel != None:
172 | config["Job"]["parallel"] = args.parallel
173 | if args.reprocess != None:
174 | config["Job"]["reprocess"] = args.reprocess
175 |
176 | if args.data_path != None:
177 | config["Processing"]["data_path"] = args.data_path
178 | if args.format != None:
179 | config["Processing"]["data_format"] = args.format
180 |
181 | if args.train_ratio != None:
182 | config["Training"]["train_ratio"] = args.train_ratio
183 | if args.val_ratio != None:
184 | config["Training"]["val_ratio"] = args.val_ratio
185 | if args.test_ratio != None:
186 | config["Training"]["test_ratio"] = args.test_ratio
187 | if args.verbosity != None:
188 | config["Training"]["verbosity"] = args.verbosity
189 | if args.target_index != None:
190 | config["Training"]["target_index"] = args.target_index
191 |
192 | for key in config["Models"]:
193 | if args.epochs != None:
194 | config["Models"][key]["epochs"] = args.epochs
195 | if args.batch_size != None:
196 | config["Models"][key]["batch_size"] = args.batch_size
197 | if args.lr != None:
198 | config["Models"][key]["lr"] = args.lr
199 |
200 | if run_mode == "Predict":
201 | config["Models"] = {}
202 | elif run_mode == "Ensemble":
203 | config["Job"]["ensemble_list"] = config["Job"]["ensemble_list"].split(",")
204 | models_temp = config["Models"]
205 | config["Models"] = {}
206 | for i in range(0, len(config["Job"]["ensemble_list"])):
207 | config["Models"][config["Job"]["ensemble_list"][i]] = models_temp.get(
208 | config["Job"]["ensemble_list"][i]
209 | )
210 | else:
211 | config["Models"] = config["Models"].get(config["Job"]["model"])
212 |
213 | if config["Job"]["seed"] == 0:
214 | config["Job"]["seed"] = np.random.randint(1, 1e6)
215 |
216 | ##Print and write settings for job
217 | print("Settings: ")
218 | pprint.pprint(config)
219 | with open(str(config["Job"]["job_name"]) + "_settings.txt", "w") as log_file:
220 | pprint.pprint(config, log_file)
221 |
222 | ################################################################################
223 | # Begin processing
224 | ################################################################################
225 |
226 | if run_mode != "Hyperparameter":
227 |
228 | process_start_time = time.time()
229 |
230 | dataset = process.get_dataset(
231 | config["Processing"]["data_path"],
232 | config["Training"]["target_index"],
233 | config["Job"]["reprocess"],
234 | config["Processing"],
235 | )
236 |
237 | print("Dataset used:", dataset)
238 | print(dataset[0])
239 | print(dataset[0].x[0],dataset[0].x[-1])
240 |
241 | print("--- %s seconds for processing ---" % (time.time() - process_start_time))
242 |
243 | ################################################################################
244 | # Training begins
245 | ################################################################################
246 |
247 | ##Regular training
248 | if run_mode == "Training":
249 |
250 | print("Starting regular training")
251 | print(
252 | "running for "
253 | + str(config["Models"]["epochs"])
254 | + " epochs"
255 | + " on "
256 | + str(config["Job"]["model"])
257 | + " model"
258 | )
259 | world_size = torch.cuda.device_count()
260 | if world_size == 0:
261 | print("Running on CPU - this will be slow")
262 | training.train_regular(
263 | "cpu",
264 | world_size,
265 | config["Processing"]["data_path"],
266 | config["Job"],
267 | config["Training"],
268 | config["Models"],
269 | )
270 |
271 | elif world_size > 0:
272 | if config["Job"]["parallel"] == "True":
273 | print("Running on", world_size, "GPUs")
274 | mp.spawn(
275 | training.train_regular,
276 | args=(
277 | world_size,
278 | config["Processing"]["data_path"],
279 | config["Job"],
280 | config["Training"],
281 | config["Models"],
282 | ),
283 | nprocs=world_size,
284 | join=True,
285 | )
286 | if config["Job"]["parallel"] == "False":
287 | print("Running on one GPU")
288 | training.train_regular(
289 | "cuda",
290 | world_size,
291 | config["Processing"]["data_path"],
292 | config["Job"],
293 | config["Training"],
294 | config["Models"],
295 | )
296 |
297 | ##Predicting from a trained model
298 | elif run_mode == "Predict":
299 |
300 | print("Starting prediction from trained model")
301 | train_error = training.predict(
302 | dataset, config["Training"]["loss"], config["Job"]
303 | )
304 | print("Test Error: {:.5f}".format(train_error))
305 |
306 | ##Running n fold cross validation
307 | elif run_mode == "CV":
308 |
309 | print("Starting cross validation")
310 | print(
311 | "running for "
312 | + str(config["Models"]["epochs"])
313 | + " epochs"
314 | + " on "
315 | + str(config["Job"]["model"])
316 | + " model"
317 | )
318 | world_size = torch.cuda.device_count()
319 | if world_size == 0:
320 | print("Running on CPU - this will be slow")
321 | training.train_CV(
322 | "cpu",
323 | world_size,
324 | config["Processing"]["data_path"],
325 | config["Job"],
326 | config["Training"],
327 | config["Models"],
328 | )
329 |
330 | elif world_size > 0:
331 | if config["Job"]["parallel"] == "True":
332 | print("Running on", world_size, "GPUs")
333 | mp.spawn(
334 | training.train_CV,
335 | args=(
336 | world_size,
337 | config["Processing"]["data_path"],
338 | config["Job"],
339 | config["Training"],
340 | config["Models"],
341 | ),
342 | nprocs=world_size,
343 | join=True,
344 | )
345 | if config["Job"]["parallel"] == "False":
346 | print("Running on one GPU")
347 | training.train_CV(
348 | "cuda",
349 | world_size,
350 | config["Processing"]["data_path"],
351 | config["Job"],
352 | config["Training"],
353 | config["Models"],
354 | )
355 |
356 | ##Running repeated trials
357 | elif run_mode == "Repeat":
358 | print("Repeat training for " + str(config["Job"]["repeat_trials"]) + " trials")
359 | training.train_repeat(
360 | config["Processing"]["data_path"],
361 | config["Job"],
362 | config["Training"],
363 | config["Models"],
364 | )
365 |
366 | ##Hyperparameter optimization
367 | elif run_mode == "Hyperparameter":
368 |
369 | print("Starting hyperparameter optimization")
370 | print(
371 | "running for "
372 | + str(config["Models"]["epochs"])
373 | + " epochs"
374 | + " on "
375 | + str(config["Job"]["model"])
376 | + " model"
377 | )
378 |
379 | ##Reprocess here if not reprocessing between trials
380 | if config["Job"]["reprocess"] == "False":
381 | process_start_time = time.time()
382 |
383 | dataset = process.get_dataset(
384 | config["Processing"]["data_path"],
385 | config["Training"]["target_index"],
386 | config["Job"]["reprocess"],
387 | config["Processing"],
388 | )
389 |
390 | print("Dataset used:", dataset)
391 | print(dataset[0])
392 |
393 | if config["Training"]["target_index"] == -1:
394 | config["Models"]["output_dim"] = len(dataset[0].y[0])
395 | # print(len(dataset[0].y))
396 |
397 | print(
398 | "--- %s seconds for processing ---" % (time.time() - process_start_time)
399 | )
400 |
401 |         ##Set up the search space for each model; these are subject to change
402 | hyper_args = {}
403 | dim1 = [x * 10 for x in range(1, 20)]
404 | dim2 = [x * 10 for x in range(1, 20)]
405 | dim3 = [x * 10 for x in range(1, 20)]
406 | batch = [x * 10 for x in range(1, 20)]
407 | hyper_args["SchNet_demo"] = {
408 | "dim1": tune.choice(dim1),
409 | "dim2": tune.choice(dim2),
410 | "dim3": tune.choice(dim3),
411 | "gnn_count": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
412 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
413 | "pool": tune.choice(
414 | ["global_mean_pool", "global_add_pool", "global_max_pool", "set2set"]
415 | ),
416 | "lr": tune.loguniform(1e-4, 0.05),
417 | "batch_size": tune.choice(batch),
418 | "cutoff": config["Processing"]["graph_max_radius"],
419 | }
420 | hyper_args["CGCNN_demo"] = {
421 | "dim1": tune.choice(dim1),
422 | "dim2": tune.choice(dim2),
423 | "gnn_count": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
424 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
425 | "pool": tune.choice(
426 | ["global_mean_pool", "global_add_pool", "global_max_pool", "set2set"]
427 | ),
428 | "lr": tune.loguniform(1e-4, 0.05),
429 | "batch_size": tune.choice(batch),
430 | }
431 | hyper_args["MPNN_demo"] = {
432 | "dim1": tune.choice(dim1),
433 | "dim2": tune.choice(dim2),
434 | "dim3": tune.choice(dim3),
435 | "gnn_count": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
436 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
437 | "pool": tune.choice(
438 | ["global_mean_pool", "global_add_pool", "global_max_pool", "set2set"]
439 | ),
440 | "lr": tune.loguniform(1e-4, 0.05),
441 | "batch_size": tune.choice(batch),
442 | }
443 | hyper_args["MEGNet_demo"] = {
444 | "dim1": tune.choice(dim1),
445 | "dim2": tune.choice(dim2),
446 | "dim3": tune.choice(dim3),
447 | "gnn_count": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
448 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
449 | "pool": tune.choice(["global_mean_pool", "global_add_pool", "global_max_pool", "set2set"]),
450 | "lr": tune.loguniform(1e-4, 0.05),
451 | "batch_size": tune.choice(batch),
452 | }
453 | hyper_args["GCN_demo"] = {
454 | "dim1": tune.choice(dim1),
455 | "dim2": tune.choice(dim2),
456 | "gnn_count": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
457 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
458 | "pool": tune.choice(
459 | ["global_mean_pool", "global_add_pool", "global_max_pool", "set2set"]
460 | ),
461 | "lr": tune.loguniform(1e-4, 0.05),
462 | "batch_size": tune.choice(batch),
463 | }
464 | hyper_args["SOAP_demo"] = {
465 | "dim1": tune.choice(dim1),
466 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
467 | "lr": tune.loguniform(1e-4, 0.05),
468 | "batch_size": tune.choice(batch),
469 | "nmax": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
470 | "lmax": tune.choice([1, 2, 3, 4, 5, 6, 7, 8, 9]),
471 | "sigma": tune.uniform(0.1, 2.0),
472 | "rcut": tune.uniform(1.0, 10.0),
473 | }
474 | hyper_args["SM_demo"] = {
475 | "dim1": tune.choice(dim1),
476 | "post_fc_count": tune.choice([1, 2, 3, 4, 5, 6]),
477 | "lr": tune.loguniform(1e-4, 0.05),
478 | "batch_size": tune.choice(batch),
479 | }
480 |
481 | ##Run tune setup and trials
482 | best_trial = training.tune_setup(
483 | hyper_args[config["Job"]["model"]],
484 | config["Job"],
485 | config["Processing"],
486 | config["Training"],
487 | config["Models"],
488 | )
489 |
490 | ##Write hyperparameters to file
491 | hyperparameters = best_trial.config["hyper_args"]
492 | hyperparameters = {
493 | k: round(v, 6) if isinstance(v, float) else v
494 | for k, v in hyperparameters.items()
495 | }
496 | with open(
497 | config["Job"]["job_name"] + "_optimized_hyperparameters.json",
498 | "w",
499 | encoding="utf-8",
500 | ) as f:
501 | json.dump(hyperparameters, f, ensure_ascii=False, indent=4)
502 |
503 | ##Print best hyperparameters
504 | print("Best trial hyper_args: {}".format(hyperparameters))
505 | print(
506 | "Best trial final validation error: {:.5f}".format(
507 | best_trial.last_result["loss"]
508 | )
509 | )
510 |
511 | ##Ensemble mode using simple averages
512 | elif run_mode == "Ensemble":
513 |
514 | print("Starting simple (average) ensemble training")
515 | print("Ensemble list: ", config["Job"]["ensemble_list"])
516 | training.train_ensemble(
517 | config["Processing"]["data_path"],
518 | config["Job"],
519 | config["Training"],
520 | config["Models"],
521 | )
522 |
523 | ##Analysis mode
524 | ##NOTE: this only works for "early" pooling option, because it assumes the graph-level features are plotted, not the node-level ones
525 | elif run_mode == "Analysis":
526 | print("Starting analysis of graph features")
527 |
528 | ##dict for the tsne settings; please refer to sklearn.manifold.TSNE for information on the function arguments
529 | tsne_args = {
530 | "perplexity": 50,
531 | "early_exaggeration": 12,
532 | "learning_rate": 300,
533 | "n_iter": 5000,
534 | "verbose": 1,
535 | "random_state": 42,
536 | }
537 | ##this saves the tsne output as a csv file with: structure id, y, tsne 1, tsne 2 as the columns
538 | ##Currently only works if there is one y column in targets.csv
539 | training.analysis(
540 | dataset,
541 | config["Job"]["model_path"],
542 | tsne_args,
543 | )
544 |
545 | else:
546 | print("No valid mode selected, try again")
547 |
548 | print("--- %s total seconds elapsed ---" % (time.time() - start_time))
549 |
550 |
551 | if __name__ == "__main__":
552 | main()
553 |
--------------------------------------------------------------------------------
/matdeeplearn/__init__.py:
--------------------------------------------------------------------------------
1 | from .models import *
2 | from .training import *
3 | from .process import *
4 |
--------------------------------------------------------------------------------
/matdeeplearn/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/__pycache__/config.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/__pycache__/config.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/__pycache__/models.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/__pycache__/models.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/__pycache__/process.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/__pycache__/process.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/__pycache__/process_HEA.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/__pycache__/process_HEA.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/__pycache__/training.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/__pycache__/training.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__init__.py:
--------------------------------------------------------------------------------
1 | from .gcn import GCN
2 | from .mpnn import MPNN
3 | from .schnet import SchNet
4 | from .cgcnn import CGCNN
5 | from .megnet import MEGNet
6 | from .descriptor_nn import SOAP, SM
7 |
8 | __all__ = [
9 | "GCN",
10 | "MPNN",
11 | "SchNet",
12 | "CGCNN",
13 | "MEGNet",
14 | "SOAP",
15 | "SM",
16 | ]
17 |
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/MLP.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/MLP.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/cgcnn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/cgcnn.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/cgcnn_nmr.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/cgcnn_nmr.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/cgcnn_test.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/cgcnn_test.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/cnnet.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/cnnet.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/descriptor_nn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/descriptor_nn.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/gcn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/gcn.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/megnet.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/megnet.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/mpnn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/mpnn.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/schnet.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/schnet.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/test_cgcnn2.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/test_cgcnn2.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/test_dosgnn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/test_dosgnn.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/test_forces.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/test_forces.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/test_matgnn.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/test_matgnn.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/test_misc.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/test_misc.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/testing.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/testing.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/__pycache__/utils.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/models/__pycache__/utils.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/models/cgcnn.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import Tensor
3 | import torch.nn.functional as F
4 | from torch.nn import Sequential, Linear, BatchNorm1d
5 | import torch_geometric
6 | from torch_geometric.nn import (
7 | Set2Set,
8 | global_mean_pool,
9 | global_add_pool,
10 | global_max_pool,
11 | CGConv,
12 | )
13 | from torch_scatter import scatter_mean, scatter_add, scatter_max, scatter
14 |
15 |
16 | # CGCNN
17 | class CGCNN(torch.nn.Module):
18 | def __init__(
19 | self,
20 | data,
21 | dim1=64,
22 | dim2=64,
23 | pre_fc_count=1,
24 | gc_count=3,
25 | post_fc_count=1,
26 | pool="global_mean_pool",
27 | pool_order="early",
28 | batch_norm="True",
29 | batch_track_stats="True",
30 | act="relu",
31 | dropout_rate=0.0,
32 | **kwargs
33 | ):
34 | super(CGCNN, self).__init__()
35 |
36 | if batch_track_stats == "False":
37 | self.batch_track_stats = False
38 | else:
39 | self.batch_track_stats = True
40 | self.batch_norm = batch_norm
41 | self.pool = pool
42 | self.act = act
43 | self.pool_order = pool_order
44 | self.dropout_rate = dropout_rate
45 |
46 |         ##Determine gc dimension
47 | assert gc_count > 0, "Need at least 1 GC layer"
48 | if pre_fc_count == 0:
49 | gc_dim = data.num_features
50 | else:
51 | gc_dim = dim1
52 | ##Determine post_fc dimension
53 | if pre_fc_count == 0:
54 | post_fc_dim = data.num_features
55 | else:
56 | post_fc_dim = dim1
57 | ##Determine output dimension length
58 | if data[0].y.ndim == 0:
59 | output_dim = 1
60 | else:
61 | output_dim = len(data[0].y[0])
62 |
63 | ##Set up pre-GNN dense layers (NOTE: in v0.1 this is always set to 1 layer)
64 | if pre_fc_count > 0:
65 | self.pre_lin_list = torch.nn.ModuleList()
66 | for i in range(pre_fc_count):
67 | if i == 0:
68 | lin = torch.nn.Linear(data.num_features, dim1)
69 | self.pre_lin_list.append(lin)
70 | else:
71 | lin = torch.nn.Linear(dim1, dim1)
72 | self.pre_lin_list.append(lin)
73 | elif pre_fc_count == 0:
74 | self.pre_lin_list = torch.nn.ModuleList()
75 |
76 | ##Set up GNN layers
77 | self.conv_list = torch.nn.ModuleList()
78 | self.bn_list = torch.nn.ModuleList()
79 | for i in range(gc_count):
80 | conv = CGConv(
81 | gc_dim, data.num_edge_features, aggr="mean", batch_norm=False
82 | )
83 | self.conv_list.append(conv)
84 |             ##Setting track_running_stats to False can prevent some instabilities, but may cause val/test performance to vary with loader batch size
85 | if self.batch_norm == "True":
86 | bn = BatchNorm1d(gc_dim, track_running_stats=self.batch_track_stats)
87 | self.bn_list.append(bn)
88 |
89 | ##Set up post-GNN dense layers (NOTE: in v0.1 there was a minimum of 2 dense layers, and fc_count(now post_fc_count) added to this number. In the current version, the minimum is zero)
90 | if post_fc_count > 0:
91 | self.post_lin_list = torch.nn.ModuleList()
92 | for i in range(post_fc_count):
93 | if i == 0:
94 | ##Set2set pooling has doubled dimension
95 | if self.pool_order == "early" and self.pool == "set2set":
96 | lin = torch.nn.Linear(post_fc_dim * 2, dim2)
97 | else:
98 | lin = torch.nn.Linear(post_fc_dim, dim2)
99 | self.post_lin_list.append(lin)
100 | else:
101 | lin = torch.nn.Linear(dim2, dim2)
102 | self.post_lin_list.append(lin)
103 | self.lin_out = torch.nn.Linear(dim2, output_dim)
104 |
105 | elif post_fc_count == 0:
106 | self.post_lin_list = torch.nn.ModuleList()
107 | if self.pool_order == "early" and self.pool == "set2set":
108 | self.lin_out = torch.nn.Linear(post_fc_dim*2, output_dim)
109 | else:
110 | self.lin_out = torch.nn.Linear(post_fc_dim, output_dim)
111 |
112 | ##Set up set2set pooling (if used)
113 |         ##Should processing_steps be a hyperparameter?
114 | if self.pool_order == "early" and self.pool == "set2set":
115 | self.set2set = Set2Set(post_fc_dim, processing_steps=3)
116 | elif self.pool_order == "late" and self.pool == "set2set":
117 | self.set2set = Set2Set(output_dim, processing_steps=3, num_layers=1)
118 |             # workaround for the doubled dimension from set2set; set2set is not recommended with late pooling
119 | self.lin_out_2 = torch.nn.Linear(output_dim * 2, output_dim)
120 |
121 | def forward(self, data):
122 |
123 | ##Pre-GNN dense layers
124 | for i in range(0, len(self.pre_lin_list)):
125 | if i == 0:
126 | out = self.pre_lin_list[i](data.x)
127 | out = getattr(F, self.act)(out)
128 | else:
129 | out = self.pre_lin_list[i](out)
130 | out = getattr(F, self.act)(out)
131 |
132 | ##GNN layers
133 | for i in range(0, len(self.conv_list)):
134 | if len(self.pre_lin_list) == 0 and i == 0:
135 | if self.batch_norm == "True":
136 | out = self.conv_list[i](data.x, data.edge_index, data.edge_attr)
137 | out = self.bn_list[i](out)
138 | else:
139 | out = self.conv_list[i](data.x, data.edge_index, data.edge_attr)
140 | else:
141 | if self.batch_norm == "True":
142 | out = self.conv_list[i](out, data.edge_index, data.edge_attr)
143 | out = self.bn_list[i](out)
144 | else:
145 | out = self.conv_list[i](out, data.edge_index, data.edge_attr)
146 | #out = getattr(F, self.act)(out)
147 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
148 |
149 | ##Post-GNN dense layers
150 | if self.pool_order == "early":
151 | if self.pool == "set2set":
152 | out = self.set2set(out, data.batch)
153 | else:
154 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
155 | for i in range(0, len(self.post_lin_list)):
156 | out = self.post_lin_list[i](out)
157 | out = getattr(F, self.act)(out)
158 | out = self.lin_out(out)
159 |
160 | elif self.pool_order == "late":
161 | for i in range(0, len(self.post_lin_list)):
162 | out = self.post_lin_list[i](out)
163 | out = getattr(F, self.act)(out)
164 | out = self.lin_out(out)
165 | if self.pool == "set2set":
166 | out = self.set2set(out, data.batch)
167 | out = self.lin_out_2(out)
168 | else:
169 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
170 |
171 | if out.shape[1] == 1:
172 | return out.view(-1)
173 | else:
174 | return out
175 |
--------------------------------------------------------------------------------
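A minimal smoke test for CGCNN above, assuming toy feature sizes (12 node features, 4 edge features) and mostly default hyperparameters. A PyTorch-Geometric `Batch` stands in for MatDeepLearn's processed dataset here, since both expose `num_features`, `num_edge_features`, and integer indexing; in normal use the dataset comes from `matdeeplearn.process`.

```python
import torch
from torch_geometric.data import Data, Batch
from matdeeplearn.models.cgcnn import CGCNN

def toy_graph(n_nodes=4):
    # bidirectional ring so every node has incident edges
    src = torch.arange(n_nodes)
    dst = (src + 1) % n_nodes
    edge_index = torch.stack([torch.cat([src, dst]), torch.cat([dst, src])])
    return Data(
        x=torch.randn(n_nodes, 12),                    # node features
        edge_index=edge_index,
        edge_attr=torch.randn(edge_index.size(1), 4),  # edge features
        y=torch.randn(1, 1),                           # one scalar target per graph
    )

batch = Batch.from_data_list([toy_graph(4), toy_graph(6)])
model = CGCNN(batch, dim1=32, dim2=32, gc_count=2)     # the batch stands in for the dataset
print(model(batch).shape)                              # -> torch.Size([2])
```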
/matdeeplearn/models/descriptor_nn.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | from torch import Tensor
4 | from torch.nn import (
5 | Sequential,
6 | Linear,
7 | ReLU,
8 | GRU,
9 | Embedding,
10 | BatchNorm1d,
11 | Dropout,
12 | LayerNorm,
13 | )
14 | from torch_geometric.nn import (
15 | Set2Set,
16 | global_mean_pool,
17 | global_add_pool,
18 | global_max_pool,
19 | )
20 |
21 |
22 | # Sine matrix with neural network
23 | class SM(torch.nn.Module):
24 | def __init__(self, data, dim1=64, fc_count=1, **kwargs):
25 | super(SM, self).__init__()
26 |
27 | self.lin1 = torch.nn.Linear(data[0].extra_features_SM.shape[1], dim1)
28 |
29 | self.lin_list = torch.nn.ModuleList(
30 | [torch.nn.Linear(dim1, dim1) for i in range(fc_count)]
31 | )
32 |
33 | self.lin2 = torch.nn.Linear(dim1, 1)
34 |
35 | def forward(self, data):
36 |
37 | out = F.relu(self.lin1(data.extra_features_SM))
38 | for layer in self.lin_list:
39 | out = F.relu(layer(out))
40 | out = self.lin2(out)
41 | if out.shape[1] == 1:
42 | return out.view(-1)
43 | else:
44 | return out
45 |
46 |
47 | # Smooth Overlap of Atomic Positions with neural network
48 | class SOAP(torch.nn.Module):
49 | def __init__(self, data, dim1, fc_count, **kwargs):
50 | super(SOAP, self).__init__()
51 |
52 | self.lin1 = torch.nn.Linear(data[0].extra_features_SOAP.shape[1], dim1)
53 |
54 | self.lin_list = torch.nn.ModuleList(
55 | [torch.nn.Linear(dim1, dim1) for i in range(fc_count)]
56 | )
57 |
58 | self.lin2 = torch.nn.Linear(dim1, 1)
59 |
60 | def forward(self, data):
61 |
62 | out = F.relu(self.lin1(data.extra_features_SOAP))
63 | for layer in self.lin_list:
64 | out = F.relu(layer(out))
65 | out = self.lin2(out)
66 | if out.shape[1] == 1:
67 | return out.view(-1)
68 | else:
69 | return out
70 |
--------------------------------------------------------------------------------
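The descriptor networks above ignore the graph entirely and regress from a precomputed per-structure feature matrix (`extra_features_SM` or `extra_features_SOAP`), which MatDeepLearn's processing step normally attaches. A minimal sketch, with a toy 30-dimensional random descriptor as a stand-in:

```python
import torch
from torch_geometric.data import Data, Batch
from matdeeplearn.models.descriptor_nn import SM

# num_nodes=1 is a placeholder; SM only reads the per-structure descriptor
structures = [
    Data(extra_features_SM=torch.randn(1, 30), num_nodes=1) for _ in range(4)
]
batch = Batch.from_data_list(structures)

model = SM(batch, dim1=64, fc_count=2)
print(model(batch).shape)                  # -> torch.Size([4])
```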
/matdeeplearn/models/gcn.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import Tensor
3 | import torch.nn.functional as F
4 | from torch.nn import Sequential, Linear, BatchNorm1d
5 | import torch_geometric
6 | from torch_geometric.nn import (
7 | Set2Set,
8 | global_mean_pool,
9 | global_add_pool,
10 | global_max_pool,
11 | GCNConv,
12 | )
13 | from torch_scatter import scatter_mean, scatter_add, scatter_max, scatter
14 |
15 |
16 | # GCN
17 | class GCN(torch.nn.Module):
18 | def __init__(
19 | self,
20 | data,
21 | dim1=64,
22 | dim2=64,
23 | pre_fc_count=1,
24 | gc_count=3,
25 | post_fc_count=1,
26 | pool="global_mean_pool",
27 | pool_order="early",
28 | batch_norm="True",
29 | batch_track_stats="True",
30 | act="relu",
31 | dropout_rate=0.0,
32 | **kwargs
33 | ):
34 | super(GCN, self).__init__()
35 |
36 | if batch_track_stats == "False":
37 | self.batch_track_stats = False
38 | else:
39 | self.batch_track_stats = True
40 | self.batch_norm = batch_norm
41 | self.pool = pool
42 | self.act = act
43 | self.pool_order = pool_order
44 | self.dropout_rate = dropout_rate
45 |
46 |         ##Determine gc dimension
47 | assert gc_count > 0, "Need at least 1 GC layer"
48 | if pre_fc_count == 0:
49 | gc_dim = data.num_features
50 | else:
51 | gc_dim = dim1
52 | ##Determine post_fc dimension
53 | if pre_fc_count == 0:
54 | post_fc_dim = data.num_features
55 | else:
56 | post_fc_dim = dim1
57 | ##Determine output dimension length
58 | if data[0].y.ndim == 0:
59 | output_dim = 1
60 | else:
61 | output_dim = len(data[0].y[0])
62 |
63 | ##Set up pre-GNN dense layers (NOTE: in v0.1 this is always set to 1 layer)
64 | if pre_fc_count > 0:
65 | self.pre_lin_list = torch.nn.ModuleList()
66 | for i in range(pre_fc_count):
67 | if i == 0:
68 | lin = torch.nn.Linear(data.num_features, dim1)
69 | self.pre_lin_list.append(lin)
70 | else:
71 | lin = torch.nn.Linear(dim1, dim1)
72 | self.pre_lin_list.append(lin)
73 | elif pre_fc_count == 0:
74 | self.pre_lin_list = torch.nn.ModuleList()
75 |
76 | ##Set up GNN layers
77 | self.conv_list = torch.nn.ModuleList()
78 | self.bn_list = torch.nn.ModuleList()
79 | for i in range(gc_count):
80 | conv = GCNConv(
81 | gc_dim, gc_dim, improved=True, add_self_loops=False
82 | )
83 | self.conv_list.append(conv)
84 |             ##Setting track_running_stats to False can prevent some instabilities, but may cause val/test performance to vary with loader batch size
85 | if self.batch_norm == "True":
86 | bn = BatchNorm1d(gc_dim, track_running_stats=self.batch_track_stats)
87 | self.bn_list.append(bn)
88 |
89 | ##Set up post-GNN dense layers (NOTE: in v0.1 there was a minimum of 2 dense layers, and fc_count(now post_fc_count) added to this number. In the current version, the minimum is zero)
90 | if post_fc_count > 0:
91 | self.post_lin_list = torch.nn.ModuleList()
92 | for i in range(post_fc_count):
93 | if i == 0:
94 | ##Set2set pooling has doubled dimension
95 | if self.pool_order == "early" and self.pool == "set2set":
96 | lin = torch.nn.Linear(post_fc_dim * 2, dim2)
97 | else:
98 | lin = torch.nn.Linear(post_fc_dim, dim2)
99 | self.post_lin_list.append(lin)
100 | else:
101 | lin = torch.nn.Linear(dim2, dim2)
102 | self.post_lin_list.append(lin)
103 | self.lin_out = torch.nn.Linear(dim2, output_dim)
104 |
105 | elif post_fc_count == 0:
106 | self.post_lin_list = torch.nn.ModuleList()
107 | if self.pool_order == "early" and self.pool == "set2set":
108 | self.lin_out = torch.nn.Linear(post_fc_dim*2, output_dim)
109 | else:
110 | self.lin_out = torch.nn.Linear(post_fc_dim, output_dim)
111 |
112 | ##Set up set2set pooling (if used)
113 | if self.pool_order == "early" and self.pool == "set2set":
114 | self.set2set = Set2Set(post_fc_dim, processing_steps=3)
115 | elif self.pool_order == "late" and self.pool == "set2set":
116 | self.set2set = Set2Set(output_dim, processing_steps=3, num_layers=1)
117 |             # workaround for the doubled dimension from set2set; set2set is not recommended with late pooling
118 | self.lin_out_2 = torch.nn.Linear(output_dim * 2, output_dim)
119 |
120 | def forward(self, data):
121 |
122 | ##Pre-GNN dense layers
123 | for i in range(0, len(self.pre_lin_list)):
124 | if i == 0:
125 | out = self.pre_lin_list[i](data.x)
126 | out = getattr(F, self.act)(out)
127 | else:
128 | out = self.pre_lin_list[i](out)
129 | out = getattr(F, self.act)(out)
130 |
131 | ##GNN layers
132 | for i in range(0, len(self.conv_list)):
133 | if len(self.pre_lin_list) == 0 and i == 0:
134 | if self.batch_norm == "True":
135 | out = self.conv_list[i](data.x, data.edge_index, data.edge_weight)
136 | out = self.bn_list[i](out)
137 | else:
138 | out = self.conv_list[i](data.x, data.edge_index, data.edge_weight)
139 | else:
140 | if self.batch_norm == "True":
141 | out = self.conv_list[i](out, data.edge_index, data.edge_weight)
142 | out = self.bn_list[i](out)
143 | else:
144 | out = self.conv_list[i](out, data.edge_index, data.edge_weight)
145 | out = getattr(F, self.act)(out)
146 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
147 |
148 | ##Post-GNN dense layers
149 | if self.pool_order == "early":
150 | if self.pool == "set2set":
151 | out = self.set2set(out, data.batch)
152 | else:
153 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
154 | for i in range(0, len(self.post_lin_list)):
155 | out = self.post_lin_list[i](out)
156 | out = getattr(F, self.act)(out)
157 | out = self.lin_out(out)
158 |
159 | elif self.pool_order == "late":
160 | for i in range(0, len(self.post_lin_list)):
161 | out = self.post_lin_list[i](out)
162 | out = getattr(F, self.act)(out)
163 | out = self.lin_out(out)
164 | if self.pool == "set2set":
165 | out = self.set2set(out, data.batch)
166 | out = self.lin_out_2(out)
167 | else:
168 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
169 |
170 | if out.shape[1] == 1:
171 | return out.view(-1)
172 | else:
173 | return out
--------------------------------------------------------------------------------
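One point worth noting about GCN above: unlike CGCNN, its `GCNConv` layers consume `data.edge_weight` (one scalar per edge) rather than the full `data.edge_attr` vector. A sketch of a single weighted convolution with toy sizes:

```python
import torch
from torch_geometric.nn import GCNConv

conv = GCNConv(64, 64, improved=True, add_self_loops=False)

x = torch.randn(10, 64)                     # node embeddings
edge_index = torch.randint(0, 10, (2, 30))  # toy connectivity
edge_weight = torch.rand(30)                # one scalar weight per edge

out = torch.relu(conv(x, edge_index, edge_weight))
print(out.shape)                            # -> torch.Size([10, 64])
```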
/matdeeplearn/models/megnet.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import Tensor
3 | import torch.nn.functional as F
4 | from torch.nn import Sequential, Linear, ReLU, BatchNorm1d
5 | import torch_geometric
6 | from torch_geometric.nn import (
7 | Set2Set,
8 | global_mean_pool,
9 | global_add_pool,
10 | global_max_pool,
11 | MetaLayer,
12 | )
13 | from torch_scatter import scatter_mean, scatter_add, scatter_max, scatter
14 |
15 | # Megnet
16 | class Megnet_EdgeModel(torch.nn.Module):
17 | def __init__(self, dim, act, batch_norm, batch_track_stats, dropout_rate, fc_layers=2):
18 | super(Megnet_EdgeModel, self).__init__()
19 | self.act=act
20 | self.fc_layers = fc_layers
21 | if batch_track_stats == "False":
22 | self.batch_track_stats = False
23 | else:
24 | self.batch_track_stats = True
25 | self.batch_norm = batch_norm
26 | self.dropout_rate = dropout_rate
27 |
28 | self.edge_mlp = torch.nn.ModuleList()
29 | self.bn_list = torch.nn.ModuleList()
30 | for i in range(self.fc_layers + 1):
31 | if i == 0:
32 | lin = torch.nn.Linear(dim * 4, dim)
33 | self.edge_mlp.append(lin)
34 | else:
35 | lin = torch.nn.Linear(dim, dim)
36 | self.edge_mlp.append(lin)
37 | if self.batch_norm == "True":
38 | bn = BatchNorm1d(dim, track_running_stats=self.batch_track_stats)
39 | self.bn_list.append(bn)
40 |
41 | def forward(self, src, dest, edge_attr, u, batch):
42 | comb = torch.cat([src, dest, edge_attr, u[batch]], dim=1)
43 | for i in range(0, len(self.edge_mlp)):
44 | if i == 0:
45 | out = self.edge_mlp[i](comb)
46 | out = getattr(F, self.act)(out)
47 | if self.batch_norm == "True":
48 | out = self.bn_list[i](out)
49 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
50 | else:
51 | out = self.edge_mlp[i](out)
52 | out = getattr(F, self.act)(out)
53 | if self.batch_norm == "True":
54 | out = self.bn_list[i](out)
55 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
56 | return out
57 |
58 |
59 | class Megnet_NodeModel(torch.nn.Module):
60 | def __init__(self, dim, act, batch_norm, batch_track_stats, dropout_rate, fc_layers=2):
61 | super(Megnet_NodeModel, self).__init__()
62 | self.act=act
63 | self.fc_layers = fc_layers
64 | if batch_track_stats == "False":
65 | self.batch_track_stats = False
66 | else:
67 | self.batch_track_stats = True
68 | self.batch_norm = batch_norm
69 | self.dropout_rate = dropout_rate
70 |
71 | self.node_mlp = torch.nn.ModuleList()
72 | self.bn_list = torch.nn.ModuleList()
73 | for i in range(self.fc_layers + 1):
74 | if i == 0:
75 | lin = torch.nn.Linear(dim * 3, dim)
76 | self.node_mlp.append(lin)
77 | else:
78 | lin = torch.nn.Linear(dim, dim)
79 | self.node_mlp.append(lin)
80 | if self.batch_norm == "True":
81 | bn = BatchNorm1d(dim, track_running_stats=self.batch_track_stats)
82 | self.bn_list.append(bn)
83 |
84 | def forward(self, x, edge_index, edge_attr, u, batch):
85 | # row, col = edge_index
86 | v_e = scatter_mean(edge_attr, edge_index[0, :], dim=0)
87 | comb = torch.cat([x, v_e, u[batch]], dim=1)
88 | for i in range(0, len(self.node_mlp)):
89 | if i == 0:
90 | out = self.node_mlp[i](comb)
91 | out = getattr(F, self.act)(out)
92 | if self.batch_norm == "True":
93 | out = self.bn_list[i](out)
94 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
95 | else:
96 | out = self.node_mlp[i](out)
97 | out = getattr(F, self.act)(out)
98 | if self.batch_norm == "True":
99 | out = self.bn_list[i](out)
100 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
101 | return out
102 |
103 |
104 | class Megnet_GlobalModel(torch.nn.Module):
105 | def __init__(self, dim, act, batch_norm, batch_track_stats, dropout_rate, fc_layers=2):
106 | super(Megnet_GlobalModel, self).__init__()
107 | self.act=act
108 | self.fc_layers = fc_layers
109 | if batch_track_stats == "False":
110 | self.batch_track_stats = False
111 | else:
112 | self.batch_track_stats = True
113 | self.batch_norm = batch_norm
114 | self.dropout_rate = dropout_rate
115 |
116 | self.global_mlp = torch.nn.ModuleList()
117 | self.bn_list = torch.nn.ModuleList()
118 | for i in range(self.fc_layers + 1):
119 | if i == 0:
120 | lin = torch.nn.Linear(dim * 3, dim)
121 | self.global_mlp.append(lin)
122 | else:
123 | lin = torch.nn.Linear(dim, dim)
124 | self.global_mlp.append(lin)
125 | if self.batch_norm == "True":
126 | bn = BatchNorm1d(dim, track_running_stats=self.batch_track_stats)
127 | self.bn_list.append(bn)
128 |
129 | def forward(self, x, edge_index, edge_attr, u, batch):
130 | u_e = scatter_mean(edge_attr, edge_index[0, :], dim=0)
131 | u_e = scatter_mean(u_e, batch, dim=0)
132 | u_v = scatter_mean(x, batch, dim=0)
133 | comb = torch.cat([u_e, u_v, u], dim=1)
134 | for i in range(0, len(self.global_mlp)):
135 | if i == 0:
136 | out = self.global_mlp[i](comb)
137 | out = getattr(F, self.act)(out)
138 | if self.batch_norm == "True":
139 | out = self.bn_list[i](out)
140 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
141 | else:
142 | out = self.global_mlp[i](out)
143 | out = getattr(F, self.act)(out)
144 | if self.batch_norm == "True":
145 | out = self.bn_list[i](out)
146 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
147 | return out
148 |
149 |
150 | class MEGNet(torch.nn.Module):
151 | def __init__(
152 | self,
153 | data,
154 | dim1=64,
155 | dim2=64,
156 | dim3=64,
157 | pre_fc_count=1,
158 | gc_count=3,
159 | gc_fc_count=2,
160 | post_fc_count=1,
161 | pool="global_mean_pool",
162 | pool_order="early",
163 | batch_norm="True",
164 | batch_track_stats="True",
165 | act="relu",
166 | dropout_rate=0.0,
167 | **kwargs
168 | ):
169 | super(MEGNet, self).__init__()
170 |
171 | if batch_track_stats == "False":
172 | self.batch_track_stats = False
173 | else:
174 | self.batch_track_stats = True
175 | self.batch_norm = batch_norm
176 | self.pool = pool
177 | if pool == "global_mean_pool":
178 | self.pool_reduce="mean"
179 | elif pool== "global_max_pool":
180 | self.pool_reduce="max"
181 |         elif pool== "global_add_pool" or pool== "global_sum_pool":
182 | self.pool_reduce="sum"
183 | self.act = act
184 | self.pool_order = pool_order
185 | self.dropout_rate = dropout_rate
186 |
187 |         ##Determine gc dimension
188 | assert gc_count > 0, "Need at least 1 GC layer"
189 | if pre_fc_count == 0:
190 | gc_dim = data.num_features
191 | else:
192 | gc_dim = dim1
193 | ##Determine post_fc dimension
194 | post_fc_dim = dim3
195 | ##Determine output dimension length
196 | if data[0].y.ndim == 0:
197 | output_dim = 1
198 | else:
199 | output_dim = len(data[0].y[0])
200 |
201 | ##Set up pre-GNN dense layers (NOTE: in v0.1 this is always set to 1 layer)
202 | if pre_fc_count > 0:
203 | self.pre_lin_list = torch.nn.ModuleList()
204 | for i in range(pre_fc_count):
205 | if i == 0:
206 | lin = torch.nn.Linear(data.num_features, dim1)
207 | self.pre_lin_list.append(lin)
208 | else:
209 | lin = torch.nn.Linear(dim1, dim1)
210 | self.pre_lin_list.append(lin)
211 | elif pre_fc_count == 0:
212 | self.pre_lin_list = torch.nn.ModuleList()
213 |
214 | ##Set up GNN layers
215 | self.e_embed_list = torch.nn.ModuleList()
216 | self.x_embed_list = torch.nn.ModuleList()
217 | self.u_embed_list = torch.nn.ModuleList()
218 | self.conv_list = torch.nn.ModuleList()
219 | self.bn_list = torch.nn.ModuleList()
220 | for i in range(gc_count):
221 | if i == 0:
222 | e_embed = Sequential(
223 | Linear(data.num_edge_features, dim3), ReLU(), Linear(dim3, dim3), ReLU()
224 | )
225 | x_embed = Sequential(
226 | Linear(gc_dim, dim3), ReLU(), Linear(dim3, dim3), ReLU()
227 | )
228 | u_embed = Sequential(
229 | Linear((data[0].u.shape[1]), dim3), ReLU(), Linear(dim3, dim3), ReLU()
230 | )
231 | self.e_embed_list.append(e_embed)
232 | self.x_embed_list.append(x_embed)
233 | self.u_embed_list.append(u_embed)
234 | self.conv_list.append(
235 | MetaLayer(
236 | Megnet_EdgeModel(dim3, self.act, self.batch_norm, self.batch_track_stats, self.dropout_rate, gc_fc_count),
237 | Megnet_NodeModel(dim3, self.act, self.batch_norm, self.batch_track_stats, self.dropout_rate, gc_fc_count),
238 | Megnet_GlobalModel(dim3, self.act, self.batch_norm, self.batch_track_stats, self.dropout_rate, gc_fc_count),
239 | )
240 | )
241 | elif i > 0:
242 | e_embed = Sequential(Linear(dim3, dim3), ReLU(), Linear(dim3, dim3), ReLU())
243 | x_embed = Sequential(Linear(dim3, dim3), ReLU(), Linear(dim3, dim3), ReLU())
244 | u_embed = Sequential(Linear(dim3, dim3), ReLU(), Linear(dim3, dim3), ReLU())
245 | self.e_embed_list.append(e_embed)
246 | self.x_embed_list.append(x_embed)
247 | self.u_embed_list.append(u_embed)
248 | self.conv_list.append(
249 | MetaLayer(
250 | Megnet_EdgeModel(dim3, self.act, self.batch_norm, self.batch_track_stats, self.dropout_rate, gc_fc_count),
251 | Megnet_NodeModel(dim3, self.act, self.batch_norm, self.batch_track_stats, self.dropout_rate, gc_fc_count),
252 | Megnet_GlobalModel(dim3, self.act, self.batch_norm, self.batch_track_stats, self.dropout_rate, gc_fc_count),
253 | )
254 | )
255 |
256 | ##Set up post-GNN dense layers (NOTE: in v0.1 there was a minimum of 2 dense layers, and fc_count(now post_fc_count) added to this number. In the current version, the minimum is zero)
257 | if post_fc_count > 0:
258 | self.post_lin_list = torch.nn.ModuleList()
259 | for i in range(post_fc_count):
260 | if i == 0:
261 | ##Set2set pooling has doubled dimension
262 | if self.pool_order == "early" and self.pool == "set2set":
263 | lin = torch.nn.Linear(post_fc_dim * 5, dim2)
264 | elif self.pool_order == "early" and self.pool != "set2set":
265 | lin = torch.nn.Linear(post_fc_dim * 3, dim2)
266 | elif self.pool_order == "late":
267 | lin = torch.nn.Linear(post_fc_dim, dim2)
268 | self.post_lin_list.append(lin)
269 | else:
270 | lin = torch.nn.Linear(dim2, dim2)
271 | self.post_lin_list.append(lin)
272 | self.lin_out = torch.nn.Linear(dim2, output_dim)
273 |
274 | elif post_fc_count == 0:
275 | self.post_lin_list = torch.nn.ModuleList()
276 | if self.pool_order == "early" and self.pool == "set2set":
277 | self.lin_out = torch.nn.Linear(post_fc_dim * 5, output_dim)
278 | elif self.pool_order == "early" and self.pool != "set2set":
279 | self.lin_out = torch.nn.Linear(post_fc_dim * 3, output_dim)
280 | else:
281 | self.lin_out = torch.nn.Linear(post_fc_dim, output_dim)
282 |
283 | ##Set up set2set pooling (if used)
284 | if self.pool_order == "early" and self.pool == "set2set":
285 | self.set2set_x = Set2Set(post_fc_dim, processing_steps=3)
286 | self.set2set_e = Set2Set(post_fc_dim, processing_steps=3)
287 | elif self.pool_order == "late" and self.pool == "set2set":
288 | self.set2set_x = Set2Set(output_dim, processing_steps=3, num_layers=1)
289 |             # workaround for the doubled dimension from set2set; set2set is not recommended with late pooling
290 | self.lin_out_2 = torch.nn.Linear(output_dim * 2, output_dim)
291 |
292 | def forward(self, data):
293 |
294 | ##Pre-GNN dense layers
295 | for i in range(0, len(self.pre_lin_list)):
296 | if i == 0:
297 | out = self.pre_lin_list[i](data.x)
298 | out = getattr(F, self.act)(out)
299 | else:
300 | out = self.pre_lin_list[i](out)
301 | out = getattr(F, self.act)(out)
302 |
303 | ##GNN layers
304 | for i in range(0, len(self.conv_list)):
305 | if i == 0:
306 | if len(self.pre_lin_list) == 0:
307 | e_temp = self.e_embed_list[i](data.edge_attr)
308 | x_temp = self.x_embed_list[i](data.x)
309 | u_temp = self.u_embed_list[i](data.u)
310 | x_out, e_out, u_out = self.conv_list[i](
311 | x_temp, data.edge_index, e_temp, u_temp, data.batch
312 | )
313 | x = torch.add(x_out, x_temp)
314 | e = torch.add(e_out, e_temp)
315 | u = torch.add(u_out, u_temp)
316 | else:
317 | e_temp = self.e_embed_list[i](data.edge_attr)
318 | x_temp = self.x_embed_list[i](out)
319 | u_temp = self.u_embed_list[i](data.u)
320 | x_out, e_out, u_out = self.conv_list[i](
321 | x_temp, data.edge_index, e_temp, u_temp, data.batch
322 | )
323 | x = torch.add(x_out, x_temp)
324 | e = torch.add(e_out, e_temp)
325 | u = torch.add(u_out, u_temp)
326 |
327 | elif i > 0:
328 | e_temp = self.e_embed_list[i](e)
329 | x_temp = self.x_embed_list[i](x)
330 | u_temp = self.u_embed_list[i](u)
331 | x_out, e_out, u_out = self.conv_list[i](
332 | x_temp, data.edge_index, e_temp, u_temp, data.batch
333 | )
334 | x = torch.add(x_out, x)
335 | e = torch.add(e_out, e)
336 | u = torch.add(u_out, u)
337 |
338 | ##Post-GNN dense layers
339 | if self.pool_order == "early":
340 | if self.pool == "set2set":
341 | x_pool = self.set2set_x(x, data.batch)
342 | e = scatter(e, data.edge_index[0, :], dim=0, reduce="mean")
343 | e_pool = self.set2set_e(e, data.batch)
344 | out = torch.cat([x_pool, e_pool, u], dim=1)
345 | else:
346 | x_pool = scatter(x, data.batch, dim=0, reduce=self.pool_reduce)
347 | e_pool = scatter(e, data.edge_index[0, :], dim=0, reduce=self.pool_reduce)
348 | e_pool = scatter(e_pool, data.batch, dim=0, reduce=self.pool_reduce)
349 | out = torch.cat([x_pool, e_pool, u], dim=1)
350 | for i in range(0, len(self.post_lin_list)):
351 | out = self.post_lin_list[i](out)
352 | out = getattr(F, self.act)(out)
353 | out = self.lin_out(out)
354 |
355 | ##currently only uses node features for late pooling
356 | elif self.pool_order == "late":
357 | out = x
358 | for i in range(0, len(self.post_lin_list)):
359 | out = self.post_lin_list[i](out)
360 | out = getattr(F, self.act)(out)
361 | out = self.lin_out(out)
362 | if self.pool == "set2set":
363 | out = self.set2set_x(out, data.batch)
364 | out = self.lin_out_2(out)
365 | else:
366 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
367 |
368 | if out.shape[1] == 1:
369 | return out.view(-1)
370 | else:
371 | return out
372 |
--------------------------------------------------------------------------------
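A minimal smoke test for MEGNet above, with toy sizes. Two things differ from the other models: it additionally requires a per-graph global-state vector `data.u` of shape `[1, u_dim]` (normally produced by the processing step), and because the node and global updates scatter edge features over `edge_index[0]`, every node should appear as an edge source; the bidirectional ring graph below guarantees that.

```python
import torch
from torch_geometric.data import Data, Batch
from matdeeplearn.models.megnet import MEGNet

def ring_graph(n_nodes=4, u_dim=3):
    src = torch.arange(n_nodes)
    dst = (src + 1) % n_nodes
    edge_index = torch.stack([torch.cat([src, dst]), torch.cat([dst, src])])
    n_edges = edge_index.size(1)
    return Data(
        x=torch.randn(n_nodes, 12),          # node features
        edge_index=edge_index,               # every node appears as a source
        edge_attr=torch.randn(n_edges, 4),   # edge features
        u=torch.randn(1, u_dim),             # per-graph global state
        y=torch.randn(1, 1),                 # one scalar target per graph
    )

batch = Batch.from_data_list([ring_graph(4), ring_graph(6)])
model = MEGNet(batch, gc_count=2)
print(model(batch).shape)                    # -> torch.Size([2])
```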
/matdeeplearn/models/mpnn.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import Tensor
3 | import torch.nn.functional as F
4 | from torch.nn import Sequential, Linear, ReLU, BatchNorm1d, GRU
5 | import torch_geometric
6 | from torch_geometric.nn import (
7 | Set2Set,
8 | global_mean_pool,
9 | global_add_pool,
10 | global_max_pool,
11 | NNConv,
12 | )
13 | from torch_scatter import scatter_mean, scatter_add, scatter_max, scatter
14 |
15 |
16 | # MPNN
17 | class MPNN(torch.nn.Module):
18 | def __init__(
19 | self,
20 | data,
21 | dim1=64,
22 | dim2=64,
23 | dim3=64,
24 | pre_fc_count=1,
25 | gc_count=3,
26 | post_fc_count=1,
27 | pool="global_mean_pool",
28 | pool_order="early",
29 | batch_norm="True",
30 | batch_track_stats="True",
31 | act="relu",
32 | dropout_rate=0.0,
33 | **kwargs
34 | ):
35 | super(MPNN, self).__init__()
36 |
37 |
38 | if batch_track_stats == "False":
39 | self.batch_track_stats = False
40 | else:
41 | self.batch_track_stats = True
42 | self.batch_norm = batch_norm
43 | self.pool = pool
44 | self.act = act
45 | self.pool_order = pool_order
46 | self.dropout_rate = dropout_rate
47 |
48 |         ##Determine gc dimension
49 | assert gc_count > 0, "Need at least 1 GC layer"
50 | if pre_fc_count == 0:
51 | gc_dim = data.num_features
52 | else:
53 | gc_dim = dim1
54 | ##Determine post_fc dimension
55 | if pre_fc_count == 0:
56 | post_fc_dim = data.num_features
57 | else:
58 | post_fc_dim = dim1
59 | ##Determine output dimension length
60 | if data[0].y.ndim == 0:
61 | output_dim = 1
62 | else:
63 | output_dim = len(data[0].y[0])
64 |
65 | ##Set up pre-GNN dense layers (NOTE: in v0.1 this is always set to 1 layer)
66 | if pre_fc_count > 0:
67 | self.pre_lin_list = torch.nn.ModuleList()
68 | for i in range(pre_fc_count):
69 | if i == 0:
70 | lin = torch.nn.Linear(data.num_features, dim1)
71 | self.pre_lin_list.append(lin)
72 | else:
73 | lin = torch.nn.Linear(dim1, dim1)
74 | self.pre_lin_list.append(lin)
75 | elif pre_fc_count == 0:
76 | self.pre_lin_list = torch.nn.ModuleList()
77 |
78 | ##Set up GNN layers
79 | self.conv_list = torch.nn.ModuleList()
80 | self.gru_list = torch.nn.ModuleList()
81 | self.bn_list = torch.nn.ModuleList()
82 | for i in range(gc_count):
83 | nn = Sequential(
84 | Linear(data.num_edge_features, dim3), ReLU(), Linear(dim3, gc_dim * gc_dim)
85 | )
86 | conv = NNConv(
87 | gc_dim, gc_dim, nn, aggr="mean"
88 | )
89 | self.conv_list.append(conv)
90 | gru = GRU(gc_dim, gc_dim)
91 | self.gru_list.append(gru)
92 |
93 |             ##Setting track_running_stats to False can prevent some instabilities, but may cause val/test performance to vary with loader batch size
94 | if self.batch_norm == "True":
95 | bn = BatchNorm1d(gc_dim, track_running_stats=self.batch_track_stats)
96 | self.bn_list.append(bn)
97 |
98 | ##Set up post-GNN dense layers (NOTE: in v0.1 there was a minimum of 2 dense layers, and fc_count(now post_fc_count) added to this number. In the current version, the minimum is zero)
99 | if post_fc_count > 0:
100 | self.post_lin_list = torch.nn.ModuleList()
101 | for i in range(post_fc_count):
102 | if i == 0:
103 | ##Set2set pooling has doubled dimension
104 | if self.pool_order == "early" and self.pool == "set2set":
105 | lin = torch.nn.Linear(post_fc_dim * 2, dim2)
106 | else:
107 | lin = torch.nn.Linear(post_fc_dim, dim2)
108 | self.post_lin_list.append(lin)
109 | else:
110 | lin = torch.nn.Linear(dim2, dim2)
111 | self.post_lin_list.append(lin)
112 | self.lin_out = torch.nn.Linear(dim2, output_dim)
113 |
114 | elif post_fc_count == 0:
115 | self.post_lin_list = torch.nn.ModuleList()
116 | if self.pool_order == "early" and self.pool == "set2set":
117 | self.lin_out = torch.nn.Linear(post_fc_dim*2, output_dim)
118 | else:
119 | self.lin_out = torch.nn.Linear(post_fc_dim, output_dim)
120 |
121 | ##Set up set2set pooling (if used)
122 | if self.pool_order == "early" and self.pool == "set2set":
123 | self.set2set = Set2Set(post_fc_dim, processing_steps=3)
124 | elif self.pool_order == "late" and self.pool == "set2set":
125 | self.set2set = Set2Set(output_dim, processing_steps=3, num_layers=1)
126 |             # workaround for the doubled dimension from set2set; set2set is not recommended with late pooling
127 | self.lin_out_2 = torch.nn.Linear(output_dim * 2, output_dim)
128 |
129 | def forward(self, data):
130 |
131 | ##Pre-GNN dense layers
132 | for i in range(0, len(self.pre_lin_list)):
133 | if i == 0:
134 | out = self.pre_lin_list[i](data.x)
135 | out = getattr(F, self.act)(out)
136 | else:
137 | out = self.pre_lin_list[i](out)
138 | out = getattr(F, self.act)(out)
139 |
140 | ##GNN layers
141 | if len(self.pre_lin_list) == 0:
142 | h = data.x.unsqueeze(0)
143 | else:
144 | h = out.unsqueeze(0)
145 | for i in range(0, len(self.conv_list)):
146 | if len(self.pre_lin_list) == 0 and i == 0:
147 | if self.batch_norm == "True":
148 | m = self.conv_list[i](data.x, data.edge_index, data.edge_attr)
149 | m = self.bn_list[i](m)
150 | else:
151 | m = self.conv_list[i](data.x, data.edge_index, data.edge_attr)
152 | else:
153 | if self.batch_norm == "True":
154 | m = self.conv_list[i](out, data.edge_index, data.edge_attr)
155 | m = self.bn_list[i](m)
156 | else:
157 | m = self.conv_list[i](out, data.edge_index, data.edge_attr)
158 | m = getattr(F, self.act)(m)
159 | m = F.dropout(m, p=self.dropout_rate, training=self.training)
160 | out, h = self.gru_list[i](m.unsqueeze(0), h)
161 | out = out.squeeze(0)
162 |
163 | ##Post-GNN dense layers
164 | if self.pool_order == "early":
165 | if self.pool == "set2set":
166 | out = self.set2set(out, data.batch)
167 | else:
168 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
169 | for i in range(0, len(self.post_lin_list)):
170 | out = self.post_lin_list[i](out)
171 | out = getattr(F, self.act)(out)
172 | out = self.lin_out(out)
173 |
174 | elif self.pool_order == "late":
175 | for i in range(0, len(self.post_lin_list)):
176 | out = self.post_lin_list[i](out)
177 | out = getattr(F, self.act)(out)
178 | out = self.lin_out(out)
179 | if self.pool == "set2set":
180 | out = self.set2set(out, data.batch)
181 | out = self.lin_out_2(out)
182 | else:
183 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
184 |
185 | if out.shape[1] == 1:
186 | return out.view(-1)
187 | else:
188 | return out
189 |
--------------------------------------------------------------------------------
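A sketch of the core step MPNN above repeats: an edge network maps each edge feature vector to a `gc_dim x gc_dim` weight matrix for `NNConv`, and a GRU updates the node state from the aggregated messages. Toy sizes, and one unrolled step rather than the full model:

```python
import torch
from torch.nn import Sequential, Linear, ReLU, GRU
from torch_geometric.nn import NNConv

gc_dim, n_edge_feat = 64, 4
edge_nn = Sequential(Linear(n_edge_feat, 64), ReLU(), Linear(64, gc_dim * gc_dim))
conv = NNConv(gc_dim, gc_dim, edge_nn, aggr="mean")
gru = GRU(gc_dim, gc_dim)

x = torch.randn(10, gc_dim)                  # node embeddings
edge_index = torch.randint(0, 10, (2, 30))   # toy connectivity
edge_attr = torch.randn(30, n_edge_feat)     # edge features

h = x.unsqueeze(0)                           # GRU hidden state, shape [1, N, gc_dim]
m = torch.relu(conv(x, edge_index, edge_attr))
out, h = gru(m.unsqueeze(0), h)              # one message-passing + update step
print(out.squeeze(0).shape)                  # -> torch.Size([10, 64])
```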
/matdeeplearn/models/schnet.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from torch import Tensor
3 | import torch.nn.functional as F
4 | from torch.nn import Sequential, Linear, BatchNorm1d
5 | import torch_geometric
6 | from torch_geometric.nn import (
7 | Set2Set,
8 | global_mean_pool,
9 | global_add_pool,
10 | global_max_pool,
11 | )
12 | from torch_scatter import scatter_mean, scatter_add, scatter_max, scatter
13 | from torch_geometric.nn.models.schnet import InteractionBlock
14 |
15 | # Schnet
16 | class SchNet(torch.nn.Module):
17 | def __init__(
18 | self,
19 | data,
20 | dim1=64,
21 | dim2=64,
22 | dim3=64,
23 | cutoff=8,
24 | pre_fc_count=1,
25 | gc_count=3,
26 | post_fc_count=1,
27 | pool="global_mean_pool",
28 | pool_order="early",
29 | batch_norm="True",
30 | batch_track_stats="True",
31 | act="relu",
32 | dropout_rate=0.0,
33 | **kwargs
34 | ):
35 | super(SchNet, self).__init__()
36 |
37 | if batch_track_stats == "False":
38 | self.batch_track_stats = False
39 | else:
40 | self.batch_track_stats = True
41 | self.batch_norm = batch_norm
42 | self.pool = pool
43 | self.act = act
44 | self.pool_order = pool_order
45 | self.dropout_rate = dropout_rate
46 |
47 |         ##Determine gc dimension
48 | assert gc_count > 0, "Need at least 1 GC layer"
49 | if pre_fc_count == 0:
50 | gc_dim = data.num_features
51 | else:
52 | gc_dim = dim1
53 | ##Determine post_fc dimension
54 | if pre_fc_count == 0:
55 | post_fc_dim = data.num_features
56 | else:
57 | post_fc_dim = dim1
58 | ##Determine output dimension length
59 | if data[0].y.ndim == 0:
60 | output_dim = 1
61 | else:
62 | output_dim = len(data[0].y[0])
63 |
64 | ##Set up pre-GNN dense layers (NOTE: in v0.1 this is always set to 1 layer)
65 | if pre_fc_count > 0:
66 | self.pre_lin_list = torch.nn.ModuleList()
67 | for i in range(pre_fc_count):
68 | if i == 0:
69 | lin = torch.nn.Linear(data.num_features, dim1)
70 | self.pre_lin_list.append(lin)
71 | else:
72 | lin = torch.nn.Linear(dim1, dim1)
73 | self.pre_lin_list.append(lin)
74 | elif pre_fc_count == 0:
75 | self.pre_lin_list = torch.nn.ModuleList()
76 |
77 | ##Set up GNN layers
78 | self.conv_list = torch.nn.ModuleList()
79 | self.bn_list = torch.nn.ModuleList()
80 | for i in range(gc_count):
81 | conv = InteractionBlock(gc_dim, data.num_edge_features, dim3, cutoff)
82 | self.conv_list.append(conv)
83 |             ##Setting track_running_stats to False can prevent some instabilities, but may cause val/test performance to vary with loader batch size
84 | if self.batch_norm == "True":
85 | bn = BatchNorm1d(gc_dim, track_running_stats=self.batch_track_stats)
86 | self.bn_list.append(bn)
87 |
88 | ##Set up post-GNN dense layers (NOTE: in v0.1 there was a minimum of 2 dense layers, and fc_count(now post_fc_count) added to this number. In the current version, the minimum is zero)
89 | if post_fc_count > 0:
90 | self.post_lin_list = torch.nn.ModuleList()
91 | for i in range(post_fc_count):
92 | if i == 0:
93 | ##Set2set pooling has doubled dimension
94 | if self.pool_order == "early" and self.pool == "set2set":
95 | lin = torch.nn.Linear(post_fc_dim * 2, dim2)
96 | else:
97 | lin = torch.nn.Linear(post_fc_dim, dim2)
98 | self.post_lin_list.append(lin)
99 | else:
100 | lin = torch.nn.Linear(dim2, dim2)
101 | self.post_lin_list.append(lin)
102 | self.lin_out = torch.nn.Linear(dim2, output_dim)
103 |
104 | elif post_fc_count == 0:
105 | self.post_lin_list = torch.nn.ModuleList()
106 | if self.pool_order == "early" and self.pool == "set2set":
107 | self.lin_out = torch.nn.Linear(post_fc_dim*2, output_dim)
108 | else:
109 | self.lin_out = torch.nn.Linear(post_fc_dim, output_dim)
110 |
111 | ##Set up set2set pooling (if used)
112 | if self.pool_order == "early" and self.pool == "set2set":
113 | self.set2set = Set2Set(post_fc_dim, processing_steps=3)
114 | elif self.pool_order == "late" and self.pool == "set2set":
115 | self.set2set = Set2Set(output_dim, processing_steps=3, num_layers=1)
116 |             # workaround for the doubled dimension from set2set; set2set is not recommended with late pooling
117 | self.lin_out_2 = torch.nn.Linear(output_dim * 2, output_dim)
118 |
119 | def forward(self, data):
120 |
121 | ##Pre-GNN dense layers
122 | for i in range(0, len(self.pre_lin_list)):
123 | if i == 0:
124 | out = self.pre_lin_list[i](data.x)
125 | out = getattr(F, self.act)(out)
126 | else:
127 | out = self.pre_lin_list[i](out)
128 | out = getattr(F, self.act)(out)
129 |
130 | ##GNN layers
131 | for i in range(0, len(self.conv_list)):
132 | if len(self.pre_lin_list) == 0 and i == 0:
133 | if self.batch_norm == "True":
134 | out = data.x + self.conv_list[i](data.x, data.edge_index, data.edge_weight, data.edge_attr)
135 | out = self.bn_list[i](out)
136 | else:
137 | out = data.x + self.conv_list[i](data.x, data.edge_index, data.edge_weight, data.edge_attr)
138 | else:
139 | if self.batch_norm == "True":
140 | out = out + self.conv_list[i](out, data.edge_index, data.edge_weight, data.edge_attr)
141 | out = self.bn_list[i](out)
142 | else:
143 | out = out + self.conv_list[i](out, data.edge_index, data.edge_weight, data.edge_attr)
144 | #out = getattr(F, self.act)(out)
145 | out = F.dropout(out, p=self.dropout_rate, training=self.training)
146 |
147 | ##Post-GNN dense layers
148 | if self.pool_order == "early":
149 | if self.pool == "set2set":
150 | out = self.set2set(out, data.batch)
151 | else:
152 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
153 | for i in range(0, len(self.post_lin_list)):
154 | out = self.post_lin_list[i](out)
155 | out = getattr(F, self.act)(out)
156 | out = self.lin_out(out)
157 |
158 | elif self.pool_order == "late":
159 | for i in range(0, len(self.post_lin_list)):
160 | out = self.post_lin_list[i](out)
161 | out = getattr(F, self.act)(out)
162 | out = self.lin_out(out)
163 | if self.pool == "set2set":
164 | out = self.set2set(out, data.batch)
165 | out = self.lin_out_2(out)
166 | else:
167 | out = getattr(torch_geometric.nn, self.pool)(out, data.batch)
168 |
169 | if out.shape[1] == 1:
170 | return out.view(-1)
171 | else:
172 | return out
173 |
--------------------------------------------------------------------------------
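A sketch of the residual interaction step SchNet above stacks, calling PyTorch-Geometric's `InteractionBlock` directly. Note that this model needs both `data.edge_weight` (interatomic distances) and `data.edge_attr` (a basis expansion of those distances, with `num_edge_features` columns); the random `edge_attr` below is only a stand-in for Gaussian-smeared distances.

```python
import torch
from torch_geometric.nn.models.schnet import InteractionBlock

# (hidden_channels, num_gaussians, num_filters, cutoff)
block = InteractionBlock(64, 20, 64, 8.0)

x = torch.randn(10, 64)                      # node embeddings
edge_index = torch.randint(0, 10, (2, 30))   # toy connectivity
edge_weight = torch.rand(30) * 8.0           # interatomic distances
edge_attr = torch.randn(30, 20)              # stand-in for Gaussian-expanded distances

out = x + block(x, edge_index, edge_weight, edge_attr)   # residual update, as in SchNet.forward
print(out.shape)                                         # -> torch.Size([10, 64])
```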
/matdeeplearn/models/utils.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | # Prints model summary
4 | def model_summary(model):
5 | model_params_list = list(model.named_parameters())
6 | print("--------------------------------------------------------------------------")
7 | line_new = "{:>30} {:>20} {:>20}".format(
8 | "Layer.Parameter", "Param Tensor Shape", "Param #"
9 | )
10 | print(line_new)
11 | print("--------------------------------------------------------------------------")
12 | for elem in model_params_list:
13 | p_name = elem[0]
14 | p_shape = list(elem[1].size())
15 | p_count = torch.tensor(elem[1].size()).prod().item()
16 | line_new = "{:>30} {:>20} {:>20}".format(p_name, str(p_shape), str(p_count))
17 | print(line_new)
18 | print("--------------------------------------------------------------------------")
19 | total_params = sum([param.nelement() for param in model.parameters()])
20 | print("Total params:", total_params)
21 | num_trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
22 | print("Trainable params:", num_trainable_params)
23 | print("Non-trainable params:", total_params - num_trainable_params)
24 |
--------------------------------------------------------------------------------
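A short usage sketch for `model_summary`; any `torch.nn.Module` works.

```python
import torch
from matdeeplearn.models.utils import model_summary

model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
model_summary(model)   # prints each parameter's name, shape, and count, plus totals
```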
/matdeeplearn/process/__init__.py:
--------------------------------------------------------------------------------
1 | from .process import *
2 |
--------------------------------------------------------------------------------
/matdeeplearn/process/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/process/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/process/__pycache__/process.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/process/__pycache__/process.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/process/dictionary_blank.json:
--------------------------------------------------------------------------------
1 | {"1": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "2": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "3": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "4": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "5": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "6": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "7": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "8": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "9": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "10": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "11": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "12": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "13": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "14": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "15": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "16": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "17": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "18": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "19": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "20": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "21": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "22": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "23": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "24": [0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "25": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "26": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "27": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "28": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "29": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "30": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "31": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "32": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "33": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "34": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "35": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "36": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "37": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "38": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "39": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "40": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "41": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "42": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "43": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "44": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "45": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "46": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "47": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "48": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "49": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "50": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "51": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "52": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "53": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "54": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "55": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "56": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "57": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "58": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "59": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "60": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "61": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "62": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "63": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "64": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "65": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "66": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "67": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "68": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "69": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "70": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "71": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "72": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "73": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "74": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "75": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "76": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "77": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "78": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "79": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "80": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "81": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "82": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "83": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "84": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "85": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "86": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "87": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "88": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "89": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "90": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "91": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "92": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "93": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "94": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "95": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "96": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "97": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "98": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "99": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "100": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
--------------------------------------------------------------------------------
/matdeeplearn/process/dictionary_default.json:
--------------------------------------------------------------------------------
1 | {"1": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "2": [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "3": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "4": [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "5": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "6": [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "7": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "8": [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "9": [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "10": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "11": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "12": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "13": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "14": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "15": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "16": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "17": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "18": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "19": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "20": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "21": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "22": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "23": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "24": [0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "25": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "26": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "27": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "28": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "29": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "30": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "31": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "32": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "33": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "34": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "35": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "36": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "37": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "38": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "39": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "40": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "41": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "42": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "43": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "44": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "45": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "46": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "47": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "48": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "49": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "50": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "51": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "52": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "53": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "54": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "55": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "56": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "57": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "58": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "59": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "60": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "61": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "62": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "63": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "64": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "65": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "66": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "67": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "68": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "69": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "70": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "71": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "72": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "73": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "74": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "75": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "76": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "77": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "78": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "79": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "80": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "81": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "82": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "83": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "84": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "85": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "86": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "87": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "88": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "89": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "90": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "91": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], "92": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], "93": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], "94": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], "95": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], "96": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], "97": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0], "98": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], "99": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], "100": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]}
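The two dictionary files above share the same layout: each key is an atomic number written as a string ("1" through "100") and each value is a 100-element feature vector, one-hot per element in `dictionary_default.json` and all zeros in `dictionary_blank.json`. As a minimal sketch (the file path and the example structure are assumptions, not taken from the repository), this is how such a dictionary can be loaded and stacked into a per-atom node feature matrix, mirroring what `process.py` does further down:

```python
import json

import numpy as np
from ase.build import bulk

# Load the atomic-number -> feature-vector dictionary (path assumed relative to the repo root).
with open("matdeeplearn/process/dictionary_default.json") as f:
    atom_dictionary = json.load(f)

# Any ase.Atoms object works; a one-atom Cu fcc cell is used purely as an example.
crystal = bulk("Cu", "fcc", a=3.6)

# One feature row per atom, keyed by atomic number as a string.
node_features = np.vstack(
    [atom_dictionary[str(z)] for z in crystal.get_atomic_numbers()]
).astype(float)

print(node_features.shape)  # (number of atoms, 100)
```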
--------------------------------------------------------------------------------
/matdeeplearn/process/process.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import time
4 | import csv
5 | import json
6 | import warnings
7 | import numpy as np
8 | import ase
9 | import glob
10 | from ase import io
11 | from scipy.stats import rankdata
12 | from scipy import interpolate
13 |
14 | ##torch imports
15 | import torch
16 | import torch.nn.functional as F
17 | from torch_geometric.data import DataLoader, Dataset, Data, InMemoryDataset
18 | from torch_geometric.utils import dense_to_sparse, degree, add_self_loops
19 | import torch_geometric.transforms as T
20 |
21 |
22 | ################################################################################
23 | # Data splitting
24 | ################################################################################
25 |
26 | ##basic train, val, test split
27 | def split_data(
28 | dataset,
29 | train_ratio,
30 | val_ratio,
31 | test_ratio,
32 | seed=np.random.randint(1, 1e6),
33 | save=False,
34 | ):
35 | dataset_size = len(dataset)
36 | if (train_ratio + val_ratio + test_ratio) <= 1:
37 | train_length = int(dataset_size * train_ratio)
38 | val_length = int(dataset_size * val_ratio)
39 | test_length = int(dataset_size * test_ratio)
40 | unused_length = dataset_size - train_length - val_length - test_length
41 | (
42 | train_dataset,
43 | val_dataset,
44 | test_dataset,
45 | unused_dataset,
46 | ) = torch.utils.data.random_split(
47 | dataset,
48 | [train_length, val_length, test_length, unused_length],
49 | generator=torch.Generator().manual_seed(seed),
50 | )
51 | print(
52 | "train length:",
53 | train_length,
54 | "val length:",
55 | val_length,
56 | "test length:",
57 | test_length,
58 | "unused length:",
59 | unused_length,
60 | "seed :",
61 | seed,
62 | )
63 | return train_dataset, val_dataset, test_dataset
64 | else:
65 | print("invalid ratios")
66 |
67 |
68 | ##Basic CV split
69 | def split_data_CV(dataset, num_folds=5, seed=np.random.randint(1, 1e6), save=False):
70 | dataset_size = len(dataset)
71 | fold_length = int(dataset_size / num_folds)
72 | unused_length = dataset_size - fold_length * num_folds
73 | folds = [fold_length for i in range(num_folds)]
74 | folds.append(unused_length)
75 | cv_dataset = torch.utils.data.random_split(
76 | dataset, folds, generator=torch.Generator().manual_seed(seed)
77 | )
78 | print("fold length :", fold_length, "unused length:", unused_length, "seed", seed)
79 | return cv_dataset[0:num_folds]
80 |
81 |
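# Usage sketch for the splitting helpers above (not part of the original file;
# the ratios and seed are illustrative):
#     train_set, val_set, test_set = split_data(dataset, 0.8, 0.05, 0.15, seed=42)
#     folds = split_data_CV(dataset, num_folds=5, seed=42)
# Note that split_data prints "invalid ratios" and returns None when the three
# ratios sum to more than 1.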
82 | ################################################################################
83 | # Pytorch datasets
84 | ################################################################################
85 |
86 | ##Fetch dataset; processes the raw data if specified
87 | def get_dataset(data_path, target_index, reprocess="False", processing_args=None):
88 | if processing_args == None:
89 | processed_path = "processed"
90 | else:
91 | processed_path = processing_args.get("processed_path", "processed")
92 |
93 | transforms = GetY(index=target_index)
94 |
95 | if os.path.exists(data_path) == False:
96 | print("Data not found in:", data_path)
97 | sys.exit()
98 |
99 | if reprocess == "True":
100 | os.system("rm -rf " + os.path.join(data_path, processed_path))
101 | process_data(data_path, processed_path, processing_args)
102 |
103 | if os.path.exists(os.path.join(data_path, processed_path, "data.pt")) == True:
104 | dataset = StructureDataset(
105 | data_path,
106 | processed_path,
107 | transforms,
108 | )
109 | elif os.path.exists(os.path.join(data_path, processed_path, "data0.pt")) == True:
110 | dataset = StructureDataset_large(
111 | data_path,
112 | processed_path,
113 | transforms,
114 | )
115 | else:
116 | process_data(data_path, processed_path, processing_args)
117 | if os.path.exists(os.path.join(data_path, processed_path, "data.pt")) == True:
118 | dataset = StructureDataset(
119 | data_path,
120 | processed_path,
121 | transforms,
122 | )
123 | elif os.path.exists(os.path.join(data_path, processed_path, "data0.pt")) == True:
124 | dataset = StructureDataset_large(
125 | data_path,
126 | processed_path,
127 | transforms,
128 | )
129 | return dataset
130 |
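# Example call (hypothetical path, shown for illustration only; assumes the
# directory already contains processed/data.pt, so no processing_args are needed):
#     dataset = get_dataset("data/test_data", target_index=0)
# If neither data.pt nor data0.pt is found, get_dataset calls process_data first,
# which requires a populated processing_args dictionary.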
131 |
132 | ##Dataset class from pytorch/pytorch geometric; in-memory case
133 | class StructureDataset(InMemoryDataset):
134 | def __init__(
135 | self, data_path, processed_path="processed", transform=None, pre_transform=None
136 | ):
137 | self.data_path = data_path
138 | self.processed_path = processed_path
139 | super(StructureDataset, self).__init__(data_path, transform, pre_transform)
140 | self.data, self.slices = torch.load(self.processed_paths[0])
141 |
142 | @property
143 | def raw_file_names(self):
144 | return []
145 |
146 | @property
147 | def processed_dir(self):
148 | return os.path.join(self.data_path, self.processed_path)
149 |
150 | @property
151 | def processed_file_names(self):
152 | file_names = ["data.pt"]
153 | return file_names
154 |
155 |
156 | ##Dataset class from pytorch/pytorch geometric
157 | class StructureDataset_large(Dataset):
158 | def __init__(
159 | self, data_path, processed_path="processed", transform=None, pre_transform=None
160 | ):
161 | self.data_path = data_path
162 | self.processed_path = processed_path
163 | super(StructureDataset_large, self).__init__(
164 | data_path, transform, pre_transform
165 | )
166 |
167 | @property
168 | def raw_file_names(self):
169 | return []
170 |
171 | @property
172 | def processed_dir(self):
173 | return os.path.join(self.data_path, self.processed_path)
174 |
175 | @property
176 | def processed_file_names(self):
177 | # file_names = ["data.pt"]
178 | file_names = []
179 | for file_name in glob.glob(self.processed_dir + "/data*.pt"):
180 | file_names.append(os.path.basename(file_name))
181 | # print(file_names)
182 | return file_names
183 |
184 | def len(self):
185 | return len(self.processed_file_names)
186 |
187 | def get(self, idx):
188 | data = torch.load(os.path.join(self.processed_dir, "data_{}.pt".format(idx)))
189 | return data
190 |
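# Descriptive note (added here; not part of the original file): StructureDataset
# loads the whole dataset from a single processed data.pt into memory
# (InMemoryDataset), while StructureDataset_large keeps one data_{idx}.pt per
# structure on disk and loads items lazily in get(). get_dataset above picks
# between the two based on which processed file it finds.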
191 |
192 | ################################################################################
193 | # Processing
194 | ################################################################################
195 |
196 |
197 | def process_data(data_path, processed_path, processing_args):
198 |
199 | ##Begin processing data
200 | print("Processing data to: " + os.path.join(data_path, processed_path))
201 | assert os.path.exists(data_path), "Data path not found in " + data_path
202 |
203 | ##Load dictionary
204 | if processing_args["dictionary_source"] != "generated":
205 | if processing_args["dictionary_source"] == "default":
206 | print("Using default dictionary.")
207 | atom_dictionary = get_dictionary(
208 | os.path.join(
209 | os.path.dirname(os.path.realpath(__file__)),
210 | "dictionary_default.json",
211 | )
212 | )
213 | elif processing_args["dictionary_source"] == "blank":
214 | print(
215 | "Using blank dictionary. Warning: only do this if you know what you are doing"
216 | )
217 | atom_dictionary = get_dictionary(
218 | os.path.join(
219 | os.path.dirname(os.path.realpath(__file__)), "dictionary_blank.json"
220 | )
221 | )
222 | else:
223 | dictionary_file_path = os.path.join(
224 | data_path, processing_args["dictionary_path"]
225 | )
226 | if os.path.exists(dictionary_file_path) == False:
227 | print("Atom dictionary not found, exiting program...")
228 | sys.exit()
229 | else:
230 | print("Loading atom dictionary from file.")
231 | atom_dictionary = get_dictionary(dictionary_file_path)
232 |
233 | ##Load targets
234 | target_property_file = os.path.join(data_path, processing_args["target_path"])
235 | assert os.path.exists(target_property_file), (
236 | "targets not found in " + target_property_file
237 | )
238 | with open(target_property_file) as f:
239 | reader = csv.reader(f)
240 | target_data = [row for row in reader]
241 |
242 | ##Read db file if specified
243 | ase_crystal_list = []
244 | if processing_args["data_format"] == "db":
245 | db = ase.db.connect(os.path.join(data_path, "data.db"))
246 | row_count = 0
247 | # target_data=[]
248 | for row in db.select():
249 | # target_data.append([str(row_count), row.get('target')])
250 | ase_temp = row.toatoms()
251 | ase_crystal_list.append(ase_temp)
252 | row_count = row_count + 1
253 | if row_count % 500 == 0:
254 | print("db processed: ", row_count)
255 |
256 | ##Process structure files and create structure graphs
257 | data_list = []
258 | for index in range(0, len(target_data)):
259 |
260 | structure_id = target_data[index][0]
261 | data = Data()
262 |
263 | ##Read in structure file using ase
264 | if processing_args["data_format"] != "db":
265 | ase_crystal = ase.io.read(
266 | os.path.join(
267 | data_path, structure_id + "." + processing_args["data_format"]
268 | )
269 | )
270 | data.ase = ase_crystal
271 | else:
272 | ase_crystal = ase_crystal_list[index]
273 | data.ase = ase_crystal
274 |
275 | ##Compile structure sizes (# of atoms) and elemental compositions
276 | if index == 0:
277 | length = [len(ase_crystal)]
278 | elements = [list(set(ase_crystal.get_chemical_symbols()))]
279 | else:
280 | length.append(len(ase_crystal))
281 | elements.append(list(set(ase_crystal.get_chemical_symbols())))
282 |
283 | ##Obtain distance matrix with ase
284 | distance_matrix = ase_crystal.get_all_distances(mic=True)
285 |
286 | ##Create sparse graph from distance matrix
287 | distance_matrix_trimmed = threshold_sort(
288 | distance_matrix,
289 | processing_args["graph_max_radius"],
290 | processing_args["graph_max_neighbors"],
291 | adj=False,
292 | )
293 |
294 | distance_matrix_trimmed = torch.Tensor(distance_matrix_trimmed)
295 | out = dense_to_sparse(distance_matrix_trimmed)
296 | edge_index = out[0]
297 | edge_weight = out[1]
298 |
299 | self_loops = True
300 | if self_loops == True:
301 | edge_index, edge_weight = add_self_loops(
302 | edge_index, edge_weight, num_nodes=len(ase_crystal), fill_value=0
303 | )
304 | data.edge_index = edge_index
305 | data.edge_weight = edge_weight
306 |
307 | distance_matrix_mask = (
308 | distance_matrix_trimmed.fill_diagonal_(1) != 0
309 | ).int()
310 | elif self_loops == False:
311 | data.edge_index = edge_index
312 | data.edge_weight = edge_weight
313 |
314 | distance_matrix_mask = (distance_matrix_trimmed != 0).int()
315 |
316 | data.edge_descriptor = {}
317 | data.edge_descriptor["distance"] = edge_weight
318 | data.edge_descriptor["mask"] = distance_matrix_mask
319 |
320 | target = target_data[index][1:]
321 | y = torch.Tensor(np.array([target], dtype=np.float32))
322 | data.y = y
323 |
324 | # pos = torch.Tensor(ase_crystal.get_positions())
325 | # data.pos = pos
326 | z = torch.LongTensor(ase_crystal.get_atomic_numbers())
327 | data.z = z
328 |
329 | ###placeholder for state feature
330 | u = np.zeros((3))
331 | u = torch.Tensor(u[np.newaxis, ...])
332 | data.u = u
333 |
334 | data.structure_id = [[structure_id] * len(data.y)]
335 |
336 | if processing_args["verbose"] == "True" and (
337 | (index + 1) % 500 == 0 or (index + 1) == len(target_data)
338 | ):
339 | print("Data processed: ", index + 1, "out of", len(target_data))
340 | # if index == 0:
341 | # print(data)
342 | # print(data.edge_weight, data.edge_attr[0])
343 |
344 | data_list.append(data)
345 |
346 | ##
347 | n_atoms_max = max(length)
348 | species = list(set(sum(elements, [])))
349 | species.sort()
350 | num_species = len(species)
351 | if processing_args["verbose"] == "True":
352 | print(
353 | "Max structure size: ",
354 | n_atoms_max,
355 | "Max number of elements: ",
356 | num_species,
357 | )
358 | print("Unique species:", species)
359 | crystal_length = len(ase_crystal)
360 | data.length = torch.LongTensor([crystal_length])
361 |
362 | ##Generate node features
363 | if processing_args["dictionary_source"] != "generated":
364 | ##Atom features(node features) from atom dictionary file
365 | for index in range(0, len(data_list)):
366 | atom_fea = np.vstack(
367 | [
368 | atom_dictionary[str(data_list[index].ase.get_atomic_numbers()[i])]
369 | for i in range(len(data_list[index].ase))
370 | ]
371 | ).astype(float)
372 | data_list[index].x = torch.Tensor(atom_fea)
373 | elif processing_args["dictionary_source"] == "generated":
374 | ##Generates one-hot node features rather than using dict file
375 | from sklearn.preprocessing import LabelBinarizer
376 |
377 | lb = LabelBinarizer()
378 | lb.fit(species)
379 | for index in range(0, len(data_list)):
380 | data_list[index].x = torch.Tensor(
381 | lb.transform(data_list[index].ase.get_chemical_symbols())
382 | )
383 |
384 | ##Adds node degree to node features (appears to improve performance)
385 | for index in range(0, len(data_list)):
386 | data_list[index] = OneHotDegree(
387 | data_list[index], processing_args["graph_max_neighbors"] + 1
388 | )
389 |
390 |     ##Get graphs based on Voronoi connectivity; todo: also get Voronoi features
391 |     ##Avoid using this for the time being (hard-coded off below) until a good approach is found
392 |     processing_args["voronoi"] = "False"
393 | if processing_args["voronoi"] == "True":
394 | from pymatgen.core.structure import Structure
395 | from pymatgen.analysis.structure_analyzer import VoronoiConnectivity
396 | from pymatgen.io.ase import AseAtomsAdaptor
397 |
398 | Converter = AseAtomsAdaptor()
399 |
400 | for index in range(0, len(data_list)):
401 | pymatgen_crystal = Converter.get_structure(data_list[index].ase)
402 | # double check if cutoff distance does anything
403 | Voronoi = VoronoiConnectivity(
404 | pymatgen_crystal, cutoff=processing_args["graph_max_radius"]
405 | )
406 | connections = Voronoi.max_connectivity
407 |
408 | distance_matrix_voronoi = threshold_sort(
409 | connections,
410 | 9999,
411 | processing_args["graph_max_neighbors"],
412 | reverse=True,
413 | adj=False,
414 | )
415 | distance_matrix_voronoi = torch.Tensor(distance_matrix_voronoi)
416 |
417 | out = dense_to_sparse(distance_matrix_voronoi)
418 | edge_index_voronoi = out[0]
419 | edge_weight_voronoi = out[1]
420 |
421 | edge_attr_voronoi = distance_gaussian(edge_weight_voronoi)
422 | edge_attr_voronoi = edge_attr_voronoi.float()
423 |
424 | data_list[index].edge_index_voronoi = edge_index_voronoi
425 | data_list[index].edge_weight_voronoi = edge_weight_voronoi
426 | data_list[index].edge_attr_voronoi = edge_attr_voronoi
427 | if index % 500 == 0:
428 | print("Voronoi data processed: ", index)
429 |
430 |     ##Makes SOAP or SM (sine/Coulomb matrix) features with dscribe; only one is generated
431 | if processing_args["SOAP_descriptor"] == "True":
432 | if True in data_list[0].ase.pbc:
433 | periodicity = True
434 | else:
435 | periodicity = False
436 |
437 | from dscribe.descriptors import SOAP
438 |
439 | make_feature_SOAP = SOAP(
440 | species=species,
441 | rcut=processing_args["SOAP_rcut"],
442 | nmax=processing_args["SOAP_nmax"],
443 | lmax=processing_args["SOAP_lmax"],
444 | sigma=processing_args["SOAP_sigma"],
445 | periodic=periodicity,
446 | sparse=False,
447 | average="inner",
448 | rbf="gto",
449 | crossover=False,
450 | )
451 | for index in range(0, len(data_list)):
452 | features_SOAP = make_feature_SOAP.create(data_list[index].ase)
453 | data_list[index].extra_features_SOAP = torch.Tensor(features_SOAP)
454 | if processing_args["verbose"] == "True" and index % 500 == 0:
455 | if index == 0:
456 | print(
457 | "SOAP length: ",
458 | features_SOAP.shape,
459 | )
460 | print("SOAP descriptor processed: ", index)
461 |
462 | elif processing_args["SM_descriptor"] == "True":
463 | if True in data_list[0].ase.pbc:
464 | periodicity = True
465 | else:
466 | periodicity = False
467 |
468 | from dscribe.descriptors import SineMatrix, CoulombMatrix
469 |
470 | if periodicity == True:
471 | make_feature_SM = SineMatrix(
472 | n_atoms_max=n_atoms_max,
473 | permutation="eigenspectrum",
474 | sparse=False,
475 | flatten=True,
476 | )
477 | else:
478 | make_feature_SM = CoulombMatrix(
479 | n_atoms_max=n_atoms_max,
480 | permutation="eigenspectrum",
481 | sparse=False,
482 | flatten=True,
483 | )
484 |
485 | for index in range(0, len(data_list)):
486 | features_SM = make_feature_SM.create(data_list[index].ase)
487 | data_list[index].extra_features_SM = torch.Tensor(features_SM)
488 | if processing_args["verbose"] == "True" and index % 500 == 0:
489 | if index == 0:
490 | print(
491 | "SM length: ",
492 | features_SM.shape,
493 | )
494 | print("SM descriptor processed: ", index)
495 |
496 | ##Generate edge features
497 | if processing_args["edge_features"] == "True":
498 |
499 | ##Distance descriptor using a Gaussian basis
500 | distance_gaussian = GaussianSmearing(
501 | 0, 1, processing_args["graph_edge_length"], 0.2
502 | )
503 | # print(GetRanges(data_list, 'distance'))
504 | NormalizeEdge(data_list, "distance")
505 | # print(GetRanges(data_list, 'distance'))
506 | for index in range(0, len(data_list)):
507 | data_list[index].edge_attr = distance_gaussian(
508 | data_list[index].edge_descriptor["distance"]
509 | )
510 | if processing_args["verbose"] == "True" and (
511 | (index + 1) % 500 == 0 or (index + 1) == len(target_data)
512 | ):
513 | print("Edge processed: ", index + 1, "out of", len(target_data))
514 |
515 | Cleanup(data_list, ["ase", "edge_descriptor"])
516 |
517 | if os.path.isdir(os.path.join(data_path, processed_path)) == False:
518 | os.mkdir(os.path.join(data_path, processed_path))
519 |
520 | ##Save processed dataset to file
521 | if processing_args["dataset_type"] == "inmemory":
522 | data, slices = InMemoryDataset.collate(data_list)
523 | torch.save((data, slices), os.path.join(data_path, processed_path, "data.pt"))
524 |
525 | elif processing_args["dataset_type"] == "large":
526 | for i in range(0, len(data_list)):
527 | torch.save(
528 | data_list[i],
529 | os.path.join(
530 | os.path.join(data_path, processed_path), "data_{}.pt".format(i)
531 | ),
532 | )
533 |
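# Illustrative sketch (not part of the original module): reading the "inmemory"
# dataset back from disk mirrors the torch.save call above. The default paths
# here are hypothetical; the function relies on the module-level torch and os imports.
def _example_load_inmemory(data_path="data/test_data", processed_path="processed"):
    ##torch.save stored the (data, slices) pair produced by InMemoryDataset.collate
    data, slices = torch.load(os.path.join(data_path, processed_path, "data.pt"))
    return data, slices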
534 |
535 | ################################################################################
536 | # Processing sub-functions
537 | ################################################################################
538 |
539 | ##Selects edges within a distance threshold and limits the number of neighbors per node
540 | def threshold_sort(matrix, threshold, neighbors, reverse=False, adj=False):
541 | mask = matrix > threshold
542 | distance_matrix_trimmed = np.ma.array(matrix, mask=mask)
543 | if reverse == False:
544 | distance_matrix_trimmed = rankdata(
545 | distance_matrix_trimmed, method="ordinal", axis=1
546 | )
547 | elif reverse == True:
548 | distance_matrix_trimmed = rankdata(
549 | distance_matrix_trimmed * -1, method="ordinal", axis=1
550 | )
551 | distance_matrix_trimmed = np.nan_to_num(
552 | np.where(mask, np.nan, distance_matrix_trimmed)
553 | )
554 | distance_matrix_trimmed[distance_matrix_trimmed > neighbors + 1] = 0
555 |
556 | if adj == False:
557 | distance_matrix_trimmed = np.where(
558 | distance_matrix_trimmed == 0, distance_matrix_trimmed, matrix
559 | )
560 | return distance_matrix_trimmed
561 | elif adj == True:
562 | adj_list = np.zeros((matrix.shape[0], neighbors + 1))
563 | adj_attr = np.zeros((matrix.shape[0], neighbors + 1))
564 | for i in range(0, matrix.shape[0]):
565 | temp = np.where(distance_matrix_trimmed[i] != 0)[0]
566 | adj_list[i, :] = np.pad(
567 | temp,
568 | pad_width=(0, neighbors + 1 - len(temp)),
569 | mode="constant",
570 | constant_values=0,
571 | )
572 | adj_attr[i, :] = matrix[i, adj_list[i, :].astype(int)]
573 | distance_matrix_trimmed = np.where(
574 | distance_matrix_trimmed == 0, distance_matrix_trimmed, matrix
575 | )
576 | return distance_matrix_trimmed, adj_list, adj_attr
577 |
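# Illustrative sketch (not part of the original module): threshold_sort on a toy
# 3x3 distance matrix, using the module-level numpy import. Entries beyond the
# distance cutoff, or beyond the allowed number of nearest neighbors per row, are
# zeroed; kept entries retain their original distances when adj=False.
def _example_threshold_sort():
    dm = np.array(
        [
            [0.0, 1.2, 4.0],
            [1.2, 0.0, 2.5],
            [4.0, 2.5, 0.0],
        ]
    )
    # keep at most one neighbor per atom (plus the self entry) within a 3.0 cutoff
    return threshold_sort(dm, 3.0, 1, adj=False)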
578 |
579 | ##Slightly edited version of the PyTorch Geometric GaussianSmearing; expands edge distances in a Gaussian basis
580 | class GaussianSmearing(torch.nn.Module):
581 | def __init__(self, start=0.0, stop=5.0, resolution=50, width=0.05, **kwargs):
582 | super(GaussianSmearing, self).__init__()
583 | offset = torch.linspace(start, stop, resolution)
584 | # self.coeff = -0.5 / (offset[1] - offset[0]).item() ** 2
585 | self.coeff = -0.5 / ((stop - start) * width) ** 2
586 | self.register_buffer("offset", offset)
587 |
588 | def forward(self, dist):
589 | dist = dist.unsqueeze(-1) - self.offset.view(1, -1)
590 | return torch.exp(self.coeff * torch.pow(dist, 2))
591 |
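# Illustrative sketch (not part of the original module): expanding a vector of
# normalized edge distances into a Gaussian basis. The resolution and width here
# are example values; the edge-feature block above uses
# processing_args["graph_edge_length"] and a width of 0.2.
def _example_gaussian_smearing():
    smearing = GaussianSmearing(start=0, stop=1, resolution=50, width=0.2)
    distances = torch.linspace(0, 1, steps=8)  # toy normalized distances
    return smearing(distances)  # shape: [8, 50]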
592 |
593 | ##Obtain node degree in one-hot representation and append it to any existing node features
594 | def OneHotDegree(data, max_degree, in_degree=False, cat=True):
595 | idx, x = data.edge_index[1 if in_degree else 0], data.x
596 | deg = degree(idx, data.num_nodes, dtype=torch.long)
597 | deg = F.one_hot(deg, num_classes=max_degree + 1).to(torch.float)
598 |
599 | if x is not None and cat:
600 | x = x.view(-1, 1) if x.dim() == 1 else x
601 | data.x = torch.cat([x, deg.to(x.dtype)], dim=-1)
602 | else:
603 | data.x = deg
604 |
605 | return data
606 |
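# Illustrative sketch (not part of the original module): appending one-hot node
# degrees to existing node features on a toy three-node chain graph.
def _example_one_hot_degree():
    from torch_geometric.data import Data  # assumed available, as elsewhere in the package
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
    x = torch.ones((3, 4))  # three nodes with four dummy features each
    data = Data(x=x, edge_index=edge_index)
    data = OneHotDegree(data, max_degree=2)
    return data.x  # shape: [3, 4 + 3] after concatenating the degree one-hot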
607 |
608 | ##Load the elemental feature dictionary from a JSON file
609 | def get_dictionary(dictionary_file):
610 | with open(dictionary_file) as f:
611 | atom_dictionary = json.load(f)
612 | return atom_dictionary
613 |
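# Illustrative sketch (not part of the original module): the dictionary file maps
# atomic numbers (as strings) to fixed-length feature vectors, as in
# matdeeplearn/process/dictionary_default.json. A minimal hand-made file could be:
#
#   {"1": [0.0, 1.0, 0.0], "6": [1.0, 0.0, 0.0], "8": [0.0, 0.0, 1.0]}
#
# The default path below is a hypothetical example.
def _example_get_dictionary(path="dictionary_default.json"):
    atom_dictionary = get_dictionary(path)
    return atom_dictionary["6"]  # feature vector for carbon (Z = 6)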
614 |
615 | ##Deletes entries that are no longer needed, since keeping them slows down the dataloader
616 | def Cleanup(data_list, entries):
617 | for data in data_list:
618 | for entry in entries:
619 | try:
620 | delattr(data, entry)
621 | except Exception:
622 | pass
623 |
624 |
625 | ##Get the mean, std, and min/max range of an edge descriptor (used for normalization)
626 | def GetRanges(dataset, descriptor_label):
627 | mean = 0.0
628 | std = 0.0
629 | for index in range(0, len(dataset)):
630 | if len(dataset[index].edge_descriptor[descriptor_label]) > 0:
631 | if index == 0:
632 | feature_max = dataset[index].edge_descriptor[descriptor_label].max()
633 | feature_min = dataset[index].edge_descriptor[descriptor_label].min()
634 | mean += dataset[index].edge_descriptor[descriptor_label].mean()
635 | std += dataset[index].edge_descriptor[descriptor_label].std()
636 | if dataset[index].edge_descriptor[descriptor_label].max() > feature_max:
637 | feature_max = dataset[index].edge_descriptor[descriptor_label].max()
638 | if dataset[index].edge_descriptor[descriptor_label].min() < feature_min:
639 | feature_min = dataset[index].edge_descriptor[descriptor_label].min()
640 |
641 | mean = mean / len(dataset)
642 | std = std / len(dataset)
643 | return mean, std, feature_min, feature_max
644 |
645 |
646 | ##Min-max normalizes an edge descriptor across the dataset
647 | def NormalizeEdge(dataset, descriptor_label):
648 | mean, std, feature_min, feature_max = GetRanges(dataset, descriptor_label)
649 |
650 | for data in dataset:
651 | data.edge_descriptor[descriptor_label] = (
652 | data.edge_descriptor[descriptor_label] - feature_min
653 | ) / (feature_max - feature_min)
654 |
655 |
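# Illustrative sketch (not part of the original module): min-max normalizing the
# "distance" edge descriptor over a toy two-graph dataset. After the call, all
# values are scaled into [0, 1] using the dataset-wide minimum and maximum.
def _example_normalize_edge():
    from torch_geometric.data import Data  # assumed available, as elsewhere in the package
    d1, d2 = Data(), Data()
    d1.edge_descriptor = {"distance": torch.tensor([1.0, 3.0])}
    d2.edge_descriptor = {"distance": torch.tensor([2.0, 5.0])}
    dataset = [d1, d2]
    NormalizeEdge(dataset, "distance")
    return [d.edge_descriptor["distance"] for d in dataset]  # [[0.0, 0.5], [0.25, 1.0]]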
656 | # WIP: sine matrix (SM) based edge descriptors
657 | def SM_Edge(dataset):
658 | from dscribe.descriptors import (
659 | CoulombMatrix,
660 | SOAP,
661 | MBTR,
662 | EwaldSumMatrix,
663 | SineMatrix,
664 | )
665 |
666 | count = 0
667 | for data in dataset:
668 | n_atoms_max = len(data.ase)
669 | make_feature_SM = SineMatrix(
670 | n_atoms_max=n_atoms_max,
671 | permutation="none",
672 | sparse=False,
673 | flatten=False,
674 | )
675 | features_SM = make_feature_SM.create(data.ase)
676 |         features_SM_trimmed = np.where(data.edge_descriptor["mask"] == 0, 0, features_SM)
677 | features_SM_trimmed = torch.Tensor(features_SM_trimmed)
678 | out = dense_to_sparse(features_SM_trimmed)
679 | edge_index = out[0]
680 | edge_weight = out[1]
681 | data.edge_descriptor["SM"] = edge_weight
682 |
683 | if count % 500 == 0:
684 | print("SM data processed: ", count)
685 | count = count + 1
686 |
687 | return dataset
688 |
689 |
690 | ################################################################################
691 | # Transforms
692 | ################################################################################
693 |
694 | ##Get specified y index from data.y
695 | class GetY(object):
696 | def __init__(self, index=0):
697 | self.index = index
698 |
699 | def __call__(self, data):
700 | # Specify target.
701 | if self.index != -1:
702 | data.y = data.y[0][self.index]
703 | return data
704 |
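# Illustrative sketch (not part of the original module): GetY is applied as a
# transform to select one target column from data.y, e.g. when the targets file
# stores several properties per structure.
def _example_get_y():
    from torch_geometric.data import Data  # assumed available, as elsewhere in the package
    data = Data()
    data.y = torch.tensor([[1.5, 2.5, 3.5]])
    data = GetY(index=1)(data)
    return data.y  # tensor(2.5000), the second target column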
--------------------------------------------------------------------------------
/matdeeplearn/training/__init__.py:
--------------------------------------------------------------------------------
1 | from .training import *
2 |
--------------------------------------------------------------------------------
/matdeeplearn/training/__pycache__/__init__.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/training/__pycache__/__init__.cpython-37.pyc
--------------------------------------------------------------------------------
/matdeeplearn/training/__pycache__/training.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/matdeeplearn/training/__pycache__/training.cpython-37.pyc
--------------------------------------------------------------------------------
/old/MatDeepLearn_v0.1.tar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fung-Lab/MatDeepLearn/214c5cd735f1955ff26560322c43f4f2d0d3cdb6/old/MatDeepLearn_v0.1.tar
--------------------------------------------------------------------------------
/old/README.md:
--------------------------------------------------------------------------------
1 | Version 0.1 of this package is kept here for archival purposes.
2 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | #VIRTUAL ENV REQUIREMENTS WITH PYTORCH 1.8.1 AND CUDA 10.2
2 | #COMPATIBLE WITH PYTHON==3.7
3 |
4 | #PyPi
5 | numpy==1.20.1
6 | scipy==1.6.1
7 | matplotlib==3.1.1
8 | pickle5==0.0.11
9 | joblib==0.13.2
10 | dscribe==0.3.5
11 | scikit_learn==0.24.0
12 | ase==3.20.1
13 | pymatgen==2020.9.14
14 | ray==1.0.1
15 |
--------------------------------------------------------------------------------