├── LICENSE ├── README.md ├── partitioning ├── drl_partitioning_coarsest_train.py ├── drl_partitioning_test.py ├── drl_partitioning_train.py └── scotch │ ├── CMakeLists.txt │ ├── SCOTCHWrapper.cpp │ ├── build.sh │ └── cmake │ └── Modules │ └── FindSCOTCH.cmake ├── requirements.txt └── separator ├── amd ├── AMDReordering.cpp ├── CMakeLists.txt ├── amdbar.F └── build.sh ├── drl_nd_testing.py ├── drl_separator_test.py ├── drl_separator_train.py └── scotch ├── CMakeLists.txt ├── SCOTCHWrapper.cpp ├── build.sh └── cmake └── Modules └── FindSCOTCH.cmake /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2021, alga-hopf 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | 2. Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | 3. Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Graph Partitioning and Sparse Matrix Ordering using Reinforcement Learning and Graph Neural Networks 2 | 3 | Deep reinforcement learning models to find a minimum normalized cut partitioning and a minimal vertex separator from https://arxiv.org/abs/2104.03546. 4 | 5 | ## Minimum normalized cut 6 | 7 | Given a graph G=(V,E), the goal is to find a partition of the set of vertices V such that its normalized cut is minimized. This is known to be an NP-complete problem, hence we look for approximate solutions to it. In order to do that, we train a Distributed Advantage Actor-Critic (DA2C) agent to refine the partitions obtained by interpolating back the partition on a coarser representation of the graph. More precisely, the initial graph is coarsened until it has a small number of nodes, then the coarsest graph is partitioned and the partition is interpolated back one level, where it is further refined. This process is applied recursively up to the initial graph. 
To partition the coarsest graph one can use the software METIS (through the NetworkX-METIS wrapper) or train another deep reinforcement learning agent to partition it. Both agents are implemented by two-headed deep neural networks containing graph convolutional layers.
8 |
9 | The details about the constructions and the training procedures can be found in Section 2 and Section 3 of the paper. The code for the training (refining and partitioning the coarsest graph) and for the evaluation is in the ``partitioning`` folder.
10 |
11 | ### Training the DRL agent to partition the coarsest graph
12 | Run ``drl_partitioning_coarsest_train.py`` with the following arguments
13 | - ``out``: output folder to save the weights of the trained neural network (default: ``./temp_edge``)
14 | - ``nmin``: minimum graph size (default: 50)
15 | - ``nmax``: maximum graph size (default: 100)
16 | - ``ntrain``: number of training graphs (default: 10000)
17 | - ``print_rew``: steps to take before printing the reward (default: 1000)
18 | - ``lr``: learning rate (default: 0.001)
19 | - ``gamma``: discount factor (default: 0.9)
20 | - ``coeff``: critic loss coefficient (default: 0.1)
21 | - ``units_conv``: units for the 4 convolutional layers (default: 30, 30, 30, 30)
22 | - ``units_dense``: units for the 3 linear layers (default: 30, 30, 20)
23 |
24 | After training, the weights are saved in ``out`` with the name ``model_coarsest``.
25 |
26 | ### Training the DRL agent to refine a given partitioned graph
27 | Run ``drl_partitioning_train.py`` with the following arguments
28 | - ``out``: output folder to save the weights of the trained neural network (default: ``./temp_edge``)
29 | - ``nmin``: minimum graph size (default: 200)
30 | - ``nmax``: maximum graph size (default: 5000)
31 | - ``ntrain``: number of training graphs (default: 10000)
32 | - ``epochs``: number of epochs (default: 1)
33 | - ``print_rew``: steps to take before printing the reward (default: 1000)
34 | - ``batch``: steps to take before updating the loss function (default: 8)
35 | - ``hops``: number of hops (default: 3)
36 | - ``workers``: number of workers (default: 8)
37 | - ``lr``: learning rate (default: 0.001)
38 | - ``gamma``: discount factor (default: 0.9)
39 | - ``coeff``: critic loss coefficient (default: 0.1)
40 | - ``units``: number of units in the graph convolutional layers (default: 5)
41 | - ``dataset``: dataset type to choose between ``'delaunay'`` and ``'suitesparse'`` (default: ``'delaunay'``). With the first choice, random Delaunay graphs in the unit square are generated before the training. With the second choice, the user needs to download the matrices from the [SuiteSparse matrix collection](https://sparse.tamu.edu/) in the Matrix Market format and put the ``.mtx`` files in the folder ``drl-graph-partitioning/suitesparse_train``. In the paper we focus on matrices coming from 2D/3D discretizations.
42 |
43 | After training, the weights are saved in ``out`` with the name ``model_partitioning_delaunay``.
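Both agents are trained to maximize the decrease in normalized cut: for a partition (A, B), the normalized cut is cut(A,B) * (1/vol(A) + 1/vol(B)), where vol is the sum of the vertex degrees on each side, and the reward for flipping a single vertex is the resulting drop in this quantity (``reward_nc`` in ``drl_partitioning_coarsest_train.py``). A minimal NetworkX sketch of the quantity being optimized (the scripts themselves compute it on PyTorch Geometric tensors instead):
```
import networkx as nx

def normalized_cut(g, side):
    # side[v] in {0, 1}; volumes are sums of vertex degrees,
    # as in cut()/volumes() in the scripts
    cut = sum(1 for u, v in g.edges() if side[u] != side[v])
    vol = [0, 0]
    for v in g.nodes():
        vol[side[v]] += g.degree[v]
    if min(vol) == 0:
        return 2  # degenerate partition: same convention as the scripts
    return cut * (1 / vol[0] + 1 / vol[1])

def flip_reward(g, side, v):
    # reward for moving vertex v to the other side (cf. reward_nc)
    flipped = dict(side)
    flipped[v] = 1 - side[v]
    return normalized_cut(g, side) - normalized_cut(g, flipped)

g = nx.convert_node_labels_to_integers(nx.grid_2d_graph(4, 4))
side = {v: 0 if v < 8 else 1 for v in g.nodes()}
print(normalized_cut(g, side), flip_reward(g, side, 7))
```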
44 |
45 | ### Testing the DRL agent on several datasets
46 | Run ``drl_partitioning_test.py`` with the following arguments
47 | - ``out``: output folder to save the weights of the trained neural network (default: ``./temp_edge``)
48 | - ``nmin``: minimum graph size (default: 100)
49 | - ``nmax``: maximum graph size (default: 10000)
50 | - ``ntest``: number of testing graphs (default: 1000)
51 | - ``hops``: number of hops (default: 3)
52 | - ``units``: number of units in the graph convolutional layers in the loaded refining DNN (default: 5)
53 | - ``units_conv``: units for the 4 convolutional layers in the loaded DNN for the coarsest graph (default: 30, 30, 30, 30)
54 | - ``units_dense``: units for the 3 linear layers in the DNN for the coarsest graph (default: 30, 30, 20)
55 | - ``attempts``: number of attempts to make (default: 3)
56 | - ``dataset``: dataset type to choose among ``'delaunay'``, ``'suitesparse'``, and the Finite Elements triangulations ``graded_l``, ``hole3``, ``hole6`` (default: ``'delaunay'``). With the first choice, random Delaunay graphs in the unit square are generated before the evaluation. With the second choice, the user needs to download the matrices from the [SuiteSparse matrix collection](https://sparse.tamu.edu/) in the Matrix Market format and put the ``.mtx`` files in the folder ``drl-graph-partitioning/suitesparse``. In the paper we focus on matrices coming from 2D/3D discretizations. For the Finite Elements triangulations, the user can download the matrices from [here](https://portal.nersc.gov/project/sparse/strumpack/fe_triangulations.tar.xz) and put the 3 folders in ``drl-graph-partitioning/``.
57 |
58 | Make sure that the arguments ``units``, ``units_conv`` and ``units_dense`` are the same as those used in the training phases.
59 | For each graph in the dataset the following are returned: the normalized cut, the corresponding volumes and the cut, computed with DRL, DRL_METIS, METIS and SCOTCH.
60 |
61 | ## Minimal vertex separator
62 |
63 | Given a graph G=(V,E), the goal is to find a minimal vertex separator such that the corresponding normalized separator is minimized. Also in this case we look for approximate solutions. In order to do that, we train a Distributed Advantage Actor-Critic (DA2C) agent to refine the vertex separator obtained by interpolating back the separator computed on a coarser representation of the graph. More precisely, the initial graph is coarsened until it has a small number of nodes, then a minimal vertex separator is computed on the coarsest graph and interpolated back one level, where it is further refined. This process is applied recursively up to the initial graph. To find the minimal vertex separator on the coarsest graph one can use the software METIS (through the NetworkX-METIS wrapper), as sketched below. The agent is implemented by a two-headed deep neural network containing graph convolutional layers.
64 |
65 | The details about the constructions and the training procedures can be found in Section 4 of the paper, while Section 5 contains an application of the above model to the Nested Dissection Sparse Matrix Ordering. The code for the training, the evaluation and the nested dissection ordering is in the ``separator`` folder.
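On the coarsest level the separator is obtained from METIS through the NetworkX-METIS wrapper. A minimal sketch of that step (assuming the wrapper's ``vertex_separator`` entry point; the DRL agent then refines the interpolated separator at every level):
```
import networkx as nx
import nxmetis  # NetworkX-METIS wrapper, see "Required software" below

g = nx.convert_node_labels_to_integers(nx.grid_2d_graph(10, 10))
sep, part_a, part_b = nxmetis.vertex_separator(g)

# removing the separator leaves no edge between the two parts
h = g.copy()
h.remove_nodes_from(sep)
assert not any(h.has_edge(u, v) for u in part_a for v in part_b)
print(len(sep), len(part_a), len(part_b))
```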
66 |
67 | ### Training the DRL agent to refine a given vertex separator partition
68 | Run ``drl_separator_train.py`` with the following arguments
69 | - ``out``: output folder to save the weights of the trained neural network (default: ``./temp_edge``)
70 | - ``nmin``: minimum graph size (default: 200)
71 | - ``nmax``: maximum graph size (default: 5000)
72 | - ``ntrain``: number of training graphs (default: 10000)
73 | - ``epochs``: number of epochs (default: 1)
74 | - ``print_rew``: steps to take before printing the reward (default: 1000)
75 | - ``batch``: steps to take before updating the loss function (default: 8)
76 | - ``hops``: number of hops (default: 3)
77 | - ``workers``: number of workers (default: 8)
78 | - ``lr``: learning rate (default: 0.001)
79 | - ``gamma``: discount factor (default: 0.9)
80 | - ``coeff``: critic loss coefficient (default: 0.1)
81 | - ``units``: number of units in the graph convolutional layers (default: 7)
82 | - ``dataset``: dataset type to choose between ``'delaunay'`` and ``'suitesparse'`` (default: ``'delaunay'``). With the first choice, random Delaunay graphs in the unit square are generated before the training. With the second choice, the user needs to download the matrices from the [SuiteSparse matrix collection](https://sparse.tamu.edu/) in the Matrix Market format and put the ``.mtx`` files in the folder ``drl-graph-partitioning/suitesparse_train``. In the paper we focus on matrices coming from 2D/3D discretizations.
83 |
84 | After training, the weights are saved in ``out`` with the name ``model_separator_delaunay``.
85 |
86 | ### Testing the DRL agent on several datasets
87 | Run ``drl_separator_test.py`` with the following arguments
88 | - ``out``: output folder to save the weights of the trained neural network (default: ``./temp_edge``)
89 | - ``nmin``: minimum graph size (default: 100)
90 | - ``nmax``: maximum graph size (default: 10000)
91 | - ``ntest``: number of testing graphs (default: 1000)
92 | - ``hops``: number of hops (default: 3)
93 | - ``units``: number of units in the graph convolutional layers in the loaded refining DNN (default: 7)
94 | - ``attempts``: number of attempts to make (default: 3)
95 | - ``dataset``: dataset type to choose among ``'delaunay'``, ``'suitesparse'``, and the Finite Elements triangulations ``graded_l``, ``hole3``, ``hole6`` (default: ``'delaunay'``). With the first choice, random Delaunay graphs in the unit square are generated before the evaluation. With the second choice, the user needs to download the matrices from the [SuiteSparse matrix collection](https://sparse.tamu.edu/) in the Matrix Market format and put the ``.mtx`` files in the folder ``drl-graph-partitioning/suitesparse``. In the paper we focus on matrices coming from 2D/3D discretizations. For the Finite Elements triangulations, the user can download the matrices from [here](https://portal.nersc.gov/project/sparse/strumpack/fe_triangulations.tar.xz) and put the 3 folders in ``drl-graph-partitioning/``.
96 |
97 | For each graph in the dataset the normalized separators computed with DRL and METIS are returned.
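For reference, the random Delaunay graphs used by the ``'delaunay'`` option are built with the same construction that appears in all the scripts (``graph_delaunay_from_points``): triangulate random points in the unit square and connect every pair of vertices that shares a triangle.
```
import numpy as np
import networkx as nx
from itertools import combinations
from scipy.spatial import Delaunay

def graph_delaunay_from_points(points):
    # same construction as in the training/testing scripts
    mesh = Delaunay(points, qhull_options="QJ")
    edges = set()
    for simplex in mesh.simplices:
        edges.update(combinations(simplex, 2))
    return nx.Graph(list(edges))

g = graph_delaunay_from_points(np.random.random_sample((500, 2)))
print(g.number_of_nodes(), g.number_of_edges())
```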
98 |
99 | ### DRL agent for the nested dissection ordering
100 | Run ``drl_nd_testing.py`` with the following arguments
101 | - ``out``: output folder to save the weights of the trained neural network (default: ``./temp_edge``)
102 | - ``nmin``: minimum graph size (default: 100)
103 | - ``nmax``: maximum graph size (default: 10000)
104 | - ``ntest``: number of testing graphs (default: 1000)
105 | - ``hops``: number of hops (default: 3)
106 | - ``units``: number of units in the graph convolutional layers in the loaded refining DNN (default: 7)
107 | - ``attempts``: number of attempts to make (default: 3)
108 | - ``dataset``: dataset type to choose among ``'delaunay'``, ``'suitesparse'``, and the Finite Elements triangulations ``graded_l``, ``hole3``, ``hole6`` (default: ``'delaunay'``). With the first choice, random Delaunay graphs in the unit square are generated before the evaluation. With the second choice, the user needs to download the matrices from the [SuiteSparse matrix collection](https://sparse.tamu.edu/) in the Matrix Market format and put the ``.mtx`` files in the folder ``drl-graph-partitioning/suitesparse``. In the paper we focus on matrices coming from 2D/3D discretizations. For the Finite Elements triangulations, the user can download the matrices from [here](https://portal.nersc.gov/project/sparse/strumpack/fe_triangulations.tar.xz) and put the 3 folders in ``drl-graph-partitioning/``.
109 |
110 | For each graph in the dataset, the number of non-zero entries in the LU factorization of the associated adjacency matrix is returned, as computed with DRL, METIS_ND, COLAMD, METIS and SCOTCH.
111 |
112 | ## Required software
113 | Most of the required software is listed in the ``requirements.txt`` file. [Networkx-METIS](https://networkx-metis.readthedocs.io/en/latest/index.html) can be installed as follows
114 | ```
115 | git clone https://github.com/networkx/networkx-metis.git
116 | cd networkx-metis
117 | python setup.py build
118 | python setup.py install
119 | ```
120 | See also the [installation page](https://networkx-metis.readthedocs.io/en/latest/install.html). You also need to build AMD and Scotch yourself.
121 |
122 | ### Scotch
123 | The following steps assume you are using Linux.
124 |
125 | To install Scotch, first download Scotch from [here](https://gitlab.inria.fr/scotch/scotch/-/releases/v6.1.0) and extract it in your home directory. Then go to ``scotch-v6.1.0/src/Make.inc``, copy the file ``Makefile.inc.x86-64_pc_linux2`` into the ``src`` directory, and rename it to ``Makefile.inc``. Open the file and add ``-DINTSIZE32`` to ``CFLAGS``. Open a terminal, go to ``~/scotch-v6.1.0/src`` and type ``make``. This will compile Scotch.
126 |
127 | Once you have done this, in a terminal type
128 | ```
129 | cd ~/drl-graph-partitioning/partitioning/scotch
130 | bash build.sh
131 | ```
132 | This will compile the Scotch wrapper. Do the same for the Scotch wrapper in the ``separator`` folder.
133 |
134 | ### AMD
135 | Open a terminal and write
136 | ```
137 | cd ~/drl-graph-partitioning/separator/amd
138 | bash build.sh
139 | ```
140 | This will compile the AMD wrapper.
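Once built, the wrappers are loaded from the scripts with ``ctypes``. A minimal smoke test for the Scotch wrapper, mirroring ``scotch_partition`` in ``drl_partitioning_test.py`` (run it from the ``partitioning`` folder, since the library path is relative):
```
import ctypes
import numpy as np
import scipy.sparse
import networkx as nx

libscotch = ctypes.cdll.LoadLibrary('scotch/build/libSCOTCHWrapper.so')

g = nx.grid_2d_graph(20, 20)
a = nx.to_scipy_sparse_array(g, format='coo', dtype=np.int32)
a = scipy.sparse.csr_matrix((a.data, (a.row, a.col)), dtype=np.int32)
n = a.shape[0]
part = np.zeros(n, dtype=np.int32)  # 0/1 side of each vertex, filled by SCOTCH
libscotch.WRAPPER_SCOTCH_graphPart(
    ctypes.c_int(n),
    ctypes.c_void_p(a.indptr.ctypes.data),
    ctypes.c_void_p(a.indices.ctypes.data),
    ctypes.c_void_p(part.ctypes.data))
print(np.bincount(part))  # sizes of the two parts
```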
141 | -------------------------------------------------------------------------------- /partitioning/drl_partitioning_coarsest_train.py: -------------------------------------------------------------------------------- 1 | 2 | import argparse 3 | from pathlib import Path 4 | 5 | import networkx as nx 6 | 7 | import torch 8 | import torch.nn as nn 9 | 10 | from torch_geometric.data import Data, DataLoader, Batch 11 | from torch_geometric.nn import GATConv, GlobalAttention 12 | from torch_geometric.utils import to_networkx, degree 13 | 14 | import numpy as np 15 | 16 | import scipy 17 | from scipy.sparse import coo_matrix 18 | from scipy.io import mmread 19 | from scipy.spatial import Delaunay 20 | 21 | import math 22 | import copy 23 | import timeit 24 | import os 25 | from itertools import combinations 26 | 27 | # Cut of the input graph 28 | 29 | 30 | def cut(graph): 31 | cut = torch.sum((graph.x[graph.edge_index[0], 32 | :2] != graph.x[graph.edge_index[1], 33 | :2]).all(axis=-1)).detach().item() / 2 34 | return cut 35 | 36 | # Volumes of the partitions 37 | 38 | 39 | def volumes(graph): 40 | ia = torch.where( 41 | (graph.x[:, :2] == torch.tensor([1.0, 0.0])).all(axis=-1))[0] 42 | ib = torch.where( 43 | (graph.x[:, :2] != torch.tensor([1.0, 0.0])).all(axis=-1))[0] 44 | degs = degree( 45 | graph.edge_index[0], 46 | num_nodes=graph.x.size(0), 47 | dtype=torch.uint8) 48 | da = torch.sum(degs[ia]).detach().item() 49 | db = torch.sum(degs[ib]).detach().item() 50 | cut = torch.sum((graph.x[graph.edge_index[0], 51 | :2] != graph.x[graph.edge_index[1], 52 | :2]).all(axis=-1)).detach().item() / 2 53 | return cut, da, db 54 | 55 | # Normalized cut of the input graph 56 | 57 | 58 | def normalized_cut(graph): 59 | cut, da, db = volumes(graph) 60 | if da == 0 or db == 0: 61 | return 2 62 | else: 63 | return cut * (1 / da + 1 / db) 64 | 65 | # Change the feature of the selected vertex 66 | 67 | 68 | def change_vertex(state, vertex): 69 | if (state.x[vertex] == torch.tensor([1., 0.])).all(): 70 | state.x[vertex] = torch.tensor([0., 1.]) 71 | else: 72 | state.x[vertex] = torch.tensor([1., 0.]) 73 | return state 74 | 75 | # Reward to train the DRL agent 76 | 77 | 78 | def reward_nc(state, vertex): 79 | new_state = state.clone() 80 | new_state = change_vertex(new_state, vertex) 81 | return normalized_cut(state) - normalized_cut(new_state) 82 | 83 | # Networkx geometric Delaunay mesh with n random points in the unit square 84 | 85 | 86 | def graph_delaunay_from_points(points): 87 | mesh = Delaunay(points, qhull_options="QJ") 88 | mesh_simp = mesh.simplices 89 | edges = [] 90 | for i in range(len(mesh_simp)): 91 | edges += combinations(mesh_simp[i], 2) 92 | e = list(set(edges)) 93 | return nx.Graph(e) 94 | 95 | # Build a pytorch geometric graph with features [1,0] form a networkx 96 | # graph. 
Then it turns the feature of one of the vertices with minimum 97 | # degree into [0,1] 98 | 99 | 100 | def torch_from_graph(g): 101 | 102 | adj_sparse = nx.to_scipy_sparse_array(g, format='coo') 103 | row = adj_sparse.row 104 | col = adj_sparse.col 105 | 106 | one_hot = [] 107 | for i in range(g.number_of_nodes()): 108 | one_hot.append([1., 0.]) 109 | 110 | edges = torch.tensor([row, col], dtype=torch.long) 111 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 112 | graph_torch = Data(x=nodes, edge_index=edges) 113 | 114 | degs = np.sum(adj_sparse.todense(), axis=0) 115 | first_vertices = np.where(degs == np.min(degs))[0] 116 | first_vertex = np.random.choice(first_vertices) 117 | change_vertex(graph_torch, first_vertex) 118 | 119 | return graph_torch 120 | 121 | # Training dataset made of Delaunay graphs generated from random points in 122 | # the unit square 123 | 124 | 125 | def delaunay_dataset(n, n_min, n_max): 126 | dataset = [] 127 | for i in range(n): 128 | num_nodes = np.random.choice(np.arange(n_min, n_max + 1, 2)) 129 | points = np.random.random_sample((num_nodes, 2)) 130 | g = graph_delaunay_from_points(points) 131 | g_t = torch_from_graph(g) 132 | dataset.append(g_t) 133 | loader = DataLoader(dataset, batch_size=1, shuffle=False) 134 | return loader 135 | 136 | # DRL training loop 137 | 138 | 139 | def training_loop( 140 | model, 141 | training_dataset, 142 | gamma, 143 | time_to_sample, 144 | coeff, 145 | optimizer, 146 | print_loss 147 | ): 148 | 149 | i = 0 150 | 151 | # Here start the main loop for training 152 | for graph in training_dataset: 153 | 154 | if i % print_loss == 0 and i > 0: 155 | print('graph:', i, ' reward:', rew_partial) 156 | rew_partial = 0 157 | 158 | first_vertex = torch.where(graph.x == torch.tensor([0., 1.]))[ 159 | 0][0].item() 160 | 161 | start = graph 162 | 163 | len_episode = math.ceil(start.num_nodes / 2 - 1) # length of an episode 164 | 165 | # this is the array that keeps track of vertices that have been flipped 166 | # yet 167 | S = [first_vertex] 168 | 169 | time = 0 170 | rew_partial = 0 171 | 172 | rews, vals, logprobs = [], [], [] 173 | 174 | # Here starts the episode related to the graph "start" 175 | while time < len_episode: 176 | 177 | # we evaluate the A2C agent on the graph 178 | policy, values = model(start) 179 | probs = policy.view(-1).clone().detach().numpy() 180 | 181 | action = np.random.choice(np.arange(start.num_nodes), p=probs) 182 | 183 | S.append(action.item()) 184 | 185 | rew = reward_nc(start, action) 186 | 187 | # Collect all the rewards in this episode 188 | rew_partial += rew 189 | 190 | # Collect the log-probability of the chosen action 191 | logprobs.append(torch.log(policy.view(-1)[action])) 192 | # Collect the value of the chosen action 193 | vals.append(values) 194 | # Collect the reward 195 | rews.append(rew) 196 | 197 | new_state = start.clone() 198 | # we flip the vertex returned by the policy 199 | new_state = change_vertex(new_state, action) 200 | # Update the state 201 | start = new_state 202 | 203 | time += 1 204 | 205 | # After time_to_sample episodes we update the loss 206 | if i % time_to_sample == 0: # and i > 0: 207 | 208 | logprobs = torch.stack(logprobs).flip(dims=(0,)).view(-1) 209 | vals = torch.stack(vals).flip(dims=(0,)).view(-1) 210 | rews = torch.tensor(rews).flip(dims=(0,)).view(-1) 211 | 212 | # Compute the advantage 213 | R = [] 214 | R_partial = torch.tensor([0.]) 215 | for j in range(rews.shape[0]): 216 | R_partial = rews[j] + gamma * R_partial 217 | R.append(R_partial) 218 | 
219 | R = torch.stack(R).view(-1) 220 | advantage = R - vals.detach() 221 | 222 | # Actor loss 223 | actor_loss = (-1 * logprobs * advantage) 224 | 225 | # Critic loss 226 | critic_loss = torch.pow(R - vals, 2) 227 | 228 | # Finally we update the loss 229 | optimizer.zero_grad() 230 | 231 | loss = torch.mean(actor_loss) + \ 232 | torch.tensor(coeff) * torch.mean(critic_loss) 233 | 234 | rews, vals, logprobs = [], [], [] 235 | 236 | loss.backward() 237 | 238 | optimizer.step() 239 | 240 | rew_partial = 0 # restart the partial reward 241 | 242 | i += 1 243 | 244 | return model 245 | 246 | 247 | if __name__ == "__main__": 248 | parser = argparse.ArgumentParser( 249 | formatter_class=argparse.ArgumentDefaultsHelpFormatter) 250 | parser.add_argument('--out', default='./temp_edge/', type=str) 251 | parser.add_argument( 252 | "--nmin", 253 | default=50, 254 | help="Minimum graph size", 255 | type=int) 256 | parser.add_argument( 257 | "--nmax", 258 | default=100, 259 | help="Maximum graph size", 260 | type=int) 261 | parser.add_argument( 262 | "--ntrain", 263 | default=1000, 264 | help="Number of training graphs", 265 | type=int) 266 | parser.add_argument( 267 | "--print_rew", 268 | default=1000, 269 | help="Steps to take before printing the reward", 270 | type=int) 271 | parser.add_argument("--batch", default=8, help="Batch size", type=int) 272 | parser.add_argument( 273 | "--lr", 274 | default=0.001, 275 | help="Learning rate", 276 | type=float) 277 | parser.add_argument( 278 | "--gamma", 279 | default=0.9, 280 | help="Gamma, discount factor", 281 | type=float) 282 | parser.add_argument( 283 | "--coeff", 284 | default=0.1, 285 | help="Critic loss coefficient", 286 | type=float) 287 | parser.add_argument( 288 | "--units_conv", 289 | default=[ 290 | 30, 291 | 30, 292 | 30, 293 | 30], 294 | help="Number of units in conv layers", 295 | nargs='+', 296 | type=int) 297 | parser.add_argument( 298 | "--units_dense", 299 | default=[ 300 | 30, 301 | 30, 302 | 20], 303 | help="Number of units in linear layers", 304 | nargs='+', 305 | type=int) 306 | 307 | torch.manual_seed(1) 308 | np.random.seed(2) 309 | 310 | args = parser.parse_args() 311 | outdir = args.out + '/' 312 | Path(outdir).mkdir(parents=True, exist_ok=True) 313 | 314 | n_min = args.nmin 315 | n_max = args.nmax 316 | n_train = args.ntrain 317 | coeff = args.coeff 318 | print_loss = args.print_rew 319 | 320 | time_to_sample = args.batch 321 | lr = args.lr 322 | gamma = args.gamma 323 | hid_conv = args.units_conv 324 | hid_lin = args.units_dense 325 | 326 | # Deep neural network that models the DRL agent 327 | 328 | class Model(torch.nn.Module): 329 | def __init__(self): 330 | super(Model, self).__init__() 331 | self.conv1 = GATConv(2, hid_conv[0]) 332 | self.conv2 = GATConv(hid_conv[0], hid_conv[1]) 333 | self.conv3 = GATConv(hid_conv[1], hid_conv[2]) 334 | self.conv4 = GATConv(hid_conv[2], hid_conv[3]) 335 | 336 | self.l1 = nn.Linear(hid_conv[3], hid_lin[0]) 337 | self.l2 = nn.Linear(hid_lin[0], hid_lin[1]) 338 | self.actor1 = nn.Linear(hid_lin[1], hid_lin[2]) 339 | self.actor2 = nn.Linear(hid_lin[2], 1) 340 | 341 | self.GlobAtt = GlobalAttention( 342 | nn.Sequential( 343 | nn.Linear( 344 | hid_lin[1], hid_lin[1]), nn.Tanh(), nn.Linear( 345 | hid_lin[1], 1))) 346 | self.critic1 = nn.Linear(hid_lin[1], hid_lin[2]) 347 | self.critic2 = nn.Linear(hid_lin[2], 1) 348 | 349 | def forward(self, graph): 350 | x_start, edge_index, batch = graph.x, graph.edge_index, graph.batch 351 | 352 | x = self.conv1(graph.x, edge_index) 353 | x = torch.tanh(x) 354 | 
x = self.conv2(x, edge_index) 355 | x = torch.tanh(x) 356 | x = self.conv3(x, edge_index) 357 | x = torch.tanh(x) 358 | x = self.conv4(x, edge_index) 359 | x = torch.tanh(x) 360 | 361 | x = self.l1(x) 362 | x = torch.tanh(x) 363 | x = self.l2(x) 364 | x = torch.tanh(x) 365 | 366 | x_actor = self.actor1(x) 367 | x_actor = torch.tanh(x_actor) 368 | x_actor = self.actor2(x_actor) 369 | flipped = torch.where( 370 | (x_start == torch.tensor([0., 1.])).all(axis=-1))[0] 371 | x_actor.data[flipped] = torch.tensor(-np.Inf) 372 | x_actor = torch.softmax(x_actor, dim=0) 373 | 374 | x_critic = self.GlobAtt(x, batch) 375 | x_critic = self.critic1(x_critic) 376 | x_critic = torch.tanh(x_critic) 377 | x_critic = self.critic2(x_critic) 378 | 379 | return x_actor, x_critic 380 | 381 | dataset = delaunay_dataset(n_train, n_min, n_max) 382 | 383 | model = Model() 384 | print(model) 385 | print('Model parameters:', 386 | sum([w.nelement() for w in model.parameters()])) 387 | 388 | optimizer = torch.optim.Adam(model.parameters(), lr=lr) 389 | 390 | # Training 391 | print('Start training') 392 | t0 = timeit.default_timer() 393 | model = training_loop( 394 | model, 395 | dataset, 396 | gamma, 397 | time_to_sample, 398 | coeff, 399 | optimizer, 400 | print_loss) 401 | ttrain = timeit.default_timer() - t0 402 | print('Training took:', ttrain, 'seconds') 403 | 404 | # Saving the model 405 | torch.save(model.state_dict(), outdir + 'model_coarsest') 406 | -------------------------------------------------------------------------------- /partitioning/drl_partitioning_test.py: -------------------------------------------------------------------------------- 1 | 2 | import argparse 3 | from pathlib import Path 4 | 5 | import networkx as nx 6 | import nxmetis 7 | 8 | import torch 9 | import torch.nn as nn 10 | import torch.multiprocessing as mp 11 | 12 | from torch_geometric.data import Data, DataLoader, Batch 13 | from torch_geometric.nn import SAGEConv, GATConv, GlobalAttention, graclus, avg_pool, global_mean_pool 14 | from torch_geometric.utils import to_networkx, k_hop_subgraph, degree 15 | 16 | import numpy as np 17 | from numpy import random 18 | 19 | import scipy 20 | from scipy.sparse import coo_matrix 21 | from scipy.io import mmread 22 | from scipy.spatial import Delaunay 23 | 24 | #import random_p 25 | import copy 26 | import math 27 | import timeit 28 | import os 29 | from itertools import combinations 30 | 31 | import ctypes 32 | libscotch = ctypes.cdll.LoadLibrary('scotch/build/libSCOTCHWrapper.so') 33 | 34 | 35 | # Networkx geometric Delaunay mesh with n random points in the unit square 36 | def graph_delaunay_from_points(points): 37 | mesh = Delaunay(points, qhull_options="QJ") 38 | mesh_simp = mesh.simplices 39 | edges = [] 40 | for i in range(len(mesh_simp)): 41 | edges += combinations(mesh_simp[i], 2) 42 | e = list(set(edges)) 43 | return nx.Graph(e) 44 | 45 | # Pytorch geometric Delaunay mesh with n random points in the unit square 46 | 47 | 48 | def random_delaunay_graph(n): 49 | points = np.random.random_sample((n, 2)) 50 | g = graph_delaunay_from_points(points) 51 | 52 | adj_sparse = nx.to_scipy_sparse_array(g, format='coo') 53 | row = adj_sparse.row 54 | col = adj_sparse.col 55 | 56 | one_hot = [] 57 | for i in range(g.number_of_nodes()): 58 | one_hot.append([1., 0.]) 59 | 60 | edges = torch.tensor([row, col], dtype=torch.long) 61 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 62 | graph_torch = Data(x=nodes, edge_index=edges) 63 | 64 | return graph_torch 65 | 66 | # Build a pytorch 
geometric graph with features [1,0] form a networkx graph 67 | 68 | def torch_from_graph(g): 69 | 70 | adj_sparse = nx.to_scipy_sparse_array(g, format='coo') 71 | row = adj_sparse.row 72 | col = adj_sparse.col 73 | 74 | one_hot = [] 75 | for i in range(g.number_of_nodes()): 76 | one_hot.append([1., 0.]) 77 | 78 | edges = torch.tensor([row, col], dtype=torch.long) 79 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 80 | graph_torch = Data(x=nodes, edge_index=edges) 81 | 82 | degs = np.sum(adj_sparse.todense(), axis=0) 83 | first_vertices = np.where(degs == np.min(degs))[0] 84 | first_vertex = np.random.choice(first_vertices) 85 | change_vertex(graph_torch, first_vertex) 86 | 87 | return graph_torch 88 | 89 | 90 | # Build a pytorch geometric graph with features [1,0] form a sparse matrix 91 | 92 | 93 | def torch_from_sparse(adj_sparse): 94 | 95 | row = adj_sparse.row 96 | col = adj_sparse.col 97 | 98 | features = [] 99 | for i in range(adj_sparse.shape[0]): 100 | features.append([1., 0.]) 101 | 102 | edges = torch.tensor([row, col], dtype=torch.long) 103 | nodes = torch.tensor(np.array(features), dtype=torch.float) 104 | graph_torch = Data(x=nodes, edge_index=edges) 105 | 106 | return graph_torch 107 | 108 | # Cut of the input graph 109 | 110 | 111 | def cut(graph): 112 | cut = torch.sum((graph.x[graph.edge_index[0], 113 | :2] != graph.x[graph.edge_index[1], 114 | :2]).all(axis=-1)).detach().item() / 2 115 | return cut 116 | 117 | # Change the feature of the selected vertex 118 | 119 | 120 | def change_vertex(state, vertex): 121 | if (state.x[vertex, :2] == torch.tensor([1., 0.])).all(): 122 | state.x[vertex, 0] = torch.tensor(0.) 123 | state.x[vertex, 1] = torch.tensor(1.) 124 | else: 125 | state.x[vertex, 0] = torch.tensor(1.) 126 | state.x[vertex, 1] = torch.tensor(0.) 127 | 128 | return state 129 | 130 | # Normalized cut of the input graph 131 | 132 | 133 | def normalized_cut(graph): 134 | cut, da, db = volumes(graph) 135 | if da == 0 or db == 0: 136 | return 2 137 | else: 138 | return cut * (1 / da + 1 / db) 139 | 140 | # Coarsen a pytorch geometric graph, then find the cut with METIS and 141 | # interpolate it back 142 | 143 | 144 | def partition_metis_refine(graph): 145 | cluster = graclus(graph.edge_index) 146 | coarse_graph = avg_pool( 147 | cluster, 148 | Batch( 149 | batch=graph.batch, 150 | x=graph.x, 151 | edge_index=graph.edge_index)) 152 | coarse_graph_nx = to_networkx(coarse_graph, to_undirected=True) 153 | _, parts = nxmetis.partition(coarse_graph_nx, 2) 154 | mparts = np.array(parts) 155 | coarse_graph.x[np.array(parts[0])] = torch.tensor([1., 0.]) 156 | coarse_graph.x[np.array(parts[1])] = torch.tensor([0., 1.]) 157 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 158 | graph.x = coarse_graph.x[inverse] 159 | return graph 160 | 161 | # Subgraph around the cut 162 | 163 | 164 | def k_hop_graph_cut(graph, k, g, va, vb): 165 | nei = torch.where((graph.x[graph.edge_index[0], :2] != 166 | graph.x[graph.edge_index[1], :2]).all(axis=-1))[0] 167 | neib = graph.edge_index[0][nei] 168 | data_cut = k_hop_subgraph(neib, k, graph.edge_index, relabel_nodes=True) 169 | data_small = k_hop_subgraph( 170 | neib, 171 | k - 1, 172 | graph.edge_index, 173 | relabel_nodes=True) 174 | nodes_boundary = list( 175 | set(data_cut[0].numpy()).difference(data_small[0].numpy())) 176 | boundary_features = torch.tensor([1. if i.item( 177 | ) in nodes_boundary else 0. 
for i in data_cut[0]]).reshape(data_cut[0].shape[0], 1) 178 | e = torch.ones(data_cut[0].shape[0], 1) 179 | nnz = graph.num_edges 180 | features = torch.cat((graph.x[data_cut[0]], boundary_features, torch.true_divide( 181 | va, nnz) * e, torch.true_divide(vb, nnz) * e), 1) 182 | g_red = Batch( 183 | batch=torch.zeros( 184 | data_cut[0].shape[0], 185 | dtype=torch.long), 186 | x=features, 187 | edge_index=data_cut[1]) 188 | return g_red, data_cut[0] 189 | 190 | # Volumes of the partitions 191 | 192 | 193 | def volumes(graph): 194 | ia = torch.where( 195 | (graph.x[:, :2] == torch.tensor([1.0, 0.0])).all(axis=-1))[0] 196 | ib = torch.where( 197 | (graph.x[:, :2] != torch.tensor([1.0, 0.0])).all(axis=-1))[0] 198 | degs = degree( 199 | graph.edge_index[0], 200 | num_nodes=graph.x.size(0), 201 | dtype=torch.uint8) 202 | da = torch.sum(degs[ia]).detach().item() 203 | db = torch.sum(degs[ib]).detach().item() 204 | cut = torch.sum((graph.x[graph.edge_index[0], 205 | :2] != graph.x[graph.edge_index[1], 206 | :2]).all(axis=-1)).detach().item() / 2 207 | return cut, da, db 208 | 209 | # Full valuation of the DRL model 210 | 211 | 212 | def ac_eval_coarse_full_drl(ac, graph, k, ac2): 213 | g = graph.clone() 214 | info = [] 215 | edge_info = [] 216 | while g.num_nodes > 100: 217 | edge_info.append(g.edge_index) 218 | cluster = graclus(g.edge_index) 219 | info.append(cluster) 220 | g1 = avg_pool( 221 | cluster, 222 | Batch( 223 | batch=g.batch, 224 | x=g.x, 225 | edge_index=g.edge_index)) 226 | g = g1 227 | 228 | gnx = to_networkx(g, to_undirected=True) 229 | g = torch_from_graph(gnx) 230 | g.batch = torch.zeros(g.num_nodes, dtype=torch.long) 231 | g = ac_eval(ac2, g, 0.01) 232 | 233 | while len(info) > 0: 234 | cluster = info.pop() 235 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 236 | g.x = g.x[inverse] 237 | g.edge_index = edge_info.pop() 238 | _, volA, volB = volumes(g) 239 | gnx = to_networkx(g, to_undirected=True) 240 | g = ac_eval_refine(ac, g, k, gnx, volA, volB) 241 | return g 242 | 243 | # Full valuation of the DRL model repeated for trials number of times. 244 | # Then the best partition is returned 245 | 246 | 247 | def ac_eval_coarse_full_trials_drl(ac, graph, k, trials, ac2): 248 | graph_test = graph.clone() 249 | gg = ac_eval_coarse_full_drl(ac, graph_test, k, ac2) 250 | ncut = normalized_cut(gg) 251 | for j in range(1, trials): 252 | gg1 = ac_eval_coarse_full_drl(ac, graph_test, k, ac2) 253 | if normalized_cut(gg1) < ncut: 254 | ncut = normalized_cut(gg1) 255 | gg = gg1 256 | 257 | return gg 258 | 259 | # Full valuation of the DRL_METIS model repeated for trials number of 260 | # times. 
Then the best partition is returned 261 | 262 | 263 | def ac_eval_coarse_full_trials(ac, graph, k, trials): 264 | graph_test = graph.clone() 265 | gg = ac_eval_coarse_full(ac, graph_test, k) 266 | ncut = normalized_cut(gg) 267 | for j in range(1, trials): 268 | gg1 = ac_eval_coarse_full(ac, graph_test, k) 269 | if normalized_cut(gg1) < ncut: 270 | ncut = normalized_cut(gg1) 271 | gg = gg1 272 | 273 | return gg 274 | 275 | # Full valuation of the DRL_METIS model 276 | 277 | 278 | def ac_eval_coarse_full(ac, graph, k): 279 | g = graph.clone() 280 | info = [] 281 | edge_info = [] 282 | while g.num_nodes > 100: 283 | edge_info.append(g.edge_index) 284 | cluster = graclus(g.edge_index) 285 | info.append(cluster) 286 | g1 = avg_pool( 287 | cluster, 288 | Batch( 289 | batch=g.batch, 290 | x=g.x, 291 | edge_index=g.edge_index)) 292 | g = g1 293 | 294 | gnx = to_networkx(g, to_undirected=True) 295 | g = partition_metis(g, gnx) 296 | 297 | while len(info) > 0: 298 | cluster = info.pop() 299 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 300 | g.x = g.x[inverse] 301 | g.edge_index = edge_info.pop() 302 | _, volA, volB = volumes(g) 303 | gnx = to_networkx(g, to_undirected=True) 304 | g = ac_eval_refine(ac, g, k, gnx, volA, volB) 305 | 306 | return g 307 | 308 | # Partitioning of a pytorch geometric graph obtained with METIS 309 | 310 | 311 | def partition_metis(graph, graph_nx): 312 | obj, parts = nxmetis.partition(graph_nx, 2) 313 | #mparts = np.array(parts) 314 | graph.x[parts[0]] = torch.tensor([1., 0.]) 315 | graph.x[parts[1]] = torch.tensor([0., 1.]) 316 | 317 | return graph 318 | 319 | # Refining the cut on the subgraph around the cut 320 | 321 | 322 | def ac_eval_refine(ac, graph_t, k, gnx, volA, volB): 323 | graph = graph_t.clone() 324 | g0 = graph_t.clone() 325 | data = k_hop_graph_cut(graph, k, gnx, volA, volB) 326 | graph_cut, positions = data[0], data[1] 327 | 328 | len_episod = int(cut(graph)) 329 | 330 | peak_reward = 0 331 | peak_time = 0 332 | total_reward = 0 333 | actions = [] 334 | 335 | e = torch.ones(graph_cut.num_nodes, 1) 336 | nnz = graph.num_edges 337 | cut_sub = len_episod 338 | for i in range(len_episod): 339 | with torch.no_grad(): 340 | policy = ac(graph_cut) 341 | probs = policy.view(-1).clone().detach().numpy() 342 | flip = np.argmax(probs) 343 | 344 | dv = gnx.degree[positions[flip].item()] 345 | old_nc = cut_sub * (torch.true_divide(1, volA) + 346 | torch.true_divide(1, volB)) 347 | if graph_cut.x[flip, 0] == 1.: 348 | volA = volA - dv 349 | volB = volB + dv 350 | else: 351 | volA = volA + dv 352 | volB = volB - dv 353 | new_nc, cut_sub = update_nc( 354 | graph, gnx, cut_sub, positions[flip].item(), volA, volB) 355 | total_reward += (old_nc - new_nc).item() 356 | 357 | actions.append(flip) 358 | 359 | change_vertex(graph_cut, flip) 360 | change_vertex(graph, positions[flip]) 361 | 362 | graph_cut.x[:, 3] = torch.true_divide(volA, nnz) 363 | graph_cut.x[:, 4] = torch.true_divide(volB, nnz) 364 | 365 | if i >= 1 and actions[-1] == actions[-2]: 366 | break 367 | if total_reward > peak_reward: 368 | peak_reward = total_reward 369 | peak_time = i + 1 370 | 371 | for t in range(peak_time): 372 | g0 = change_vertex(g0, positions[actions[t]]) 373 | 374 | return g0 375 | 376 | # Compute the update for the normalized cut 377 | 378 | 379 | def update_nc(graph, gnx, cut_total, v1, va, vb): 380 | c_v1 = 0 381 | for v in gnx[v1]: 382 | if graph.x[v, 0] != graph.x[v1, 0]: 383 | c_v1 += 1 384 | else: 385 | c_v1 -= 1 386 | cut_new = cut_total - c_v1 387 | return 
cut_new * (torch.true_divide(1, va) + 388 | torch.true_divide(1, vb)), cut_new 389 | 390 | # Evaluation of the DRL model on the coarsest graph 391 | 392 | 393 | def ac_eval(ac, graph, perc): 394 | graph_test = graph.clone() 395 | error_bal = math.ceil(graph_test.num_nodes * perc) 396 | cuts = [] 397 | nodes = [] 398 | # Run the episod 399 | for i in range(int(graph_test.num_nodes / 2 - 1 + error_bal)): 400 | policy, _ = ac(graph_test) 401 | policy = policy.view(-1).detach().numpy() 402 | flip = random.choice(torch.arange(0, graph_test.num_nodes), p=policy) 403 | graph_test = change_vertex(graph_test, flip) 404 | if i >= int(graph_test.num_nodes / 2 - 1 - error_bal): 405 | cuts.append(cut(graph_test)) 406 | nodes.append(flip) 407 | if len(cuts) > 0: 408 | stops = np.argwhere(cuts == np.min(cuts)) 409 | stops = stops.reshape((stops.shape[0],)) 410 | if len(stops) == 1: 411 | graph_test.x[nodes[stops[0] + 1:]] = torch.tensor([1., 0.]) 412 | else: 413 | diff = [np.abs(i - int(len(stops) / 2 - 1)) for i in stops] 414 | min_dist = np.argwhere(diff == np.min(diff)) 415 | min_dist = min_dist.reshape((min_dist.shape[0],)) 416 | stop = np.random.choice(stops[min_dist]) 417 | graph_test.x[nodes[stop + 1:]] = torch.tensor([1., 0.]) 418 | 419 | return graph_test 420 | 421 | # Partitioning provided by SCOTCH 422 | 423 | 424 | def scotch_partition(g): 425 | gnx = to_networkx(g, to_undirected=True) 426 | a = nx.to_scipy_sparse_array(gnx, format="coo", dtype=np.int32) 427 | a = scipy.sparse.csr_matrix((a.data, (a.row, a.col)), dtype=np.int32) 428 | n = g.num_nodes 429 | part = np.zeros(n, dtype=np.int32) 430 | libscotch.WRAPPER_SCOTCH_graphPart( 431 | ctypes.c_int(n), 432 | ctypes.c_void_p(a.indptr.ctypes.data), 433 | ctypes.c_void_p(a.indices.ctypes.data), 434 | ctypes.c_void_p(part.ctypes.data) 435 | ) 436 | g.x[np.where(part == 0)] = torch.tensor([1., 0.]) 437 | g.x[np.where(part == 1)] = torch.tensor([0., 1.]) 438 | return g 439 | 440 | # Deep neural network for the DRL agent 441 | 442 | 443 | class Model(torch.nn.Module): 444 | def __init__(self, units): 445 | super(Model, self).__init__() 446 | 447 | self.units = units 448 | self.common_layers = 1 449 | self.critic_layers = 1 450 | self.actor_layers = 1 451 | self.activation = torch.tanh 452 | 453 | self.conv_first = SAGEConv(5, self.units) 454 | self.conv_common = nn.ModuleList( 455 | [SAGEConv(self.units, self.units) 456 | for i in range(self.common_layers)] 457 | ) 458 | self.conv_actor = nn.ModuleList( 459 | [SAGEConv(self.units, 460 | 1 if i == self.actor_layers - 1 else self.units) 461 | for i in range(self.actor_layers)] 462 | ) 463 | self.conv_critic = nn.ModuleList( 464 | [SAGEConv(self.units, self.units) 465 | for i in range(self.critic_layers)] 466 | ) 467 | self.final_critic = nn.Linear(self.units, 1) 468 | 469 | def forward(self, graph): 470 | x, edge_index, batch = graph.x, graph.edge_index, graph.batch 471 | 472 | do_not_flip = torch.where(x[:, 2] != 0.) 
473 | 474 | x = self.activation(self.conv_first(x, edge_index)) 475 | for i in range(self.common_layers): 476 | x = self.activation(self.conv_common[i](x, edge_index)) 477 | 478 | x_actor = x 479 | for i in range(self.actor_layers): 480 | x_actor = self.conv_actor[i](x_actor, edge_index) 481 | if i < self.actor_layers - 1: 482 | x_actor = self.activation(x_actor) 483 | x_actor[do_not_flip] = torch.tensor(-np.Inf) 484 | x_actor = torch.log_softmax(x_actor, dim=0) 485 | 486 | if not self.training: 487 | return x_actor 488 | 489 | x_critic = x.detach() 490 | for i in range(self.critic_layers): 491 | x_critic = self.conv_critic[i](x_critic, edge_index) 492 | if i < self.critic_layers - 1: 493 | x_critic = self.activation(x_critic) 494 | x_critic = self.final_critic(x_critic) 495 | x_critic = torch.tanh(global_mean_pool(x_critic, batch)) 496 | return x_actor, x_critic 497 | 498 | def forward_c(self, graph, gcsr): 499 | n = gcsr.shape[0] 500 | x_actor = torch.zeros([n, 1], dtype=torch.float32) 501 | libcdrl.forward( 502 | ctypes.c_int(n), 503 | ctypes.c_void_p(gcsr.indptr.ctypes.data), 504 | ctypes.c_void_p(gcsr.indices.ctypes.data), 505 | ctypes.c_void_p(graph.x.data_ptr()), 506 | ctypes.c_void_p(x_actor.data_ptr()), 507 | ctypes.c_void_p(self.conv_first.lin_l.weight.data_ptr()), 508 | ctypes.c_void_p(self.conv_first.lin_r.weight.data_ptr()), 509 | ctypes.c_void_p(self.conv_common[0].lin_l.weight.data_ptr()), 510 | ctypes.c_void_p(self.conv_common[0].lin_r.weight.data_ptr()), 511 | ctypes.c_void_p(self.conv_actor[0].lin_l.weight.data_ptr()), 512 | ctypes.c_void_p(self.conv_actor[0].lin_r.weight.data_ptr()) 513 | ) 514 | return x_actor 515 | 516 | 517 | if __name__ == "__main__": 518 | parser = argparse.ArgumentParser( 519 | formatter_class=argparse.ArgumentDefaultsHelpFormatter) 520 | parser.add_argument('--out', default='./temp_edge/', type=str) 521 | parser.add_argument( 522 | "--nmin", 523 | default=100, 524 | help="Minimum graph size", 525 | type=int) 526 | parser.add_argument( 527 | "--nmax", 528 | default=10000, 529 | help="Maximum graph size", 530 | type=int) 531 | parser.add_argument( 532 | "--ntest", 533 | default=1000, 534 | help="Number of test graphs", 535 | type=int) 536 | parser.add_argument("--hops", default=3, help="Number of hops", type=int) 537 | parser.add_argument( 538 | "--units", 539 | default=5, 540 | help="Number of units in conv layers", 541 | type=int) 542 | parser.add_argument( 543 | "--units_conv", 544 | default=[ 545 | 30, 546 | 30, 547 | 30, 548 | 30], 549 | help="Number of units in conv layers", 550 | nargs='+', 551 | type=int) 552 | parser.add_argument( 553 | "--units_dense", 554 | default=[ 555 | 30, 556 | 30, 557 | 20], 558 | help="Number of units in linear layers", 559 | nargs='+', 560 | type=int) 561 | parser.add_argument( 562 | "--attempts", 563 | default=3, 564 | help="Number of attempts in the DRL", 565 | type=int) 566 | parser.add_argument( 567 | "--dataset", 568 | default='delaunay', 569 | help="Dataset type: delaunay, suitesparse, graded l, hole3, hole6", 570 | type=str) 571 | 572 | torch.manual_seed(1) 573 | np.random.seed(2) 574 | 575 | args = parser.parse_args() 576 | outdir = args.out + '/' 577 | Path(outdir).mkdir(parents=True, exist_ok=True) 578 | 579 | n_min = args.nmin 580 | n_max = args.nmax 581 | n_test = args.ntest 582 | hops = args.hops 583 | units = args.units 584 | trials = args.attempts 585 | hid_conv = args.units_conv 586 | hid_lin = args.units_dense 587 | dataset_type = args.dataset 588 | 589 | # Deep neural network for the DRL agent on 
the coarsest graph 590 | class ModelCoarsest(torch.nn.Module): 591 | def __init__(self): 592 | super(ModelCoarsest, self).__init__() 593 | self.conv1 = GATConv(2, hid_conv[0]) 594 | self.conv2 = GATConv(hid_conv[0], hid_conv[1]) 595 | self.conv3 = GATConv(hid_conv[1], hid_conv[2]) 596 | self.conv4 = GATConv(hid_conv[2], hid_conv[3]) 597 | 598 | self.l1 = nn.Linear(hid_conv[3], hid_lin[0]) 599 | self.l2 = nn.Linear(hid_lin[0], hid_lin[1]) 600 | self.actor1 = nn.Linear(hid_lin[1], hid_lin[2]) 601 | self.actor2 = nn.Linear(hid_lin[2], 1) 602 | 603 | self.GlobAtt = GlobalAttention( 604 | nn.Sequential( 605 | nn.Linear( 606 | hid_lin[1], hid_lin[1]), nn.Tanh(), nn.Linear( 607 | hid_lin[1], 1))) 608 | self.critic1 = nn.Linear(hid_lin[1], hid_lin[2]) 609 | self.critic2 = nn.Linear(hid_lin[2], 1) 610 | 611 | def forward(self, graph): 612 | x_start, edge_index, batch = graph.x, graph.edge_index, graph.batch 613 | 614 | x = self.conv1(graph.x, edge_index) 615 | x = torch.tanh(x) 616 | x = self.conv2(x, edge_index) 617 | x = torch.tanh(x) 618 | x = self.conv3(x, edge_index) 619 | x = torch.tanh(x) 620 | x = self.conv4(x, edge_index) 621 | x = torch.tanh(x) 622 | 623 | x = self.l1(x) 624 | x = torch.tanh(x) 625 | x = self.l2(x) 626 | x = torch.tanh(x) 627 | 628 | x_actor = self.actor1(x) 629 | x_actor = torch.tanh(x_actor) 630 | x_actor = self.actor2(x_actor) 631 | flipped = torch.where( 632 | (x_start == torch.tensor([0., 1.])).all(axis=-1))[0] 633 | x_actor.data[flipped] = torch.tensor(-np.Inf) 634 | x_actor = torch.softmax(x_actor, dim=0) 635 | 636 | x_critic = self.GlobAtt(x, batch) 637 | x_critic = self.critic1(x_critic) 638 | x_critic = torch.tanh(x_critic) 639 | x_critic = self.critic2(x_critic) 640 | 641 | return x_actor, x_critic 642 | 643 | # Choose the model to load according to the parameter 'dataset_type' 644 | model = Model(units) 645 | if dataset_type == 'suitesparse': 646 | model.load_state_dict( 647 | torch.load('./temp_edge/model_partitioning_suitesparse')) 648 | else: 649 | model.load_state_dict( 650 | torch.load('./temp_edge/model_partitioning_delaunay')) 651 | 652 | model_coarsest = ModelCoarsest() 653 | model_coarsest.load_state_dict(torch.load('./temp_edge/model_coarsest')) 654 | model.eval() 655 | model_coarsest.eval() 656 | for p in model.parameters(): 657 | p.requires_grad = False 658 | for p in model_coarsest.parameters(): 659 | p.requires_grad = False 660 | 661 | print('Models loaded\n') 662 | list_picked = [] 663 | i = 0 664 | while i < n_test: 665 | # Choose the dataset type according to the parameter 'dataset_type' 666 | if dataset_type == 'delaunay': 667 | n_nodes = np.random.choice(np.arange(n_min, n_max)) 668 | g = random_delaunay_graph(n_nodes) 669 | g.batch = torch.zeros(g.num_nodes) 670 | i += 1 671 | else: 672 | if len(list_picked) >= len( 673 | os.listdir( 674 | os.path.expanduser( 675 | 'drl-graph-partitioning/' + 676 | str(dataset_type) + 677 | '/'))): 678 | break 679 | graph = random.choice( 680 | os.listdir( 681 | os.path.expanduser( 682 | 'drl-graph-partitioning/' + 683 | str(dataset_type) + 684 | '/'))) 685 | if str(graph) not in list_picked: 686 | list_picked.append(str(graph)) 687 | matrix_sparse = mmread( 688 | os.path.expanduser( 689 | 'drl-graph-partitioning/' + 690 | str(dataset_type) + 691 | '/' + 692 | str(graph))) 693 | gnx = nx.from_scipy_sparse_array(matrix_sparse) 694 | if nx.number_connected_components(gnx) == 1 and gnx.number_of_nodes( 695 | ) > n_min and gnx.number_of_nodes() < n_max: 696 | g = torch_from_sparse(matrix_sparse) 697 | g.batch = 
torch.zeros(g.num_nodes) 698 | i += 1 699 | else: 700 | continue 701 | else: 702 | continue 703 | print('Graph:', i, ' Vertices:', g.num_nodes, ' Edges:', g.num_edges) 704 | 705 | gnx = to_networkx(g, to_undirected=True) 706 | 707 | # Partitioning with DRL 708 | graph_p = ac_eval_coarse_full_trials_drl( 709 | model, g, hops, trials, model_coarsest) 710 | cdrl, a, b = volumes(graph_p) 711 | 712 | # Partitioning with DRL_METIS 713 | graph_p1 = ac_eval_coarse_full_trials(model, g, hops, trials) 714 | cdrl_m, a1, b1 = volumes(graph_p1) 715 | 716 | # Partitioning with METIS 717 | cut_met, parts = nxmetis.partition(gnx, 2) 718 | #mparts = np.array(parts) 719 | a_m = sum(gnx.degree(i) for i in parts[0]) 720 | b_m = sum(gnx.degree(i) for i in parts[1]) 721 | 722 | # Partitioning with SCOTCH 723 | gscotch = scotch_partition(g) 724 | csc, a_s, b_s = volumes(gscotch) 725 | 726 | # Print the results 727 | print( 728 | 'NC: DRL:', 729 | np.round( 730 | normalized_cut(graph_p), 731 | 5), 732 | ' DRL_METIS:', 733 | np.round( 734 | normalized_cut(graph_p1), 735 | 5), 736 | ' METIS:', 737 | np.round( 738 | cut_met * ( 739 | 1 / a_m + 1 / b_m), 740 | 5), 741 | ' SCOTCH:', 742 | np.round( 743 | normalized_cut(gscotch), 744 | 5)) 745 | print('Volumes: DRL:', (a, b), ' DRL_METIS:', (a1, b1), 746 | ' METIS:', (a_m, b_m), ' SCOTCH:', (a_s, b_s)) 747 | print( 748 | 'Cut: DRL:', 749 | int(cdrl), 750 | ' DRL_METIS:', 751 | int(cdrl_m), 752 | ' METIS:', 753 | int(cut_met), 754 | ' SCOTCH:', 755 | int(csc)) 756 | print('') 757 | print('Done') 758 | -------------------------------------------------------------------------------- /partitioning/drl_partitioning_train.py: -------------------------------------------------------------------------------- 1 | 2 | import argparse 3 | from pathlib import Path 4 | 5 | import networkx as nx 6 | import nxmetis 7 | 8 | import torch 9 | import torch.nn as nn 10 | 11 | from torch_geometric.data import Data, DataLoader, Batch 12 | from torch_geometric.nn import SAGEConv, graclus, avg_pool, global_mean_pool 13 | from torch_geometric.utils import to_networkx, k_hop_subgraph, degree 14 | 15 | import numpy as np 16 | from numpy import random 17 | 18 | import scipy 19 | from scipy.sparse import coo_matrix, rand 20 | from scipy.io import mmread 21 | from scipy.spatial import Delaunay 22 | 23 | import copy 24 | import timeit 25 | import os 26 | from itertools import combinations 27 | 28 | 29 | # Networkx geometric Delaunay mesh with n random points in the unit square 30 | def graph_delaunay_from_points(points): 31 | mesh = Delaunay(points, qhull_options="QJ") 32 | mesh_simp = mesh.simplices 33 | edges = [] 34 | for i in range(len(mesh_simp)): 35 | edges += combinations(mesh_simp[i], 2) 36 | e = list(set(edges)) 37 | return nx.Graph(e) 38 | 39 | # Pytorch geometric Delaunay mesh with n random points in the unit square 40 | 41 | 42 | def random_delaunay_graph(n): 43 | points = np.random.random_sample((n, 2)) 44 | g = graph_delaunay_from_points(points) 45 | 46 | adj_sparse = nx.to_scipy_sparse_array(g, format='coo') 47 | row = adj_sparse.row 48 | col = adj_sparse.col 49 | 50 | one_hot = [] 51 | for i in range(g.number_of_nodes()): 52 | one_hot.append([1., 0.]) 53 | 54 | edges = torch.tensor([row, col], dtype=torch.long) 55 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 56 | graph_torch = Data(x=nodes, edge_index=edges) 57 | 58 | return graph_torch 59 | 60 | # Build a pytorch geometric graph with features [1,0] form a networkx graph 61 | 62 | def torch_from_graph(g): 63 | 64 | 
adj_sparse = nx.to_scipy_sparse_array(g, format='coo') 65 | row = adj_sparse.row 66 | col = adj_sparse.col 67 | 68 | one_hot = [] 69 | for i in range(g.number_of_nodes()): 70 | one_hot.append([1., 0.]) 71 | 72 | edges = torch.tensor([row, col], dtype=torch.long) 73 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 74 | graph_torch = Data(x=nodes, edge_index=edges) 75 | 76 | degs = np.sum(adj_sparse.todense(), axis=0) 77 | first_vertices = np.where(degs == np.min(degs))[0] 78 | first_vertex = np.random.choice(first_vertices) 79 | change_vertex(graph_torch, first_vertex) 80 | 81 | return graph_torch 82 | 83 | # Training dataset made of Delaunay graphs generated from random points in 84 | # the unit square and their coarser graphs 85 | 86 | 87 | def delaunay_dataset_with_coarser(n, n_min, n_max): 88 | dataset = [] 89 | while len(dataset) < n: 90 | number_nodes = np.random.choice(np.arange(n_min, n_max + 1, 2)) 91 | g = random_delaunay_graph(number_nodes) 92 | dataset.append(g) 93 | while g.num_nodes > 200: 94 | cluster = graclus(g.edge_index) 95 | coarse_graph = avg_pool( 96 | cluster, 97 | Batch( 98 | batch=torch.zeros( 99 | g.num_nodes), 100 | x=g.x, 101 | edge_index=g.edge_index)) 102 | g1 = Data(x=coarse_graph.x, edge_index=coarse_graph.edge_index) 103 | dataset.append(g1) 104 | g = g1 105 | 106 | loader = DataLoader(dataset, batch_size=1, shuffle=True) 107 | return loader 108 | 109 | # Build a pytorch geometric graph with features [1,0] form a sparse matrix 110 | 111 | 112 | def torch_from_sparse(adj_sparse): 113 | 114 | row = adj_sparse.row 115 | col = adj_sparse.col 116 | 117 | features = [] 118 | for i in range(adj_sparse.shape[0]): 119 | features.append([1., 0.]) 120 | 121 | edges = torch.tensor([row, col], dtype=torch.long) 122 | nodes = torch.tensor(np.array(features), dtype=torch.float) 123 | graph_torch = Data(x=nodes, edge_index=edges) 124 | 125 | return graph_torch 126 | 127 | # Training dataset made of SuiteSparse graphs and their coarser graphs 128 | 129 | 130 | def suitesparse_dataset_with_coarser(n, n_min, n_max): 131 | dataset, picked = [], [] 132 | for graph in os.listdir(os.path.expanduser( 133 | 'drl-graph-partitioning/suitesparse_train/')): 134 | 135 | if len(dataset) > n or len(picked) >= len(os.listdir(os.path.expanduser( 136 | 'drl-graph-partitioning/suitesparse_train/'))): 137 | break 138 | picked.append(str(graph)) 139 | # print(str(graph)) 140 | matrix_sparse = mmread( 141 | os.path.expanduser( 142 | 'drl-graph-partitioning/suitesparse_train/' + 143 | str(graph))) 144 | gnx = nx.from_scipy_sparse_matrix(matrix_sparse) 145 | if nx.number_connected_components(gnx) == 1 and gnx.number_of_nodes( 146 | ) > n_min and gnx.number_of_nodes() < n_max: 147 | g = torch_from_sparse(matrix_sparse) 148 | g.weight = torch.tensor([1] * g.num_edges) 149 | dataset.append(g) 150 | while g.num_nodes > 200: 151 | cluster = graclus(g.edge_index) 152 | coarse_graph = avg_pool( 153 | cluster, 154 | Batch( 155 | batch=torch.zeros( 156 | g.num_nodes), 157 | x=g.x, 158 | edge_index=g.edge_index)) 159 | g1 = Data(x=coarse_graph.x, edge_index=coarse_graph.edge_index) 160 | dataset.append(g1) 161 | g = g1 162 | 163 | loader = DataLoader(dataset, batch_size=1, shuffle=True) 164 | return loader 165 | 166 | # Cut of the input graph 167 | 168 | 169 | def cut(graph): 170 | cut = torch.sum((graph.x[graph.edge_index[0], 171 | :2] != graph.x[graph.edge_index[1], 172 | :2]).all(axis=-1)).detach().item() / 2 173 | return cut 174 | 175 | # Change the feature of the selected vertex 176 | 177 | 
178 | def change_vertex(state, vertex): 179 | if (state.x[vertex, :2] == torch.tensor([1., 0.])).all(): 180 | state.x[vertex, 0] = torch.tensor(0.) 181 | state.x[vertex, 1] = torch.tensor(1.) 182 | else: 183 | state.x[vertex, 0] = torch.tensor(1.) 184 | state.x[vertex, 1] = torch.tensor(0.) 185 | 186 | return state 187 | 188 | # Reward to train the DRL agent 189 | 190 | 191 | def reward_NC(state, vertex): 192 | new_state = state.clone() 193 | new_state = change_vertex(new_state, vertex) 194 | return normalized_cut(state) - normalized_cut(new_state) 195 | 196 | # Normalized cut of the input graph 197 | 198 | 199 | def normalized_cut(graph): 200 | cut, da, db = volumes(graph) 201 | if da == 0 or db == 0: 202 | return 2 203 | else: 204 | return cut * (1 / da + 1 / db) 205 | 206 | # Coarsen a pytorch geometric graph, then find the cut with METIS and 207 | # interpolate it back 208 | 209 | 210 | def partition_metis_refine(graph): 211 | cluster = graclus(graph.edge_index) 212 | coarse_graph = avg_pool( 213 | cluster, 214 | Batch( 215 | batch=graph.batch, 216 | x=graph.x, 217 | edge_index=graph.edge_index)) 218 | coarse_graph_nx = to_networkx(coarse_graph, to_undirected=True) 219 | _, parts = nxmetis.partition(coarse_graph_nx, 2) 220 | #print(parts) 221 | #mparts = np.array(parts) 222 | coarse_graph.x[np.array(parts[0])] = torch.tensor([1., 0.]) 223 | coarse_graph.x[np.array(parts[1])] = torch.tensor([0., 1.]) 224 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 225 | graph.x = coarse_graph.x[inverse] 226 | return graph 227 | 228 | # Subgraph around the cut 229 | 230 | 231 | def k_hop_graph_cut(graph, k): 232 | nei = torch.where((graph.x[graph.edge_index[0], :2] != 233 | graph.x[graph.edge_index[1], :2]).all(axis=-1))[0] 234 | neib = graph.edge_index[0][nei] 235 | data_cut = k_hop_subgraph(neib, k, graph.edge_index, relabel_nodes=True) 236 | data_small = k_hop_subgraph( 237 | neib, 238 | k - 1, 239 | graph.edge_index, 240 | relabel_nodes=True) 241 | nodes_boundary = list( 242 | set(data_cut[0].numpy()).difference(data_small[0].numpy())) 243 | boundary_features = torch.tensor([1. if i.item( 244 | ) in nodes_boundary else 0. 
for i in data_cut[0]]).reshape(data_cut[0].shape[0], 1)
245 |     _, va, vb = volumes(graph)
246 |     e = torch.ones(data_cut[0].shape[0], 1)
247 |     nnz = graph.num_edges
248 |     features = torch.cat((graph.x[data_cut[0]], boundary_features, torch.true_divide(
249 |         va, nnz) * e, torch.true_divide(vb, nnz) * e), 1)
250 | 
251 |     g_red = Batch(
252 |         batch=torch.zeros(
253 |             data_cut[0].shape[0],
254 |             dtype=torch.long),
255 |         x=features,
256 |         edge_index=data_cut[1])
257 |     return g_red, data_cut[0]
258 | 
259 | # Volumes of the partitions
260 | 
261 | 
262 | def volumes(graph):
263 |     ia = torch.where(
264 |         (graph.x[:, :2] == torch.tensor([1.0, 0.0])).all(axis=-1))[0]
265 |     ib = torch.where(
266 |         (graph.x[:, :2] != torch.tensor([1.0, 0.0])).all(axis=-1))[0]
267 |     degs = degree(
268 |         graph.edge_index[0],
269 |         num_nodes=graph.x.size(0),
270 |         dtype=torch.uint8)
271 |     da = torch.sum(degs[ia]).detach().item()
272 |     db = torch.sum(degs[ib]).detach().item()
273 |     cut = torch.sum((graph.x[graph.edge_index[0],
274 |                              :2] != graph.x[graph.edge_index[1],
275 |                                             :2]).all(axis=-1)).detach().item() / 2
276 |     return cut, da, db
277 | 
278 | 
279 | # Training loop
280 | def training_loop(
281 |         model,
282 |         training_dataset,
283 |         episodes,
284 |         gamma,
285 |         time_to_sample,
286 |         coeff,
287 |         optimizer,
288 |         print_loss,
289 |         k):
290 | 
291 |     # Here starts the main training loop
292 |     for i in range(episodes):
293 |         rew_partial = 0
294 |         p = 0
295 |         print('Episode:', i)
296 |         print('')
297 |         for graph in training_dataset:
298 |             print('Graph:', p, ' Number of nodes:', graph.num_nodes)
299 |             start_all = partition_metis_refine(graph)
300 | 
301 |             data = k_hop_graph_cut(start_all, k)
302 |             graph_cut, positions = data[0], data[1]
303 |             len_episode = cut(graph)
304 | 
305 |             start = graph_cut
306 |             time = 0
307 | 
308 |             rews, vals, logprobs = [], [], []
309 |             # Here starts the episode for the graph "start"
310 |             while time < len_episode:
311 |                 # Evaluate the A2C agent on the current state
312 |                 policy, values = model(start)
313 |                 probs = policy.view(-1)
314 | 
315 |                 action = torch.distributions.Categorical(
316 |                     logits=probs).sample().detach().item()
317 | 
318 |                 # Compute the reward associated with this action
319 |                 rew = reward_NC(start_all, positions[action])
320 |                 rew_partial += rew
321 |                 # Collect the log-probability of the chosen action
322 |                 logprobs.append(policy.view(-1)[action])
323 |                 # Collect the critic's value estimate for the state
324 |                 vals.append(values)
325 |                 # Collect the reward
326 |                 rews.append(rew)
327 | 
328 |                 new_state = start.clone()
329 |                 new_state_orig = start_all.clone()
330 |                 # Flip the vertex returned by the policy
331 |                 new_state = change_vertex(new_state, action)
332 |                 new_state_orig = change_vertex(
333 |                     new_state_orig, positions[action])
334 |                 # Update the state
335 |                 start = new_state
336 |                 start_all = new_state_orig
337 | 
338 |                 _, va, vb = volumes(start_all)
339 | 
340 |                 nnz = start_all.num_edges
341 |                 start.x[:, 3] = torch.true_divide(va, nnz)
342 |                 start.x[:, 4] = torch.true_divide(vb, nnz)
343 | 
344 |                 time += 1
345 | 
346 |                 # After time_to_sample steps (or at the end of the episode) we update the loss
347 |                 if time % time_to_sample == 0 or time == len_episode:
348 | 
349 |                     logprobs = torch.stack(logprobs).flip(dims=(0,)).view(-1)
350 |                     vals = torch.stack(vals).flip(dims=(0,)).view(-1)
351 |                     rews = torch.tensor(rews).flip(dims=(0,)).view(-1)
352 | 
353 |                     # Compute the discounted returns (used for the advantage below)
354 |                     R = []
355 |                     R_partial = torch.tensor([0.])
356 |                     for j in range(rews.shape[0]):
357 |                         R_partial = rews[j] + gamma * R_partial
358 |                         R.append(R_partial)
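                    # The flipped rews make R_partial accumulate the discounted
                    # return G_t = r_t + gamma * G_{t+1} in reverse time order,
                    # matching the flipped vals and logprobs; e.g. for flipped
                    # rewards [2, 0, 1] and gamma = 0.9, R stacks to [2, 1.8, 2.62].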
359 | 360 | R = torch.stack(R).view(-1) 361 | advantage = R - vals.detach() 362 | 363 | # Actor loss 364 | actor_loss = (-1 * logprobs * advantage) 365 | 366 | # Critic loss 367 | critic_loss = torch.pow(R - vals, 2) 368 | 369 | # Finally we update the loss 370 | optimizer.zero_grad() 371 | 372 | loss = torch.mean(actor_loss) + \ 373 | torch.tensor(coeff) * torch.mean(critic_loss) 374 | 375 | rews, vals, logprobs = [], [], [] 376 | 377 | loss.backward() 378 | 379 | optimizer.step() 380 | if p % print_loss == 0: 381 | print('graph:', p,'reward:', rew_partial) 382 | rew_partial = 0 383 | p += 1 384 | 385 | return model 386 | 387 | # Deep neural network that models the DRL agent 388 | 389 | 390 | class Model(torch.nn.Module): 391 | def __init__(self, units): 392 | super(Model, self).__init__() 393 | 394 | self.units = units 395 | self.common_layers = 1 396 | self.critic_layers = 1 397 | self.actor_layers = 1 398 | self.activation = torch.tanh 399 | 400 | self.conv_first = SAGEConv(5, self.units) 401 | self.conv_common = nn.ModuleList( 402 | [SAGEConv(self.units, self.units) 403 | for i in range(self.common_layers)] 404 | ) 405 | self.conv_actor = nn.ModuleList( 406 | [SAGEConv(self.units, 407 | 1 if i == self.actor_layers - 1 else self.units) 408 | for i in range(self.actor_layers)] 409 | ) 410 | self.conv_critic = nn.ModuleList( 411 | [SAGEConv(self.units, self.units) 412 | for i in range(self.critic_layers)] 413 | ) 414 | self.final_critic = nn.Linear(self.units, 1) 415 | 416 | def forward(self, graph): 417 | x, edge_index, batch = graph.x, graph.edge_index, graph.batch 418 | 419 | do_not_flip = torch.where(x[:, 2] != 0.) 420 | x = self.activation(self.conv_first(x, edge_index)) 421 | for i in range(self.common_layers): 422 | x = self.activation(self.conv_common[i](x, edge_index)) 423 | 424 | x_actor = x 425 | for i in range(self.actor_layers): 426 | x_actor = self.conv_actor[i](x_actor, edge_index) 427 | if i < self.actor_layers - 1: 428 | x_actor = self.activation(x_actor) 429 | x_actor[do_not_flip] = torch.tensor(-np.Inf) 430 | x_actor = torch.log_softmax(x_actor, dim=0) 431 | 432 | 433 | if not self.training: 434 | return x_actor 435 | 436 | x_critic = x.detach() 437 | for i in range(self.critic_layers): 438 | x_critic = self.conv_critic[i](x_critic, edge_index) 439 | if i < self.critic_layers - 1: 440 | x_critic = self.activation(x_critic) 441 | x_critic = self.final_critic(x_critic) 442 | x_critic = torch.tanh(global_mean_pool(x_critic, batch)) 443 | return x_actor, x_critic 444 | ''' 445 | def forward_c(self, graph, gcsr): 446 | n = gcsr.shape[0] 447 | x_actor = torch.zeros([n, 1], dtype=torch.float32) 448 | libcdrl.forward( 449 | ctypes.c_int(n), 450 | ctypes.c_void_p(gcsr.indptr.ctypes.data), 451 | ctypes.c_void_p(gcsr.indices.ctypes.data), 452 | ctypes.c_void_p(graph.x.data_ptr()), 453 | ctypes.c_void_p(x_actor.data_ptr()), 454 | ctypes.c_void_p(self.conv_first.lin_l.weight.data_ptr()), 455 | ctypes.c_void_p(self.conv_first.lin_r.weight.data_ptr()), 456 | ctypes.c_void_p(self.conv_common[0].lin_l.weight.data_ptr()), 457 | ctypes.c_void_p(self.conv_common[0].lin_r.weight.data_ptr()), 458 | ctypes.c_void_p(self.conv_actor[0].lin_l.weight.data_ptr()), 459 | ctypes.c_void_p(self.conv_actor[0].lin_r.weight.data_ptr()) 460 | ) 461 | 462 | return x_actor 463 | ''' 464 | 465 | 466 | if __name__ == "__main__": 467 | parser = argparse.ArgumentParser( 468 | formatter_class=argparse.ArgumentDefaultsHelpFormatter) 469 | parser.add_argument('--out', default='./temp_edge/', type=str) 470 | 
parser.add_argument( 471 | "--nmin", 472 | default=200, 473 | help="Minimum graph size", 474 | type=int) 475 | parser.add_argument( 476 | "--nmax", 477 | default=5000, 478 | help="Maximum graph size", 479 | type=int) 480 | parser.add_argument( 481 | "--ntrain", 482 | default=10000, 483 | help="Number of training graphs", 484 | type=int) 485 | parser.add_argument( 486 | "--epochs", 487 | default=1, 488 | help="Number of training epochs", 489 | type=int) 490 | parser.add_argument( 491 | "--print_rew", 492 | default=1000, 493 | help="Steps to take before printing the reward", 494 | type=int) 495 | parser.add_argument("--batch", default=8, help="Batch size", type=int) 496 | parser.add_argument("--hops", default=3, help="Number of hops", type=int) 497 | parser.add_argument( 498 | "--lr", 499 | default=0.001, 500 | help="Learning rate", 501 | type=float) 502 | parser.add_argument( 503 | "--gamma", 504 | default=0.9, 505 | help="Gamma, discount factor", 506 | type=float) 507 | parser.add_argument( 508 | "--coeff", 509 | default=0.1, 510 | help="Critic loss coefficient", 511 | type=float) 512 | parser.add_argument( 513 | "--units", 514 | default=5, 515 | help="Number of units in conv layers", 516 | type=int) 517 | parser.add_argument( 518 | "--dataset", 519 | default='delaunay', 520 | help="Dataset type: delaunay or suitesparse", 521 | type=str) 522 | 523 | torch.manual_seed(1) 524 | np.random.seed(2) 525 | 526 | args = parser.parse_args() 527 | print(args) 528 | outdir = args.out + '/' 529 | Path(outdir).mkdir(parents=True, exist_ok=True) 530 | 531 | n_min = args.nmin 532 | n_max = args.nmax 533 | n_train = args.ntrain 534 | episodes = args.epochs 535 | coeff = args.coeff 536 | print_loss = args.print_rew 537 | 538 | time_to_sample = args.batch 539 | hops = args.hops 540 | lr = args.lr 541 | gamma = args.gamma 542 | units = args.units 543 | dataset_type = args.dataset 544 | 545 | # Choose the dataset type according to the parameter 'dataset_type' 546 | if dataset_type == 'delaunay': 547 | dataset = delaunay_dataset_with_coarser(n_train, n_min, n_max) 548 | else: 549 | dataset = suitesparse_dataset_with_coarser(n_train, n_min, n_max) 550 | 551 | model = Model(units) 552 | #model.share_memory() 553 | print(model) 554 | print('Model parameters:', 555 | sum([w.nelement() for w in model.parameters()])) 556 | 557 | optimizer = torch.optim.Adam(model.parameters(), lr=lr) 558 | 559 | print('Start training') 560 | 561 | t0 = timeit.default_timer() 562 | training_loop(model, dataset, episodes, gamma, time_to_sample, coeff, optimizer, print_loss, hops) 563 | ttrain = timeit.default_timer() - t0 564 | 565 | print('Training took:', ttrain, 'seconds') 566 | 567 | # Saving the model 568 | if dataset_type == 'delaunay': 569 | torch.save(model.state_dict(), outdir + 'model_partitioning_delaunay') 570 | else: 571 | torch.save( 572 | model.state_dict(), 573 | outdir + 574 | 'model_partitioning_suitesparse') 575 | 576 | -------------------------------------------------------------------------------- /partitioning/scotch/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.2) 2 | project(SCOTCHWrapper CXX) 3 | 4 | set(CMAKE_CXX_STANDARD 11) 5 | set(CMAKE_CXX_STANDARD_REQUIRED on) 6 | 7 | list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake/Modules") 8 | list(APPEND CMAKE_PREFIX_PATH $ENV{SCOTCH_DIR} $ENV{SCOTCH_ROOT}) 9 | find_package(SCOTCH) 10 | 11 | add_library(SCOTCHWrapper SHARED SCOTCHWrapper.cpp) 12 | 
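# The wrapper is built as a shared library because the Python scripts load
# it at run time with ctypes and call its extern "C" entry points directly
# (see scotch_ordering in separator/drl_nd_testing.py).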
target_link_libraries(SCOTCHWrapper PUBLIC SCOTCH::scotch)
13 | 
--------------------------------------------------------------------------------
/partitioning/scotch/SCOTCHWrapper.cpp:
--------------------------------------------------------------------------------
1 | #include <vector>
2 | #include <algorithm>
3 | #include <iostream>
4 | 
5 | #include <scotch.h>
6 | 
7 | #ifdef __cplusplus
8 | extern "C" {
9 | #endif
10 | 
11 | void WRAPPER_SCOTCH_graphPart(int n, int* ptr, int* ind, int* part) {
12 |   SCOTCH_Graph g;
13 |   SCOTCH_graphInit(&g);
14 |   std::vector<SCOTCH_Num> ptr_nodiag(n+1), ind_nodiag(ptr[n]-ptr[0]);
15 |   int nnz_nodiag = 0;
16 |   ptr_nodiag[0] = 0;
17 |   // copy the CSR structure, dropping diagonal (self-loop) entries
18 |   for (int i=0; i<n; i++) {
19 |     ptr_nodiag[i+1] = ptr_nodiag[i];
20 |     for (int j=ptr[i]; j<ptr[i+1]; j++)
21 |       if (ind[j] != i) {
22 |         ind_nodiag[nnz_nodiag++] = ind[j];
23 |         ptr_nodiag[i+1]++;
24 |       }
25 |   }
26 |   SCOTCH_graphBuild
27 |     (&g, 0, n, ptr_nodiag.data(), nullptr, nullptr, nullptr,
28 |      nnz_nodiag, ind_nodiag.data(), nullptr);
29 | 
30 |   SCOTCH_Strat strategy;
31 |   SCOTCH_stratInit(&strategy);
32 | 
33 |   std::vector<SCOTCH_Num> p(n);
34 |   SCOTCH_graphPart(&g, 2, &strategy, p.data());
35 |   std::copy(p.begin(), p.end(), part);
36 |   SCOTCH_graphExit(&g);
37 |   SCOTCH_stratExit(&strategy);
38 | }
39 | 
40 | void WRAPPER_SCOTCH_graphOrder(int n, int* ptr, int* ind, int* perm) {
41 |   SCOTCH_Graph g;
42 |   SCOTCH_graphInit(&g);
43 |   std::vector<SCOTCH_Num> ptr_nodiag(n+1), ind_nodiag(ptr[n]-ptr[0]);
44 |   int nnz_nodiag = 0;
45 |   ptr_nodiag[0] = 0;
46 |   // same diagonal-free CSR copy as in WRAPPER_SCOTCH_graphPart
47 |   for (int i=0; i<n; i++) {
48 |     ptr_nodiag[i+1] = ptr_nodiag[i];
49 |     for (int j=ptr[i]; j<ptr[i+1]; j++)
50 |       if (ind[j] != i) {
51 |         ind_nodiag[nnz_nodiag++] = ind[j];
52 |         ptr_nodiag[i+1]++;
53 |       }
54 |   }
55 |   SCOTCH_graphBuild
56 |     (&g, 0, n, ptr_nodiag.data(), nullptr, nullptr, nullptr,
57 |      nnz_nodiag, ind_nodiag.data(), nullptr);
58 |   SCOTCH_Strat strategy;
59 |   int ierr = SCOTCH_stratInit(&strategy);
60 |   SCOTCH_Num nbsep;
61 |   // permutation, inverse permutation, separator count, block sizes
62 |   // and separator tree of the nested-dissection ordering
63 |   std::vector<SCOTCH_Num> p(n), pi(n), sizes(n+1), tree(n);
64 |   ierr = SCOTCH_graphOrder
65 |     (&g, &strategy, p.data(), pi.data(),
66 |      &nbsep, sizes.data(), tree.data());
67 |   if (ierr)
68 |     std::cerr << "# ERROR: SCOTCH_graphOrder failed with ierr="
69 |               << ierr << std::endl;
70 |   std::copy(pi.begin(), pi.end(), perm);
71 |   SCOTCH_graphExit(&g);
72 |   SCOTCH_stratExit(&strategy);
73 | }
74 | 
75 | #ifdef __cplusplus
76 | }
77 | #endif
78 | 
--------------------------------------------------------------------------------
/partitioning/scotch/build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | export SCOTCH_DIR=$HOME/scotch-v6.1.0/src/libscotch/
4 | 
5 | rm -rf build
6 | mkdir build
7 | cd build
8 | cmake ../
9 | make VERBOSE=1
--------------------------------------------------------------------------------
/partitioning/scotch/cmake/Modules/FindSCOTCH.cmake:
--------------------------------------------------------------------------------
1 | # FindSCOTCH.cmake
2 | #
3 | # Finds the SCOTCH library.
4 | #
5 | # This module will define the following variables:
6 | #
7 | #   SCOTCH_FOUND - System has found SCOTCH installation
8 | #   SCOTCH_INCLUDE_DIR - Location of SCOTCH headers
9 | #   SCOTCH_LIBRARIES - SCOTCH libraries
10 | #   SCOTCH_USES_ILP64 - Whether SCOTCH was configured with ILP64
11 | #   SCOTCH_USES_PTHREADS - Whether SCOTCH was configured with PThreads
12 | #
13 | # This module can handle the following COMPONENTS
14 | #
15 | #   ilp64 - 64-bit index integers
16 | #   pthreads - SMP parallelism via PThreads
17 | #   metis - Has METIS compatibility layer
18 | #
19 | # This module will export the following targets if SCOTCH_FOUND
20 | #
21 | #   SCOTCH::scotch
22 | #
23 | #
24 | #
25 | #
26 | # Proper usage:
27 | #
28 | #   project( TEST_FIND_SCOTCH C )
29 | #   find_package( SCOTCH )
30 | #
31 | #   if( SCOTCH_FOUND )
32 | #     add_executable( test test.cxx )
33 | #     target_link_libraries( test SCOTCH::scotch )
34 | #   endif()
35 | #
36 | #
37 | #
38 | #
39 | # This module will use the following variables to change
40 | # default behaviour if set
41 | #
42 | #   scotch_PREFIX
43 | #   scotch_INCLUDE_DIR
44 | #   scotch_LIBRARY_DIR
45 | #   scotch_LIBRARIES
46 | 
47 | #==================================================================
48 | #   Copyright (c) 2018 The Regents of the University of California,
49 | #   through Lawrence Berkeley National Laboratory.
50 | # 51 | # Author: David Williams-Young 52 | # 53 | # This file is part of cmake-modules. All rights reserved. 54 | # 55 | # Redistribution and use in source and binary forms, with or without 56 | # modification, are permitted provided that the following conditions are met: 57 | # 58 | # (1) Redistributions of source code must retain the above copyright notice, this 59 | # list of conditions and the following disclaimer. 60 | # (2) Redistributions in binary form must reproduce the above copyright notice, 61 | # this list of conditions and the following disclaimer in the documentation 62 | # and/or other materials provided with the distribution. 63 | # (3) Neither the name of the University of California, Lawrence Berkeley 64 | # National Laboratory, U.S. Dept. of Energy nor the names of its contributors may 65 | # be used to endorse or promote products derived from this software without 66 | # specific prior written permission. 67 | # 68 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 69 | # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 70 | # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 71 | # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR 72 | # ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 73 | # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 74 | # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 75 | # ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 76 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 77 | # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 78 | # 79 | # You are under no obligation whatsoever to provide any bug fixes, patches, or 80 | # upgrades to the features, functionality or performance of the source code 81 | # ("Enhancements") to anyone; however, if you choose to make your Enhancements 82 | # available either publicly, or directly to Lawrence Berkeley National 83 | # Laboratory, without imposing a separate written license agreement for such 84 | # Enhancements, then you hereby grant the following license: a non-exclusive, 85 | # royalty-free perpetual license to install, use, modify, prepare derivative 86 | # works, incorporate into other computer software, distribute, and sublicense 87 | # such enhancements or derivative works thereof, in binary and source code form. 
88 | # 89 | #================================================================== 90 | 91 | cmake_minimum_required( VERSION 3.11 ) # Require CMake 3.11+ 92 | # Set up some auxillary vars if hints have been set 93 | 94 | if( scotch_PREFIX AND NOT scotch_INCLUDE_DIR ) 95 | set( scotch_INCLUDE_DIR ${scotch_PREFIX}/include ) 96 | endif() 97 | 98 | 99 | if( scotch_PREFIX AND NOT scotch_LIBRARY_DIR ) 100 | set( scotch_LIBRARY_DIR 101 | ${scotch_PREFIX}/lib 102 | ${scotch_PREFIX}/lib32 103 | ${scotch_PREFIX}/lib64 104 | ) 105 | endif() 106 | 107 | 108 | # Try to find the header 109 | find_path( SCOTCH_INCLUDE_DIR 110 | NAMES scotch.h 111 | HINTS ${scotch_PREFIX} 112 | PATHS ${scotch_INCLUDE_DIR} 113 | PATH_SUFFIXES include 114 | DOC "Location of SCOTCH header" 115 | ) 116 | 117 | # Try to find libraries if not already set 118 | if( NOT scotch_LIBRARIES ) 119 | 120 | find_library( SCOTCH_LIBRARY 121 | NAMES scotch 122 | HINTS ${scotch_PREFIX} 123 | PATHS ${scotch_LIBRARY_DIR} 124 | PATH_SUFFIXES lib lib64 lib32 125 | DOC "SCOTCH Library" 126 | ) 127 | 128 | find_library( SCOTCH_ERR_LIBRARY 129 | NAMES scotcherr 130 | HINTS ${scotch_PREFIX} 131 | PATHS ${scotch_LIBRARY_DIR} 132 | PATH_SUFFIXES lib lib64 lib32 133 | DOC "SCOTCH Error Libraries" 134 | ) 135 | 136 | find_library( SCOTCH_ERREXIT_LIBRARY 137 | NAMES scotcherrexit 138 | HINTS ${scotch_PREFIX} 139 | PATHS ${scotch_LIBRARY_DIR} 140 | PATH_SUFFIXES lib lib64 lib32 141 | DOC "SCOTCH Error-Exit Libraries" 142 | ) 143 | 144 | 145 | set( SCOTCH_LIBRARIES 146 | ${SCOTCH_LIBRARY} 147 | ${SCOTCH_ERR_LIBRARY} 148 | ${SCOTCH_ERREXIT_LIBRARY} ) 149 | 150 | if( "metis" IN_LIST SCOTCH_FIND_COMPONENTS ) 151 | 152 | find_library( SCOTCH_METIS_LIBRARY 153 | NAMES scotchmetis 154 | HINTS ${scotch_PREFIX} 155 | PATHS ${scotch_LIBRARY_DIR} 156 | PATH_SUFFIXES lib lib64 lib32 157 | DOC "SCOTCH-METIS compatibility Libraries" 158 | ) 159 | 160 | if( SCOTCH_METIS_LIBRARY ) 161 | list( APPEND SCOTCH_LIBRARIES ${SCOTCH_METIS_LIBRARY} ) 162 | set( SCOTCH_metis_FOUND TRUE ) 163 | endif() 164 | 165 | endif() 166 | 167 | 168 | else() 169 | 170 | # FIXME: Check if files exists at least? 
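  # A user-supplied scotch_LIBRARIES list is passed through unchanged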
171 | set( SCOTCH_LIBRARIES ${scotch_LIBRARIES} ) 172 | 173 | endif() 174 | 175 | # Check version 176 | if( EXISTS ${SCOTCH_INCLUDE_DIR}/scotch.h ) 177 | set( version_pattern 178 | "^#define[\t ]+SCOTCH_(VERSION|RELEASE|PATCHLEVEL)[\t ]+([0-9\\.]+)$" 179 | ) 180 | file( STRINGS ${SCOTCH_INCLUDE_DIR}/scotch.h scotch_version 181 | REGEX ${version_pattern} ) 182 | 183 | foreach( match ${scotch_version} ) 184 | 185 | if(SCOTCH_VERSION_STRING) 186 | set(SCOTCH_VERSION_STRING "${SCOTCH_VERSION_STRING}.") 187 | endif() 188 | 189 | string(REGEX REPLACE ${version_pattern} 190 | "${SCOTCH_VERSION_STRING}\\2" 191 | SCOTCH_VERSION_STRING ${match} 192 | ) 193 | 194 | set(SCOTCH_VERSION_${CMAKE_MATCH_1} ${CMAKE_MATCH_2}) 195 | 196 | endforeach() 197 | 198 | unset( scotch_version ) 199 | unset( version_pattern ) 200 | endif() 201 | 202 | # Check ILP64 203 | if( EXISTS ${SCOTCH_INCLUDE_DIR}/scotch.h ) 204 | 205 | set( idxwidth_pattern 206 | "^typedef[\t ]+(int64_t|int32_t)[\t ]SCOTCH_Idx\\;$" 207 | ) 208 | file( STRINGS ${SCOTCH_INCLUDE_DIR}/scotch.h scotch_idxwidth 209 | REGEX ${idxwidth_pattern} ) 210 | 211 | string( REGEX REPLACE ${idxwidth_pattern} 212 | "${SCOTCH_IDXWIDTH_STRING}\\1" 213 | SCOTCH_IDXWIDTH_STRING "${scotch_idxwidth}" ) 214 | 215 | if( ${SCOTCH_IDXWIDTH_STRING} MATCHES "int64_t" ) 216 | set( SCOTCH_USES_ILP64 TRUE ) 217 | else() 218 | set( SCOTCH_USES_ILP64 FALSE ) 219 | endif() 220 | 221 | unset( idxwidth_pattern ) 222 | unset( scotch_idxwidth ) 223 | unset( SCOTCH_IDXWIDTH_STRING ) 224 | 225 | endif() 226 | 227 | 228 | # Check Threads 229 | if( SCOTCH_LIBRARIES ) 230 | 231 | # FIXME: This assumes that threads are even installed 232 | set( CMAKE_THREAD_PREFER_PTHREAD ON ) 233 | find_package( Threads QUIET ) 234 | 235 | include( CMakePushCheckState ) 236 | 237 | cmake_push_check_state( RESET ) 238 | 239 | set( CMAKE_REQUIRED_LIBRARIES Threads::Threads ${SCOTCH_LIBRARIES} ) 240 | set( CMAKE_REQUIRED_QUIET ON ) 241 | 242 | include( CheckLibraryExists ) 243 | # check_library_exists( "" threadReduce "" 244 | # SCOTCH_USES_PTHREADS ) 245 | check_library_exists( "" pthread_create "" 246 | SCOTCH_USES_PTHREADS ) 247 | 248 | cmake_pop_check_state() 249 | 250 | endif() 251 | 252 | 253 | # Handle components 254 | if( SCOTCH_USES_ILP64 ) 255 | set( SCOTCH_ilp64_FOUND TRUE ) 256 | endif() 257 | 258 | if( SCOTCH_USES_PTHREADS ) 259 | set( SCOTCH_pthreads_FOUND TRUE ) 260 | endif() 261 | 262 | # Determine if we've found SCOTCH 263 | mark_as_advanced( SCOTCH_FOUND SCOTCH_INCLUDE_DIR SCOTCH_LIBRARIES ) 264 | 265 | include(FindPackageHandleStandardArgs) 266 | find_package_handle_standard_args( SCOTCH 267 | REQUIRED_VARS SCOTCH_LIBRARIES SCOTCH_INCLUDE_DIR 268 | VERSION_VAR SCOTCH_VERSION_STRING 269 | HANDLE_COMPONENTS 270 | ) 271 | 272 | # Export target 273 | if( SCOTCH_FOUND AND NOT TARGET SCOTCH::scotch ) 274 | 275 | add_library( SCOTCH::scotch INTERFACE IMPORTED ) 276 | set_target_properties( SCOTCH::scotch PROPERTIES 277 | INTERFACE_INCLUDE_DIRECTORIES "${SCOTCH_INCLUDE_DIR}" 278 | INTERFACE_LINK_LIBRARIES "${SCOTCH_LIBRARIES}" 279 | ) 280 | 281 | endif() 282 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | networkx==3.1 2 | numpy==1.25.2 3 | scipy==1.8.0 4 | torch==2.0.0+cpu 5 | torch_geometric==2.3.1 6 | -------------------------------------------------------------------------------- /separator/amd/AMDReordering.cpp: 
-------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | 6 | #if AMDIDXSIZE==64 7 | typedef int64_t AMDInt; 8 | #else 9 | typedef int32_t AMDInt; 10 | #endif 11 | 12 | #define FC_GLOBAL(name,NAME) name##_ 13 | #define AMDBAR_FC FC_GLOBAL(amdbar,AMDBAR) 14 | 15 | #ifdef __cplusplus 16 | extern "C" { 17 | #endif 18 | 19 | void AMDBAR_FC(AMDInt* N, AMDInt* PE, AMDInt* IW, AMDInt* LEN, 20 | AMDInt* IWLEN, AMDInt* PFREE, AMDInt* NV, 21 | AMDInt* NEXT, AMDInt* LAST, AMDInt* HEAD, 22 | AMDInt* ELEN, AMDInt* DEGREE, AMDInt* NCMPA, 23 | AMDInt* W, AMDInt* IOVFLO); 24 | 25 | /* 26 | * Input to this routine should be 0-based 27 | */ 28 | void WRAPPER_amd(AMDInt n, AMDInt* xadj, AMDInt* adjncy, 29 | AMDInt* perm, AMDInt* iperm) { 30 | AMDInt iovflo = std::numeric_limits::max(); 31 | AMDInt ncmpa = 0; 32 | AMDInt iwsize = 4*n; 33 | AMDInt nnz = xadj[n]; 34 | 35 | std::vector ptr(n+1); 36 | std::vector ind(nnz); 37 | for (AMDInt i=0; i<=n; i++) ptr[i] = xadj[i] + 1; 38 | for (AMDInt i=0; i iwork // iwsize 41 | (new AMDInt[iwsize + 4*n + n+1 + nnz + n + 1]); 42 | auto vtxdeg = iwork.get() + iwsize; // n 43 | auto qsize = vtxdeg + n; // n 44 | auto ecforw = qsize + n; // n 45 | auto marker = ecforw + n; // n 46 | auto nvtxs = marker + n; // n+1 47 | auto rowind = nvtxs + n+1; // nnz + n + 1 48 | for (AMDInt i=0; i 100: 41 | edge_info.append(g.edge_index) 42 | cluster = graclus(g.edge_index, num_nodes=g.num_nodes) 43 | info.append(cluster) 44 | g1 = avg_pool( 45 | cluster, 46 | Batch( 47 | batch=g.batch, 48 | x=g.x, 49 | edge_index=g.edge_index)) 50 | g = g1 51 | 52 | gnx = to_networkx(g, to_undirected=True) 53 | g = partition_metis(g, gnx) 54 | 55 | while len(info) > 0: 56 | cluster = info.pop() 57 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 58 | g.x = g.x[inverse] 59 | g.edge_index = edge_info.pop() 60 | gnx = to_networkx(g, to_undirected=True) 61 | va, vb = volumes(g) 62 | g = ac_eval_refine(ac, g, k, gnx, va, vb) 63 | return g 64 | 65 | # Refining the cut on the subgraph around the separator 66 | 67 | 68 | def ac_eval_refine(ac, graph_test, k, gnx, va, vb, perc=0.05): 69 | graph = graph_test.clone() 70 | g0 = graph_test.clone() 71 | n = separator(graph) 72 | data = k_hop_graph_cut(graph, k) 73 | graph_cut, positions = data[0], data[1] 74 | gnx_sub = to_networkx(graph_cut, to_undirected=True) 75 | i = 0 76 | 77 | peak_reward = 0 78 | peak_time = 0 79 | total_reward = 0 80 | actions = [] 81 | 82 | e = torch.ones(graph_cut.num_nodes, 1) 83 | nnz = graph.num_nodes 84 | sep = int(n) 85 | nodes_separator = torch.where((graph_cut.x[:, 2] == torch.tensor(1.)))[0] 86 | 87 | for i in range(int(2 * n)): 88 | with torch.no_grad(): 89 | policy = ac(graph_cut) 90 | 91 | probs = policy.view(-1).clone().detach().numpy() 92 | 93 | flip = np.argmax(probs) 94 | 95 | actions.append(flip) 96 | old_sep = sep * (1 / va + 1 / vb) 97 | 98 | graph_cut, a, b, s = change_vertex(graph_cut, flip, gnx_sub, va, vb) 99 | graph, _, _, _ = change_vertex( 100 | graph, positions[flip].item(), gnx, va, vb) 101 | 102 | if a == 1: 103 | va += 1 104 | sep -= 1 105 | elif a == -1: 106 | va -= 1 107 | sep += 1 108 | elif b == 1: 109 | vb += 1 110 | sep -= 1 111 | elif b == -1: 112 | vb -= 1 113 | sep += 1 114 | 115 | total_reward += old_sep - sep * (1 / va + 1 / vb) 116 | 117 | graph_cut.x[:, 5] = torch.true_divide(va, nnz) 118 | graph_cut.x[:, 6] = torch.true_divide(vb, nnz) 119 | 120 | nodes_separator = torch.where( 121 | (graph_cut.x[:, 
2] == torch.tensor(1.)))[0] 122 | graph_cut = remove_update(graph_cut, gnx_sub, nodes_separator) 123 | 124 | if i > 1 and actions[-1] == actions[-2]: 125 | break 126 | if total_reward > peak_reward: 127 | peak_reward = total_reward 128 | peak_time = i + 1 129 | 130 | for t in range(peak_time): 131 | g0, _, _, _ = change_vertex( 132 | g0, positions[actions[t]].item(), gnx, va, vb) 133 | 134 | return g0 135 | 136 | # Update the nodes that are necessary to get a minimal separator 137 | 138 | 139 | def remove_update(gr, gnx, sep): 140 | graph = gr.clone() 141 | for ii in sep: 142 | i = ii.item() 143 | flagA, flagB = 0, 0 144 | for v in gnx[i]: 145 | if flagA == 1 and flagB == 1: 146 | break 147 | if graph.x[v, 0] == torch.tensor(1.): 148 | flagA = 1 149 | elif graph.x[v, 1] == torch.tensor(1.): 150 | flagB = 1 151 | if flagA == 1 and flagB == 1: 152 | graph.x[i, 4] = torch.tensor(1.) 153 | else: 154 | graph.x[i, 4] = torch.tensor(0.) 155 | return graph 156 | 157 | # Full valuation of the DRL model repeated for trials number of times. 158 | # Then the best separator is returned 159 | 160 | 161 | def ac_eval_coarse_full_trials(ac, graph, k, trials): 162 | graph_test = graph.clone() 163 | gg = ac_eval_coarse_full(ac, graph_test, k) 164 | ncut = normalized_separator(gg) 165 | for j in range(1, trials): 166 | gg1 = ac_eval_coarse_full(ac, graph_test, k) 167 | if normalized_separator(gg1) < ncut: 168 | ncut = normalized_separator(gg1) 169 | gg = gg1 170 | 171 | return gg 172 | 173 | # Change the feature of the selected vertex v 174 | 175 | 176 | def change_vertex(g, v, gnx, va, vb): 177 | a, b, s = 0, 0, 0 178 | if g.x[v, 2] == 0.: 179 | if g.x[v, 0] == 1.: 180 | a, s = -1, 1 181 | else: 182 | b, s = -1, 1 183 | # node v is in A or B, add it to the separator 184 | g.x[v, :3] = torch.tensor([0., 0., 1.]) 185 | 186 | return g, a, b, s 187 | # node v is in the separator 188 | 189 | for vj in gnx[v]: 190 | if g.x[vj, 0] == 1.: 191 | # node v is in the separator and connected to A, so add it 192 | # to A 193 | g.x[v, :3] = torch.tensor([1., 0., 0.]) 194 | a, s = 1, -1 195 | return g, a, b, s 196 | if g.x[vj, 1] == 1.: 197 | # node v is in the separator and connected to B, so add it 198 | # to B 199 | g.x[v, :3] = torch.tensor([0., 1., 0.]) 200 | b, s = 1, -1 201 | return g, a, b, s 202 | # node v is in the separator, but is not connected to A or B. Add 203 | # node v to A if the volume of A is less (or equal) to that of B, 204 | # or to B if the volume of B is less than that of A. 
205 | if va <= vb: 206 | g.x[v, :3] = torch.tensor([1., 0., 0.]) 207 | s, a = -1, 1 208 | else: 209 | g.x[v, :3] = torch.tensor([0., 1., 0.]) 210 | s, b = -1, 1 211 | return g, a, b, s 212 | 213 | # Build a pytorch geometric graph with features [1,0,0] form a networkx graph 214 | 215 | 216 | def torch_from_graph(graph): 217 | adj_sparse = nx.to_scipy_sparse_array(graph, format='coo') 218 | row = adj_sparse.row 219 | col = adj_sparse.col 220 | 221 | one_hot = [] 222 | for i in range(graph.number_of_nodes()): 223 | one_hot.append([1., 0., 0.]) 224 | 225 | edges = torch.tensor([row, col], dtype=torch.long) 226 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 227 | graph_torch = Data(x=nodes, edge_index=edges) 228 | 229 | return graph_torch 230 | 231 | # Build a pytorch geometric graph with features [1,0] form a sparse matrix 232 | 233 | 234 | def torch_from_sparse(adj_sparse): 235 | 236 | row = adj_sparse.row 237 | col = adj_sparse.col 238 | 239 | features = [] 240 | for i in range(adj_sparse.shape[0]): 241 | features.append([1., 0., 0.]) 242 | 243 | edges = torch.tensor([row, col], dtype=torch.long) 244 | nodes = torch.tensor(np.array(features), dtype=torch.float) 245 | graph_torch = Data(x=nodes, edge_index=edges) 246 | 247 | return graph_torch 248 | 249 | # Pytorch geometric Delaunay mesh with n random points in the unit square 250 | 251 | 252 | def random_delaunay_graph(n): 253 | points = np.random.random_sample((n, 2)) 254 | g = graph_delaunay_from_points(points) 255 | return torch_from_graph(g) 256 | 257 | # Networkx Delaunay mesh with n random points in the unit square 258 | 259 | 260 | def graph_delaunay_from_points(points): 261 | mesh = Delaunay(points, qhull_options="QJ") 262 | mesh_simp = mesh.simplices 263 | edges = [] 264 | for i in range(len(mesh_simp)): 265 | edges += combinations(mesh_simp[i], 2) 266 | e = list(set(edges)) 267 | return nx.Graph(e) 268 | 269 | # Number of vertices in the separator 270 | 271 | 272 | def separator(graph): 273 | sep = torch.where((graph.x == torch.tensor( 274 | [0., 0., 1.])).all(axis=-1))[0].shape[0] 275 | return sep 276 | 277 | # Normalized separator 278 | 279 | 280 | def normalized_separator(graph): 281 | da, db = volumes(graph) 282 | sep = torch.where((graph.x == torch.tensor( 283 | [0., 0., 1.])).all(axis=-1))[0].shape[0] 284 | if da == 0 or db == 0: 285 | return 10 286 | else: 287 | return sep * (1 / da + 1 / db) 288 | 289 | # Normalized separator for METIS 290 | 291 | 292 | def vertex_sep_metis(graph, gnx): 293 | sep, nodes1, nodes2 = nxmetis.vertex_separator(gnx) 294 | da = len(nodes1) 295 | db = len(nodes2) 296 | return len(sep) * (1 / da + 1 / db) 297 | 298 | # Subgraph around the separator 299 | 300 | 301 | def k_hop_graph_cut(graph, k): 302 | nei = torch.where((graph.x[:, 2] == torch.tensor(1.)))[0] 303 | data_cut = k_hop_subgraph( 304 | nei, 305 | k, 306 | graph.edge_index, 307 | relabel_nodes=True, 308 | num_nodes=graph.num_nodes) 309 | data_small = k_hop_subgraph( 310 | nei, 311 | k - 1, 312 | graph.edge_index, 313 | relabel_nodes=True, 314 | num_nodes=graph.num_nodes) 315 | nodes_boundary = list( 316 | set(data_cut[0].numpy()).difference(data_small[0].numpy())) 317 | boundary_features = torch.tensor([1. if i.item( 318 | ) in nodes_boundary else 0. 
for i in data_cut[0]]).reshape(data_cut[0].shape[0], 1) 319 | remove_f = [] 320 | for j in range(len(data_cut[0])): 321 | if graph.x[data_cut[0][j]][2] == torch.tensor(1.): 322 | neighbors, _, _, _ = k_hop_subgraph( 323 | [data_cut[0][j]], 1, graph.edge_index, relabel_nodes=True, num_nodes=graph.num_nodes) 324 | flagA, flagB = 0, 0 325 | for w in neighbors: 326 | if graph.x[w][0] == torch.tensor(1.): 327 | flagA = 1 328 | elif graph.x[w][1] == torch.tensor(1.): 329 | flagB = 1 330 | if flagA == 1 and flagB == 1: 331 | remove_f.append(1.) 332 | else: 333 | remove_f.append(0.) 334 | else: 335 | remove_f.append(0.) 336 | remove_features = torch.tensor(remove_f).reshape(len(remove_f), 1) 337 | va, vb = volumes(graph) 338 | e = torch.ones(data_cut[0].shape[0], 1) 339 | nnz = graph.num_nodes 340 | features = torch.cat((graph.x[data_cut[0]], 341 | boundary_features, 342 | remove_features, 343 | torch.true_divide(va, 344 | nnz) * e, 345 | torch.true_divide(vb, 346 | nnz) * e), 347 | 1) 348 | g_red = Batch( 349 | batch=torch.zeros( 350 | data_cut[0].shape[0], 351 | dtype=torch.long), 352 | x=features, 353 | edge_index=data_cut[1]) 354 | return g_red, data_cut[0] 355 | 356 | # Cardinalities of the partitions A and B 357 | 358 | 359 | def volumes(graph): 360 | ab = torch.sum(graph.x, dim=0) 361 | return ab[0].item(), ab[1].item() 362 | 363 | # Coarsen a pytorch geometric graph, then find the separator with METIS 364 | # and interpolate it back 365 | 366 | 367 | def partition_metis_refine(graph): 368 | cluster = graclus(graph.edge_index, num_nodes=graph.num_nodes) 369 | coarse_graph = avg_pool( 370 | cluster, 371 | Batch( 372 | batch=graph.batch, 373 | x=graph.x, 374 | edge_index=graph.edge_index)) 375 | coarse_graph_nx = to_networkx(coarse_graph, to_undirected=True) 376 | sep, A, B = nxmetis.vertex_separator(coarse_graph_nx) 377 | coarse_graph.x[sep] = torch.tensor([0., 0., 1.]) 378 | coarse_graph.x[A] = torch.tensor([1., 0., 0.]) 379 | coarse_graph.x[B] = torch.tensor([0., 1., 0.]) 380 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 381 | graph.x = coarse_graph.x[inverse] 382 | return graph 383 | 384 | # Separator of a pytorch geometric graph obtained with METIS 385 | 386 | 387 | def partition_metis(coarse_graph, coarse_graph_nx): 388 | sep, A, B = nxmetis.vertex_separator(coarse_graph_nx) 389 | coarse_graph.x[sep] = torch.tensor([0., 0., 1.]) 390 | coarse_graph.x[A] = torch.tensor([1., 0., 0.]) 391 | coarse_graph.x[B] = torch.tensor([0., 1., 0.]) 392 | return coarse_graph 393 | 394 | # Matrix ordering provided by SCOTCH 395 | 396 | 397 | def scotch_ordering(g): 398 | gnx = to_networkx(g, to_undirected=True) 399 | a = nx.to_scipy_sparse_array(gnx, format="coo", dtype=np.int32) 400 | a = scipy.sparse.csr_matrix((a.data, (a.row, a.col)), dtype=np.int32) 401 | n = g.num_nodes 402 | perm = np.zeros(n, dtype=np.int32) 403 | libscotch.WRAPPER_SCOTCH_graphOrder( 404 | ctypes.c_int(n), 405 | ctypes.c_void_p(a.indptr.ctypes.data), 406 | ctypes.c_void_p(a.indices.ctypes.data), 407 | ctypes.c_void_p(perm.ctypes.data) 408 | ) 409 | return perm.tolist() 410 | 411 | # Matrix ordering provided by COLAMD 412 | 413 | 414 | def amd_ordering(g): 415 | gnx = to_networkx(g, to_undirected=True) 416 | n = g.num_nodes 417 | a = nx.to_scipy_sparse_array(gnx, format="coo", dtype=np.int32) 418 | a = scipy.sparse.csr_matrix((a.data, (a.row, a.col)), dtype=np.int32) 419 | a += identity(n, dtype=np.int32) 420 | perm = np.zeros(n, dtype=np.int32) 421 | iperm = np.zeros(n, dtype=np.int32) 422 | 
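    # The C wrapper expects 0-based CSR arrays (see AMDReordering.cpp) and
    # writes the AMD permutation and its inverse into perm and iperm.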
libamd.WRAPPER_amd( 423 | ctypes.c_int(n), 424 | ctypes.c_void_p(a.indptr.ctypes.data), 425 | ctypes.c_void_p(a.indices.ctypes.data), 426 | ctypes.c_void_p(perm.ctypes.data), 427 | ctypes.c_void_p(iperm.ctypes.data) 428 | ) 429 | return perm.tolist() 430 | 431 | # Nested dissection ordering with the DRL algorithm 432 | 433 | 434 | def drl_nested_dissection(graph, nmin, hops, model, trials, lvl=0): 435 | g_stack = [graph] 436 | i_stack = [[i for i in range(graph.num_nodes)]] 437 | perm = [] 438 | i = 0 439 | while g_stack: 440 | g = g_stack.pop() 441 | idx = i_stack.pop() 442 | if g.num_nodes < nmin: 443 | if g.num_nodes > 0: 444 | p = amd_ordering(g) 445 | perm = [idx[i] for i in p] + perm 446 | else: 447 | g = ac_eval_coarse_full_trials(model, g, hops, trials) 448 | ia = torch.where(g.x[:, 0] == 1.)[0].tolist() 449 | ib = torch.where(g.x[:, 1] == 1.)[0].tolist() 450 | isep = torch.where(g.x[:, 2] == 1.)[0].tolist() 451 | ga_data = subgraph( 452 | ia, g.edge_index, relabel_nodes=True, num_nodes=g.num_nodes 453 | )[0] 454 | gb_data = subgraph( 455 | ib, g.edge_index, relabel_nodes=True, num_nodes=g.num_nodes 456 | )[0] 457 | ga = Batch( 458 | batch=torch.zeros(len(ia), 1), 459 | x=torch.zeros(len(ia), 3), edge_index=ga_data 460 | ) 461 | gb = Batch( 462 | batch=torch.zeros(len(ib), 1), 463 | x=torch.zeros(len(ib), 3), edge_index=gb_data 464 | ) 465 | g_stack.append(ga) 466 | i_stack.append([idx[i] for i in ia]) 467 | g_stack.append(gb) 468 | i_stack.append([idx[i] for i in ib]) 469 | perm = [idx[i] for i in isep] + perm 470 | i += 1 471 | return perm 472 | 473 | # Nested dissection ordering implemented with METIS 474 | 475 | 476 | def metis_nested_dissection(graph, nmin): 477 | g_stack = [graph] 478 | i_stack = [[i for i in range(graph.num_nodes)]] 479 | perm = [] 480 | i = 0 481 | while g_stack: 482 | g = g_stack.pop() 483 | idx = i_stack.pop() 484 | if g.num_nodes < nmin: 485 | if g.num_nodes > 0: 486 | p = amd_ordering(g) 487 | perm = [idx[i] for i in p] + perm 488 | else: 489 | isep, ia, ib = nxmetis.vertex_separator( 490 | to_networkx(g, to_undirected=True)) 491 | ga_data = subgraph( 492 | ia, g.edge_index, relabel_nodes=True, num_nodes=g.num_nodes 493 | )[0] 494 | gb_data = subgraph( 495 | ib, g.edge_index, relabel_nodes=True, num_nodes=g.num_nodes 496 | )[0] 497 | ga = Batch( 498 | batch=torch.zeros(len(ia), 1), 499 | x=torch.zeros(len(ia), 3), edge_index=ga_data 500 | ) 501 | gb = Batch( 502 | batch=torch.zeros(len(ib), 1), 503 | x=torch.zeros(len(ib), 3), edge_index=gb_data 504 | ) 505 | g_stack.append(ga) 506 | i_stack.append([idx[i] for i in ia]) 507 | g_stack.append(gb) 508 | i_stack.append([idx[i] for i in ib]) 509 | perm = [idx[i] for i in isep] + perm 510 | i += 1 511 | return perm 512 | 513 | # Deep neural network for the DRL agent 514 | 515 | 516 | class Model(torch.nn.Module): 517 | def __init__(self, units): 518 | super(Model, self).__init__() 519 | 520 | self.units = units 521 | self.common_layers = 1 522 | self.critic_layers = 1 523 | self.actor_layers = 1 524 | self.activation = torch.tanh 525 | 526 | self.conv_first = SAGEConv(7, self.units) 527 | self.conv_common = nn.ModuleList( 528 | [SAGEConv(self.units, self.units) 529 | for i in range(self.common_layers)] 530 | ) 531 | self.conv_actor = nn.ModuleList( 532 | [SAGEConv(self.units, 533 | 1 if i == self.actor_layers - 1 else self.units) 534 | for i in range(self.actor_layers)] 535 | ) 536 | self.conv_critic = nn.ModuleList( 537 | [SAGEConv(self.units, self.units) 538 | for i in range(self.critic_layers)] 539 | ) 540 
| self.final_critic = nn.Linear(self.units, 1) 541 | 542 | def forward(self, graph): 543 | x, edge_index, batch = graph.x, graph.edge_index, graph.batch 544 | 545 | do_not_flip = torch.where(x[:, 3] != 0.) 546 | do_not_flip_2 = torch.where(x[:, 4] != 0.) 547 | 548 | x = self.activation(self.conv_first(x, edge_index)) 549 | for i in range(self.common_layers): 550 | x = self.activation(self.conv_common[i](x, edge_index)) 551 | 552 | x_actor = x 553 | for i in range(self.actor_layers): 554 | x_actor = self.conv_actor[i](x_actor, edge_index) 555 | if i < self.actor_layers - 1: 556 | x_actor = self.activation(x_actor) 557 | x_actor[do_not_flip] = torch.tensor(-np.Inf) 558 | x_actor[do_not_flip_2] = torch.tensor(-np.Inf) 559 | x_actor = torch.log_softmax(x_actor, dim=0) 560 | 561 | if not self.training: 562 | return x_actor 563 | 564 | x_critic = x.detach() 565 | for i in range(self.critic_layers): 566 | x_critic = self.conv_critic[i](x_critic, edge_index) 567 | if i < self.critic_layers - 1: 568 | x_critic = self.activation(x_critic) 569 | x_critic = self.final_critic(x_critic) 570 | x_critic = torch.tanh(global_mean_pool(x_critic, batch)) 571 | return x_actor, x_critic 572 | 573 | 574 | if __name__ == "__main__": 575 | parser = argparse.ArgumentParser( 576 | formatter_class=argparse.ArgumentDefaultsHelpFormatter) 577 | parser.add_argument('--out', default='./temp_edge/', type=str) 578 | parser.add_argument( 579 | "--nmin", 580 | default=100, 581 | help="Minimum graph size", 582 | type=int) 583 | parser.add_argument( 584 | "--nmax", 585 | default=50000, 586 | help="Maximum graph size", 587 | type=int) 588 | parser.add_argument( 589 | "--ntest", 590 | default=1000, 591 | help="Number of testing graphs", 592 | type=int) 593 | parser.add_argument("--hops", default=3, help="Number of hops", type=int) 594 | parser.add_argument( 595 | "--units", 596 | default=7, 597 | help="Number of units in conv layers", 598 | type=int) 599 | parser.add_argument( 600 | "--attempts", 601 | default=3, 602 | help="Number of attempt in the DRL", 603 | type=int) 604 | parser.add_argument( 605 | "--dataset", 606 | default='delaunay', 607 | help="Dataset type: delaunay, suitesparse, graded l, hole3, hole6", 608 | type=str) 609 | 610 | torch.manual_seed(1) 611 | np.random.seed(2) 612 | 613 | args = parser.parse_args() 614 | outdir = args.out + '/' 615 | Path(outdir).mkdir(parents=True, exist_ok=True) 616 | 617 | n_min = args.nmin 618 | n_max = args.nmax 619 | n_test = args.ntest 620 | 621 | hops = args.hops 622 | units = args.units 623 | trials = args.attempts 624 | dataset_type = args.dataset 625 | 626 | model = Model(units) 627 | if dataset_type == 'suitesparse': 628 | model.load_state_dict( 629 | torch.load('./temp_edge/model_separator_suitesparse')) 630 | else: 631 | model.load_state_dict( 632 | torch.load('./temp_edge/model_separator_delaunay')) 633 | model.eval() 634 | for p in model.parameters(): 635 | p.requires_grad = False 636 | print('Model loaded\n') 637 | 638 | nmin_nd = 100 639 | 640 | list_picked = [] 641 | i = 0 642 | while i < n_test: 643 | # Choose the dataset type according to the parameter 'dataset_type' 644 | if dataset_type == 'delaunay': 645 | n_nodes = np.random.choice(np.arange(n_min, n_max)) 646 | g = random_delaunay_graph(n_nodes) 647 | g.batch = torch.zeros(g.num_nodes) 648 | i += 1 649 | else: 650 | if len(list_picked) >= len( 651 | os.listdir( 652 | os.path.expanduser( 653 | 'drl-graph-partitioning/' + 654 | str(dataset_type) + 655 | '/'))): 656 | break 657 | graph = random.choice( 658 | 
os.listdir(
659 |                 os.path.expanduser(
660 |                     'drl-graph-partitioning/' +
661 |                     str(dataset_type) +
662 |                     '/')))
663 |             if str(graph) not in list_picked:
664 |                 list_picked.append(str(graph))
665 |                 matrix_sparse = mmread(
666 |                     os.path.expanduser(
667 |                         'drl-graph-partitioning/' +
668 |                         str(dataset_type) +
669 |                         '/' +
670 |                         str(graph)))
671 |                 gnx = nx.from_scipy_sparse_array(matrix_sparse)
672 |                 if nx.number_connected_components(gnx) == 1 and gnx.number_of_nodes(
673 |                 ) > n_min and gnx.number_of_nodes() < n_max:
674 |                     g = torch_from_sparse(matrix_sparse)
675 |                     g.weight = torch.tensor([1] * g.num_edges)
676 |                     g.batch = torch.zeros(g.num_nodes)
677 |                     i += 1
678 |                 else:
679 |                     continue
680 |             else:
681 |                 continue
682 |         print('Graph:', i, ' Vertices:', g.num_nodes, ' Edges:', g.num_edges)
683 | 
684 |         gnx = to_networkx(g, to_undirected=True)
685 |         a = nx.to_scipy_sparse_array(
686 |             gnx, format='csc') + 10 * identity(g.num_nodes)
687 |         a = scipy.sparse.csc_matrix(a, dtype=np.int32)
688 |         # Compute the number of non-zero (nnz) elements in the LU
689 |         # factorization with DRL.
690 |         # Sometimes METIS may fail in computing the vertex separator on the
691 |         # coarsest graph, producing an empty partition that affects the
692 |         # computations on the finer interpolation levels
693 |         try:
694 |             p = drl_nested_dissection(g, nmin_nd, hops, model, trials, lvl=0)
695 |         except ZeroDivisionError:
696 |             continue
697 | 
698 | 
699 |         aperm = scipy.sparse.csr_matrix(a[:, p][p, :], dtype=np.int32)
700 |         lu = splu(aperm, permc_spec='NATURAL')
701 |         nnz_drl = lu.L.count_nonzero() + lu.U.count_nonzero()
702 | 
703 |         # Compute the number of non-zero (nnz) elements in the LU factorization
704 |         # with nested dissection with METIS
705 | 
706 |         p = metis_nested_dissection(g, nmin_nd)
707 |         aperm = a[:, p][p, :]
708 |         lu = splu(aperm, permc_spec='NATURAL')
709 |         nnz_nd_metis = lu.L.count_nonzero() + lu.U.count_nonzero()
710 | 
711 |         # Compute the number of non-zero (nnz) elements in the LU factorization
712 |         # with COLAMD (this is the default ordering for superlu)
713 |         lu = splu(a, permc_spec='COLAMD')
714 |         nnz_colamd = lu.L.count_nonzero() + lu.U.count_nonzero()
715 | 
716 |         # Compute the number of non-zero (nnz) elements in the LU factorization
717 |         # with the built-in nested dissection ordering with METIS
718 |         p = nxmetis.node_nested_dissection(gnx)
719 |         aperm = a[:, p][p, :]
720 |         lu = splu(aperm, permc_spec='NATURAL')
721 |         nnz_metis = lu.L.count_nonzero() + lu.U.count_nonzero()
722 | 
723 |         # Compute the number of non-zero (nnz) elements in the LU factorization
724 |         # with SCOTCH
725 |         p = scotch_ordering(g)
726 |         aperm = a[:, p][p, :]
727 |         lu = splu(aperm, permc_spec='NATURAL')
728 |         nnz_scotch = lu.L.count_nonzero() + lu.U.count_nonzero()
729 |         print(
730 |             'NNZ: DRL:',
731 |             nnz_drl,
732 |             ' ND_METIS:',
733 |             nnz_nd_metis,
734 |             ' COLAMD:',
735 |             nnz_colamd,
736 |             ' METIS:',
737 |             nnz_metis,
738 |             ' SCOTCH:',
739 |             nnz_scotch)
740 |         print('')
741 |     print('Done')
742 | 
--------------------------------------------------------------------------------
/separator/drl_separator_test.py:
--------------------------------------------------------------------------------
1 | 
2 | import argparse
3 | from pathlib import Path
4 | 
5 | import networkx as nx
6 | import nxmetis
7 | 
8 | import torch
9 | import torch.nn as nn
10 | import torch.multiprocessing as mp
11 | 
12 | from torch_geometric.data import Data, DataLoader, Batch
13 | from torch_geometric.nn import SAGEConv, graclus, avg_pool, global_mean_pool
14 | from
torch_geometric.utils import to_networkx, k_hop_subgraph, degree 15 | 16 | import numpy as np 17 | from numpy import random 18 | 19 | import scipy 20 | from scipy.sparse import coo_matrix, rand 21 | from scipy.io import mmread 22 | from scipy.spatial import Delaunay 23 | 24 | import copy 25 | import os 26 | from itertools import combinations 27 | 28 | # Full evaluation of the DRL model 29 | 30 | 31 | def ac_eval_coarse_full(ac, graph, k): 32 | g = graph.clone() 33 | info = [] 34 | edge_info = [] 35 | while g.num_nodes > 100: 36 | edge_info.append(g.edge_index) 37 | cluster = graclus(g.edge_index, num_nodes=g.num_nodes) 38 | info.append(cluster) 39 | g1 = avg_pool( 40 | cluster, 41 | Batch( 42 | batch=g.batch, 43 | x=g.x, 44 | edge_index=g.edge_index)) 45 | g = g1 46 | 47 | gnx = to_networkx(g, to_undirected=True) 48 | g = partition_metis(g, gnx) 49 | 50 | while len(info) > 0: 51 | cluster = info.pop() 52 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 53 | g.x = g.x[inverse] 54 | g.edge_index = edge_info.pop() 55 | gnx = to_networkx(g, to_undirected=True) 56 | va, vb = volumes(g) 57 | g = ac_eval_refine(ac, g, k, gnx, va, vb) 58 | return g 59 | 60 | # Refining the cut on the subgraph around the separator 61 | 62 | 63 | def ac_eval_refine(ac, graph_test, k, gnx, va, vb, perc=0.05): 64 | graph = graph_test.clone() 65 | g0 = graph_test.clone() 66 | n = separator(graph) 67 | data = k_hop_graph_cut(graph, k) 68 | graph_cut, positions = data[0], data[1] 69 | gnx_sub = to_networkx(graph_cut, to_undirected=True) 70 | i = 0 71 | 72 | peak_reward = 0 73 | peak_time = 0 74 | total_reward = 0 75 | actions = [] 76 | 77 | e = torch.ones(graph_cut.num_nodes, 1) 78 | nnz = graph.num_nodes 79 | sep = int(n) 80 | nodes_separator = torch.where((graph_cut.x[:, 2] == torch.tensor(1.)))[0] 81 | 82 | for i in range(int(2 * n)): 83 | with torch.no_grad(): 84 | policy = ac(graph_cut) 85 | 86 | probs = policy.view(-1).clone().detach().numpy() 87 | 88 | flip = np.argmax(probs) 89 | 90 | actions.append(flip) 91 | old_sep = sep * (1 / va + 1 / vb) 92 | 93 | graph_cut, a, b, s = change_vertex(graph_cut, flip, gnx_sub, va, vb) 94 | graph, _, _, _ = change_vertex( 95 | graph, positions[flip].item(), gnx, va, vb) 96 | 97 | if a == 1: 98 | va += 1 99 | sep -= 1 100 | elif a == -1: 101 | va -= 1 102 | sep += 1 103 | elif b == 1: 104 | vb += 1 105 | sep -= 1 106 | elif b == -1: 107 | vb -= 1 108 | sep += 1 109 | 110 | total_reward += old_sep - sep * (1 / va + 1 / vb) 111 | 112 | graph_cut.x[:, 5] = torch.true_divide(va, nnz) 113 | graph_cut.x[:, 6] = torch.true_divide(vb, nnz) 114 | 115 | nodes_separator = torch.where( 116 | (graph_cut.x[:, 2] == torch.tensor(1.)))[0] 117 | graph_cut = remove_update(graph_cut, gnx_sub, nodes_separator) 118 | 119 | if i > 1 and actions[-1] == actions[-2]: 120 | break 121 | if total_reward > peak_reward: 122 | peak_reward = total_reward 123 | peak_time = i + 1 124 | 125 | for t in range(peak_time): 126 | g0, _, _, _ = change_vertex( 127 | g0, positions[actions[t]].item(), gnx, va, vb) 128 | 129 | return g0 130 | 131 | # Update the nodes that are necessary to get a minimal separator 132 | 133 | 134 | def remove_update(gr, gnx, sep): 135 | graph = gr.clone() 136 | for ii in sep: 137 | i = ii.item() 138 | flagA, flagB = 0, 0 139 | for v in gnx[i]: 140 | if flagA == 1 and flagB == 1: 141 | break 142 | if graph.x[v, 0] == torch.tensor(1.): 143 | flagA = 1 144 | elif graph.x[v, 1] == torch.tensor(1.): 145 | flagB = 1 146 | if flagA == 1 and flagB == 1: 147 | graph.x[i, 4] = 
torch.tensor(1.) 148 | else: 149 | graph.x[i, 4] = torch.tensor(0.) 150 | return graph 151 | 152 | # Full valuation of the DRL model repeated for trials number of times. 153 | # Then the best separator is returned 154 | 155 | 156 | def ac_eval_coarse_full_trials(ac, graph, k, trials): 157 | graph_test = graph.clone() 158 | gg = ac_eval_coarse_full(ac, graph_test, k) 159 | ncut = normalized_separator(gg) 160 | for j in range(1, trials): 161 | gg1 = ac_eval_coarse_full(ac, graph_test, k) 162 | if normalized_separator(gg1) < ncut: 163 | ncut = normalized_separator(gg1) 164 | gg = gg1 165 | 166 | return gg 167 | 168 | # Change the feature of the selected vertex v 169 | 170 | 171 | def change_vertex(g, v, gnx, va, vb): 172 | a, b, s = 0, 0, 0 173 | if g.x[v, 2] == 0.: 174 | if g.x[v, 0] == 1.: 175 | a, s = -1, 1 176 | else: 177 | b, s = -1, 1 178 | # node v is in A or B, add it to the separator 179 | g.x[v, :3] = torch.tensor([0., 0., 1.]) 180 | 181 | return g, a, b, s 182 | # node v is in the separator 183 | 184 | for vj in gnx[v]: 185 | if g.x[vj, 0] == 1.: 186 | # node v is in the separator and connected to A, so add it 187 | # to A 188 | g.x[v, :3] = torch.tensor([1., 0., 0.]) 189 | a, s = 1, -1 190 | return g, a, b, s 191 | if g.x[vj, 1] == 1.: 192 | # node v is in the separator and connected to B, so add it 193 | # to B 194 | g.x[v, :3] = torch.tensor([0., 1., 0.]) 195 | b, s = 1, -1 196 | return g, a, b, s 197 | # node v is in the separator, but is not connected to A or B. Add 198 | # node v to A if the volume of A is less (or equal) to that of B, 199 | # or to B if the volume of B is less than that of A. 200 | if va <= vb: 201 | g.x[v, :3] = torch.tensor([1., 0., 0.]) 202 | s, a = -1, 1 203 | else: 204 | g.x[v, :3] = torch.tensor([0., 1., 0.]) 205 | s, b = -1, 1 206 | return g, a, b, s 207 | 208 | # Build a pytorch geometric graph with features [1,0,0] form a networkx graph 209 | 210 | 211 | def torch_from_graph(graph): 212 | adj_sparse = nx.to_scipy_sparse_array(graph, format='coo') 213 | row = adj_sparse.row 214 | col = adj_sparse.col 215 | 216 | one_hot = [] 217 | for i in range(graph.number_of_nodes()): 218 | one_hot.append([1., 0., 0.]) 219 | 220 | edges = torch.tensor([row, col], dtype=torch.long) 221 | nodes = torch.tensor(np.array(one_hot), dtype=torch.float) 222 | graph_torch = Data(x=nodes, edge_index=edges) 223 | 224 | return graph_torch 225 | 226 | # Build a pytorch geometric graph with features [1,0] form a sparse matrix 227 | 228 | 229 | def torch_from_sparse(adj_sparse): 230 | 231 | row = adj_sparse.row 232 | col = adj_sparse.col 233 | 234 | features = [] 235 | for i in range(adj_sparse.shape[0]): 236 | features.append([1., 0., 0.]) 237 | 238 | edges = torch.tensor([row, col], dtype=torch.long) 239 | nodes = torch.tensor(np.array(features), dtype=torch.float) 240 | graph_torch = Data(x=nodes, edge_index=edges) 241 | 242 | return graph_torch 243 | 244 | # Pytorch geometric Delaunay mesh with n random points in the unit square 245 | 246 | 247 | def random_delaunay_graph(n): 248 | points = np.random.random_sample((n, 2)) 249 | g = graph_delaunay_from_points(points) 250 | return torch_from_graph(g) 251 | 252 | # Networkx Delaunay mesh with n random points in the unit square 253 | 254 | 255 | def graph_delaunay_from_points(points): 256 | mesh = Delaunay(points, qhull_options="QJ") 257 | mesh_simp = mesh.simplices 258 | edges = [] 259 | for i in range(len(mesh_simp)): 260 | edges += combinations(mesh_simp[i], 2) 261 | e = list(set(edges)) 262 | return nx.Graph(e) 263 | 264 | # 
Number of vertices in the separator 265 | 266 | 267 | def separator(graph): 268 | sep = torch.where((graph.x == torch.tensor( 269 | [0., 0., 1.])).all(axis=-1))[0].shape[0] 270 | return sep 271 | 272 | # Normalized separator 273 | 274 | 275 | def normalized_separator(graph): 276 | da, db = volumes(graph) 277 | sep = torch.where((graph.x == torch.tensor( 278 | [0., 0., 1.])).all(axis=-1))[0].shape[0] 279 | if da == 0 or db == 0: 280 | return 10 281 | else: 282 | return sep * (1 / da + 1 / db) 283 | 284 | # Normalized separator for METIS 285 | 286 | 287 | def vertex_sep_metis(graph, gnx): 288 | sep, nodes1, nodes2 = nxmetis.vertex_separator(gnx) 289 | da = len(nodes1) 290 | db = len(nodes2) 291 | return len(sep) * (1 / da + 1 / db) 292 | 293 | # Subgraph around the separator 294 | 295 | 296 | def k_hop_graph_cut(graph, k): 297 | nei = torch.where((graph.x[:, 2] == torch.tensor(1.)))[0] 298 | data_cut = k_hop_subgraph( 299 | nei, 300 | k, 301 | graph.edge_index, 302 | relabel_nodes=True, 303 | num_nodes=graph.num_nodes) 304 | data_small = k_hop_subgraph( 305 | nei, 306 | k - 1, 307 | graph.edge_index, 308 | relabel_nodes=True, 309 | num_nodes=graph.num_nodes) 310 | nodes_boundary = list( 311 | set(data_cut[0].numpy()).difference(data_small[0].numpy())) 312 | boundary_features = torch.tensor([1. if i.item( 313 | ) in nodes_boundary else 0. for i in data_cut[0]]).reshape(data_cut[0].shape[0], 1) 314 | remove_f = [] 315 | for j in range(len(data_cut[0])): 316 | if graph.x[data_cut[0][j]][2] == torch.tensor(1.): 317 | neighbors, _, _, _ = k_hop_subgraph( 318 | [data_cut[0][j]], 1, graph.edge_index, relabel_nodes=True, num_nodes=graph.num_nodes) 319 | flagA, flagB = 0, 0 320 | for w in neighbors: 321 | if graph.x[w][0] == torch.tensor(1.): 322 | flagA = 1 323 | elif graph.x[w][1] == torch.tensor(1.): 324 | flagB = 1 325 | if flagA == 1 and flagB == 1: 326 | remove_f.append(1.) 327 | else: 328 | remove_f.append(0.) 329 | else: 330 | remove_f.append(0.) 
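    # remove_f == 1. flags separator vertices adjacent to both A and B:
    # moving such a vertex out of the separator would reconnect the two
    # partitions, so the model masks these vertices out of its policy.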
331 | remove_features = torch.tensor(remove_f).reshape(len(remove_f), 1) 332 | va, vb = volumes(graph) 333 | e = torch.ones(data_cut[0].shape[0], 1) 334 | nnz = graph.num_nodes 335 | features = torch.cat((graph.x[data_cut[0]], 336 | boundary_features, 337 | remove_features, 338 | torch.true_divide(va, 339 | nnz) * e, 340 | torch.true_divide(vb, 341 | nnz) * e), 342 | 1) 343 | g_red = Batch( 344 | batch=torch.zeros( 345 | data_cut[0].shape[0], 346 | dtype=torch.long), 347 | x=features, 348 | edge_index=data_cut[1]) 349 | return g_red, data_cut[0] 350 | 351 | # Cardinalities of the partitions A and B 352 | 353 | 354 | def volumes(graph): 355 | ab = torch.sum(graph.x, dim=0) 356 | return ab[0].item(), ab[1].item() 357 | 358 | # Coarsen a pytorch geometric graph, then find the separator with METIS 359 | # and interpolate it back 360 | 361 | 362 | def partition_metis_refine(graph): 363 | cluster = graclus(graph.edge_index, num_nodes=graph.num_nodes) 364 | coarse_graph = avg_pool( 365 | cluster, 366 | Batch( 367 | batch=graph.batch, 368 | x=graph.x, 369 | edge_index=graph.edge_index)) 370 | coarse_graph_nx = to_networkx(coarse_graph, to_undirected=True) 371 | sep, A, B = nxmetis.vertex_separator(coarse_graph_nx) 372 | coarse_graph.x[sep] = torch.tensor([0., 0., 1.]) 373 | coarse_graph.x[A] = torch.tensor([1., 0., 0.]) 374 | coarse_graph.x[B] = torch.tensor([0., 1., 0.]) 375 | _, inverse = torch.unique(cluster, sorted=True, return_inverse=True) 376 | graph.x = coarse_graph.x[inverse] 377 | return graph 378 | 379 | # Separator of a pytorch geometric graph obtained with METIS 380 | 381 | 382 | def partition_metis(coarse_graph, coarse_graph_nx): 383 | sep, A, B = nxmetis.vertex_separator(coarse_graph_nx) 384 | coarse_graph.x[sep] = torch.tensor([0., 0., 1.]) 385 | coarse_graph.x[A] = torch.tensor([1., 0., 0.]) 386 | coarse_graph.x[B] = torch.tensor([0., 1., 0.]) 387 | return coarse_graph 388 | 389 | # Deep neural network for the DRL agent 390 | 391 | 392 | class Model(torch.nn.Module): 393 | def __init__(self, units): 394 | super(Model, self).__init__() 395 | 396 | self.units = units 397 | self.common_layers = 1 398 | self.critic_layers = 1 399 | self.actor_layers = 1 400 | self.activation = torch.tanh 401 | 402 | self.conv_first = SAGEConv(7, self.units) 403 | self.conv_common = nn.ModuleList( 404 | [SAGEConv(self.units, self.units) 405 | for i in range(self.common_layers)] 406 | ) 407 | self.conv_actor = nn.ModuleList( 408 | [SAGEConv(self.units, 409 | 1 if i == self.actor_layers - 1 else self.units) 410 | for i in range(self.actor_layers)] 411 | ) 412 | self.conv_critic = nn.ModuleList( 413 | [SAGEConv(self.units, self.units) 414 | for i in range(self.critic_layers)] 415 | ) 416 | self.final_critic = nn.Linear(self.units, 1) 417 | 418 | def forward(self, graph): 419 | x, edge_index, batch = graph.x, graph.edge_index, graph.batch 420 | 421 | do_not_flip = torch.where(x[:, 3] != 0.) 422 | do_not_flip_2 = torch.where(x[:, 4] != 0.) 
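        # Column 3 flags the k-hop boundary and column 4 the separator
        # vertices that must stay put; both sets receive a -inf actor
        # score below, so the policy can never select them.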
423 |
424 |         x = self.activation(self.conv_first(x, edge_index))
425 |         for i in range(self.common_layers):
426 |             x = self.activation(self.conv_common[i](x, edge_index))
427 |
428 |         x_actor = x
429 |         for i in range(self.actor_layers):
430 |             x_actor = self.conv_actor[i](x_actor, edge_index)
431 |             if i < self.actor_layers - 1:
432 |                 x_actor = self.activation(x_actor)
433 |         x_actor[do_not_flip] = torch.tensor(-np.Inf)
434 |         x_actor[do_not_flip_2] = torch.tensor(-np.Inf)
435 |         x_actor = torch.log_softmax(x_actor, dim=0)
436 |
437 |         if not self.training:
438 |             return x_actor
439 |
440 |         x_critic = x.detach()
441 |         for i in range(self.critic_layers):
442 |             x_critic = self.conv_critic[i](x_critic, edge_index)
443 |             if i < self.critic_layers - 1:
444 |                 x_critic = self.activation(x_critic)
445 |         x_critic = self.final_critic(x_critic)
446 |         x_critic = torch.tanh(global_mean_pool(x_critic, batch))
447 |         return x_actor, x_critic
448 |
449 |
450 | if __name__ == "__main__":
451 |     parser = argparse.ArgumentParser(
452 |         formatter_class=argparse.ArgumentDefaultsHelpFormatter)
453 |     parser.add_argument('--out', default='./temp_edge/', type=str)
454 |     parser.add_argument(
455 |         "--nmin",
456 |         default=100,
457 |         help="Minimum graph size",
458 |         type=int)
459 |     parser.add_argument(
460 |         "--nmax",
461 |         default=50000,
462 |         help="Maximum graph size",
463 |         type=int)
464 |     parser.add_argument(
465 |         "--ntest",
466 |         default=1000,
467 |         help="Number of testing graphs",
468 |         type=int)
469 |     parser.add_argument("--hops", default=3, help="Number of hops", type=int)
470 |     parser.add_argument(
471 |         "--units",
472 |         default=7,
473 |         help="Number of units in conv layers",
474 |         type=int)
475 |     parser.add_argument(
476 |         "--attempts",
477 |         default=3,
478 |         help="Number of attempts of the DRL agent",
479 |         type=int)
480 |     parser.add_argument(
481 |         "--dataset",
482 |         default='delaunay',
483 |         help="Dataset type: delaunay, suitesparse, graded l, hole3, hole6",
484 |         type=str)
485 |
486 |     torch.manual_seed(1)
487 |     np.random.seed(2)
488 |
489 |     args = parser.parse_args()
490 |     outdir = args.out + '/'
491 |     Path(outdir).mkdir(parents=True, exist_ok=True)
492 |
493 |     n_min = args.nmin
494 |     n_max = args.nmax
495 |     n_test = args.ntest
496 |
497 |     hops = args.hops
498 |     units = args.units
499 |     trials = args.attempts
500 |     dataset_type = args.dataset
501 |
502 |     model = Model(units)
503 |     if dataset_type == 'suitesparse':
504 |         model.load_state_dict(
505 |             torch.load('./temp_edge/model_separator_suitesparse'))
506 |     else:
507 |         model.load_state_dict(
508 |             torch.load('./temp_edge/model_separator_delaunay'))
509 |
510 |     model.eval()
511 |     for p in model.parameters():
512 |         p.requires_grad = False
513 |     print('Model loaded\n')
514 |
515 |     list_picked = []
516 |     i = 0
517 |     while i < n_test:
518 |         # Choose the dataset type according to the parameter 'dataset_type'
519 |         if dataset_type == 'delaunay':
520 |             n_nodes = np.random.choice(np.arange(n_min, n_max))
521 |             g = random_delaunay_graph(n_nodes)
522 |             g.batch = torch.zeros(g.num_nodes)
523 |             i += 1
524 |         else:
525 |             if len(list_picked) >= len(
526 |                     os.listdir(
527 |                         os.path.expanduser(
528 |                             'drl-graph-partitioning/' +
529 |                             str(dataset_type) +
530 |                             '/'))):
531 |                 break
532 |             graph = random.choice(
533 |                 os.listdir(
534 |                     os.path.expanduser(
535 |                         'drl-graph-partitioning/' +
536 |                         str(dataset_type) +
537 |                         '/')))
538 |             if str(graph) not in list_picked:
539 |                 list_picked.append(str(graph))
540 |                 matrix_sparse = mmread(
541 |                     os.path.expanduser(
542 |                         'drl-graph-partitioning/' +
543 | str(dataset_type) + 544 | '/' + 545 | str(graph))) 546 | gnx = nx.from_scipy_sparse_matrix(matrix_sparse) 547 | if nx.number_connected_components(gnx) == 1 and gnx.number_of_nodes( 548 | ) > n_min and gnx.number_of_nodes() < n_max: 549 | g = torch_from_sparse(matrix_sparse) 550 | g.batch = torch.zeros(g.num_nodes) 551 | i += 1 552 | else: 553 | continue 554 | else: 555 | continue 556 | print('Graph:', i, ' Vertices:', g.num_nodes, ' Edges:', g.num_edges) 557 | 558 | # Normalized separator with DRL 559 | g1 = ac_eval_coarse_full_trials(model, g, hops, trials) 560 | A, B = volumes(g1) 561 | 562 | # Normalized separator with METIS 563 | a, b, c = nxmetis.vertex_separator(to_networkx(g, to_undirected=True)) 564 | 565 | # Sometimes METIS may fail in computing the vertex separator on the 566 | # coarsest graph, producing an empty partition that affects the 567 | # computations on the finer interpolation levels 568 | if 0 in [A, B, len(b), len(c)]: 569 | continue 570 | 571 | print('NS DRL:', np.round(normalized_separator(g1), 5), 572 | ' NS METIS:', np.round(len(a) * (1 / len(b) + 1 / len(c)), 5)) 573 | print('') 574 | print('Done') 575 | -------------------------------------------------------------------------------- /separator/drl_separator_train.py: -------------------------------------------------------------------------------- 1 | 2 | import argparse 3 | from pathlib import Path 4 | 5 | import networkx as nx 6 | import nxmetis 7 | 8 | import torch 9 | import torch.nn as nn 10 | import torch.multiprocessing as mp 11 | 12 | from torch_geometric.data import Data, DataLoader, Batch 13 | from torch_geometric.nn import SAGEConv, graclus, avg_pool, global_mean_pool 14 | from torch_geometric.utils import to_networkx, k_hop_subgraph, degree 15 | 16 | import numpy as np 17 | from numpy import random 18 | 19 | import scipy 20 | from scipy.sparse import coo_matrix, rand 21 | from scipy.io import mmread 22 | from scipy.spatial import Delaunay 23 | 24 | import copy 25 | import timeit 26 | import os 27 | from itertools import combinations 28 | 29 | 30 | # Networkx geometric Delaunay mesh with n random points in the unit square 31 | def graph_delaunay_from_points(points): 32 | mesh = Delaunay(points, qhull_options="QJ") 33 | mesh_simp = mesh.simplices 34 | edges = [] 35 | for i in range(len(mesh_simp)): 36 | edges += combinations(mesh_simp[i], 2) 37 | e = list(set(edges)) 38 | return nx.Graph(e) 39 | 40 | # Change the feature of the selected vertex v 41 | 42 | 43 | def change_vertex(g, v): 44 | da, db = volumes(g) 45 | gnx = to_networkx(g, to_undirected=True) 46 | if g.x[v, 2] == 0.: 47 | # node v is in A or B, add it to the separator 48 | g.x[v, :3] = torch.tensor([0., 0., 1.]) 49 | return g 50 | # node v is in the separator 51 | 52 | for vj in gnx[v]: 53 | if g.x[vj, 0] == 1.: 54 | # node v is in the separator and connected to A, so add it 55 | # to A 56 | g.x[v, :3] = torch.tensor([1., 0., 0.]) 57 | return g 58 | if g.x[vj, 1] == 1.: 59 | # node v is in the separator and connected to B, so add it 60 | # to B 61 | g.x[v, :3] = torch.tensor([0., 1., 0.]) 62 | return g 63 | # node v is in the separator, but is not connected to A or B. Add 64 | # node v to A if the volume of A is less (or equal) to that of B, 65 | # or to B if the volume of B is less than that of A. 
66 |     g.x[v, :3] = torch.tensor([1., 0., 0.]) if da <= db \
67 |         else torch.tensor([0., 1., 0.])
68 |     return g
69 |
70 | # Reward to train the DRL agent
71 |
72 |
73 | def reward_separator(state, vertex):
74 |     new_state = state.clone()
75 |     new_state = change_vertex(new_state, vertex)
76 |     return normalized_separator(state) - normalized_separator(new_state)
77 |
78 | # Build a pytorch geometric graph with features [1,0,0] from a networkx graph
79 |
80 |
81 | def torch_from_graph(graph):
82 |     adj_sparse = nx.to_scipy_sparse_array(graph, format='coo')
83 |     row = adj_sparse.row
84 |     col = adj_sparse.col
85 |
86 |     one_hot = []
87 |     for i in range(graph.number_of_nodes()):
88 |         one_hot.append([1., 0., 0.])
89 |
90 |     edges = torch.tensor([row, col], dtype=torch.long)
91 |     nodes = torch.tensor(np.array(one_hot), dtype=torch.float)
92 |     graph_torch = Data(x=nodes, edge_index=edges)
93 |
94 |     return graph_torch
95 |
96 | # Build a pytorch geometric graph with features [1,0,0] from a sparse matrix
97 |
98 |
99 | def torch_from_sparse(adj_sparse):
100 |
101 |     row = adj_sparse.row
102 |     col = adj_sparse.col
103 |
104 |     features = []
105 |     for i in range(adj_sparse.shape[0]):
106 |         features.append([1., 0., 0.])
107 |
108 |     edges = torch.tensor([row, col], dtype=torch.long)
109 |     nodes = torch.tensor(np.array(features), dtype=torch.float)
110 |     graph_torch = Data(x=nodes, edge_index=edges)
111 |
112 |     return graph_torch
113 |
114 | # Training dataset made of Delaunay graphs generated from random points in
115 | # the unit square and their coarser graphs
116 |
117 |
118 | def delaunay_dataset_with_coarser(n, n_min, n_max):
119 |     dataset = []
120 |     while len(dataset) < n:
121 |         number_nodes = np.random.choice(np.arange(n_min, n_max + 1, 2))
122 |         g = random_delaunay_graph(number_nodes)
123 |         dataset.append(g)
124 |         while g.num_nodes > 200:
125 |             cluster = graclus(g.edge_index)
126 |             coarse_graph = avg_pool(
127 |                 cluster,
128 |                 Batch(
129 |                     batch=torch.zeros(
130 |                         g.num_nodes),
131 |                     x=g.x,
132 |                     edge_index=g.edge_index))
133 |             g1 = Data(x=coarse_graph.x, edge_index=coarse_graph.edge_index)
134 |             dataset.append(g1)
135 |             g = g1
136 |
137 |     loader = DataLoader(dataset, batch_size=1, shuffle=True)
138 |     return loader
139 |
140 | # Training dataset made of SuiteSparse graphs and their coarser graphs
141 |
142 |
143 | def suitesparse_dataset_with_coarser(n, n_min, n_max):
144 |     dataset, picked = [], []
145 |     for graph in os.listdir(os.path.expanduser(
146 |             'drl-graph-partitioning/suitesparse_train/')):
147 |
148 |         if len(dataset) > n or len(picked) >= len(os.listdir(
149 |                 os.path.expanduser('drl-graph-partitioning/suitesparse_train/'))):
150 |             break
151 |         picked.append(str(graph))
152 |         # print(str(graph))
153 |         matrix_sparse = mmread(
154 |             os.path.expanduser(
155 |                 'drl-graph-partitioning/suitesparse_train/' +
156 |                 str(graph)))
157 |         gnx = nx.from_scipy_sparse_matrix(matrix_sparse)
158 |         if nx.number_connected_components(gnx) == 1 and gnx.number_of_nodes(
159 |         ) > n_min and gnx.number_of_nodes() < n_max:
160 |             g = torch_from_sparse(matrix_sparse)
161 |             g.weight = torch.tensor([1] * g.num_edges)
162 |             dataset.append(g)
163 |             while g.num_nodes > 200:
164 |                 cluster = graclus(g.edge_index)
165 |                 coarse_graph = avg_pool(
166 |                     cluster,
167 |                     Batch(
168 |                         batch=torch.zeros(
169 |                             g.num_nodes),
170 |                         x=g.x,
171 |                         edge_index=g.edge_index))
172 |                 g1 = Data(x=coarse_graph.x, edge_index=coarse_graph.edge_index)
173 |                 dataset.append(g1)
174 |                 g = g1
175 |
176 |     loader = DataLoader(dataset, batch_size=1, shuffle=True)
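    # batch_size=1 above: every graph is an independent environment for the
    # agent, so shuffling only changes the order in which graphs are visited.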
177 | return loader 178 | 179 | # Number of vertices in the separator 180 | 181 | 182 | def separator(graph): 183 | sep = torch.where((graph.x == torch.tensor( 184 | [0., 0., 1.])).all(axis=-1))[0].shape[0] 185 | return sep 186 | 187 | # Normalized separator 188 | 189 | 190 | def normalized_separator(graph): 191 | da, db = volumes(graph) 192 | sep = torch.where((graph.x == torch.tensor( 193 | [0., 0., 1.])).all(axis=-1))[0].shape[0] 194 | if da == 0 or db == 0: 195 | return 10 196 | else: 197 | return sep * (1 / da + 1 / db) 198 | 199 | # Cardinalities of the partitions A and B 200 | 201 | 202 | def volumes(graph): 203 | ab = torch.sum(graph.x, dim=0) 204 | return ab[0].item(), ab[1].item() 205 | 206 | # Subgraph around the separator 207 | 208 | 209 | def k_hop_graph_cut(graph, k): 210 | nei = torch.where((graph.x == torch.tensor([0., 0., 1.])).all(axis=-1))[0] 211 | data_cut = k_hop_subgraph( 212 | nei, 213 | k, 214 | graph.edge_index, 215 | relabel_nodes=True, 216 | num_nodes=graph.num_nodes) 217 | data_small = k_hop_subgraph( 218 | nei, 219 | k - 1, 220 | graph.edge_index, 221 | relabel_nodes=True, 222 | num_nodes=graph.num_nodes) 223 | nodes_boundary = list( 224 | set(data_cut[0].numpy()).difference(data_small[0].numpy())) 225 | b_f = [] 226 | for i in data_cut[0]: 227 | if i.item() in nodes_boundary: 228 | b_f.append(1.) 229 | else: 230 | b_f.append(0.) 231 | boundary_features = torch.tensor(b_f).reshape(len(b_f), 1) 232 | remove_f = [] 233 | for j in range(len(data_cut[0])): 234 | if graph.x[data_cut[0][j]][2] == torch.tensor(1.): 235 | neighbors, _, _, _ = k_hop_subgraph( 236 | [data_cut[0][j]], 1, graph.edge_index, relabel_nodes=True) 237 | flagA, flagB = 0, 0 238 | for w in neighbors: 239 | if graph.x[w][0] == torch.tensor(1.): 240 | flagA = 1 241 | elif graph.x[w][1] == torch.tensor(1.): 242 | flagB = 1 243 | if flagA == 1 and flagB == 1: 244 | remove_f.append(1.) 245 | else: 246 | remove_f.append(0.) 247 | else: 248 | remove_f.append(0.) 249 | remove_features = torch.tensor(remove_f).reshape(len(remove_f), 1) 250 | va, vb = volumes(graph) 251 | e = torch.ones(data_cut[0].shape[0], 1) 252 | nnz = graph.num_nodes 253 | features = torch.cat((graph.x[data_cut[0]], 254 | boundary_features, 255 | remove_features, 256 | torch.true_divide(va, 257 | nnz) * e, 258 | torch.true_divide(vb, 259 | nnz) * e), 260 | 1) 261 | g_red = Batch( 262 | batch=torch.zeros( 263 | data_cut[0].shape[0], 264 | dtype=torch.long), 265 | x=features, 266 | edge_index=data_cut[1]) 267 | return g_red, data_cut[0] 268 | 269 | # Update the nodes that are necessary to get a minimal separator 270 | 271 | 272 | def remove_update(gr): 273 | graph = gr.clone() 274 | gnx = to_networkx(graph, to_undirected=True) 275 | for i in range(graph.num_nodes): 276 | flagA, flagB = 0, 0 277 | if graph.x[i, 2] == torch.tensor(1.): 278 | for v in gnx[i]: 279 | if flagA == 1 and flagB == 1: 280 | break 281 | if graph.x[v, 0] == torch.tensor(1.): 282 | flagA = 1 283 | elif graph.x[v, 1] == torch.tensor(1.): 284 | flagB = 1 285 | if flagA == 1 and flagB == 1: 286 | graph.x[i, 4] = torch.tensor(1.) 287 | else: 288 | graph.x[i, 4] = torch.tensor(0.) 
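    # Column 4 of the features is now 1 exactly on the separator vertices
    # with neighbours in both A and B, i.e. the vertices that any minimal
    # separator must keep.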
289 |     return graph
290 |
291 | # Pytorch geometric Delaunay mesh with n random points in the unit square
292 |
293 |
294 | def random_delaunay_graph(n):
295 |     points = np.random.random_sample((n, 2))
296 |     g = graph_delaunay_from_points(points)
297 |     return torch_from_graph(g)
298 |
299 | # Coarsen a pytorch geometric graph, then find the cut with METIS and
300 | # interpolate it back
301 |
302 |
303 | def partition_metis_refine(graph):
304 |     cluster = graclus(graph.edge_index)
305 |     coarse_graph = avg_pool(
306 |         cluster,
307 |         Batch(
308 |             batch=graph.batch,
309 |             x=graph.x,
310 |             edge_index=graph.edge_index))
311 |     coarse_graph_nx = to_networkx(coarse_graph, to_undirected=True)
312 |     sep, A, B = nxmetis.vertex_separator(coarse_graph_nx)
313 |     coarse_graph.x[sep] = torch.tensor([0., 0., 1.])
314 |     coarse_graph.x[A] = torch.tensor([1., 0., 0.])
315 |     coarse_graph.x[B] = torch.tensor([0., 1., 0.])
316 |     _, inverse = torch.unique(cluster, sorted=True, return_inverse=True)
317 |     graph.x = coarse_graph.x[inverse]
318 |     return graph
319 |
320 | # Training loop
321 |
322 |
323 | def training_loop(
324 |         model,
325 |         training_dataset,
326 |         episodes,
327 |         gamma,
328 |         time_to_sample,
329 |         coeff,
330 |         optimizer,
331 |         print_loss,
332 |         hops):
333 |
334 |     rew_partial = 0
335 |     p = 0
336 |     # Here starts the main loop for training
337 |     for i in range(episodes):
338 |
339 |         for graph in training_dataset:
340 |
341 |             start_all = partition_metis_refine(graph)
342 |
343 |             data = k_hop_graph_cut(start_all, hops)
344 |             graph_cut, positions = data[0], data[1]
345 |             len_episode = 2 * separator(start_all)
346 |             start = graph_cut
347 |             time = 0
348 |
349 |             rews, vals, logprobs = [], [], []
350 |
351 |             # Here starts the episode related to the graph "start"
352 |             while time < len_episode:
353 |
354 |                 # we evaluate the A2C agent on the graph
355 |                 policy, values = model(start)
356 |                 probs = policy.view(-1)
357 |                 action = torch.distributions.Categorical(
358 |                     logits=probs).sample().detach().item()
359 |                 # compute the reward associated with this action
360 |                 rew = reward_separator(start_all, positions[action].item())
361 |                 # Collect all the rewards in this episode
362 |                 rew_partial += rew
363 |
364 |                 # Collect the log-probability of the chosen action
365 |                 logprobs.append(policy.view(-1)[action])
366 |                 # Collect the value of the chosen action
367 |                 vals.append(values)
368 |                 # Collect the reward
369 |                 rews.append(rew)
370 |
371 |                 new_state = start.clone()
372 |                 new_state_orig = start_all.clone()
373 |                 # we flip the vertex returned by the policy
374 |                 new_state = change_vertex(new_state, action)
375 |                 new_state_orig = change_vertex(
376 |                     new_state_orig, positions[action].item())
377 |                 # Update the state
378 |                 start = new_state
379 |                 start_all = new_state_orig
380 |
381 |                 va, vb = volumes(start_all)
382 |
383 |                 nnz = start_all.num_nodes
384 |                 start.x[:, 5] = torch.true_divide(va, nnz)
385 |                 start.x[:, 6] = torch.true_divide(vb, nnz)
386 |
387 |                 start_up = start.clone()
388 |
389 |                 start_up = remove_update(start)
390 |                 start = start_up
391 |
392 |                 time += 1
393 |
394 |                 # After time_to_sample episodes we update the loss
395 |                 if i % time_to_sample == 0 or time == len_episode:
396 |
397 |                     logprobs = torch.stack(logprobs).flip(dims=(0,)).view(-1)
398 |                     vals = torch.stack(vals).flip(dims=(0,)).view(-1)
399 |                     rews = torch.tensor(rews).flip(dims=(0,)).view(-1)
400 |
401 |                     # Compute the advantage
402 |                     R = []
403 |                     R_partial = torch.tensor([0.])
404 |                     for j in range(rews.shape[0]):
405 |                         R_partial = rews[j] + gamma * R_partial
406 |                         R.append(R_partial)
407 |
408 |                     R = torch.stack(R).view(-1)
409 |                     advantage = R - vals.detach()
410 |
411 |                     # Actor loss
412 |                     actor_loss = (-1 * logprobs * advantage)
413 |
414 |                     # Critic loss
415 |                     critic_loss = torch.pow(R - vals, 2)
416 |
417 |                     # Finally we update the loss
418 |                     optimizer.zero_grad()
419 |
420 |                     loss = torch.mean(
421 |                         actor_loss) + torch.tensor(coeff) * torch.mean(critic_loss)
422 |
423 |                     rews, vals, logprobs = [], [], []
424 |
425 |                     loss.backward()
426 |
427 |                     optimizer.step()
428 |             if p % print_loss == 0:
429 |                 print('graph:', p,
430 |                       'reward:', rew_partial)
431 |                 rew_partial = 0
432 |             p += 1
433 |
434 |     return model
435 |
436 | # Deep neural network for the DRL agent
437 |
438 |
439 | class Model(torch.nn.Module):
440 |     def __init__(self, units):
441 |         super(Model, self).__init__()
442 |
443 |         self.units = units
444 |         self.common_layers = 1
445 |         self.critic_layers = 1
446 |         self.actor_layers = 1
447 |         self.activation = torch.tanh
448 |
449 |         self.conv_first = SAGEConv(7, self.units)
450 |         self.conv_common = nn.ModuleList(
451 |             [SAGEConv(self.units, self.units)
452 |              for i in range(self.common_layers)]
453 |         )
454 |         self.conv_actor = nn.ModuleList(
455 |             [SAGEConv(self.units,
456 |                       1 if i == self.actor_layers - 1 else self.units)
457 |              for i in range(self.actor_layers)]
458 |         )
459 |         self.conv_critic = nn.ModuleList(
460 |             [SAGEConv(self.units, self.units)
461 |              for i in range(self.critic_layers)]
462 |         )
463 |         self.final_critic = nn.Linear(self.units, 1)
464 |
465 |     def forward(self, graph):
466 |         x, edge_index, batch = graph.x, graph.edge_index, graph.batch
467 |
468 |         do_not_flip = torch.where(x[:, 3] != 0.)
469 |         do_not_flip_2 = torch.where(x[:, 4] != 0.)
470 |
471 |         x = self.activation(self.conv_first(x, edge_index))
472 |         for i in range(self.common_layers):
473 |             x = self.activation(self.conv_common[i](x, edge_index))
474 |
475 |         x_actor = x
476 |         for i in range(self.actor_layers):
477 |             x_actor = self.conv_actor[i](x_actor, edge_index)
478 |             if i < self.actor_layers - 1:
479 |                 x_actor = self.activation(x_actor)
480 |         x_actor[do_not_flip] = torch.tensor(-np.Inf)
481 |         x_actor[do_not_flip_2] = torch.tensor(-np.Inf)
482 |         x_actor = torch.log_softmax(x_actor, dim=0)
483 |
484 |         if not self.training:
485 |             return x_actor
486 |
487 |         x_critic = x.detach()
488 |         for i in range(self.critic_layers):
489 |             x_critic = self.conv_critic[i](x_critic, edge_index)
490 |             if i < self.critic_layers - 1:
491 |                 x_critic = self.activation(x_critic)
492 |         x_critic = self.final_critic(x_critic)
493 |         x_critic = torch.tanh(global_mean_pool(x_critic, batch))
494 |         return x_actor, x_critic
495 |
496 |
497 | if __name__ == "__main__":
498 |     parser = argparse.ArgumentParser(
499 |         formatter_class=argparse.ArgumentDefaultsHelpFormatter)
500 |     parser.add_argument('--out', default='./temp_edge/', type=str)
501 |     parser.add_argument(
502 |         "--nmin",
503 |         default=200,
504 |         help="Minimum graph size",
505 |         type=int)
506 |     parser.add_argument(
507 |         "--nmax",
508 |         default=5000,
509 |         help="Maximum graph size",
510 |         type=int)
511 |     parser.add_argument(
512 |         "--ntrain",
513 |         default=1000,
514 |         help="Number of training graphs",
515 |         type=int)
516 |     parser.add_argument(
517 |         "--epochs",
518 |         default=1,
519 |         help="Number of training epochs",
520 |         type=int)
521 |     parser.add_argument(
522 |         "--print_rew",
523 |         default=1000,
524 |         help="Steps at which print reward",
525 |         type=int)
526 |     parser.add_argument("--batch", default=8, help="Batch size", type=int)
parser.add_argument("--hops", default=3, help="Number of hops", type=int) 528 | parser.add_argument( 529 | "--lr", 530 | default=0.001, 531 | help="Learning rate", 532 | type=float) 533 | parser.add_argument( 534 | "--gamma", 535 | default=0.9, 536 | help="Gamma, discount factor", 537 | type=float) 538 | parser.add_argument( 539 | "--coeff", 540 | default=0.1, 541 | help="Critic loss coefficient", 542 | type=float) 543 | parser.add_argument( 544 | "--units", 545 | default=7, 546 | help="Number of units in conv layers", 547 | type=int) 548 | parser.add_argument( 549 | "--dataset", 550 | default='delaunay', 551 | help="Dataset type: delaunay or suitesparse", 552 | type=str) 553 | 554 | torch.manual_seed(1) 555 | np.random.seed(2) 556 | 557 | args = parser.parse_args() 558 | outdir = args.out + '/' 559 | Path(outdir).mkdir(parents=True, exist_ok=True) 560 | 561 | n_min = args.nmin 562 | n_max = args.nmax 563 | n_train = args.ntrain 564 | episodes = args.epochs 565 | coeff = args.coeff 566 | print_loss = args.print_rew 567 | 568 | time_to_sample = args.batch 569 | hops = args.hops 570 | lr = args.lr 571 | gamma = args.gamma 572 | units = args.units 573 | dataset_type = args.dataset 574 | 575 | # Choose the dataset type according to the parameter 'dataset_type' 576 | if dataset_type == 'delaunay': 577 | dataset = delaunay_dataset_with_coarser(n_train, n_min, n_max) 578 | else: 579 | dataset = suitesparse_dataset_with_coarser(n_train, n_min, n_max) 580 | 581 | model = Model(units) 582 | model.share_memory() 583 | print(model) 584 | print('Model parameters:', 585 | sum([w.nelement() for w in model.parameters()])) 586 | 587 | optimizer = torch.optim.Adam(model.parameters(), lr=lr) 588 | 589 | # Start the training 590 | print('Start training') 591 | t0 = timeit.default_timer() 592 | training_loop(model, dataset, episodes, gamma, time_to_sample, coeff, 593 | optimizer, print_loss, hops) 594 | ttrain = timeit.default_timer() - t0 595 | print('Training took:', ttrain, 'seconds') 596 | 597 | # Saving the model 598 | if dataset_type == 'delaunay': 599 | torch.save(model.state_dict(), outdir + 'model_separator_delaunay') 600 | else: 601 | torch.save( 602 | model.state_dict(), 603 | outdir + 604 | 'model_separator_suitesparse') 605 | -------------------------------------------------------------------------------- /separator/scotch/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.2) 2 | project(SCOTCHWrapper CXX) 3 | 4 | set(CMAKE_CXX_STANDARD 11) 5 | set(CMAKE_CXX_STANDARD_REQUIRED on) 6 | 7 | list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake/Modules") 8 | list(APPEND CMAKE_PREFIX_PATH $ENV{SCOTCH_DIR} $ENV{SCOTCH_ROOT}) 9 | find_package(SCOTCH) 10 | 11 | add_library(SCOTCHWrapper SHARED SCOTCHWrapper.cpp) 12 | target_link_libraries(SCOTCHWrapper PUBLIC SCOTCH::scotch) 13 | -------------------------------------------------------------------------------- /separator/scotch/SCOTCHWrapper.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | 5 | #include 6 | 7 | #ifdef __cplusplus 8 | extern "C" { 9 | #endif 10 | 11 | void WRAPPER_SCOTCH_graphPart(int n, int* ptr, int* ind, int* part) { 12 | SCOTCH_Graph g; 13 | SCOTCH_graphInit(&g); 14 | std::vector ptr_nodiag(n+1), ind_nodiag(ptr[n]-ptr[0]); 15 | int nnz_nodiag = 0; 16 | ptr_nodiag[0] = 0; 17 | for (int i=0; i p(n); 34 | SCOTCH_graphPart(&g, 2, &strategy, p.data()); 35 | std::copy(p.begin(), 
p.end(), part); 36 | SCOTCH_graphExit(&g); 37 | SCOTCH_stratExit(&strategy); 38 | } 39 | 40 | void WRAPPER_SCOTCH_graphOrder(int n, int* ptr, int* ind, int* perm) { 41 | SCOTCH_Graph g; 42 | SCOTCH_graphInit(&g); 43 | std::vector ptr_nodiag(n+1), ind_nodiag(ptr[n]-ptr[0]); 44 | int nnz_nodiag = 0; 45 | ptr_nodiag[0] = 0; 46 | for (int i=0; i p(n), pi(n), sizes(n+1), tree(n); 64 | ierr = SCOTCH_graphOrder 65 | (&g, &strategy, p.data(), pi.data(), 66 | &nbsep, sizes.data(), tree.data()); 67 | if (ierr) 68 | std::cerr << "# ERROR: SCOTCH_graphOrder faile with ierr=" 69 | << ierr << std::endl; 70 | std::copy(pi.begin(), pi.end(), perm); 71 | SCOTCH_graphExit(&g); 72 | SCOTCH_stratExit(&strategy); 73 | } 74 | 75 | #ifdef __cplusplus 76 | } 77 | #endif 78 | -------------------------------------------------------------------------------- /separator/scotch/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export SCOTCH_DIR=$HOME/scotch-v6.1.0/src/libscotch/ 4 | 5 | rm -rf build 6 | mkdir build 7 | cd build 8 | cmake ../ 9 | make VERBOSE=1 10 | -------------------------------------------------------------------------------- /separator/scotch/cmake/Modules/FindSCOTCH.cmake: -------------------------------------------------------------------------------- 1 | # FindSCOTCH.cmake 2 | # 3 | # Finds the SCOTCH library. 4 | # 5 | # This module will define the following variables: 6 | # 7 | # SCOTCH_FOUND - System has found SCOTCH installation 8 | # SCOTCH_INCLUDE_DIR - Location of SCOTCH headers 9 | # SCOTCH_LIBRARIES - SCOTCH libraries 10 | # SCOTCH_USES_ILP64 - Whether SCOTCH was configured with ILP64 11 | # SCOTCH_USES_PTHREADS - Whether SCOTCH was configured with PThreads 12 | # 13 | # This module can handle the following COMPONENTS 14 | # 15 | # ilp64 - 64-bit index integers 16 | # pthreads - SMP parallelism via PThreads 17 | # metis - Has METIS compatibility layer 18 | # 19 | # This module will export the following targets if SCOTCH_FOUND 20 | # 21 | # SCOTCH::scotch 22 | # 23 | # 24 | # 25 | # 26 | # Proper usage: 27 | # 28 | # project( TEST_FIND_SCOTCH C ) 29 | # find_package( SCOTCH ) 30 | # 31 | # if( SCOTCH_FOUND ) 32 | # add_executable( test test.cxx ) 33 | # target_link_libraries( test SCOTCH::scotch ) 34 | # endif() 35 | # 36 | # 37 | # 38 | # 39 | # This module will use the following variables to change 40 | # default behaviour if set 41 | # 42 | # scotch_PREFIX 43 | # scotch_INCLUDE_DIR 44 | # scotch_LIBRARY_DIR 45 | # scotch_LIBRARIES 46 | 47 | #================================================================== 48 | # Copyright (c) 2018 The Regents of the University of California, 49 | # through Lawrence Berkeley National Laboratory. 50 | # 51 | # Author: David Williams-Young 52 | # 53 | # This file is part of cmake-modules. All rights reserved. 54 | # 55 | # Redistribution and use in source and binary forms, with or without 56 | # modification, are permitted provided that the following conditions are met: 57 | # 58 | # (1) Redistributions of source code must retain the above copyright notice, this 59 | # list of conditions and the following disclaimer. 60 | # (2) Redistributions in binary form must reproduce the above copyright notice, 61 | # this list of conditions and the following disclaimer in the documentation 62 | # and/or other materials provided with the distribution. 63 | # (3) Neither the name of the University of California, Lawrence Berkeley 64 | # National Laboratory, U.S. Dept. 
65 | # be used to endorse or promote products derived from this software without
66 | # specific prior written permission.
67 | #
68 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
69 | # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
70 | # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
71 | # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
72 | # ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
73 | # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
74 | # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
75 | # ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
76 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
77 | # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
78 | #
79 | # You are under no obligation whatsoever to provide any bug fixes, patches, or
80 | # upgrades to the features, functionality or performance of the source code
81 | # ("Enhancements") to anyone; however, if you choose to make your Enhancements
82 | # available either publicly, or directly to Lawrence Berkeley National
83 | # Laboratory, without imposing a separate written license agreement for such
84 | # Enhancements, then you hereby grant the following license: a non-exclusive,
85 | # royalty-free perpetual license to install, use, modify, prepare derivative
86 | # works, incorporate into other computer software, distribute, and sublicense
87 | # such enhancements or derivative works thereof, in binary and source code form.
88 | #
89 | #==================================================================
90 |
91 | cmake_minimum_required( VERSION 3.11 ) # Require CMake 3.11+
92 | # Set up some auxiliary vars if hints have been set
93 |
94 | if( scotch_PREFIX AND NOT scotch_INCLUDE_DIR )
95 |   set( scotch_INCLUDE_DIR ${scotch_PREFIX}/include )
96 | endif()
97 |
98 |
99 | if( scotch_PREFIX AND NOT scotch_LIBRARY_DIR )
100 |   set( scotch_LIBRARY_DIR
101 |     ${scotch_PREFIX}/lib
102 |     ${scotch_PREFIX}/lib32
103 |     ${scotch_PREFIX}/lib64
104 |   )
105 | endif()
106 |
107 |
108 | # Try to find the header
109 | find_path( SCOTCH_INCLUDE_DIR
110 |   NAMES scotch.h
111 |   HINTS ${scotch_PREFIX}
112 |   PATHS ${scotch_INCLUDE_DIR}
113 |   PATH_SUFFIXES include
114 |   DOC "Location of SCOTCH header"
115 | )
116 |
117 | # Try to find libraries if not already set
118 | if( NOT scotch_LIBRARIES )
119 |
120 |   find_library( SCOTCH_LIBRARY
121 |     NAMES scotch
122 |     HINTS ${scotch_PREFIX}
123 |     PATHS ${scotch_LIBRARY_DIR}
124 |     PATH_SUFFIXES lib lib64 lib32
125 |     DOC "SCOTCH Library"
126 |   )
127 |
128 |   find_library( SCOTCH_ERR_LIBRARY
129 |     NAMES scotcherr
130 |     HINTS ${scotch_PREFIX}
131 |     PATHS ${scotch_LIBRARY_DIR}
132 |     PATH_SUFFIXES lib lib64 lib32
133 |     DOC "SCOTCH Error Libraries"
134 |   )
135 |
136 |   find_library( SCOTCH_ERREXIT_LIBRARY
137 |     NAMES scotcherrexit
138 |     HINTS ${scotch_PREFIX}
139 |     PATHS ${scotch_LIBRARY_DIR}
140 |     PATH_SUFFIXES lib lib64 lib32
141 |     DOC "SCOTCH Error-Exit Libraries"
142 |   )
143 |
144 |
145 |   set( SCOTCH_LIBRARIES
146 |     ${SCOTCH_LIBRARY}
147 |     ${SCOTCH_ERR_LIBRARY}
148 |     ${SCOTCH_ERREXIT_LIBRARY} )
149 |
150 |   if( "metis" IN_LIST SCOTCH_FIND_COMPONENTS )
151 |
152 |     find_library( SCOTCH_METIS_LIBRARY
153 |       NAMES scotchmetis
154 |       HINTS ${scotch_PREFIX}
155 |       PATHS ${scotch_LIBRARY_DIR}
156 |       PATH_SUFFIXES lib lib64 lib32
157 |       DOC "SCOTCH-METIS compatibility Libraries"
158 |     )
159 |
160 |     if( SCOTCH_METIS_LIBRARY )
161 |       list( APPEND SCOTCH_LIBRARIES ${SCOTCH_METIS_LIBRARY} )
162 |       set( SCOTCH_metis_FOUND TRUE )
163 |     endif()
164 |
165 |   endif()
166 |
167 |
168 | else()
169 |
170 |   # FIXME: Check if files exist at least?
171 |   set( SCOTCH_LIBRARIES ${scotch_LIBRARIES} )
172 |
173 | endif()
174 |
175 | # Check version
176 | if( EXISTS ${SCOTCH_INCLUDE_DIR}/scotch.h )
177 |   set( version_pattern
178 |   "^#define[\t ]+SCOTCH_(VERSION|RELEASE|PATCHLEVEL)[\t ]+([0-9\\.]+)$"
179 |   )
180 |   file( STRINGS ${SCOTCH_INCLUDE_DIR}/scotch.h scotch_version
181 |         REGEX ${version_pattern} )
182 |
183 |   foreach( match ${scotch_version} )
184 |
185 |     if(SCOTCH_VERSION_STRING)
186 |       set(SCOTCH_VERSION_STRING "${SCOTCH_VERSION_STRING}.")
187 |     endif()
188 |
189 |     string(REGEX REPLACE ${version_pattern}
190 |       "${SCOTCH_VERSION_STRING}\\2"
191 |       SCOTCH_VERSION_STRING ${match}
192 |     )
193 |
194 |     set(SCOTCH_VERSION_${CMAKE_MATCH_1} ${CMAKE_MATCH_2})
195 |
196 |   endforeach()
197 |
198 |   unset( scotch_version )
199 |   unset( version_pattern )
200 | endif()
201 |
202 | # Check ILP64
203 | if( EXISTS ${SCOTCH_INCLUDE_DIR}/scotch.h )
204 |
205 |   set( idxwidth_pattern
206 |   "^typedef[\t ]+(int64_t|int32_t)[\t ]SCOTCH_Idx\\;$"
207 |   )
208 |   file( STRINGS ${SCOTCH_INCLUDE_DIR}/scotch.h scotch_idxwidth
209 |         REGEX ${idxwidth_pattern} )
210 |
211 |   string( REGEX REPLACE ${idxwidth_pattern}
212 |     "${SCOTCH_IDXWIDTH_STRING}\\1"
213 |     SCOTCH_IDXWIDTH_STRING "${scotch_idxwidth}" )
214 |
215 |   if( ${SCOTCH_IDXWIDTH_STRING} MATCHES "int64_t" )
216 |     set( SCOTCH_USES_ILP64 TRUE )
217 |   else()
218 |     set( SCOTCH_USES_ILP64 FALSE )
219 |   endif()
220 |
221 |   unset( idxwidth_pattern )
222 |   unset( scotch_idxwidth )
223 |   unset( SCOTCH_IDXWIDTH_STRING )
224 |
225 | endif()
226 |
227 |
228 | # Check Threads
229 | if( SCOTCH_LIBRARIES )
230 |
231 |   # FIXME: This assumes that threads are even installed
232 |   set( CMAKE_THREAD_PREFER_PTHREAD ON )
233 |   find_package( Threads QUIET )
234 |
235 |   include( CMakePushCheckState )
236 |
237 |   cmake_push_check_state( RESET )
238 |
239 |   set( CMAKE_REQUIRED_LIBRARIES Threads::Threads ${SCOTCH_LIBRARIES} )
240 |   set( CMAKE_REQUIRED_QUIET ON )
241 |
242 |   include( CheckLibraryExists )
243 |   # check_library_exists( "" threadReduce ""
244 |   #   SCOTCH_USES_PTHREADS )
245 |   check_library_exists( "" pthread_create ""
246 |     SCOTCH_USES_PTHREADS )
247 |
248 |   cmake_pop_check_state()
249 |
250 | endif()
251 |
252 |
253 | # Handle components
254 | if( SCOTCH_USES_ILP64 )
255 |   set( SCOTCH_ilp64_FOUND TRUE )
256 | endif()
257 |
258 | if( SCOTCH_USES_PTHREADS )
259 |   set( SCOTCH_pthreads_FOUND TRUE )
260 | endif()
261 |
262 | # Determine if we've found SCOTCH
263 | mark_as_advanced( SCOTCH_FOUND SCOTCH_INCLUDE_DIR SCOTCH_LIBRARIES )
264 |
265 | include(FindPackageHandleStandardArgs)
266 | find_package_handle_standard_args( SCOTCH
267 |   REQUIRED_VARS SCOTCH_LIBRARIES SCOTCH_INCLUDE_DIR
268 |   VERSION_VAR SCOTCH_VERSION_STRING
269 |   HANDLE_COMPONENTS
270 | )
271 |
272 | # Export target
273 | if( SCOTCH_FOUND AND NOT TARGET SCOTCH::scotch )
274 |
275 |   add_library( SCOTCH::scotch INTERFACE IMPORTED )
276 |   set_target_properties( SCOTCH::scotch PROPERTIES
277 |     INTERFACE_INCLUDE_DIRECTORIES "${SCOTCH_INCLUDE_DIR}"
278 |     INTERFACE_LINK_LIBRARIES "${SCOTCH_LIBRARIES}"
279 |   )
280 |
281 | endif()
282 |
--------------------------------------------------------------------------------
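Usage note: the two wrappers above export plain C symbols, so the shared library produced by build.sh can also be driven from Python through ctypes. The sketch below is not part of the repository; it assumes SCOTCH was compiled with 32-bit integers and that the library was built as separator/scotch/build/libSCOTCHWrapper.so (the exact file name and path may differ on your platform).

import ctypes
import numpy as np
import scipy.sparse as sp

# Load the wrapper library (assumed path/name, adjust to your build)
lib = ctypes.CDLL('separator/scotch/build/libSCOTCHWrapper.so')

# Random symmetric CSR matrix standing in for a graph adjacency;
# the wrapper itself strips any diagonal (self-loop) entries
A = sp.random(1000, 1000, density=0.005, format='csr', random_state=0)
A = (A + A.T).tocsr()
n = A.shape[0]

ptr = np.ascontiguousarray(A.indptr, dtype=np.int32)
ind = np.ascontiguousarray(A.indices, dtype=np.int32)
perm = np.zeros(n, dtype=np.int32)

c_int_p = ctypes.POINTER(ctypes.c_int)
lib.WRAPPER_SCOTCH_graphOrder(
    ctypes.c_int(n),
    ptr.ctypes.data_as(c_int_p),
    ind.ctypes.data_as(c_int_p),
    perm.ctypes.data_as(c_int_p))

print(perm[:10])  # first entries of the fill-reducing ordering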