├── .gitignore ├── GraphConvWat-model.png ├── LICENSE ├── README.md ├── batch_run.sh ├── data └── .gitkeep ├── evaluation ├── HO_eval.ipynb ├── baselines.ipynb ├── distribution_of_sensitivities.ipynb ├── eval-sens_graco.ipynb ├── get_diameter.py ├── learning_curves.ipynb ├── plot_Taylor_diag.py ├── plot_Taylor_diag_for_sensors.py ├── plot_Taylor_diags.sh ├── plot_WDS_topo.py ├── plot_WDS_topo_with_sensitivity.py ├── plot_WDS_topo_with_sensors.py ├── plot_adjacency_matrix.py ├── plot_ecdf.py ├── plot_swarm_plot.py ├── plot_swarms.sh ├── process_Taylor_metrics.ipynb ├── sensor_placement_proto.ipynb └── taylorDiagram.py ├── experiments ├── hyperparams │ └── db │ │ ├── db_anytown_doe_pumpfed_1.yaml │ │ ├── db_ctown_doe_pumpfed_1.yaml │ │ └── db_richmond_doe_pumpfed_1.yaml ├── logs │ └── .gitkeep └── models │ └── .gitkeep ├── generate_dta.py ├── hyperopt.py ├── model ├── anytown.py ├── ctown.py └── richmond.py ├── test_Taylor_metrics.py ├── test_Taylor_metrics_for_sensor_placement.py ├── test_relative_error.py ├── train.py ├── utils ├── DataReader.py ├── EarlyStopping.py ├── MeanPredictor.py ├── Metrics.py ├── SensorInstaller.py ├── baselines.py ├── dataloader.py ├── envs │ ├── conda_env-cpu.yml │ └── conda_env-cuda.yml └── graph_utils.py └── water_networks ├── anytown.inp ├── ctown.inp └── richmond.inp /.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore 2 | .ipynb_checkpoints 3 | __pycache__ 4 | data/* 5 | experiments/logs/* 6 | experiments/models/* 7 | *events* 8 | *.db 9 | *.h5 10 | *.pkl 11 | *.swp 12 | *.rpt 13 | *.zip 14 | *.pdf 15 | *.csv 16 | 17 | # Allow 18 | !data/.gitkeep 19 | !experiments/logs/.gitkeep 20 | !experiments/models/.gitkeep 21 | -------------------------------------------------------------------------------- /GraphConvWat-model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BME-SmartLab/GraphConvWat/be97b45fbc7dfdba22bb1ee406424a7c568120e5/GraphConvWat-model.png -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 Speech Technology and Smart Interactions Laboratory 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![](GraphConvWat-model.png) 2 | 3 | # GraphConvWat -- Graph Convolution on Water Networks 4 | GitHub repository for the paper: "Reconstructing nodal pressures in water distribution systems with graph neural networks". (under submission, preprint available at arXiv: [https://arxiv.org/abs/2104.13619](https://arxiv.org/abs/2104.13619)) 5 | 6 | ## Repo structure 7 | ``` 8 | . 9 | ├── data - dir for data generated by generate_dta.py 10 | ├── evaluation - scripts for evaluating and plotting the results 11 | ├── experiments - parameters for datagen, trained model weights, etc. 12 | ├── model - graph neural network topologies 13 | ├── utils - auxiliary classes and scripts & conda env specs 14 | ├── water_networks - water network topologies in EPANET-compatible format 15 | ├── generate_dta.py - multithread scene generation 16 | ├── hyperopt.py - hyperparameter optimization 17 | ├── LICENSE 18 | ├── README.md 19 | ├── test_Taylor_metrics.py - calculating metrics for Taylor-diagrams 20 | └── train.py - training of GraphConvWat 21 | ``` 22 | 23 | ## Citing 24 | ### ...the preprint 25 | ``` 26 | @misc{Hajgato2021, 27 | author = {Hajgat{\'{o}}, Gergely and Gyires-T{\'{o}}th, B{\'{a}}lint and Pa{\'{a}}l, Gy{\"{o}}rgy}, 28 | title = {Reconstructing nodal pressures in water distribution systems with graph neural networks}, 29 | year = {2021}, 30 | month = apr, 31 | archiveprefix = {arXiv}, 32 | eprint = {2104.13619}, 33 | } 34 | ``` 35 | 36 | ### ...the repository 37 | ``` 38 | @misc{graphconvwat, 39 | author = {Hajgat{\'{o}}, Gergely and Gyires-T{\'{o}}th, B{\'{a}}lint and Pa{\'{a}}l, Gy{\"{o}}rgy}, 40 | title = {{GraphConvWat}}, 41 | year = {2021}, 42 | publisher = {GitHub}, 43 | journal = {GitHub repository}, 44 | organization = {SmartLab, Budapest University of Technology and Economics}, 45 | howpublished = {\url{https://github.com/BME-SmartLab/GraphConvWat}}, 46 | } 47 | ```
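48 | 49 | ## Usage 50 | A typical training run (the flags mirror the commented-out training loop in batch_run.sh; `python train.py --help` lists the full argument set): 51 | ``` 52 | python train.py --epoch 1000 --adj binary --tag ms --deploy master --wds anytown --budget 1 --batch 200 53 | ``` 54 | -------------------------------------------------------------------------------- /batch_run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #for idx in {1..3} 3 | #do 4 | # for deploy in master dist hydrodist hds hdvar 5 | # do 6 | # python train.py --epoch 1000 --adj binary --tag ms --deploy $deploy --wds anytown --budget 1 --batch 200 7 | # python train.py --epoch 1000 --adj binary --tag ms --deploy $deploy --wds ctown --budget 5 --batch 120 8 | # python train.py --epoch 1000 --adj binary --tag ms --deploy $deploy --wds richmond --budget 10 --batch 50 9 | # done 10 | #done 11 | #for idx in {1..15} 12 | #do 13 | # python train.py --epoch 1000 --adj binary --tag ms --deploy xrandom --deterministic --wds anytown --budget 1 --batch 200 14 | # python train.py --epoch 1000 --adj binary --tag ms --deploy xrandom --deterministic --wds ctown --budget 5 --batch 120 15 | # python train.py --epoch 1000 --adj binary --tag ms --deploy xrandom --deterministic --wds richmond --budget 10 --batch 50 16 | #done 17 | 18 | for idx in {1..3} 19 | do 20 | for deploy in master dist hydrodist hds hdvar 21 | do 22 | python test_Taylor_metrics_for_sensor_placement.py --runid $idx --adj binary --tag ms --deploy $deploy --wds anytown --budget 1 --batch 200 23 | python test_Taylor_metrics_for_sensor_placement.py --runid $idx --adj binary --tag ms --deploy $deploy --wds ctown --budget 5 --batch 120 24 | python 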
test_Taylor_metrics_for_sensor_placement.py --runid $idx --adj binary --tag ms --deploy $deploy --wds richmond --budget 10 --batch 50 25 | done 26 | done 27 | for idx in {1..15} 28 | do 29 | python test_Taylor_metrics_for_sensor_placement.py --runid $idx --adj binary --tag ms --deploy xrandom --wds anytown --budget 1 --batch 200 30 | python test_Taylor_metrics_for_sensor_placement.py --runid $idx --adj binary --tag ms --deploy xrandom --wds ctown --budget 5 --batch 120 31 | python test_Taylor_metrics_for_sensor_placement.py --runid $idx --adj binary --tag ms --deploy xrandom --wds richmond --budget 10 --batch 50 32 | done 33 | -------------------------------------------------------------------------------- /data/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BME-SmartLab/GraphConvWat/be97b45fbc7dfdba22bb1ee406424a7c568120e5/data/.gitkeep -------------------------------------------------------------------------------- /evaluation/HO_eval.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import os\n", 10 | "import optuna\n", 11 | "import pandas as pd\n", 12 | "import seaborn as sns" 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": null, 18 | "metadata": {}, 19 | "outputs": [], 20 | "source": [ 21 | "!ls ../experiments/hyperparams" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": null, 27 | "metadata": {}, 28 | "outputs": [], 29 | "source": [ 30 | "db_path = 'sqlite:///../experiments/hyperparams/anytown_ho-0.05.db'\n", 31 | "studies = optuna.get_all_study_summaries(storage=db_path)\n", 32 | "study_names = []\n", 33 | "for study in studies:\n", 34 | " study_names.append(study.study_name)\n", 35 | "print(study_names)" 36 | ] 37 | }, 38 | { 39 | "cell_type": "code", 40 | "execution_count": null, 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "df_dict = dict()\n", 45 | "for obsrat in [0.05, 0.1, 0.2, 0.4, 0.8]:\n", 46 | " db_path = 'sqlite:///../experiments/hyperparams/anytown_ho-'+str(obsrat)+'.db'\n", 47 | " study = optuna.load_study(\n", 48 | " study_name = 'v4',\n", 49 | " storage = db_path\n", 50 | " )\n", 51 | " df = study.trials_dataframe()\n", 52 | " df_dict[str(obsrat)] = df" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "print(df_dict.keys())" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "# 0.05" 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": null, 74 | "metadata": {}, 75 | "outputs": [], 76 | "source": [ 77 | "df = df_dict['0.05']\n", 78 | "df.drop(index=df.nlargest(5, 'value').index, inplace=True)\n", 79 | "df.drop(df.index[df['params_adjacency'] == 'pruned'], inplace=True)\n", 80 | "sns.swarmplot(\n", 81 | " data = df,\n", 82 | " x = 'params_adjacency',\n", 83 | " y = 'value',\n", 84 | " hue = 'params_n_layers'\n", 85 | " )\n", 86 | "df.nsmallest(5, 'value')" 87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "metadata": {}, 92 | "source": [ 93 | "# 0.1" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "metadata": {}, 100 | "outputs": [], 101 | "source": [ 102 | "df = df_dict['0.1']\n", 103 | "df.drop(index=df.nlargest(5, 'value').index, inplace=True)\n", 104 | 
"df.drop(df.index[df['params_adjacency'] == 'pruned'], inplace=True)\n", 105 | "sns.swarmplot(\n", 106 | " data = df,\n", 107 | " x = 'params_adjacency',\n", 108 | " y = 'value',\n", 109 | " hue = 'params_n_layers'\n", 110 | " )\n", 111 | "df.nsmallest(5, 'value')" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "# 0.2" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": null, 124 | "metadata": {}, 125 | "outputs": [], 126 | "source": [ 127 | "df = df_dict['0.2']\n", 128 | "df.drop(index=df.nlargest(5, 'value').index, inplace=True)\n", 129 | "df.drop(df.index[df['params_adjacency'] == 'pruned'], inplace=True)\n", 130 | "sns.swarmplot(\n", 131 | " data = df,\n", 132 | " x = 'params_adjacency',\n", 133 | " y = 'value',\n", 134 | " hue = 'params_n_layers'\n", 135 | " )\n", 136 | "df.nsmallest(5, 'value')" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": {}, 142 | "source": [ 143 | "# 0.4" 144 | ] 145 | }, 146 | { 147 | "cell_type": "code", 148 | "execution_count": null, 149 | "metadata": {}, 150 | "outputs": [], 151 | "source": [ 152 | "df = df_dict['0.4']\n", 153 | "df.drop(index=df.nlargest(15, 'value').index, inplace=True)\n", 154 | "df.drop(df.index[df['params_adjacency'] == 'pruned'], inplace=True)\n", 155 | "sns.swarmplot(\n", 156 | " data = df,\n", 157 | " x = 'params_adjacency',\n", 158 | " y = 'value',\n", 159 | " hue = 'params_n_layers'\n", 160 | " )\n", 161 | "df.nsmallest(5, 'value')" 162 | ] 163 | }, 164 | { 165 | "cell_type": "markdown", 166 | "metadata": {}, 167 | "source": [ 168 | "# 0.8" 169 | ] 170 | }, 171 | { 172 | "cell_type": "code", 173 | "execution_count": null, 174 | "metadata": {}, 175 | "outputs": [], 176 | "source": [ 177 | "df = df_dict['0.8']\n", 178 | "df.drop(index=df.nlargest(10, 'value').index, inplace=True)\n", 179 | "df.drop(df.index[df['params_adjacency'] == 'pruned'], inplace=True)\n", 180 | "sns.swarmplot(\n", 181 | " data = df,\n", 182 | " x = 'params_adjacency',\n", 183 | " y = 'value',\n", 184 | " hue = 'params_n_layers'\n", 185 | " )\n", 186 | "df.nsmallest(5, 'value')" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": null, 192 | "metadata": {}, 193 | "outputs": [], 194 | "source": [] 195 | } 196 | ], 197 | "metadata": { 198 | "kernelspec": { 199 | "display_name": "Python 3", 200 | "language": "python", 201 | "name": "python3" 202 | }, 203 | "language_info": { 204 | "codemirror_mode": { 205 | "name": "ipython", 206 | "version": 3 207 | }, 208 | "file_extension": ".py", 209 | "mimetype": "text/x-python", 210 | "name": "python", 211 | "nbconvert_exporter": "python", 212 | "pygments_lexer": "ipython3", 213 | "version": "3.8.8" 214 | } 215 | }, 216 | "nbformat": 4, 217 | "nbformat_minor": 4 218 | } 219 | -------------------------------------------------------------------------------- /evaluation/baselines.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "id": "dc39008f", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "import os\n", 11 | "import networkx as nx\n", 12 | "import numpy as np\n", 13 | "from epynet import Network\n", 14 | "\n", 15 | "import sys\n", 16 | "sys.path.insert(0, os.path.join('..', 'utils'))\n", 17 | "from graph_utils import get_nx_graph\n", 18 | "from DataReader import DataReader\n", 19 | "from baselines import interpolated_regularization\n", 20 | "\n", 21 | "import 
matplotlib.pyplot as plt\n", 22 | "%matplotlib inline" 23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": 2, 28 | "id": "a3bfb12b", 29 | "metadata": {}, 30 | "outputs": [], 31 | "source": [ 32 | "wds_id = 'anytown'\n", 33 | "obsrat = .1" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": 3, 39 | "id": "85cc29cb", 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "path_to_data = os.path.join('..', 'data', 'db_'+wds_id+'_doe_pumpfed_1')\n", 44 | "path_to_wds = os.path.join('..', 'water_networks', wds_id+'.inp')" 45 | ] 46 | }, 47 | { 48 | "cell_type": "markdown", 49 | "id": "25f33d26", 50 | "metadata": {}, 51 | "source": [ 52 | "# Loading data\n", 53 | "### Loading graph" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 4, 59 | "id": "f405d33f", 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "wds = Network(path_to_wds)\n", 64 | "G_unweighted = get_nx_graph(wds, mode='binary')\n", 65 | "L_unweighted = np.array(nx.linalg.laplacianmatrix.laplacian_matrix(G_unweighted).todense())\n", 66 | "L_unweighted_normalized = np.array(nx.linalg.laplacianmatrix.normalized_laplacian_matrix(G_unweighted).todense())\n", 67 | "G_weighted = get_nx_graph(wds, mode='weighted')\n", 68 | "L_weighted = np.array(nx.linalg.laplacianmatrix.laplacian_matrix(G_weighted).todense())\n", 69 | "L_weighted_normalized = np.array(nx.linalg.laplacianmatrix.normalized_laplacian_matrix(G_weighted).todense())" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "id": "d49c8c6f", 75 | "metadata": {}, 76 | "source": [ 77 | "### Loading signal" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 5, 83 | "id": "9c46c896", 84 | "metadata": {}, 85 | "outputs": [], 86 | "source": [ 87 | "reader = DataReader(path_to_data, n_junc=len(wds.junctions.uid), obsrat=obsrat, seed=1234)\n", 88 | "X_complete, _, _ = reader.read_data(\n", 89 | " dataset = 'tst',\n", 90 | " varname = 'junc_heads',\n", 91 | " rescale = 'standardize',\n", 92 | " cover = False\n", 93 | ")\n", 94 | "X_sparse, bias, scale = reader.read_data(\n", 95 | " dataset = 'tst',\n", 96 | " varname = 'junc_heads',\n", 97 | " rescale = 'standardize',\n", 98 | " cover = True\n", 99 | ")" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "id": "996e030f", 105 | "metadata": {}, 106 | "source": [ 107 | "# Graph signal processing\n", 108 | "### Smoothness" 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "execution_count": 6, 114 | "id": "688226b4", 115 | "metadata": {}, 116 | "outputs": [], 117 | "source": [ 118 | "X = X_complete[:,:,0].T\n", 119 | "smoothness_unweighted = np.dot(X.T, np.dot(L_unweighted, X)).trace()\n", 120 | "smoothness_weighted = np.dot(X.T, np.dot(L_weighted, X)).trace()\n", 121 | "smoothness_unweighted_normalized = np.dot(X.T, np.dot(L_unweighted_normalized, X)).trace()\n", 122 | "smoothness_weighted_normalized = np.dot(X.T, np.dot(L_weighted_normalized, X)).trace()" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": 7, 128 | "id": "4a016b4c", 129 | "metadata": {}, 130 | "outputs": [ 131 | { 132 | "name": "stdout", 133 | "output_type": "stream", 134 | "text": [ 135 | "Smoothness with unweighted Laplacian: 14182.\n", 136 | "Smoothness with weighted Laplacian: 2665.\n", 137 | "Smoothness with normalized unweighted Laplacian: 4786.\n", 138 | "Smoothness with normalized weighted Laplacian: 5191.\n" 139 | ] 140 | } 141 | ], 142 | "source": [ 143 | "print('Smoothness with unweighted Laplacian: 
{:.0f}.'.format(smoothness_unweighted))\n", 144 | "print('Smoothness with weighted Laplacian: {:.0f}.'.format(smoothness_weighted))\n", 145 | "print('Smoothness with normalized unweighted Laplacian: {:.0f}.'.format(smoothness_unweighted_normalized))\n", 146 | "print('Smoothness with normalized weighted Laplacian: {:.0f}.'.format(smoothness_weighted_normalized))" 147 | ] 148 | }, 149 | { 150 | "cell_type": "markdown", 151 | "id": "a3fb1f54", 152 | "metadata": {}, 153 | "source": [ 154 | "### Spectrum" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": 8, 160 | "id": "5f2f1229", 161 | "metadata": {}, 162 | "outputs": [], 163 | "source": [ 164 | "eigvals_weighted = np.linalg.eigvals(L_weighted_normalized).real\n", 165 | "eigvals_unweighted = np.linalg.eigvals(L_unweighted_normalized).real" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "execution_count": 9, 171 | "id": "16410b80", 172 | "metadata": {}, 173 | "outputs": [ 174 | { 175 | "data": { 176 | "text/plain": [ 177 | "" 178 | ] 179 | }, 180 | "execution_count": 9, 181 | "metadata": {}, 182 | "output_type": "execute_result" 183 | }, 184 | { 185 | "data": { 186 | "image/png": "[base64-encoded PNG omitted: bar plot of the eigenvalues of the weighted normalized Laplacian]\n", 187 | "text/plain": [ 188 | "
" 189 | ] 190 | }, 191 | "metadata": { 192 | "needs_background": "light" 193 | }, 194 | "output_type": "display_data" 195 | } 196 | ], 197 | "source": [ 198 | "plt.bar(np.arange(len(eigvals_weighted)), eigvals_weighted)" 199 | ] 200 | }, 201 | { 202 | "cell_type": "code", 203 | "execution_count": 10, 204 | "id": "1bf4e94f", 205 | "metadata": {}, 206 | "outputs": [ 207 | { 208 | "data": { 209 | "text/plain": [ 210 | "" 211 | ] 212 | }, 213 | "execution_count": 10, 214 | "metadata": {}, 215 | "output_type": "execute_result" 216 | }, 217 | { 218 | "data": { 219 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXoAAAD4CAYAAADiry33AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAAAQNUlEQVR4nO3df6hfd33H8efLtP3DruhmrlWa1nQjDOswXblEpWLTPyxJp2SCg2RFRSx3lQa2MWTd/miHMBBkv1yrIXMhCmvLQKNhS9uIjMVZOnIjtT/UuhC79Zpibhup9QeU6Ht/3BP8evu993vS+725N5/7fMDle87nx/l+vocvr5x87jmfm6pCktSuV630ACRJy8ugl6TGGfSS1DiDXpIaZ9BLUuMuWukBDLN+/frauHHjSg9Dki4Yx44de66qJobVrcqg37hxI9PT0ys9DEm6YCT534XqnLqRpMYZ9JLUuJFTN0n2Ae8BTlXV7wyp/xhwy8Dx3gxMVNXpJE8DLwI/B85U1eS4Bi5J6qfPFf1+YNtClVX1yaq6tqquBf4C+M+qOj3Q5Mau3pCXpBUwMuir6ghwelS7zi7gviWNSJI0VmObo0/yauau/L8wUFzA4STHkkyN6D+VZDrJ9Ozs7LiGJUlr3jh/Gfte4Ovzpm2ur6rrgO3A7UnetVDnqtpbVZNVNTkxMfRWUEnSKzDOoN/JvGmbqjrZvZ4CDgBbxvh+kqQexhL0SV4D3AB8eaDs0iSXnd0GbgKeGMf7SZL663N75X3AVmB9khngLuBigKra0zV7H3C4qn4y0PVy4ECSs+9zb1U9OL6hazXbeMe/92779Cd+bxlHImlk0FfVrh5t9jN3G+Zg2Qlg8ysdmCRpPHwyVpIaZ9BLUuMMeklq3Kpcplhr07n8Ahf8Ja7Ul1f0ktQ4g16SGmfQS1LjnKPXBc+5fWlxXtFLUuMMeklqnEEvSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1LjDHpJapxBL0mNc60baZXzD61rqbyil6TGjQz6JPuSnEryxAL1W5O8kOTR7ufOgbptSZ5KcjzJHeMcuCSpnz5X9PuBbSPafK2qru1+Pg6QZB1wD7AduAbYleSapQxWknTuRgZ9VR0BTr+CY28BjlfViap6Cbgf2PEKjiNJWoJxzdG/I8k3kzyQ5C1d2RXAMwNtZrqyoZJMJZlOMj07OzumYUmSxhH03wDeVFWbgX8EvtSVZ0jbWuggVbW3qiaranJiYmIMw5IkwRhur6yqHw1sH0ry6STrmbuCv3Kg6Qbg5FLfT1I/3paps5Z8RZ/kDUnSbW/pjvk8cBTYlOTqJJcAO4GDS30/SdK5GXlFn+Q+YCuwPskMcBdwMUBV7QHeD3w0yRngZ8DOqirgTJLdwEPAOmBfVT25LJ9CkrSgkUFfVbtG1N8N3L1A3SHg0CsbmiRpHHwyVpIaZ9BLUuNc1EzSr/BunfYY9JLGwn8gVi+nbiSpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1LjDHpJapxBL0mNM+glqXEGvSQ1ziUQJK0ol05Yfl7RS1LjDHpJapxBL0mNM+glqXEGvSQ1zqCXpMaNDPok+5KcSvLEAvW3JHms+3k4yeaBuqeTPJ7k0STT4xy4JKmfPvfR7wfuBj6/QP33gBuq6odJtgN7gbcN1N9YVc8taZSSNI/33/c3Muir6kiSjYvUPzyw+wiwYQzjkiSNybjn6D8CPDCwX8DhJMeSTC3WMclUkukk07Ozs2MeliStXWNbAiHJjcwF/TsHiq+vqpNJXg98Jcl3qurIsP5VtZe5aR8mJydrXOOSpLVuLFf0Sd4KfBbYUVXPny2vqpPd6yngALBlHO8nSepvyUGf5Crgi8AHquq7A+WXJrns7DZwEzD0zh1J0vIZOXWT5D5gK7A+yQxwF3AxQFXtAe4EXgd8OgnAmaqaBC4HDnRlFwH3VtWDy/AZJEmL6HPXza4R9bcCtw4pPwFsfnkPSdL55Hr0ktaUtXj/vUsgSFLjDHpJapxBL0mNM+glqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcSyBIUg8X8tIJXtFLUuMMeklqnEEvSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGucDU5K0jFbDg1Yjr+iT7EtyKskTC9QnyaeSHE/yWJLrBuq2JXmqq7tjnAOXJPXTZ+pmP7BtkfrtwKbuZwr4DECSdcA9Xf01wK4k1yxlsJKkczcy6KvqCHB6kSY7gM/XnEeA1yZ5I7AFOF5VJ6rqJeD+rq0k6Twaxy9jrwCeGdif6coWKh8qyVSS6STTs7OzYxiWJAnGE/QZUlaLlA9VVXurarKqJicmJsYwLEkSjOeumxngyoH9DcBJ4JIFyiVJ59E4rugPAh/s7r55O/BCVT0LHAU2Jbk6ySXAzq6tJOk8GnlFn+Q+YCuwPskMcBdwMUBV7QEOATcDx4GfAh/u6s4k2Q08BKwD9lXVk8vwGSRJixgZ9FW1a0R9AbcvUHeIuX8IJEkrxCUQJKlxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuNcj146T85lXXJYvrXJtfZ4RS9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuO8j146R94PrwuNV/SS1DiDXpIaZ9BLUuMMeklqnEEvSY0z6CWpcb1ur0yyDfgHYB3w2ar6xLz6jwG3DBzzzcBEVZ1O8jTwIvBz4ExVTY5p7NKSeJuk1oqRQZ9kHXAP8G5gBjia5GBVfetsm6r6JPDJrv17gT+tqtMDh7mxqp4b68glSb30mbrZAhyvqhNV9RJwP7Bjkfa7gPvGMThJ0tL1CforgGcG9me6spdJ8mpgG/CFgeICDic5lmRqoTdJMpVkOsn07Oxsj2FJkvroE/QZUlYLtH0v8PV50zbXV9V1wHbg9iTvGtaxqvZW1WRVTU5MTPQYliSpjz5BPwNcOb
C/ATi5QNudzJu2qaqT3esp4ABzU0GSpPOkT9AfBTYluTrJJcyF+cH5jZK8BrgB+PJA2aVJLju7DdwEPDGOgUuS+hl5101VnUmyG3iIudsr91XVk0lu6+r3dE3fBxyuqp8MdL8cOJDk7HvdW1UPjvMDSJIW1+s++qo6BByaV7Zn3v5+YP+8shPA5iWNUJK0JD4ZK0mNM+glqXEGvSQ1zqCXpMb5N2O1qHNZ+MtFv6TVySt6SWqcQS9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnE/GrhE+4SqtXV7RS1LjDHpJapxBL0mNM+glqXEGvSQ1zqCXpMb1Cvok25I8leR4kjuG1G9N8kKSR7ufO/v2lSQtr5H30SdZB9wDvBuYAY4mOVhV35rX9GtV9Z5X2FeStEz6PDC1BTheVScAktwP7AD6hPVS+moIH3ySdK76TN1cATwzsD/Tlc33jiTfTPJAkrecY19J0jLpc0WfIWU1b/8bwJuq6sdJbga+BGzq2XfuTZIpYArgqquu6jEsSVIffa7oZ4ArB/Y3ACcHG1TVj6rqx932IeDiJOv79B04xt6qmqyqyYmJiXP4CJKkxfQJ+qPApiRXJ7kE2AkcHGyQ5A1J0m1v6Y77fJ++kqTlNXLqpqrOJNkNPASsA/ZV1ZNJbuvq9wDvBz6a5AzwM2BnVRUwtO8yfRZJ0hC9linupmMOzSvbM7B9N3B3376SpPPHJ2MlqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWpcr/voNX6uQinpfPGKXpIaZ9BLUuMMeklqnEEvSY0z6CWpcQa9JDXOoJekxhn0ktQ4g16SGmfQS1LjDHpJapxBL0mNM+glqXEGvSQ1rlfQJ9mW5Kkkx5PcMaT+liSPdT8PJ9k8UPd0kseTPJpkepyDlySNNnI9+iTrgHuAdwMzwNEkB6vqWwPNvgfcUFU/TLId2Au8baD+xqp6bozjliT11OeKfgtwvKpOVNVLwP3AjsEGVfVwVf2w230E2DDeYUqSXqk+QX8F8MzA/kxXtpCPAA8M7BdwOMmxJFMLdUoylWQ6yfTs7GyPYUmS+ujzpwQzpKyGNkxuZC7o3zlQfH1VnUzyeuArSb5TVUdedsCqvcxN+TA5OTn0+JKkc9cn6GeAKwf2NwAn5zdK8lbgs8D2qnr+bHlVnexeTyU5wNxU0MuC/kLl336VtNr1CfqjwKYkVwPfB3YCfzjYIMlVwBeBD1TVdwfKLwVeVVUvdts3AR8f1+DHycCW1KqRQV9VZ5LsBh4C1gH7qurJJLd19XuAO4HXAZ9OAnCmqiaBy4EDXdlFwL1V9eCyfBJJ0lB9ruipqkPAoXllewa2bwVuHdLvBLB5frkk6fzxyVhJapxBL0mNM+glqXEGvSQ1zqCXpMYZ9JLUOINekhpn0EtS4wx6SWqcQS9JjTPoJalxBr0kNc6gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnEEvSY0z6CWpcQa9JDXOoJekxvUK+iTbkjyV5HiSO4bUJ8mnuvrHklzXt68kaXmNDPok64B7gO3ANcCuJNfMa7Yd2NT9TAGfOYe+kqRl1OeKfgtwvKpOVNVLwP3AjnltdgCfrzmPAK9N8saefSVJyyhVtXiD5P3Atqq6tdv/APC2qto90ObfgE9U1X91+18F/hzYOKrvwDGmmPvfAMBvA08t7aP9ivXAc2M8Xis8L8N5XobzvAy3Ws7Lm6pqYljFRT06Z0jZ/H8dFmrTp+9cYdVeYG+P8ZyzJNNVNbkcx76QeV6G87wM53kZ7kI4L32Cfga4cmB/A3CyZ5tLevSVJC2jPnP0R4FNSa5OcgmwEzg4r81B4IPd3TdvB16oqmd79pUkLaORV/RVdSbJbuAhYB2wr6qeTHJbV78HOATcDBwHfgp8eLG+y/JJFrcsU0IN8LwM53kZzvMy3Ko/LyN/GStJurD5ZKwkNc6gl6TGNR/0LsEwXJKnkzye5NEk0ys9npWSZF+SU0meGCj7jSRfSfI/3euvr+QYV8IC5+Wvkny/+848muTmlRzj+ZbkyiT/keTbSZ5M8sdd+ar/vjQd9C7BMNKNVXXtar8HeJntB7bNK7sD+GpVbQK+2u2vNft5+XkB+LvuO3NtVR06z2NaaWeAP6uqNwNvB27v8mTVf1+aDnpcgkEjVNUR4PS84h3A57rtzwG/fz7HtBoscF7WtKp6tqq+0W2/CHwbuIIL4PvSetBfATwzsD/TlWnuCeXDSY51y0/oly7vngOhe339Co9nNdndrVC7bzVOUZwvSTYCvwv8NxfA96X1oO+9BMMadH1VXcfctNbtSd610gPSqvcZ4LeAa4Fngb9Z0dGskCS/BnwB+JOq+tFKj6eP1oO+z/INa1JVnexeTwEHmJvm0pwfdKuv0r2eWuHxrApV9YOq+nlV/QL4J9bgdybJxcyF/L9U1Re74lX/fWk96F2CYYgklya57Ow2cBPwxOK91pSDwIe67Q8BX17BsawaZ8Os8z7W2HcmSYB/Br5dVX87ULXqvy/NPxnb3QL29/xyCYa/XtkRrbwkv8ncVTzMLYNx71o9L0nuA7Yyt9TsD4C7gC8B/wpcBfwf8AdVtaZ+MbnAednK3LRNAU8Df3R2bnotSPJO4GvA48AvuuK/ZG6eflV/X5oPekla61qfupGkNc+gl6TGGfSS1DiDXpIaZ9BLUuMMeklqnEEvSY37f1jhAGfM8dlGAAAAAElFTkSuQmCC\n", 220 | "text/plain": [ 221 | "
" 222 | ] 223 | }, 224 | "metadata": { 225 | "needs_background": "light" 226 | }, 227 | "output_type": "display_data" 228 | } 229 | ], 230 | "source": [ 231 | "plt.bar(np.arange(len(eigvals_weighted)), eigvals_unweighted)" 232 | ] 233 | }, 234 | { 235 | "cell_type": "markdown", 236 | "id": "57cb97cd", 237 | "metadata": {}, 238 | "source": [ 239 | "# Signal reconstruction\n", 240 | "### Linear regression\n", 241 | "Based on the paper of Belkin et al.: [https://doi.org/10.1007/978-3-540-27819-1_43](https://doi.org/10.1007/978-3-540-27819-1_43)." 242 | ] 243 | }, 244 | { 245 | "cell_type": "code", 246 | "execution_count": 11, 247 | "id": "9a4d2ed3", 248 | "metadata": {}, 249 | "outputs": [], 250 | "source": [ 251 | "X_hat = interpolated_regularization(L_weighted, X_sparse)" 252 | ] 253 | }, 254 | { 255 | "cell_type": "markdown", 256 | "id": "4e022827", 257 | "metadata": {}, 258 | "source": [ 259 | "### Evaluation" 260 | ] 261 | }, 262 | { 263 | "cell_type": "code", 264 | "execution_count": 12, 265 | "id": "748a8ac1", 266 | "metadata": {}, 267 | "outputs": [], 268 | "source": [ 269 | "idx_off = np.where(X_sparse[0,:,1] == 0)\n", 270 | "X_hat = X_hat*scale+bias\n", 271 | "Y = X_complete[:,idx_off,0].squeeze(1)*scale+bias\n", 272 | "Y_hat = X_hat[:,idx_off].squeeze(1)" 273 | ] 274 | }, 275 | { 276 | "cell_type": "code", 277 | "execution_count": 13, 278 | "id": "8f87b285", 279 | "metadata": {}, 280 | "outputs": [ 281 | { 282 | "name": "stdout", 283 | "output_type": "stream", 284 | "text": [ 285 | "4.908458039650916\n", 286 | "-1.1823486870213553\n", 287 | "15.88118888143159\n" 288 | ] 289 | } 290 | ], 291 | "source": [ 292 | "print(np.linalg.norm(Y-Y_hat)/np.shape(X_sparse)[0])\n", 293 | "print(np.mean(Y-Y_hat))\n", 294 | "print(np.std(Y-Y_hat))" 295 | ] 296 | } 297 | ], 298 | "metadata": { 299 | "kernelspec": { 300 | "display_name": "Python 3 (ipykernel)", 301 | "language": "python", 302 | "name": "python3" 303 | }, 304 | "language_info": { 305 | "codemirror_mode": { 306 | "name": "ipython", 307 | "version": 3 308 | }, 309 | "file_extension": ".py", 310 | "mimetype": "text/x-python", 311 | "name": "python", 312 | "nbconvert_exporter": "python", 313 | "pygments_lexer": "ipython3", 314 | "version": "3.9.7" 315 | } 316 | }, 317 | "nbformat": 4, 318 | "nbformat_minor": 5 319 | } 320 | -------------------------------------------------------------------------------- /evaluation/get_diameter.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import argparse 4 | import networkx as nx 5 | from epynet import Network 6 | 7 | import sys 8 | sys.path.insert(0, os.path.join('..', 'utils')) 9 | from graph_utils import get_nx_graph 10 | 11 | # ----- ----- ----- ----- ----- ----- 12 | # Command line arguments 13 | # ----- ----- ----- ----- ----- ----- 14 | parser = argparse.ArgumentParser() 15 | parser.add_argument( 16 | '--wds', 17 | default = 'anytown', 18 | type = str 19 | ) 20 | args = parser.parse_args() 21 | 22 | wds_path = os.path.join('..', 'water_networks', args.wds+'.inp') 23 | wds = Network(wds_path) 24 | 25 | G = get_nx_graph(wds) 26 | d_G = 0 27 | measure = nx.algorithms.distance_measures.diameter 28 | if nx.number_connected_components(G) == 1: 29 | d_G = measure(G) 30 | else: 31 | print('Found disconnected components. 
Check whether EPANET simulation works.') 32 | for component in nx.connected_components(G): 33 | d_G += measure(G.subgraph(component)) 34 | 35 | print('The diameter of {} is {}.'.format(args.wds, d_G)) 36 | -------------------------------------------------------------------------------- /evaluation/learning_curves.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "welsh-madrid", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "import os\n", 11 | "import glob\n", 12 | "import pandas as pd\n", 13 | "import plotly.express as px\n", 14 | "import panel as pn\n", 15 | "import param\n", 16 | "pn.extension('plotly')" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": null, 22 | "id": "british-wonder", 23 | "metadata": {}, 24 | "outputs": [], 25 | "source": [ 26 | "path_to_logs= os.path.join('..', 'experiments', 'logs') \n", 27 | "filenames = [\n", 28 | " os.path.split(f)[-1] \n", 29 | " for f in glob.glob(os.path.join(path_to_logs, '*.csv')) \n", 30 | " if (\"tst\" not in f) \n", 31 | " ] \n", 32 | "filenames.sort()" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "execution_count": null, 38 | "id": "encouraging-transformation", 39 | "metadata": {}, 40 | "outputs": [], 41 | "source": [ 42 | "def get_wds_names(): \n", 43 | " wds_names = set() \n", 44 | " for fn in filenames:\n", 45 | " wds_names.add(fn.split('-')[0]) \n", 46 | " return wds_names\n", 47 | "\n", 48 | "def load_wds_results(filenames):\n", 49 | " log_dict = dict() \n", 50 | " for fn in filenames:\n", 51 | " df = pd.read_csv(os.path.join(path_to_logs, fn), index_col=0)\n", 52 | " log_dict[fn[:-4]] = df \n", 53 | " return log_dict\n", 54 | "\n", 55 | "def plot_training_curve(run_id):\n", 56 | " df = log_dict[run_id]\n", 57 | " fig = px.scatter(df, y=['trn_loss', 'vld_loss'], log_y=True)\n", 58 | " return fig" 59 | ] 60 | }, 61 | { 62 | "cell_type": "code", 63 | "execution_count": null, 64 | "id": "tribal-swaziland", 65 | "metadata": {}, 66 | "outputs": [], 67 | "source": [ 68 | "log_dict = load_wds_results(filenames) " 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": null, 74 | "id": "excessive-diana", 75 | "metadata": {}, 76 | "outputs": [], 77 | "source": [ 78 | "kw = dict(run_id=sorted(list(log_dict.keys())))\n", 79 | "i = pn.interact(plot_training_curve, **kw)\n", 80 | "i.pprint()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": null, 86 | "id": "proprietary-vermont", 87 | "metadata": {}, 88 | "outputs": [], 89 | "source": [ 90 | "p = pn.Column(i[0][0], i[1][0])" 91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": null, 96 | "id": "promising-balloon", 97 | "metadata": {}, 98 | "outputs": [], 99 | "source": [ 100 | "p" 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": null, 106 | "id": "talented-traffic", 107 | "metadata": {}, 108 | "outputs": [], 109 | "source": [] 110 | } 111 | ], 112 | "metadata": { 113 | "kernelspec": { 114 | "display_name": "Python 3", 115 | "language": "python", 116 | "name": "python3" 117 | }, 118 | "language_info": { 119 | "codemirror_mode": { 120 | "name": "ipython", 121 | "version": 3 122 | }, 123 | "file_extension": ".py", 124 | "mimetype": "text/x-python", 125 | "name": "python", 126 | "nbconvert_exporter": "python", 127 | "pygments_lexer": "ipython3", 128 | "version": "3.8.8" 129 | } 130 | }, 131 | "nbformat": 4, 132 | "nbformat_minor": 5 133 | } 134 
| -------------------------------------------------------------------------------- /evaluation/plot_Taylor_diag.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import os 4 | import numpy as np 5 | from scipy.spatial import ConvexHull, convex_hull_plot_2d 6 | import pandas as pd 7 | import matplotlib.pyplot as plt 8 | 9 | from taylorDiagram import TaylorDiagram 10 | 11 | # ----- ----- ----- ----- ----- ----- 12 | # Command line arguments 13 | # ----- ----- ----- ----- ----- ----- 14 | parser = argparse.ArgumentParser() 15 | parser.add_argument( 16 | '--wds', 17 | default = 'anytown', 18 | type = str 19 | ) 20 | parser.add_argument( 21 | '--extend', 22 | default = None, 23 | type = float 24 | ) 25 | parser.add_argument( 26 | '--smin', 27 | default = 0, 28 | type = float 29 | ) 30 | parser.add_argument( 31 | '--smax', 32 | default = 1.2, 33 | type = float 34 | ) 35 | parser.add_argument( 36 | '--legend', 37 | action = 'store_true' 38 | ) 39 | parser.add_argument( 40 | '--fill', 41 | action = 'store_true' 42 | ) 43 | parser.add_argument( 44 | '--individual', 45 | action = 'store_true' 46 | ) 47 | parser.add_argument( 48 | '--nocenter', 49 | action = 'store_true' 50 | ) 51 | parser.add_argument( 52 | '--savepdf', 53 | action = 'store_true' 54 | ) 55 | args = parser.parse_args() 56 | 57 | # ----- ----- ----- ----- ----- ----- 58 | # DB loading 59 | # ----- ----- ----- ----- ----- ----- 60 | df = pd.read_csv(os.path.join('..', 'experiments', 'Taylor_metrics_processed.csv')) 61 | 62 | wds = args.wds 63 | std_ref = df.loc[ 64 | (df['wds'] == wds) & 65 | (df['obs_rat'] == .05) & 66 | (df['model'] == 'orig'), 'sigma_true'].tolist()[0] 67 | 68 | # ----- ----- ----- ----- ----- ----- 69 | # Plot assembly 70 | # ----- ----- ----- ----- ----- ----- 71 | fig = plt.figure() 72 | dia = TaylorDiagram(std_ref/std_ref, fig=fig, label='reference', extend=args.extend, srange=(args.smin, args.smax)) 73 | dia.samplePoints[0].set_color('r') 74 | dia.samplePoints[0].set_marker('P') 75 | cmap = plt.get_cmap('tab10') 76 | 77 | obs_ratios = [.05, .1, .2, .4, .8] 78 | if args.individual: 79 | for i, obs_rat in enumerate(obs_ratios): 80 | model = 'naive' 81 | df_plot = df.loc[ 82 | (df['wds'] == wds) & 83 | (df['obs_rat'] == obs_rat) & 84 | (df['model'] == model)] 85 | naive_sigma = df_plot['sigma_pred'].to_numpy()/std_ref 86 | naive_rho = df_plot['corr_coeff'].to_numpy() 87 | model = 'gcn' 88 | df_plot = df.loc[ 89 | (df['wds'] == wds) & 90 | (df['obs_rat'] == obs_rat) & 91 | (df['model'] == model)] 92 | gcn_sigma = df_plot['sigma_pred'].to_numpy()/std_ref 93 | gcn_rho = df_plot['corr_coeff'].to_numpy() 94 | model = 'interp' 95 | df_plot = df.loc[ 96 | (df['wds'] == wds) & 97 | (df['obs_rat'] == obs_rat) & 98 | (df['model'] == model)] 99 | interp_sigma = df_plot['sigma_pred'].to_numpy()/std_ref 100 | interp_rho = df_plot['corr_coeff'].to_numpy() 101 | 102 | pt_alpha = .5 103 | fill_alpha = .2 104 | color = cmap(i) 105 | dia.add_sample(gcn_sigma, gcn_rho, 106 | marker = 'o', 107 | ms = 5, 108 | ls = '', 109 | mfc = color, 110 | mec = 'none', 111 | alpha = pt_alpha, 112 | #label = 'ChebConv-'+str(obs_rat) 113 | ) 114 | dia.add_sample(naive_sigma, naive_rho, 115 | marker = 's', 116 | ms = 5, 117 | ls = '', 118 | mfc = color, 119 | mec = 'none', 120 | alpha = pt_alpha, 121 | #label = 'naive-'+str(obs_rat) 122 | ) 123 | dia.add_sample(interp_sigma, interp_rho, 124 | marker = '*', 125 | ms = 5, 126 | ls = '', 127 | mfc = color, 128 | mec 
= 'none', 129 | alpha = pt_alpha, 130 | #label = 'naive-'+str(obs_rat) 131 | ) 132 | if args.fill: 133 | points = np.array([np.arccos(gcn_rho), gcn_sigma]).T 134 | hull = ConvexHull(points) 135 | for simplex in hull.simplices: 136 | dia.ax.plot(points[simplex, 0], points[simplex, 1], '--', alpha=fill_alpha) 137 | dia.ax.fill(points[hull.vertices, 0], points[hull.vertices, 1], alpha=fill_alpha) 138 | 139 | points = np.array([np.arccos(naive_rho), naive_sigma]).T 140 | hull = ConvexHull(points) 141 | for simplex in hull.simplices: 142 | dia.ax.plot(points[simplex, 0], points[simplex, 1], '--', alpha=fill_alpha) 143 | dia.ax.fill(points[hull.vertices, 0], points[hull.vertices, 1], alpha=fill_alpha) 144 | 145 | if not args.nocenter: 146 | for i, obs_rat in enumerate(obs_ratios): 147 | model = 'gcn' 148 | df_plot = df.loc[ 149 | (df['wds'] == wds) & 150 | (df['obs_rat'] == obs_rat) & 151 | (df['model'] == model)] 152 | gcn_sigma = df_plot['sigma_pred'].to_numpy()/std_ref 153 | gcn_rho = df_plot['corr_coeff'].to_numpy() 154 | dia.add_sample(gcn_sigma.mean(), gcn_rho.mean(), 155 | marker = 'o', 156 | ms = 10, 157 | ls = '', 158 | mfc = 'none', 159 | mec = cmap(i), 160 | mew = 3, 161 | label = 'GraphConvWat@OR='+str(obs_rat) 162 | ) 163 | 164 | for i, obs_rat in enumerate(obs_ratios): 165 | model = 'naive' 166 | df_plot = df.loc[ 167 | (df['wds'] == wds) & 168 | (df['obs_rat'] == obs_rat) & 169 | (df['model'] == model)] 170 | naive_sigma = df_plot['sigma_pred'].to_numpy()/std_ref 171 | naive_rho = df_plot['corr_coeff'].to_numpy() 172 | dia.add_sample(naive_sigma.mean(), naive_rho.mean(), 173 | marker = 's', 174 | ms = 10, 175 | ls = '', 176 | mfc = 'none', 177 | mec = cmap(i), 178 | mew = 3, 179 | label = 'Naive model@OR='+str(obs_rat) 180 | ) 181 | 182 | for i, obs_rat in enumerate(obs_ratios): 183 | model = 'interp' 184 | df_plot = df.loc[ 185 | (df['wds'] == wds) & 186 | (df['obs_rat'] == obs_rat) & 187 | (df['adjacency'] == 'weighted') & 188 | (df['model'] == model)] 189 | naive_sigma = df_plot['sigma_pred'].to_numpy()/std_ref 190 | naive_rho = df_plot['corr_coeff'].to_numpy() 191 | dia.add_sample(naive_sigma.mean(), naive_rho.mean(), 192 | marker = '*', 193 | ms = 10, 194 | ls = '', 195 | mfc = 'none', 196 | mec = cmap(i), 197 | mew = 3, 198 | label = 'Interpolated regularization@OR='+str(obs_rat) 199 | ) 200 | 201 | contours = dia.add_contours(levels=6, colors='0.5', linestyles='dashed', alpha=.8, linewidths=1) 202 | plt.clabel(contours, inline=1, fontsize=10, fmt='%.2f') 203 | 204 | dia.add_grid() 205 | dia._ax.axis[:].major_ticks.set_tick_out(True) 206 | dia._ax.axis['left'].label.set_text('Normalized standard deviation') 207 | 208 | if args.legend: 209 | fig_leg = plt.figure() 210 | fig_leg.legend( 211 | dia.samplePoints, 212 | [p.get_label() for p in dia.samplePoints], 213 | numpoints=1, prop=dict(size='small'), loc='upper right', framealpha=.5 214 | ) 215 | fmt = 'pdf' 216 | fname = 'taylor-legend.'+fmt 217 | fig_leg.savefig(fname, format=fmt) 218 | fig.tight_layout() 219 | 220 | # ----- ----- ----- ----- ----- ----- 221 | # Diagram export 222 | # ----- ----- ----- ----- ----- ----- 223 | if args.savepdf: 224 | fmt = 'pdf' 225 | fname = 'taylor-'+args.wds+'.'+fmt 226 | fig.savefig(fname, format=fmt) 227 | else: 228 | plt.show() 229 | -------------------------------------------------------------------------------- /evaluation/plot_Taylor_diag_for_sensors.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 
| import os 4 | import numpy as np 5 | import pandas as pd 6 | import matplotlib.pyplot as plt 7 | 8 | from taylorDiagram import TaylorDiagram 9 | 10 | # ----- ----- ----- ----- ----- ----- 11 | # Command line arguments 12 | # ----- ----- ----- ----- ----- ----- 13 | parser = argparse.ArgumentParser() 14 | parser.add_argument( 15 | '--wds', 16 | default = 'anytown', 17 | type = str 18 | ) 19 | parser.add_argument( 20 | '--tag', 21 | default = 'ms', 22 | type = str 23 | ) 24 | parser.add_argument( 25 | '--smin', 26 | default = .15, 27 | type = float 28 | ) 29 | parser.add_argument( 30 | '--smax', 31 | default = 1.05, 32 | type = float 33 | ) 34 | parser.add_argument( 35 | '--mean', 36 | action = 'store_true' 37 | ) 38 | args = parser.parse_args() 39 | 40 | # ----- ----- ----- ----- ----- ----- 41 | # DB loading 42 | # ----- ----- ----- ----- ----- ----- 43 | df = pd.read_csv(os.path.join('..', 'experiments', 'Taylor_metrics_processed.csv'), index_col=0) 44 | df = df.loc[(df['tag'] == args.tag) & (df['wds'] == args.wds)] 45 | 46 | # ----- ----- ----- ----- ----- ----- 47 | # Plot assembly 48 | # ----- ----- ----- ----- ----- ----- 49 | fig = plt.figure() 50 | dia = TaylorDiagram( 51 | 1, 52 | fig = fig, 53 | label = 'reference', 54 | srange = (args.smin, args.smax) 55 | ) 56 | dia.samplePoints[0].set_color('r') 57 | dia.samplePoints[0].set_marker('P') 58 | cmap = plt.get_cmap('Paired') 59 | 60 | def add_samples(dia, df, color, marker, mean=False): 61 | if mean: 62 | df = df.mean() 63 | sigma = df['sigma_pred'] / df['sigma_true'] 64 | rho = df['corr_coeff'] 65 | dia.add_sample(sigma, rho, 66 | marker = marker, 67 | ms = 10, 68 | ls = '', 69 | mec = color, 70 | mew = 2, 71 | mfc = 'none', 72 | ) 73 | else: 74 | for idx_dst, row in df.iterrows(): 75 | sigma = row['sigma_pred'] / row['sigma_true'] 76 | rho = row['corr_coeff'] 77 | dia.add_sample(sigma, rho, 78 | marker = marker, 79 | ms = 10, 80 | ls = '', 81 | mec = color, 82 | mew = 2, 83 | mfc = 'none', 84 | ) 85 | 86 | seeds = [1, 8, 5266, 739, 88867] 87 | #seeds = [88867] 88 | add_samples(dia, df.loc[df['placement'] == 'master'], 'k', 'o', mean=args.mean) 89 | add_samples(dia, df.loc[df['placement'] == 'dist'], 'k', '+', mean=args.mean) 90 | add_samples(dia, df.loc[df['placement'] == 'hydrodist'], 'k', '*', mean=args.mean) 91 | add_samples(dia, df.loc[df['placement'] == 'hds'], 'k', 'x', mean=args.mean) 92 | for i, seed in enumerate(seeds): 93 | mask = (df['placement'] == 'random') & (df['seed'] == seed) 94 | add_samples(dia, df.loc[mask], cmap(i+5), 's', mean=args.mean) 95 | 96 | contours = dia.add_contours( 97 | levels = 19, 98 | colors = '0.5', 99 | linestyles ='dashed', 100 | alpha = .8, 101 | linewidths = 1 102 | ) 103 | plt.clabel(contours, inline=1, fontsize=10, fmt='%.2f') 104 | dia.add_grid() 105 | dia._ax.axis[:].major_ticks.set_tick_out(True) 106 | dia._ax.axis['left'].label.set_text('Normalized standard deviation') 107 | 108 | plt.show() 109 | -------------------------------------------------------------------------------- /evaluation/plot_Taylor_diags.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | python plot_Taylor_diag.py --savepdf --wds anytown --smin 0. 
--smax 1.05 3 | python plot_Taylor_diag.py --savepdf --wds ctown --smin 0.15 --smax 1.05 4 | python plot_Taylor_diag.py --savepdf --wds richmond --smin 0.15 --smax 1.05 5 | -------------------------------------------------------------------------------- /evaluation/plot_WDS_topo.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import os 4 | import numpy as np 5 | import pandas as pd 6 | import seaborn as sns 7 | from matplotlib import collections as mc 8 | import matplotlib.pyplot as plt 9 | 10 | from epynet import Network 11 | 12 | # ----- ----- ----- ----- ----- ----- 13 | # Command line arguments 14 | # ----- ----- ----- ----- ----- ----- 15 | parser = argparse.ArgumentParser() 16 | parser.add_argument( 17 | '--wds', 18 | default = 'anytown', 19 | type = str 20 | ) 21 | parser.add_argument( 22 | '--obsrat', 23 | default = .0, 24 | type = float 25 | ) 26 | parser.add_argument( 27 | '--seed', 28 | default = None, 29 | type = int 30 | ) 31 | parser.add_argument( 32 | '--savepdf', 33 | action = 'store_true' 34 | ) 35 | args = parser.parse_args() 36 | 37 | wds = Network(os.path.join('..', 'water_networks', args.wds+'.inp')) 38 | wds.solve() 39 | 40 | def get_node_df(elements, get_head=False): 41 | data = [] 42 | for junc in elements: 43 | ser = pd.Series({ 44 | 'uid': junc.uid, 45 | 'x': junc.coordinates[0], 46 | 'y': junc.coordinates[1], 47 | }) 48 | if get_head: 49 | ser['head'] = junc.head 50 | data.append(ser) 51 | data = pd.DataFrame(data) 52 | if get_head: 53 | data['head'] = (data['head'] - data['head'].min()) / (data['head'].max()-data['head'].min()) 54 | return data 55 | 56 | def get_elem_df(elements, nodes): 57 | data= [] 58 | df = pd.DataFrame(data) 59 | if elements: 60 | for elem in elements: 61 | ser = pd.Series({ 62 | 'uid': elem.uid, 63 | 'x1': nodes.loc[nodes['uid'] == elem.from_node.uid, 'x'].values, 64 | 'y1': nodes.loc[nodes['uid'] == elem.from_node.uid, 'y'].values, 65 | 'x2': nodes.loc[nodes['uid'] == elem.to_node.uid, 'x'].values, 66 | 'y2': nodes.loc[nodes['uid'] == elem.to_node.uid, 'y'].values, 67 | }) 68 | data.append(ser) 69 | df = pd.DataFrame(data) 70 | df['x1'] = df['x1'].str[0] 71 | df['y1'] = df['y1'].str[0] 72 | df['x2'] = df['x2'].str[0] 73 | df['y2'] = df['y2'].str[0] 74 | df['center_x'] = (df['x1']+df['x2']) / 2 75 | df['center_y'] = (df['y1']+df['y2']) / 2 76 | df['orient'] = np.degrees(np.arctan((df['y2']-df['y1'])/(df['x2']-df['x1']))) + 90 77 | return df 78 | 79 | def build_lc_from(df): 80 | line_collection = [] 81 | for elem_id in df['uid']: 82 | line_collection.append([ 83 | (df.loc[df['uid'] == elem_id, 'x1'].values[0], 84 | df.loc[df['uid'] == elem_id, 'y1'].values[0]), 85 | (df.loc[df['uid'] == elem_id, 'x2'].values[0], 86 | df.loc[df['uid'] == elem_id, 'y2'].values[0]) 87 | ]) 88 | return line_collection 89 | nodes = get_node_df(wds.nodes, get_head=True) 90 | juncs = get_node_df(wds.junctions, get_head=True) 91 | tanks = get_node_df(wds.tanks) 92 | reservoirs = get_node_df(wds.reservoirs) 93 | pipes = get_elem_df(wds.pipes, nodes) 94 | pumps = get_elem_df(wds.pumps, nodes) 95 | valves= get_elem_df(wds.valves, nodes) 96 | pipe_collection = build_lc_from(pipes) 97 | pump_collection = build_lc_from(pumps) 98 | if not valves.empty: 99 | valve_collection = build_lc_from(valves) 100 | 101 | mew = .5 102 | fig, ax = plt.subplots() 103 | lc = mc.LineCollection(pipe_collection, linewidths=mew, color='k') 104 | ax.add_collection(lc) 105 | lc = 
mc.LineCollection(pump_collection, linewidths=mew, color='k') 106 | ax.add_collection(lc) 107 | if not valves.empty: 108 | lc = mc.LineCollection(valve_collection, linewidths=mew, color='k') 109 | ax.add_collection(lc) 110 | 111 | cmap = plt.get_cmap('plasma') 112 | juncs['head'] *= 1.5 # emphasize differences in graphical abstract 113 | colors = [] 114 | signal = [] 115 | np.random.seed(args.seed) # seed once, outside the loop; re-seeding per junction would repeat the same draw for every node 116 | for _, junc in juncs.iterrows(): 117 | if np.random.rand() < args.obsrat: 118 | color = cmap(junc['head']) 119 | signal.append(junc['head']) 120 | else: 121 | color = (1.,1.,1.,1.) 122 | signal.append(np.nan) 123 | colors.append(color) 124 | ax.plot(junc['x'], junc['y'], 'ko', mfc=color, mec='k', ms=3, mew=mew) 125 | for _, tank in tanks.iterrows(): 126 | ax.plot(tank['x'], tank['y'], marker=7, mfc='k', mec='k', ms=7, mew=mew) 127 | for _, reservoir in reservoirs.iterrows(): 128 | ax.plot(reservoir['x'], reservoir['y'], marker='o', mfc='k', mec='k', ms=3, mew=mew) 129 | ax.plot(pumps['center_x'], pumps['center_y'], 'ko', ms=7, mfc='white', mew=mew) 130 | for _, pump in pumps.iterrows(): 131 | ax.plot(pump['center_x'], pump['center_y'], 132 | marker=(3, 0, pump['orient']), 133 | color='k', 134 | ms=5 135 | ) 136 | ax.autoscale() 137 | ax.axis('off') 138 | plt.tight_layout() 139 | 140 | # ----- ----- ----- ----- ----- ----- 141 | # Diagram export 142 | # ----- ----- ----- ----- ----- ----- 143 | if args.savepdf: 144 | fmt = 'pdf' 145 | fname = 'topo-'+args.wds+'.'+fmt 146 | fig.savefig(fname, format=fmt, bbox_inches='tight') 147 | else: 148 | plt.show() 149 | 150 | signal = np.expand_dims(np.array(signal).T, axis=1) 151 | ax = sns.heatmap( 152 | signal, 153 | vmin = 0., 154 | vmax = 1., 155 | cmap = 'plasma', 156 | cbar = False, 157 | square = True, 158 | linewidth = 1, 159 | linecolor = 'k', 160 | xticklabels = False, 161 | yticklabels = False, 162 | ) 163 | 164 | if args.savepdf: 165 | fmt = 'pdf' 166 | fname = 'signal-'+args.wds+'.'+fmt 167 | fig.savefig(fname, format=fmt, bbox_inches='tight') 168 | else: 169 | plt.show() 170 | -------------------------------------------------------------------------------- /evaluation/plot_WDS_topo_with_sensitivity.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import os 4 | import sys 5 | import numpy as np 6 | import pandas as pd 7 | import seaborn as sns 8 | from matplotlib import collections as mc 9 | import matplotlib.pyplot as plt 10 | 11 | from epynet import Network 12 | 13 | sys.path.insert(0, os.path.join('..')) 14 | from utils.graph_utils import get_nx_graph, get_sensitivity_matrix 15 | from utils.SensorInstaller import SensorInstaller 16 | from utils.DataReader import DataReader 17 | 18 | # ----- ----- ----- ----- ----- ----- 19 | # Command line arguments 20 | # ----- ----- ----- ----- ----- ----- 21 | parser = argparse.ArgumentParser() 22 | parser.add_argument( 23 | '--wds', 24 | default = 'anytown', 25 | type = str 26 | ) 27 | parser.add_argument( 28 | '--nodesize', 29 | default = 7, 30 | type = int, 31 | help = "Size of nodes on the plot." 
32 | ) 33 | parser.add_argument( 34 | '--perturb', 35 | default = None, 36 | type = int 37 | ) 38 | args = parser.parse_args() 39 | 40 | pathToRoot = os.path.join(os.path.dirname(os.path.realpath(__file__)), '..') 41 | pathToModels = os.path.join(pathToRoot, 'experiments', 'models') 42 | pathToDB = os.path.join(pathToRoot, 'data', 'db_' + args.wds +'_doe_pumpfed_1') 43 | 44 | wds = Network(os.path.join('..', 'water_networks', args.wds+'.inp')) 45 | wds.solve() 46 | 47 | print('Calculating nodal sensitivity to demand change...\n') 48 | ptb = np.max(wds.junctions.basedemand) / 100 49 | if args.perturb: 50 | G = get_nx_graph(wds) 51 | reader = DataReader( 52 | pathToDB, 53 | n_junc = len(wds.junctions), 54 | node_order = np.array(list(G.nodes))-1 55 | ) 56 | demands, _, _ = reader.read_data( 57 | dataset = 'trn', 58 | varname = 'junc_demands', 59 | rescale = None, 60 | cover = False 61 | ) 62 | demands = pd.Series(demands[args.perturb,:,0], index=wds.junctions.uid) 63 | wds.junctions.basedemand = demands 64 | 65 | S = get_sensitivity_matrix(wds, ptb) 66 | 67 | def get_node_df(elements, get_head=False): 68 | data = [] 69 | for junc in elements: 70 | ser = pd.Series({ 71 | 'uid': junc.uid, 72 | 'x': junc.coordinates[0], 73 | 'y': junc.coordinates[1], 74 | }) 75 | if get_head: 76 | ser['head'] = junc.head 77 | data.append(ser) 78 | data = pd.DataFrame(data) 79 | if get_head: 80 | data['head'] = (data['head'] - data['head'].min()) / (data['head'].max()-data['head'].min()) 81 | return data 82 | 83 | def get_elem_df(elements, nodes): 84 | data= [] 85 | df = pd.DataFrame(data) 86 | if elements: 87 | for elem in elements: 88 | ser = pd.Series({ 89 | 'uid': elem.uid, 90 | 'x1': nodes.loc[nodes['uid'] == elem.from_node.uid, 'x'].values, 91 | 'y1': nodes.loc[nodes['uid'] == elem.from_node.uid, 'y'].values, 92 | 'x2': nodes.loc[nodes['uid'] == elem.to_node.uid, 'x'].values, 93 | 'y2': nodes.loc[nodes['uid'] == elem.to_node.uid, 'y'].values, 94 | }) 95 | data.append(ser) 96 | df = pd.DataFrame(data) 97 | df['x1'] = df['x1'].str[0] 98 | df['y1'] = df['y1'].str[0] 99 | df['x2'] = df['x2'].str[0] 100 | df['y2'] = df['y2'].str[0] 101 | df['center_x'] = (df['x1']+df['x2']) / 2 102 | df['center_y'] = (df['y1']+df['y2']) / 2 103 | df['orient'] = np.degrees(np.arctan((df['y2']-df['y1'])/(df['x2']-df['x1']))) + 90 104 | return df 105 | 106 | def build_lc_from(df): 107 | line_collection = [] 108 | for elem_id in df['uid']: 109 | line_collection.append([ 110 | (df.loc[df['uid'] == elem_id, 'x1'].values[0], 111 | df.loc[df['uid'] == elem_id, 'y1'].values[0]), 112 | (df.loc[df['uid'] == elem_id, 'x2'].values[0], 113 | df.loc[df['uid'] == elem_id, 'y2'].values[0]) 114 | ]) 115 | return line_collection 116 | 117 | nodes = get_node_df(wds.nodes, get_head=True) 118 | juncs = get_node_df(wds.junctions, get_head=True) 119 | tanks = get_node_df(wds.tanks) 120 | reservoirs = get_node_df(wds.reservoirs) 121 | pipes = get_elem_df(wds.pipes, nodes) 122 | pumps = get_elem_df(wds.pumps, nodes) 123 | valves= get_elem_df(wds.valves, nodes) 124 | pipe_collection = build_lc_from(pipes) 125 | pump_collection = build_lc_from(pumps) 126 | if not valves.empty: 127 | valve_collection = build_lc_from(valves) 128 | 129 | mew = .5 130 | fig, ax = plt.subplots() 131 | lc = mc.LineCollection(pipe_collection, linewidths=mew, color='k') 132 | ax.add_collection(lc) 133 | lc = mc.LineCollection(pump_collection, linewidths=mew, color='k') 134 | ax.add_collection(lc) 135 | if not valves.empty: 136 | lc = mc.LineCollection(valve_collection, 
linewidths=mew, color='k') 137 | ax.add_collection(lc) 138 | 139 | nodal_s = np.sum(np.abs(S), axis=0) 140 | nodal_s = (nodal_s-nodal_s.min()) / (nodal_s.max()-nodal_s.min()) # min-max scaling to [0, 1] 141 | colors = [] 142 | cmap = plt.get_cmap('plasma') 143 | for idx, junc in juncs.iterrows(): 144 | color = cmap(nodal_s[idx]) 145 | colors.append(color) 146 | ax.plot(junc['x'], junc['y'], 'ko', mfc=color, mec='k', ms=args.nodesize, mew=mew) 147 | 148 | for _, tank in tanks.iterrows(): 149 | ax.plot(tank['x'], tank['y'], marker=7, mfc='k', mec='k', ms=7, mew=mew) 150 | for _, reservoir in reservoirs.iterrows(): 151 | ax.plot(reservoir['x'], reservoir['y'], marker='o', mfc='k', mec='k', ms=3, mew=mew) 152 | ax.plot(pumps['center_x'], pumps['center_y'], 'ko', ms=7, mfc='white', mew=mew) 153 | for _, pump in pumps.iterrows(): 154 | ax.plot(pump['center_x'], pump['center_y'], 155 | marker=(3, 0, pump['orient']), 156 | color='k', 157 | ms=5 158 | ) 159 | ax.autoscale() 160 | ax.axis('off') 161 | plt.tight_layout() 162 | plt.show() 163 | -------------------------------------------------------------------------------- /evaluation/plot_WDS_topo_with_sensors.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import os 4 | import sys 5 | import numpy as np 6 | import pandas as pd 7 | import seaborn as sns 8 | from matplotlib import collections as mc 9 | import matplotlib.pyplot as plt 10 | 11 | from epynet import Network 12 | 13 | sys.path.insert(0, os.path.join('..')) 14 | from utils.graph_utils import get_nx_graph, get_sensitivity_matrix 15 | from utils.SensorInstaller import SensorInstaller 16 | 17 | # ----- ----- ----- ----- ----- ----- 18 | # Command line arguments 19 | # ----- ----- ----- ----- ----- ----- 20 | parser = argparse.ArgumentParser() 21 | parser.add_argument( 22 | '--wds', 23 | default = 'anytown', 24 | type = str 25 | ) 26 | parser.add_argument( 27 | '--obsrat', 28 | default = .1, 29 | type = float 30 | ) 31 | parser.add_argument( 32 | '--sensorfile', 33 | default = None, 34 | type = str, 35 | help = "Filename for defining sensor nodes directly." 36 | ) 37 | parser.add_argument( 38 | '--deploy', 39 | default = 'random', 40 | choices = ['random', 'dist', 'hydrodist', 'hds'], 41 | type = str, 42 | help = "Method of sensor deployment." 43 | ) 44 | parser.add_argument( 45 | '--seed', 46 | default = None, 47 | type = int 48 | ) 49 | parser.add_argument( 50 | '--nodesize', 51 | default = 7, 52 | type = int, 53 | help = "Size of nodes on the plot."
54 | ) 55 | parser.add_argument( 56 | '--savepdf', 57 | action = 'store_true' 58 | ) 59 | args = parser.parse_args() 60 | 61 | pathToRoot = os.path.join(os.path.dirname(os.path.realpath(__file__)), '..') 62 | pathToModels = os.path.join(pathToRoot, 'experiments', 'models') 63 | 64 | wds = Network(os.path.join('..', 'water_networks', args.wds+'.inp')) 65 | wds.solve() 66 | 67 | sensor_budget = int(len(wds.junctions) * args.obsrat) 68 | print('Deploying {} sensors...\n'.format(sensor_budget)) 69 | 70 | sensor_shop = SensorInstaller(wds) 71 | 72 | if args.deploy == 'random': 73 | sensor_shop.deploy_by_random( 74 | sensor_budget = sensor_budget, 75 | seed = args.seed 76 | ) 77 | elif args.deploy == 'dist': 78 | sensor_shop.deploy_by_shortest_path( 79 | sensor_budget = sensor_budget, 80 | weight_by = 'length' 81 | ) 82 | elif args.deploy == 'hydrodist': 83 | sensor_shop.deploy_by_shortest_path( 84 | sensor_budget = sensor_budget, 85 | weight_by = 'iweight' 86 | ) 87 | elif args.deploy == 'hds': 88 | print('Calculating nodal sensitivity to demand change...\n') 89 | ptb = np.max(wds.junctions.basedemand) / 100 90 | S = get_sensitivity_matrix(wds, ptb) 91 | sensor_shop.deploy_by_shortest_path_with_sensitivity( 92 | sensor_budget = sensor_budget, 93 | sensitivity_matrix = S, 94 | weight_by = 'iweight' 95 | ) 96 | else: 97 | print('Sensor deployment technique is unknown.\n') 98 | raise ValueError('Unknown sensor deployment technique: '+args.deploy) 99 | 100 | if args.sensorfile: 101 | sensor_nodes = np.loadtxt( 102 | os.path.join(pathToModels, args.sensorfile+'.csv'), 103 | dtype = np.int32 104 | ) 105 | sensor_shop.set_sensor_nodes(sensor_nodes) 106 | 107 | def get_node_df(elements, get_head=False): 108 | data = [] 109 | for junc in elements: 110 | ser = pd.Series({ 111 | 'uid': junc.uid, 112 | 'x': junc.coordinates[0], 113 | 'y': junc.coordinates[1], 114 | }) 115 | if get_head: 116 | ser['head'] = junc.head 117 | data.append(ser) 118 | data = pd.DataFrame(data) 119 | if get_head: 120 | data['head'] = (data['head'] - data['head'].min()) / (data['head'].max()-data['head'].min()) 121 | return data 122 | 123 | def get_elem_df(elements, nodes): 124 | data= [] 125 | df = pd.DataFrame(data) 126 | if elements: 127 | for elem in elements: 128 | ser = pd.Series({ 129 | 'uid': elem.uid, 130 | 'x1': nodes.loc[nodes['uid'] == elem.from_node.uid, 'x'].values, 131 | 'y1': nodes.loc[nodes['uid'] == elem.from_node.uid, 'y'].values, 132 | 'x2': nodes.loc[nodes['uid'] == elem.to_node.uid, 'x'].values, 133 | 'y2': nodes.loc[nodes['uid'] == elem.to_node.uid, 'y'].values, 134 | }) 135 | data.append(ser) 136 | df = pd.DataFrame(data) 137 | df['x1'] = df['x1'].str[0] 138 | df['y1'] = df['y1'].str[0] 139 | df['x2'] = df['x2'].str[0] 140 | df['y2'] = df['y2'].str[0] 141 | df['center_x'] = (df['x1']+df['x2']) / 2 142 | df['center_y'] = (df['y1']+df['y2']) / 2 143 | df['orient'] = np.degrees(np.arctan((df['y2']-df['y1'])/(df['x2']-df['x1']))) + 90 144 | return df 145 | 146 | def build_lc_from(df): 147 | line_collection = [] 148 | for elem_id in df['uid']: 149 | line_collection.append([ 150 | (df.loc[df['uid'] == elem_id, 'x1'].values[0], 151 | df.loc[df['uid'] == elem_id, 'y1'].values[0]), 152 | (df.loc[df['uid'] == elem_id, 'x2'].values[0], 153 | df.loc[df['uid'] == elem_id, 'y2'].values[0]) 154 | ]) 155 | return line_collection 156 | 157 | nodes = get_node_df(wds.nodes, get_head=True) 158 | juncs = get_node_df(wds.junctions, get_head=True) 159 | tanks = get_node_df(wds.tanks) 160 | reservoirs = get_node_df(wds.reservoirs) 161 | pipes = get_elem_df(wds.pipes, nodes) 162 | pumps =
get_elem_df(wds.pumps, nodes) 163 | valves= get_elem_df(wds.valves, nodes) 164 | pipe_collection = build_lc_from(pipes) 165 | pump_collection = build_lc_from(pumps) 166 | if not valves.empty: 167 | valve_collection = build_lc_from(valves) 168 | 169 | mew = .5 170 | fig, ax = plt.subplots() 171 | lc = mc.LineCollection(pipe_collection, linewidths=mew, color='k') 172 | ax.add_collection(lc) 173 | lc = mc.LineCollection(pump_collection, linewidths=mew, color='k') 174 | ax.add_collection(lc) 175 | if not valves.empty: 176 | lc = mc.LineCollection(valve_collection, linewidths=mew, color='k') 177 | ax.add_collection(lc) 178 | 179 | colors = [] 180 | for idx, junc in juncs.iterrows(): 181 | if sensor_shop.signal_mask()[idx]: 182 | color = (.0,.0,1.,1.) 183 | elif sensor_shop.master_node_mask()[idx]: 184 | color = (1.,0.,0.,1.) 185 | else: 186 | color = (1.,1.,1.,1.) 187 | colors.append(color) 188 | ax.plot(junc['x'], junc['y'], 'ko', mfc=color, mec='k', ms=args.nodesize, mew=mew) 189 | 190 | for _, tank in tanks.iterrows(): 191 | ax.plot(tank['x'], tank['y'], marker=7, mfc='k', mec='k', ms=7, mew=mew) 192 | for _, reservoir in reservoirs.iterrows(): 193 | ax.plot(reservoir['x'], reservoir['y'], marker='o', mfc='k', mec='k', ms=3, mew=mew) 194 | ax.plot(pumps['center_x'], pumps['center_y'], 'ko', ms=7, mfc='white', mew=mew) 195 | for _, pump in pumps.iterrows(): 196 | ax.plot(pump['center_x'], pump['center_y'], 197 | marker=(3, 0, pump['orient']), 198 | color='k', 199 | ms=5 200 | ) 201 | ax.autoscale() 202 | ax.axis('off') 203 | plt.tight_layout() 204 | 205 | # ----- ----- ----- ----- ----- ----- 206 | # Diagram export 207 | # ----- ----- ----- ----- ----- ----- 208 | if args.savepdf: 209 | fmt = 'pdf' 210 | fname = 'topo-'+args.wds+'.'+fmt 211 | fig.savefig(fname, format=fmt, bbox_inches='tight') 212 | else: 213 | plt.show() 214 | -------------------------------------------------------------------------------- /evaluation/plot_adjacency_matrix.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import argparse 4 | import numpy as np 5 | from scipy import sparse 6 | import networkx as nx 7 | from epynet import Network 8 | import matplotlib.pyplot as plt 9 | from matplotlib.colors import LogNorm 10 | import seaborn as sns 11 | import torch_geometric as pyg 12 | 13 | import sys 14 | sys.path.insert(0, os.path.join('..', 'utils')) 15 | from graph_utils import get_nx_graph 16 | 17 | # ----- ----- ----- ----- ----- ----- 18 | # Command line arguments 19 | # ----- ----- ----- ----- ----- ----- 20 | parser = argparse.ArgumentParser() 21 | parser.add_argument( 22 | '--wds', 23 | default = 'anytown', 24 | type = str 25 | ) 26 | parser.add_argument( 27 | '--adj', 28 | default = 'binary', 29 | choices = ['binary', 'weighted', 'logarithmic', 'pruned'], 30 | type = str, 31 | help = 'Type of adjacency matrix.' 
32 | ) 33 | parser.add_argument( 34 | '--savepdf', 35 | action = 'store_true' 36 | ) 37 | args = parser.parse_args() 38 | 39 | wds_path = os.path.join('..', 'water_networks', args.wds+'.inp') 40 | wds = Network(wds_path) 41 | wds.solve() 42 | 43 | G = get_nx_graph(wds, mode=args.adj) 44 | A = np.array(nx.adjacency_matrix(G).toarray()) 45 | #D = np.diag(np.sum(A, axis=0)**-.5) 46 | 47 | D = np.sum(A, axis=0) 48 | D[D != 0] = D[D != 0]**-.5 49 | D = np.diag(D) 50 | 51 | I = np.eye(np.shape(A)[0]) 52 | L = I-np.dot(np.dot(D, A), D) 53 | lambda_max = np.linalg.eigvalsh(L).max() 54 | L_tilde = 2.*L/lambda_max-I 55 | 56 | # Sanity check with PyG utils 57 | #pyg_graph = pyg.utils.from_networkx(G) 58 | #L = pyg.utils.get_laplacian( 59 | # pyg_graph.edge_index, 60 | # edge_weight = pyg_graph.weight, 61 | # normalization = 'sym' 62 | # ) 63 | #L = pyg.utils.to_dense_adj(L[0], edge_attr=L[1]).squeeze(0).numpy() 64 | #lambda_max = np.linalg.eigvalsh(L).max() 65 | #L_tilde = 2.*L/lambda_max-I 66 | 67 | print('Condition number of the Laplacian is {:f}.'.format(np.linalg.cond(L_tilde, 2))) 68 | #print('Condition number of the square of the Laplacian is {:f}.'.format(np.linalg.cond(np.power(L_tilde, 2), 2))) 69 | #print('Condition number of the cube of the Laplacian is {:f}.'.format(np.linalg.cond(np.power(L_tilde, 3), 2))) 70 | 71 | # Coloring only nonzero elements 72 | L_star = np.abs(L_tilde.copy()) 73 | vmin = L_star[L_star.nonzero()].min() 74 | vmax = L_star.max() 75 | L_star[L_star == 0] = np.nan 76 | 77 | sns.set(font_scale=1.75, style='whitegrid') 78 | figsize = (8,6) 79 | axlabel = "Node number" 80 | barlabel = "Value of the Laplacian" 81 | fig, ax = plt.subplots(figsize=figsize) 82 | hmap = sns.heatmap(L_star, 83 | norm = LogNorm(vmin = vmin, vmax = vmax), 84 | cmap = 'viridis', 85 | #linewidth = .1, 86 | #linecolor = 'k', 87 | cbar_kws= {'label': barlabel}, 88 | square = True, 89 | ax = ax 90 | ) 91 | hmap.set_xlabel(axlabel) 92 | hmap.set_ylabel(axlabel) 93 | 94 | # ----- ----- ----- ----- ----- ----- 95 | # Diagram export 96 | # ----- ----- ----- ----- ----- ----- 97 | if args.savepdf: 98 | fmt = 'pdf' 99 | fname = 'laplace-'+args.wds+'-'+args.adj+'.'+fmt 100 | fig.savefig(fname, format=fmt, bbox_inches='tight') 101 | else: 102 | plt.show() 103 | 104 | ## Self-importance 105 | #L_tilde=L 106 | #center = L_tilde.diagonal() 107 | #radius = np.sum(L_tilde, axis=0)-center 108 | #ratio = -center/radius 109 | #sns.histplot(ratio, log_scale=True) 110 | ##plt.bar(np.arange(len(ratio)), ratio, log=True) 111 | #plt.show() 112 | -------------------------------------------------------------------------------- /evaluation/plot_ecdf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import argparse 4 | import matplotlib.pyplot as plt 5 | import pandas as pd 6 | import seaborn as sns 7 | 8 | # ----- ----- ----- ----- ----- ----- 9 | # Command line arguments 10 | # ----- ----- ----- ----- ----- ----- 11 | parser = argparse.ArgumentParser() 12 | parser.add_argument( 13 | '--wds', 14 | default = 'anytown', 15 | type = str 16 | ) 17 | parser.add_argument( 18 | '--savepdf', 19 | action = 'store_true' 20 | ) 21 | args = parser.parse_args() 22 | 23 | path_to_csv = os.path.join('..', 'experiments', 'relative_error-'+args.wds+'.csv') 24 | df = pd.read_csv(path_to_csv, index_col=0) 25 | df = df[df['runid'] == 8] 26 | colnames= df.columns[:-2] 27 | df_list = [] 28 | for colname in colnames: 29 | small_df = df[['obsrat']].copy() 30 | 
small_df['runid'] = df['runid'] 31 | small_df['Relative error'] = df[colname] 32 | small_df['node'] = colname 33 | df_list.append(small_df) 34 | df = pd.concat(df_list, ignore_index=True) 35 | df['grouper'] = df['obsrat'].astype(str)+'-'+df['node'].astype(str) 36 | df['node'] = df['node'].astype(int) 37 | dta = df.groupby('grouper').mean() 38 | dta['node'] = dta['node'].astype(str) 39 | 40 | ## ___Quick & dirty___ 41 | #df = df.groupby('obsrat').mean() 42 | #df.drop('runid', axis=1, inplace=True) 43 | #dta = df.T 44 | #fig, ax = plt.subplots() 45 | #plot = sns.ecdfplot(data=dta) 46 | 47 | sns.set_style('whitegrid') 48 | fig, ax = plt.subplots() 49 | plot = sns.ecdfplot(data=dta, x='Relative error', stat='count', hue='obsrat', palette='colorblind', ax=ax) 50 | 51 | # ----- ----- ----- ----- ----- ----- 52 | # Diagram export 53 | # ----- ----- ----- ----- ----- ----- 54 | if args.savepdf: 55 | fmt = 'pdf' 56 | fname = 'ecdf-'+args.wds+'.'+fmt 57 | fig.savefig(fname, format=fmt, bbox_inches='tight') 58 | else: 59 | plt.show() 60 | -------------------------------------------------------------------------------- /evaluation/plot_swarm_plot.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import optuna 4 | import pandas as pd 5 | import matplotlib.pyplot as plt 6 | import seaborn as sns 7 | 8 | parser = argparse.ArgumentParser() 9 | parser.add_argument( 10 | '--obsrat', 11 | default = 0.05, 12 | type = float 13 | ) 14 | parser.add_argument( 15 | '--ymin', 16 | default = 0., 17 | type = float 18 | ) 19 | parser.add_argument( 20 | '--ymax', 21 | default = .003, 22 | type = float 23 | ) 24 | parser.add_argument( 25 | '--drop', 26 | default = 5, 27 | type = int 28 | ) 29 | parser.add_argument( 30 | '--runid', 31 | default = 'v4', 32 | type = str 33 | ) 34 | parser.add_argument( 35 | '--savepdf', 36 | action = 'store_true' 37 | ) 38 | args = parser.parse_args() 39 | 40 | db_path = 'sqlite:///../experiments/hyperparams/anytown_ho-'+str(args.obsrat)+'.db' 41 | study = optuna.load_study( 42 | study_name = args.runid, 43 | storage = db_path 44 | ) 45 | df = study.trials_dataframe() 46 | df.drop(index=df.nlargest(args.drop, 'value').index, inplace=True) 47 | 48 | palette = {'binary': 'grey', 'weighted': 'lightgrey', 'logarithmic': 'lightgrey'} 49 | sns.set_theme(style='whitegrid') 50 | fig = sns.violinplot( 51 | data= df, 52 | x = 'params_adjacency', 53 | y = 'value', 54 | inner = None, 55 | order = ['binary', 'weighted', 'logarithmic'], 56 | palette = palette 57 | ) 58 | fig = sns.swarmplot( 59 | data= df, 60 | x = 'params_adjacency', 61 | y = 'value', 62 | hue = 'params_n_layers', 63 | order = ['binary', 'weighted', 'logarithmic'] 64 | ) 65 | fig.set_xlabel('adjacency matrix') 66 | fig.set_ylabel('loss') 67 | fig.set_ylim([args.ymin, args.ymax]) 68 | plt.legend(loc='center left', title='layers') 69 | 70 | if args.savepdf: 71 | fmt = 'pdf' 72 | fname = 'swarm-'+str(args.obsrat)+'.'+fmt 73 | fig = fig.get_figure() 74 | fig.savefig(fname, format=fmt, bbox_inches='tight') 75 | else: 76 | plt.show() 77 | -------------------------------------------------------------------------------- /evaluation/plot_swarms.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | python plot_swarm_plot.py --drop 10 --obsrat 0.05 --ymax 0.003 --ymin 0.0024 --savepdf 3 | python plot_swarm_plot.py --drop 15 --obsrat 0.8 --ymax .00008 --savepdf 4 | 
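plot_swarm_plot.py expects an existing Optuna study name via --runid (default 'v4'). When the study names stored in a hyperparameter database are unknown, they can be listed first; a minimal sketch, assuming the SQLite storage convention used by plot_swarm_plot.py, with the obsrat value 0.05 purely illustrative:

```python
# List the Optuna studies stored in one of the hyperparameter databases.
# The storage URL mirrors the convention in plot_swarm_plot.py; substitute
# any obsrat for which a database exists under experiments/hyperparams.
import optuna

storage = 'sqlite:///../experiments/hyperparams/anytown_ho-0.05.db'
for summary in optuna.get_all_study_summaries(storage=storage):
    print(summary.study_name, summary.n_trials)
```

optuna.get_all_study_summaries returns one StudySummary per study, so the printed names can be passed directly as --runid.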
-------------------------------------------------------------------------------- /evaluation/process_Taylor_metrics.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 1, 6 | "id": "protected-medicine", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "import os\n", 11 | "import numpy as np\n", 12 | "import pandas as pd" 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": 2, 18 | "id": "delayed-field", 19 | "metadata": {}, 20 | "outputs": [ 21 | { 22 | "data": { 23 | "text/html": [ 24 | "
\n", 25 | "\n", 38 | "\n", 39 | " \n", 40 | " \n", 41 | " \n", 42 | " \n", 43 | " \n", 44 | " \n", 45 | " \n", 46 | " \n", 47 | " \n", 48 | " \n", 49 | " \n", 50 | " \n", 51 | " \n", 52 | " \n", 53 | " \n", 54 | " \n", 55 | " \n", 56 | " \n", 57 | " \n", 58 | " \n", 59 | " \n", 60 | " \n", 61 | " \n", 62 | " \n", 63 | " \n", 64 | " \n", 65 | " \n", 66 | " \n", 67 | "
012
0anytown-fixrnd-0.05-binary-def-1-orig329.14571918.142374
1anytown-fixrnd-0.05-binary-def-1-gcn305.90214317.431887
2anytown-fixrnd-0.05-binary-def-2-gcn289.26501516.723429
\n", 68 | "
" 69 | ], 70 | "text/plain": [ 71 | " 0 1 2\n", 72 | "0 anytown-fixrnd-0.05-binary-def-1-orig 329.145719 18.142374\n", 73 | "1 anytown-fixrnd-0.05-binary-def-1-gcn 305.902143 17.431887\n", 74 | "2 anytown-fixrnd-0.05-binary-def-2-gcn 289.265015 16.723429" 75 | ] 76 | }, 77 | "execution_count": 2, 78 | "metadata": {}, 79 | "output_type": "execute_result" 80 | } 81 | ], 82 | "source": [ 83 | "df = pd.read_csv(os.path.join('..', 'experiments', 'Taylor_metrics.csv'), header=None)\n", 84 | "df.head(3)" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": 3, 90 | "id": "banner-attraction", 91 | "metadata": {}, 92 | "outputs": [], 93 | "source": [ 94 | "df.columns = ['run_id', 'MSEc', 'sigma_pred']\n", 95 | "df['sigma_true'] = 0\n", 96 | "df['seed'] = 0\n", 97 | "df['wds'] = [elem.split('-')[0] for elem in df['run_id']]\n", 98 | "df['placement'] = [elem.split('-')[1] for elem in df['run_id']]\n", 99 | "df['obs_rat'] = [elem.split('-')[2] for elem in df['run_id']]\n", 100 | "df['adjacency'] = [elem.split('-')[3] for elem in df['run_id']]\n", 101 | "df['tag'] = [elem.split('-')[4] for elem in df['run_id']]\n", 102 | "df['num'] = [elem.split('-')[5] for elem in df['run_id']]\n", 103 | "df['model'] = [elem.split('-')[6] for elem in df['run_id']]" 104 | ] 105 | }, 106 | { 107 | "cell_type": "code", 108 | "execution_count": 4, 109 | "id": "ranging-technical", 110 | "metadata": {}, 111 | "outputs": [ 112 | { 113 | "data": { 114 | "text/html": [ 115 | "
\n", 116 | "\n", 129 | "\n", 130 | " \n", 131 | " \n", 132 | " \n", 133 | " \n", 134 | " \n", 135 | " \n", 136 | " \n", 137 | " \n", 138 | " \n", 139 | " \n", 140 | " \n", 141 | " \n", 142 | " \n", 143 | " \n", 144 | " \n", 145 | " \n", 146 | " \n", 147 | " \n", 148 | " \n", 149 | " \n", 150 | " \n", 151 | " \n", 152 | " \n", 153 | " \n", 154 | " \n", 155 | " \n", 156 | " \n", 157 | " \n", 158 | " \n", 159 | " \n", 160 | " \n", 161 | " \n", 162 | " \n", 163 | " \n", 164 | " \n", 165 | " \n", 166 | " \n", 167 | " \n", 168 | " \n", 169 | " \n", 170 | " \n", 171 | " \n", 172 | " \n", 173 | " \n", 174 | " \n", 175 | " \n", 176 | " \n", 177 | " \n", 178 | " \n", 179 | " \n", 180 | " \n", 181 | " \n", 182 | " \n", 183 | " \n", 184 | " \n", 185 | " \n", 186 | " \n", 187 | " \n", 188 | " \n", 189 | " \n", 190 | " \n", 191 | " \n", 192 | " \n", 193 | " \n", 194 | "
run_idMSEcsigma_predsigma_trueseedwdsplacementobs_ratadjacencytagnummodel
0anytown-fixrnd-0.05-binary-def-1-orig329.14571918.14237400anytownfixrnd0.05binarydef1orig
1anytown-fixrnd-0.05-binary-def-1-gcn305.90214317.43188700anytownfixrnd0.05binarydef1gcn
2anytown-fixrnd-0.05-binary-def-2-gcn289.26501516.72342900anytownfixrnd0.05binarydef2gcn
\n", 195 | "
" 196 | ], 197 | "text/plain": [ 198 | " run_id MSEc sigma_pred sigma_true \\\n", 199 | "0 anytown-fixrnd-0.05-binary-def-1-orig 329.145719 18.142374 0 \n", 200 | "1 anytown-fixrnd-0.05-binary-def-1-gcn 305.902143 17.431887 0 \n", 201 | "2 anytown-fixrnd-0.05-binary-def-2-gcn 289.265015 16.723429 0 \n", 202 | "\n", 203 | " seed wds placement obs_rat adjacency tag num model \n", 204 | "0 0 anytown fixrnd 0.05 binary def 1 orig \n", 205 | "1 0 anytown fixrnd 0.05 binary def 1 gcn \n", 206 | "2 0 anytown fixrnd 0.05 binary def 2 gcn " 207 | ] 208 | }, 209 | "execution_count": 4, 210 | "metadata": {}, 211 | "output_type": "execute_result" 212 | } 213 | ], 214 | "source": [ 215 | "df.head(3)" 216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "execution_count": 5, 221 | "id": "unnecessary-gardening", 222 | "metadata": {}, 223 | "outputs": [], 224 | "source": [ 225 | "for wds in df['wds'].unique():\n", 226 | " sigma_true = df['sigma_pred'][(df['wds'] == wds) & (df['model'] == 'orig')].tolist()[0]\n", 227 | " df.loc[df['wds'] == wds, 'sigma_true'] = sigma_true" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": 6, 233 | "id": "visible-sixth", 234 | "metadata": {}, 235 | "outputs": [ 236 | { 237 | "data": { 238 | "text/html": [ 239 | "
\n", 240 | "\n", 253 | "\n", 254 | " \n", 255 | " \n", 256 | " \n", 257 | " \n", 258 | " \n", 259 | " \n", 260 | " \n", 261 | " \n", 262 | " \n", 263 | " \n", 264 | " \n", 265 | " \n", 266 | " \n", 267 | " \n", 268 | " \n", 269 | " \n", 270 | " \n", 271 | " \n", 272 | " \n", 273 | " \n", 274 | " \n", 275 | " \n", 276 | " \n", 277 | " \n", 278 | " \n", 279 | " \n", 280 | " \n", 281 | " \n", 282 | " \n", 283 | " \n", 284 | " \n", 285 | " \n", 286 | " \n", 287 | " \n", 288 | " \n", 289 | " \n", 290 | " \n", 291 | " \n", 292 | " \n", 293 | " \n", 294 | " \n", 295 | " \n", 296 | " \n", 297 | " \n", 298 | " \n", 299 | " \n", 300 | " \n", 301 | " \n", 302 | " \n", 303 | " \n", 304 | " \n", 305 | " \n", 306 | " \n", 307 | " \n", 308 | " \n", 309 | " \n", 310 | " \n", 311 | " \n", 312 | " \n", 313 | " \n", 314 | " \n", 315 | " \n", 316 | " \n", 317 | " \n", 318 | " \n", 319 | " \n", 320 | " \n", 321 | " \n", 322 | " \n", 323 | " \n", 324 | " \n", 325 | " \n", 326 | " \n", 327 | " \n", 328 | " \n", 329 | " \n", 330 | " \n", 331 | " \n", 332 | " \n", 333 | " \n", 334 | " \n", 335 | " \n", 336 | " \n", 337 | " \n", 338 | " \n", 339 | " \n", 340 | " \n", 341 | " \n", 342 | " \n", 343 | " \n", 344 | " \n", 345 | " \n", 346 | " \n", 347 | " \n", 348 | "
run_idMSEcsigma_predsigma_trueseedwdsplacementobs_ratadjacencytagnummodel
0anytown-fixrnd-0.05-binary-def-1-orig329.14571918.14237418.1423740anytownfixrnd0.05binarydef1orig
1anytown-fixrnd-0.05-binary-def-1-gcn305.90214317.43188718.1423740anytownfixrnd0.05binarydef1gcn
2anytown-fixrnd-0.05-binary-def-2-gcn289.26501516.72342918.1423740anytownfixrnd0.05binarydef2gcn
3anytown-fixrnd-0.05-binary-def-3-gcn274.17530216.31168218.1423740anytownfixrnd0.05binarydef3gcn
4anytown-fixrnd-0.05-binary-def-4-gcn277.58652516.27050418.1423740anytownfixrnd0.05binarydef4gcn
\n", 349 | "
" 350 | ], 351 | "text/plain": [ 352 | " run_id MSEc sigma_pred sigma_true \\\n", 353 | "0 anytown-fixrnd-0.05-binary-def-1-orig 329.145719 18.142374 18.142374 \n", 354 | "1 anytown-fixrnd-0.05-binary-def-1-gcn 305.902143 17.431887 18.142374 \n", 355 | "2 anytown-fixrnd-0.05-binary-def-2-gcn 289.265015 16.723429 18.142374 \n", 356 | "3 anytown-fixrnd-0.05-binary-def-3-gcn 274.175302 16.311682 18.142374 \n", 357 | "4 anytown-fixrnd-0.05-binary-def-4-gcn 277.586525 16.270504 18.142374 \n", 358 | "\n", 359 | " seed wds placement obs_rat adjacency tag num model \n", 360 | "0 0 anytown fixrnd 0.05 binary def 1 orig \n", 361 | "1 0 anytown fixrnd 0.05 binary def 1 gcn \n", 362 | "2 0 anytown fixrnd 0.05 binary def 2 gcn \n", 363 | "3 0 anytown fixrnd 0.05 binary def 3 gcn \n", 364 | "4 0 anytown fixrnd 0.05 binary def 4 gcn " 365 | ] 366 | }, 367 | "execution_count": 6, 368 | "metadata": {}, 369 | "output_type": "execute_result" 370 | } 371 | ], 372 | "source": [ 373 | "df.head()" 374 | ] 375 | }, 376 | { 377 | "cell_type": "code", 378 | "execution_count": 7, 379 | "id": "sophisticated-mount", 380 | "metadata": {}, 381 | "outputs": [], 382 | "source": [ 383 | "df['corr_coeff'] = df['MSEc'] / (df['sigma_true']*df['sigma_pred'])" 384 | ] 385 | }, 386 | { 387 | "cell_type": "markdown", 388 | "id": "5a38a962", 389 | "metadata": {}, 390 | "source": [ 391 | "### Handling results from sensor placement with random seeds" 392 | ] 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": 8, 397 | "id": "82c0786d", 398 | "metadata": {}, 399 | "outputs": [], 400 | "source": [ 401 | "seeds = np.array([1, 8, 5266, 739, 88867])\n", 402 | "mask = df['placement'] == 'xrandom'\n", 403 | "df.loc[mask, 'seed'] = seeds[df.loc[mask, 'num'].values.astype(int) % len(seeds)]\n", 404 | "mask = df['placement'] == 'random'\n", 405 | "df.loc[mask, 'seed'] = seeds[df.loc[mask, 'num'].values.astype(int) % len(seeds)]" 406 | ] 407 | }, 408 | { 409 | "cell_type": "code", 410 | "execution_count": 9, 411 | "id": "d3b0ee26", 412 | "metadata": {}, 413 | "outputs": [ 414 | { 415 | "data": { 416 | "text/html": [ 417 | "
\n", 418 | "\n", 431 | "\n", 432 | " \n", 433 | " \n", 434 | " \n", 435 | " \n", 436 | " \n", 437 | " \n", 438 | " \n", 439 | " \n", 440 | " \n", 441 | " \n", 442 | " \n", 443 | " \n", 444 | " \n", 445 | " \n", 446 | " \n", 447 | " \n", 448 | " \n", 449 | " \n", 450 | " \n", 451 | " \n", 452 | " \n", 453 | " \n", 454 | " \n", 455 | " \n", 456 | " \n", 457 | " \n", 458 | " \n", 459 | " \n", 460 | " \n", 461 | " \n", 462 | " \n", 463 | " \n", 464 | " \n", 465 | " \n", 466 | " \n", 467 | " \n", 468 | " \n", 469 | " \n", 470 | " \n", 471 | " \n", 472 | " \n", 473 | " \n", 474 | " \n", 475 | " \n", 476 | " \n", 477 | " \n", 478 | " \n", 479 | " \n", 480 | " \n", 481 | " \n", 482 | " \n", 483 | " \n", 484 | " \n", 485 | " \n", 486 | " \n", 487 | " \n", 488 | " \n", 489 | " \n", 490 | " \n", 491 | " \n", 492 | " \n", 493 | " \n", 494 | " \n", 495 | " \n", 496 | " \n", 497 | " \n", 498 | " \n", 499 | " \n", 500 | " \n", 501 | " \n", 502 | " \n", 503 | " \n", 504 | " \n", 505 | " \n", 506 | " \n", 507 | " \n", 508 | " \n", 509 | " \n", 510 | " \n", 511 | " \n", 512 | " \n", 513 | " \n", 514 | " \n", 515 | " \n", 516 | " \n", 517 | " \n", 518 | " \n", 519 | " \n", 520 | " \n", 521 | " \n", 522 | " \n", 523 | " \n", 524 | " \n", 525 | " \n", 526 | " \n", 527 | " \n", 528 | " \n", 529 | " \n", 530 | " \n", 531 | " \n", 532 | " \n", 533 | " \n", 534 | " \n", 535 | " \n", 536 | " \n", 537 | " \n", 538 | " \n", 539 | " \n", 540 | " \n", 541 | " \n", 542 | " \n", 543 | " \n", 544 | " \n", 545 | " \n", 546 | " \n", 547 | " \n", 548 | " \n", 549 | " \n", 550 | " \n", 551 | " \n", 552 | " \n", 553 | " \n", 554 | " \n", 555 | " \n", 556 | " \n", 557 | " \n", 558 | " \n", 559 | " \n", 560 | " \n", 561 | " \n", 562 | " \n", 563 | " \n", 564 | " \n", 565 | " \n", 566 | " \n", 567 | " \n", 568 | " \n", 569 | " \n", 570 | " \n", 571 | " \n", 572 | " \n", 573 | " \n", 574 | " \n", 575 | " \n", 576 | " \n", 577 | " \n", 578 | " \n", 579 | " \n", 580 | " \n", 581 | " \n", 582 | " \n", 583 | " \n", 584 | " \n", 585 | " \n", 586 | " \n", 587 | " \n", 588 | " \n", 589 | " \n", 590 | " \n", 591 | " \n", 592 | " \n", 593 | " \n", 594 | " \n", 595 | " \n", 596 | " \n", 597 | " \n", 598 | " \n", 599 | " \n", 600 | " \n", 601 | " \n", 602 | " \n", 603 | " \n", 604 | " \n", 605 | " \n", 606 | " \n", 607 | " \n", 608 | " \n", 609 | " \n", 610 | " \n", 611 | " \n", 612 | "
run_idMSEcsigma_predsigma_trueseedwdsplacementobs_ratadjacencytagnummodelcorr_coeff
1302anytown-random-1-binary-ms-13-gcn323.56919617.92156518.142374739anytownrandom1binaryms13gcn0.995170
1303ctown-random-5-binary-ms-13-gcn1212.77036134.80265734.856233739ctownrandom5binaryms13gcn0.999737
1304richmond-random-10-binary-ms-13-gcn1096.03012432.89529333.914756739richmondrandom10binaryms13gcn0.982426
1305anytown-random-1-binary-ms-14-gcn287.54939116.64534918.14237488867anytownrandom1binaryms14gcn0.952194
1306ctown-random-5-binary-ms-14-gcn1212.03181634.78507034.85623388867ctownrandom5binaryms14gcn0.999633
1307richmond-random-10-binary-ms-14-gcn1149.29398533.90846033.91475688867richmondrandom10binaryms14gcn0.999389
1308anytown-random-1-binary-ms-15-gcn326.20980218.05131518.1423741anytownrandom1binaryms15gcn0.996080
1309ctown-random-5-binary-ms-15-gcn1210.52604534.75883034.8562331ctownrandom5binaryms15gcn0.999145
1310richmond-random-10-binary-ms-15-gcn1104.74359733.18213233.9147561richmondrandom10binaryms15gcn0.981677
1311anytown-hydrodist-1-binary-ms-2-gcn329.27378618.27314018.1423740anytownhydrodist1binaryms2gcn0.993230
\n", 613 | "
" 614 | ], 615 | "text/plain": [ 616 | " run_id MSEc sigma_pred \\\n", 617 | "1302 anytown-random-1-binary-ms-13-gcn 323.569196 17.921565 \n", 618 | "1303 ctown-random-5-binary-ms-13-gcn 1212.770361 34.802657 \n", 619 | "1304 richmond-random-10-binary-ms-13-gcn 1096.030124 32.895293 \n", 620 | "1305 anytown-random-1-binary-ms-14-gcn 287.549391 16.645349 \n", 621 | "1306 ctown-random-5-binary-ms-14-gcn 1212.031816 34.785070 \n", 622 | "1307 richmond-random-10-binary-ms-14-gcn 1149.293985 33.908460 \n", 623 | "1308 anytown-random-1-binary-ms-15-gcn 326.209802 18.051315 \n", 624 | "1309 ctown-random-5-binary-ms-15-gcn 1210.526045 34.758830 \n", 625 | "1310 richmond-random-10-binary-ms-15-gcn 1104.743597 33.182132 \n", 626 | "1311 anytown-hydrodist-1-binary-ms-2-gcn 329.273786 18.273140 \n", 627 | "\n", 628 | " sigma_true seed wds placement obs_rat adjacency tag num model \\\n", 629 | "1302 18.142374 739 anytown random 1 binary ms 13 gcn \n", 630 | "1303 34.856233 739 ctown random 5 binary ms 13 gcn \n", 631 | "1304 33.914756 739 richmond random 10 binary ms 13 gcn \n", 632 | "1305 18.142374 88867 anytown random 1 binary ms 14 gcn \n", 633 | "1306 34.856233 88867 ctown random 5 binary ms 14 gcn \n", 634 | "1307 33.914756 88867 richmond random 10 binary ms 14 gcn \n", 635 | "1308 18.142374 1 anytown random 1 binary ms 15 gcn \n", 636 | "1309 34.856233 1 ctown random 5 binary ms 15 gcn \n", 637 | "1310 33.914756 1 richmond random 10 binary ms 15 gcn \n", 638 | "1311 18.142374 0 anytown hydrodist 1 binary ms 2 gcn \n", 639 | "\n", 640 | " corr_coeff \n", 641 | "1302 0.995170 \n", 642 | "1303 0.999737 \n", 643 | "1304 0.982426 \n", 644 | "1305 0.952194 \n", 645 | "1306 0.999633 \n", 646 | "1307 0.999389 \n", 647 | "1308 0.996080 \n", 648 | "1309 0.999145 \n", 649 | "1310 0.981677 \n", 650 | "1311 0.993230 " 651 | ] 652 | }, 653 | "execution_count": 9, 654 | "metadata": {}, 655 | "output_type": "execute_result" 656 | } 657 | ], 658 | "source": [ 659 | "df.tail(10)" 660 | ] 661 | }, 662 | { 663 | "cell_type": "code", 664 | "execution_count": 10, 665 | "id": "personal-knock", 666 | "metadata": {}, 667 | "outputs": [], 668 | "source": [ 669 | "df.to_csv(os.path.join('..', 'experiments', 'Taylor_metrics_processed.csv'))" 670 | ] 671 | } 672 | ], 673 | "metadata": { 674 | "kernelspec": { 675 | "display_name": "Python 3 (ipykernel)", 676 | "language": "python", 677 | "name": "python3" 678 | }, 679 | "language_info": { 680 | "codemirror_mode": { 681 | "name": "ipython", 682 | "version": 3 683 | }, 684 | "file_extension": ".py", 685 | "mimetype": "text/x-python", 686 | "name": "python", 687 | "nbconvert_exporter": "python", 688 | "pygments_lexer": "ipython3", 689 | "version": "3.9.7" 690 | } 691 | }, 692 | "nbformat": 4, 693 | "nbformat_minor": 5 694 | } 695 | -------------------------------------------------------------------------------- /evaluation/taylorDiagram.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Copyright: This document has been placed in the public domain. 3 | 4 | """ 5 | Taylor diagram (Taylor, 2001) implementation. 6 | 7 | Note: If you have found these software useful for your research, I would 8 | appreciate an acknowledgment. 9 | 10 | Modified by Gergely Hajgató@2021-09-16. 
11 | """ 12 | 13 | __version__ = "Time-stamp: <2018-12-06 11:43:41 ycopin>" 14 | __author__ = "Yannick Copin " 15 | 16 | import numpy as NP 17 | import matplotlib.pyplot as PLT 18 | 19 | 20 | class TaylorDiagram(object): 21 | """ 22 | Taylor diagram. 23 | 24 | Plot model standard deviation and correlation to reference (data) 25 | sample in a single-quadrant polar plot, with r=stddev and 26 | theta=arccos(correlation). 27 | """ 28 | 29 | def __init__(self, refstd, 30 | fig=None, rect=111, label='_', srange=(0, 1.5), extend=None): 31 | """ 32 | Set up Taylor diagram axes, i.e. single quadrant polar 33 | plot, using `mpl_toolkits.axisartist.floating_axes`. 34 | 35 | Parameters: 36 | 37 | * refstd: reference standard deviation to be compared to 38 | * fig: input Figure or None 39 | * rect: subplot definition 40 | * label: reference label 41 | * srange: stddev axis extension, in units of *refstd* 42 | * extend: extend diagram to negative correlations 43 | """ 44 | 45 | from matplotlib.projections import PolarAxes 46 | import mpl_toolkits.axisartist.floating_axes as FA 47 | import mpl_toolkits.axisartist.grid_finder as GF 48 | 49 | self.refstd = refstd # Reference standard deviation 50 | 51 | tr = PolarAxes.PolarTransform() 52 | 53 | # Correlation labels 54 | rlocs = NP.array([0, 0.2, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 1]) 55 | if extend: 56 | # Diagram extended to negative correlations 57 | assert (extend <= 2) and (extend > 1) 58 | self.tmax = extend * NP.pi/2 59 | rlocs = NP.concatenate((-rlocs[:0:-1], rlocs)) 60 | else: 61 | # Diagram limited to positive correlations 62 | self.tmax = NP.pi/2 63 | tlocs = NP.arccos(rlocs) # Conversion to polar angles 64 | gl1 = GF.FixedLocator(tlocs) # Positions 65 | tf1 = GF.DictFormatter(dict(zip(tlocs, map(str, rlocs)))) 66 | 67 | # Standard deviation axis extent (in units of reference stddev) 68 | self.smin = srange[0] * self.refstd 69 | self.smax = srange[1] * self.refstd 70 | 71 | ghelper = FA.GridHelperCurveLinear( 72 | tr, 73 | extremes=(0, self.tmax, self.smin, self.smax), 74 | grid_locator1=gl1, tick_formatter1=tf1) 75 | 76 | if fig is None: 77 | fig = PLT.figure() 78 | 79 | ax = FA.FloatingSubplot(fig, rect, grid_helper=ghelper) 80 | fig.add_subplot(ax) 81 | 82 | # Adjust axes 83 | ax.axis["top"].set_axis_direction("bottom") # "Angle axis" 84 | ax.axis["top"].toggle(ticklabels=True, label=True) 85 | ax.axis["top"].major_ticklabels.set_axis_direction("top") 86 | ax.axis["top"].label.set_axis_direction("top") 87 | ax.axis["top"].label.set_text("Correlation") 88 | 89 | ax.axis["left"].set_axis_direction("bottom") # "X axis" 90 | ax.axis["left"].label.set_text("Standard deviation") 91 | 92 | ax.axis["right"].set_axis_direction("top") # "Y-axis" 93 | ax.axis["right"].toggle(ticklabels=True) 94 | ax.axis["right"].major_ticklabels.set_axis_direction( 95 | "bottom" if (extend and extend>1.5) else "left") 96 | 97 | if self.smin: 98 | ax.axis["bottom"].toggle(ticklabels=False, label=False) 99 | else: 100 | ax.axis["bottom"].set_visible(False) # Unused 101 | 102 | self._ax = ax # Graphical axes 103 | self.ax = ax.get_aux_axes(tr) # Polar coordinates 104 | 105 | # Add reference point and stddev contour 106 | l, = self.ax.plot([0], self.refstd, 'k*', 107 | ls='', ms=10, label=label, clip_on=False) 108 | t = NP.linspace(0, self.tmax) 109 | r = NP.zeros_like(t) + self.refstd 110 | self.ax.plot(t, r, 'k--', label='_') 111 | 112 | # Collect sample points for latter use (e.g. 
legend) 113 | self.samplePoints = [l] 114 | 115 | def add_sample(self, stddev, corrcoef, *args, **kwargs): 116 | """ 117 | Add sample (*stddev*, *corrcoeff*) to the Taylor 118 | diagram. *args* and *kwargs* are directly propagated to the 119 | `Figure.plot` command. 120 | """ 121 | 122 | l, = self.ax.plot(NP.arccos(corrcoef), stddev, 123 | *args, **kwargs) # (theta, radius) 124 | self.samplePoints.append(l) 125 | 126 | return l 127 | 128 | def add_grid(self, *args, **kwargs): 129 | """Add a grid.""" 130 | 131 | self._ax.grid(*args, **kwargs) 132 | 133 | def add_contours(self, levels=5, **kwargs): 134 | """ 135 | Add constant centered RMS difference contours, defined by *levels*. 136 | """ 137 | 138 | rs, ts = NP.meshgrid(NP.linspace(self.smin, self.smax), 139 | NP.linspace(0, self.tmax)) 140 | # Compute centered RMS difference 141 | rms = NP.sqrt(self.refstd**2 + rs**2 - 2*self.refstd*rs*NP.cos(ts)) 142 | 143 | contours = self.ax.contour(ts, rs, rms, levels, **kwargs) 144 | 145 | return contours 146 | 147 | 148 | def test1(): 149 | """Display a Taylor diagram in a separate axis.""" 150 | 151 | # Reference dataset 152 | x = NP.linspace(0, 4*NP.pi, 100) 153 | data = NP.sin(x) 154 | refstd = data.std(ddof=1) # Reference standard deviation 155 | 156 | # Generate models 157 | m1 = data + 0.2*NP.random.randn(len(x)) # Model 1 158 | m2 = 0.8*data + .1*NP.random.randn(len(x)) # Model 2 159 | m3 = NP.sin(x-NP.pi/10) # Model 3 160 | 161 | # Compute stddev and correlation coefficient of models 162 | samples = NP.array([ [m.std(ddof=1), NP.corrcoef(data, m)[0, 1]] 163 | for m in (m1, m2, m3)]) 164 | 165 | fig = PLT.figure(figsize=(10, 4)) 166 | 167 | ax1 = fig.add_subplot(1, 2, 1, xlabel='X', ylabel='Y') 168 | # Taylor diagram 169 | dia = TaylorDiagram(refstd, fig=fig, rect=122, label="Reference", 170 | srange=(0.5, 1.5)) 171 | 172 | colors = PLT.matplotlib.cm.jet(NP.linspace(0, 1, len(samples))) 173 | 174 | ax1.plot(x, data, 'ko', label='Data') 175 | for i, m in enumerate([m1, m2, m3]): 176 | ax1.plot(x, m, c=colors[i], label='Model %d' % (i+1)) 177 | ax1.legend(numpoints=1, prop=dict(size='small'), loc='best') 178 | 179 | # Add the models to Taylor diagram 180 | for i, (stddev, corrcoef) in enumerate(samples): 181 | dia.add_sample(stddev, corrcoef, 182 | marker='$%d$' % (i+1), ms=10, ls='', 183 | mfc=colors[i], mec=colors[i], 184 | label="Model %d" % (i+1)) 185 | 186 | # Add grid 187 | dia.add_grid() 188 | 189 | # Add RMS contours, and label them 190 | contours = dia.add_contours(colors='0.5') 191 | PLT.clabel(contours, inline=1, fontsize=10, fmt='%.2f') 192 | 193 | # Add a figure legend 194 | fig.legend(dia.samplePoints, 195 | [ p.get_label() for p in dia.samplePoints ], 196 | numpoints=1, prop=dict(size='small'), loc='upper right') 197 | 198 | return dia 199 | 200 | 201 | def test2(): 202 | """ 203 | Climatology-oriented example (after iteration w/ Michael A. Rawlins). 
204 | """ 205 | 206 | # Reference std 207 | stdref = 48.491 208 | 209 | # Samples std,rho,name 210 | samples = [[25.939, 0.385, "Model A"], 211 | [29.593, 0.509, "Model B"], 212 | [33.125, 0.585, "Model C"], 213 | [29.593, 0.509, "Model D"], 214 | [71.215, 0.473, "Model E"], 215 | [27.062, 0.360, "Model F"], 216 | [38.449, 0.342, "Model G"], 217 | [35.807, 0.609, "Model H"], 218 | [17.831, 0.360, "Model I"]] 219 | 220 | fig = PLT.figure() 221 | 222 | dia = TaylorDiagram(stdref, fig=fig, label='Reference', extend=2) 223 | dia.samplePoints[0].set_color('r') # Mark reference point as a red star 224 | 225 | # Add models to Taylor diagram 226 | for i, (stddev, corrcoef, name) in enumerate(samples): 227 | dia.add_sample(stddev, corrcoef, 228 | marker='$%d$' % (i+1), ms=10, ls='', 229 | mfc='k', mec='k', 230 | label=name) 231 | 232 | # Add RMS contours, and label them 233 | contours = dia.add_contours(levels=5, colors='0.5') # 5 levels in grey 234 | PLT.clabel(contours, inline=1, fontsize=10, fmt='%.0f') 235 | 236 | dia.add_grid() # Add grid 237 | dia._ax.axis[:].major_ticks.set_tick_out(True) # Put ticks outward 238 | 239 | # Add a figure legend and title 240 | fig.legend(dia.samplePoints, 241 | [ p.get_label() for p in dia.samplePoints ], 242 | numpoints=1, prop=dict(size='small'), loc='upper right') 243 | fig.suptitle("Taylor diagram", size='x-large') # Figure title 244 | 245 | return dia 246 | 247 | 248 | if __name__ == '__main__': 249 | 250 | dia = test1() 251 | dia = test2() 252 | 253 | PLT.show() 254 | -------------------------------------------------------------------------------- /experiments/hyperparams/db/db_anytown_doe_pumpfed_1.yaml: -------------------------------------------------------------------------------- 1 | wds : anytown 2 | nScenes : 1000 3 | genAlgo : doe 4 | 5 | chunks: 6 | height : 1000 7 | width : null 8 | 9 | demand: 10 | nodalLo : .7 11 | nodalHi : 1.3 12 | totalLo : .3 13 | totalHi : 1.1 14 | 15 | pumpSpeed: 16 | limitLo : .9 17 | limitHi : 1.4 18 | 19 | feed: 20 | gravityFedProba : 0 21 | pumpOffProba : 0 22 | 23 | data: 24 | vldSplit : 0.2 25 | tstSplit : 0.2 26 | -------------------------------------------------------------------------------- /experiments/hyperparams/db/db_ctown_doe_pumpfed_1.yaml: -------------------------------------------------------------------------------- 1 | wds : ctown 2 | nScenes : 10000 3 | genAlgo : doe 4 | 5 | chunks: 6 | height : 10000 7 | width : null 8 | 9 | demand: 10 | nodalLo : .7 11 | nodalHi : 1.0 12 | totalLo : .3 13 | totalHi : 1.0 14 | 15 | pumpSpeed: 16 | limitLo : .8 17 | limitHi : 1.2 18 | 19 | feed: 20 | gravityFedProba : 0 21 | pumpOffProba : 0 22 | 23 | data: 24 | vldSplit : 0.2 25 | tstSplit : 0.2 26 | -------------------------------------------------------------------------------- /experiments/hyperparams/db/db_richmond_doe_pumpfed_1.yaml: -------------------------------------------------------------------------------- 1 | wds : richmond 2 | nScenes : 20000 3 | genAlgo : doe 4 | 5 | chunks: 6 | height : 20000 7 | width : null 8 | 9 | demand: 10 | nodalLo : .3 11 | nodalHi : 1.0 12 | totalLo : .3 13 | totalHi : 1.0 14 | 15 | pumpSpeed: 16 | limitLo : .95 17 | limitHi : 1.5 18 | 19 | feed: 20 | gravityFedProba : 0 21 | pumpOffProba : 0 22 | 23 | data: 24 | vldSplit : 0.2 25 | tstSplit : 0.2 26 | -------------------------------------------------------------------------------- /experiments/logs/.gitkeep: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/BME-SmartLab/GraphConvWat/be97b45fbc7dfdba22bb1ee406424a7c568120e5/experiments/logs/.gitkeep -------------------------------------------------------------------------------- /experiments/models/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BME-SmartLab/GraphConvWat/be97b45fbc7dfdba22bb1ee406424a7c568120e5/experiments/models/.gitkeep -------------------------------------------------------------------------------- /generate_dta.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import argparse 4 | import datetime 5 | import time 6 | import pytz 7 | from psutil import virtual_memory 8 | from tqdm import tqdm 9 | import yaml 10 | import zarr 11 | import numpy as np 12 | import dask.array as da 13 | import dask 14 | import ray 15 | import pyDOE2 as doe 16 | from epynet import Network 17 | 18 | # ----- ----- ----- ----- ----- 19 | # Parsing command line arguments 20 | # ----- ----- ----- ----- ----- 21 | parser = argparse.ArgumentParser() 22 | parser.add_argument('--params', 23 | default = 'db_anytown_doe_pumpfed_1', 24 | type = str, 25 | help = "Name of the hyperparams to use.") 26 | parser.add_argument('--nproc', 27 | default = 4, 28 | type = int, 29 | help = "Number of worker processes to spawn.") 30 | parser.add_argument('--batch', 31 | default = 50, 32 | type = int, 33 | help = "Batch size.") 34 | args = parser.parse_args() 35 | 36 | # ----- ----- ----- ----- ----- 37 | # Paths 38 | # ----- ----- ----- ----- ----- 39 | pathToRoot = os.path.dirname(os.path.realpath(__file__)) 40 | pathToExps = os.path.join(pathToRoot, 'experiments') 41 | pathToParam = os.path.join(pathToExps, 'hyperparams', 'db', args.params+'.yaml') 42 | with open(pathToParam, 'r') as fin: 43 | params = yaml.load(fin, Loader=yaml.Loader) 44 | pathToNetwork = os.path.join(pathToRoot, 'water_networks', params['wds']+'.inp') 45 | pathToDB = os.path.join(pathToRoot, 'data', args.params) 46 | 47 | class SequenceGenerator(): 48 | """Sequence generator for parametric studies or data generation from experiments. 49 | Random number generation is not seeded yet, but the experiment design can be arbitrarily large. 50 | An experiment design generated by LHS has to fit into physical RAM. 51 | 400M data points in fp32 cost ~1.5 GB RAM (400e6 * 4 bytes = 1.6e9 bytes ~ 1.5 GiB). 
52 | Chunking data to ~40M data points.""" 53 | def __init__(self, store, n_scenes, feat_dict, chunks=None): 54 | self.store = store 55 | self.chunks = chunks 56 | self.n_scenes = n_scenes 57 | self.n_features = 2 58 | for key, val in feat_dict.items(): 59 | self.n_features += val 60 | 61 | def design_experiments(self, algo): 62 | if algo == 'rnd': 63 | return self.nonunique_random() 64 | elif algo == 'doe': 65 | return self.latin_hypercube_sampling() 66 | 67 | def nonunique_random(self): 68 | """Random sequence generation without checking uniqueness.""" 69 | design = da.random.random( 70 | size = (self.n_scenes, self.n_features), 71 | chunks = self.chunks 72 | ) 73 | da.to_zarr( 74 | design, 75 | url = self.store, 76 | component = 'raw_design', 77 | compute = True 78 | ) 79 | 80 | def latin_hypercube_sampling(self): 81 | """Latin hypercube sampling - uniqueness guaranteed.""" 82 | design = doe.lhs(self.n_features, samples=self.n_scenes) 83 | design = da.from_array(design, chunks=self.chunks) 84 | da.to_zarr( 85 | design, 86 | url = self.store, 87 | component = 'raw_design', 88 | compute = True 89 | ) 90 | 91 | def transform_scenes(self): 92 | n_junc = feat_dict['juncs'] 93 | n_group = feat_dict['groups'] 94 | n_pump = feat_dict['pumps'] 95 | n_tank = feat_dict['tanks'] 96 | lazy_ops = [] 97 | raw_design = da.from_zarr( 98 | url = self.store, 99 | component ='raw_design' 100 | ) 101 | junc_demands= da.multiply( 102 | orig_dmds, 103 | da.add( 104 | da.multiply( 105 | raw_design[:, :n_junc], 106 | dmd_hi - dmd_lo 107 | ), 108 | dmd_lo 109 | ) 110 | ) 111 | tot_dmds = da.sum(junc_demands, axis=1, keepdims=True) 112 | target_tot_dmds = da.multiply( 113 | orig_tot_dmd, 114 | da.add( 115 | da.multiply( 116 | raw_design[:, n_junc], 117 | tot_dmd_hi-tot_dmd_lo 118 | ), 119 | tot_dmd_lo 120 | ) 121 | ).reshape((n_scenes, 1)) 122 | junc_demands = da.multiply( 123 | junc_demands, 124 | da.divide( 125 | target_tot_dmds, 126 | tot_dmds 127 | ) 128 | ) 129 | lazy_op = da.to_zarr( 130 | junc_demands.astype(np.float32).rechunk(self.chunks), 131 | url = self.store, 132 | component = 'junc_demands', 133 | compute = False, 134 | ) 135 | lazy_ops.append(lazy_op) 136 | 137 | group_speeds= da.add( 138 | da.multiply( 139 | raw_design[:, n_junc+1:n_junc+1+n_group], 140 | spd_lmt_hi-spd_lmt_lo 141 | ), 142 | spd_lmt_lo 143 | ) 144 | lazy_op = da.to_zarr( 145 | group_speeds.astype(np.float32).rechunk(self.chunks), 146 | url = self.store, 147 | component = 'group_speeds', 148 | compute = False 149 | ) 150 | lazy_ops.append(lazy_op) 151 | 152 | tankfed_val = da.less(raw_design[:, n_junc+n_group+1], tankfed_proba) 153 | tankfed_val = da.reshape(tankfed_val, (-1, 1)) 154 | pump_status = da.less(raw_design[:, n_junc+n_group+2:n_junc+n_group+2+n_pump], pump_off_proba) 155 | concat_list = [] 156 | for i in range(pump_status.shape[1]): 157 | concat_list.append(tankfed_val) 158 | tankfed_val = da.concatenate(concat_list, axis=1) 159 | pump_status = da.logical_not(da.logical_and(tankfed_val, pump_status)) 160 | del tankfed_val 161 | lazy_op = da.to_zarr( 162 | pump_status.astype(np.float32).rechunk(self.chunks), 163 | url = self.store, 164 | component = 'pump_status', 165 | compute = False 166 | ) 167 | lazy_ops.append(lazy_op) 168 | 169 | tank_level = da.add( 170 | da.multiply( 171 | raw_design[:, n_junc+n_group+n_pump+2:], 172 | da.subtract( 173 | wtr_lvl_hi, 174 | wtr_lvl_lo 175 | ).reshape((1, n_tank))), 176 | wtr_lvl_lo 177 | ) 178 | lazy_op = da.to_zarr( 179 | tank_level.astype(np.float32).rechunk(self.chunks), 180 | 
url = self.store, 181 | component = 'tank_level', 182 | compute = False 183 | ) 184 | lazy_ops.append(lazy_op) 185 | 186 | dask.compute(*lazy_ops) 187 | 188 | @ray.remote 189 | class simulator(): 190 | """EPYNET wrapper for one-time initialisation of the water network in a multithreaded environment.""" 191 | def __init__(self): 192 | """Read network topology from disk.""" 193 | self.wds = Network(pathToNetwork) 194 | self.junc_heads = np.empty(shape=(n_batch, n_junc), dtype=np.float32) 195 | self.pump_flows = np.empty(shape=(n_batch, n_pump), dtype=np.float32) 196 | self.tank_flows = np.empty(shape=(n_batch, n_tank), dtype=np.float32) 197 | 198 | def evaluate_batch(self, scene_ids, boundaries): 199 | """Set boundaries and run the simulation. No need to re-set the WDS.""" 200 | for idx, scene_id in enumerate(scene_ids): 201 | for junc_id, junc in enumerate(self.wds.junctions): 202 | junc.basedemand = boundaries[0][scene_id, junc_id] 203 | for gid, grp in enumerate(pump_groups): 204 | self.wds.pumps[self.wds.pumps.uid.isin(grp)].speed = boundaries[1][scene_id, gid] 205 | for pump_id, pump in enumerate(self.wds.pumps): 206 | pump.status = boundaries[2][scene_id, pump_id] 207 | for tank_id, tank in enumerate(self.wds.tanks): 208 | tank.tanklevel = boundaries[3][scene_id, tank_id] 209 | self.wds.solve() 210 | self.junc_heads[idx,:] = self.wds.junctions.head.values 211 | self.pump_flows[idx,:] = self.wds.pumps.flow.values 212 | self.tank_flows[idx,:] = self.wds.tanks.outflow.values-self.wds.tanks.inflow.values 213 | return [scene_ids, self.junc_heads, self.pump_flows, self.tank_flows] 214 | 215 | def read_pump_groups(wds): 216 | """Returns the list of the pump groups in the WDS. 217 | Groups are identified by the last characters of the uid. 218 | The trailing slice after the last 'g' letter should be unique; e.g. uids '10g1' and '11g1' fall into one group, '12g2' into another.""" 219 | pump_groups = [] 220 | group = ['gx'] 221 | for pump in wds.pumps: 222 | uid = pump.uid 223 | if uid[uid.rfind('g'):] == group[0][group[0].rfind('g'):]: 224 | group.append(uid) 225 | else: 226 | pump_groups.append(group) 227 | group = [uid] 228 | pump_groups.append(group) 229 | return pump_groups[1:] 230 | 231 | def print_store_stats(store): 232 | for key in root.keys(): 233 | arr = da.from_zarr(url=store, component=key) 234 | print(key) 235 | print('max: {:.2f}'.format(arr.max().compute())) 236 | print('min: {:.2f}'.format(arr.min().compute())) 237 | print('avg: {:.2f}'.format(arr.mean().compute())) 238 | print('std: {:.2f}'.format(arr.std().compute())) 239 | print('') 240 | 241 | def chunk_computation(boundaries): 242 | junc_heads = np.empty_like(boundaries[0], dtype=np.float32) 243 | pump_flows = np.empty_like(boundaries[2], dtype=np.float32) 244 | tank_flows = np.empty_like(boundaries[3], dtype=np.float32) 245 | boundary_id = ray.put(boundaries) 246 | 247 | junc_dmds_id = ray.put(junc_demands) 248 | group_speeds_id = ray.put(group_speeds) 249 | pump_status_id = ray.put(pump_status) 250 | tank_level_id = ray.put(tank_level) # these four ids are unused; the boundaries travel to the workers via boundary_id 251 | workers = [simulator.remote() for i in range(n_proc)] # one EPANET instance per Ray actor 252 | results = {} 253 | 254 | scene_id_batches = [] 255 | new_batch = [] 256 | for idx in range(junc_heads.shape[0]): 257 | if (idx % n_batch) == 0: 258 | scene_id_batches.append(new_batch) 259 | new_batch = [] 260 | new_batch.append(idx) 261 | scene_id_batches.append(new_batch) 262 | scene_id_batches = scene_id_batches[1:] 263 | 264 | progressbar = tqdm(total=len(scene_id_batches)) 265 | for worker in workers: 266 | results[worker.evaluate_batch.remote(scene_id_batches.pop(), boundary_id)] = worker # prime each worker with one batch; finished workers are refilled below
267 | 268 | ready_ids, _ = ray.wait(list(results)) 269 | while ready_ids: 270 | ready_id= ready_ids[0] 271 | result = ray.get(ready_id) 272 | worker = results.pop(ready_id) 273 | if scene_id_batches: 274 | results[worker.evaluate_batch.remote(scene_id_batches.pop(), boundary_id)] = worker 275 | 276 | for idx in range(len(result[0])): 277 | junc_heads[result[0][idx], :] = result[1][idx,:] 278 | pump_flows[result[0][idx], :] = result[2][idx,:] 279 | tank_flows[result[0][idx], :] = result[3][idx,:] 280 | 281 | ready_ids, _ = ray.wait(list(results)) 282 | progressbar.update(1) 283 | progressbar.close() 284 | return junc_heads, pump_flows, tank_flows 285 | 286 | # ----- ----- ----- ----- ----- 287 | # Loading WDS 288 | # ----- ----- ----- ----- ----- 289 | wds = Network(pathToNetwork) 290 | dmd_lo = params['demand']['nodalLo'] 291 | dmd_hi = params['demand']['nodalHi'] 292 | tot_dmd_lo = params['demand']['totalLo'] 293 | tot_dmd_hi = params['demand']['totalHi'] 294 | spd_lmt_lo = params['pumpSpeed']['limitLo'] 295 | spd_lmt_hi = params['pumpSpeed']['limitHi'] 296 | wtr_lvl_lo = np.array(wds.tanks.minlevel * 1.01, dtype=np.float32) 297 | wtr_lvl_hi = np.array(wds.tanks.maxlevel * .99, dtype=np.float32) 298 | tankfed_proba = params['feed']['gravityFedProba'] 299 | pump_off_proba = params['feed']['pumpOffProba'] 300 | pump_groups = read_pump_groups(wds) 301 | 302 | # ----- ----- ----- ----- ----- 303 | # Initialization 304 | # ----- ----- ----- ----- ----- 305 | n_scenes = params['nScenes'] 306 | feat_dict = { 'juncs' : len(wds.junctions.uid), 307 | 'groups': len(pump_groups), 308 | 'pumps' : len(wds.pumps.uid), 309 | 'tanks' : len(wds.tanks.uid) 310 | } 311 | n_proc = args.nproc 312 | n_batch = args.batch 313 | orig_dmds = np.array(wds.junctions.basedemand, dtype=np.float32) 314 | orig_dmds = orig_dmds.reshape(1, -1) 315 | orig_tot_dmd = np.sum(wds.junctions.basedemand) 316 | 317 | assert n_proc <= n_scenes 318 | assert n_batch <= n_scenes 319 | assert np.ceil(n_scenes/n_batch) >= n_proc 320 | 321 | store = zarr.DirectoryStore(pathToDB) 322 | root = zarr.group( 323 | store = store, 324 | overwrite = True, 325 | synchronizer= zarr.ThreadSynchronizer() 326 | ) 327 | now = datetime.datetime.now(pytz.UTC) 328 | root.attrs['creation_date'] = str(now) 329 | root.attrs['gmt_timestap'] = int(now.strftime('%s')) 330 | root.attrs['description'] = 'WDS digitwin experiment design' 331 | scene_generator = SequenceGenerator( 332 | store, n_scenes, feat_dict, 333 | chunks = ( params['chunks']['height'], 334 | params['chunks']['width'] 335 | ), 336 | ) 337 | 338 | print( 339 | 'Writing unscaled random experiment design to data store... ', 340 | end = "", 341 | flush = True 342 | ) 343 | scenes = scene_generator.design_experiments(params['genAlgo']) 344 | print('OK') 345 | 346 | print( 347 | 'Splitting and scaling raw experiment design... 
', 348 | end = "", 349 | flush = True 350 | ) 351 | scene_generator.transform_scenes() 352 | del root['raw_design'] 353 | print('OK') 354 | print_store_stats(store) 355 | 356 | # ----- ----- ----- ----- ----- 357 | # Scene evaluation 358 | # ----- ----- ----- ----- ----- 359 | junc_demands_store = da.from_zarr( 360 | url = store, 361 | component ='junc_demands', 362 | ) 363 | group_speeds_store = da.from_zarr( 364 | url = store, 365 | component ='group_speeds', 366 | ) 367 | pump_status_store = da.from_zarr( 368 | url = store, 369 | component ='pump_status', 370 | ) 371 | tank_level_store = da.from_zarr( 372 | url = store, 373 | component ='tank_level', 374 | ) 375 | now = datetime.datetime.now(pytz.UTC) 376 | root.attrs['creation_date'] = str(now) 377 | root.attrs['gmt_timestap'] = int(now.strftime('%s')) 378 | root.attrs['description'] = 'WDS digitwin experiment results' 379 | junc_heads_store = root.empty( 380 | 'junc_heads', 381 | shape = junc_demands_store.shape, 382 | chunks = ( params['chunks']['height'], 383 | params['chunks']['width'] 384 | ), 385 | dtype = 'f4' 386 | ) 387 | pump_flows_store = root.empty( 388 | 'pump_flows', 389 | shape = pump_status_store.shape, 390 | chunks = ( params['chunks']['height'], 391 | params['chunks']['width'] 392 | ), 393 | dtype = 'f4' 394 | ) 395 | tank_flows_store = root.empty( 396 | 'tank_flows', 397 | shape = tank_level_store.shape, 398 | chunks = ( params['chunks']['height'], 399 | params['chunks']['width'] 400 | ), 401 | dtype = 'f4' 402 | ) 403 | 404 | n_junc = feat_dict['juncs'] 405 | n_group = feat_dict['groups'] 406 | n_pump = feat_dict['pumps'] 407 | n_tank = feat_dict['tanks'] 408 | 409 | n_experiment= junc_demands_store.shape[0] 410 | chunk_len = root['junc_demands'].chunks[0] 411 | n_full_batch= n_experiment // chunk_len 412 | print('Computing {} full batch...\n'.format(n_full_batch)) 413 | 414 | mem = virtual_memory() 415 | ray.init() 416 | time.sleep(10) 417 | workers = [simulator.remote() for i in range(n_proc)] 418 | scene_ids = list(np.arange(chunk_len)) 419 | for batch_id in range(n_full_batch): 420 | beg_idx = batch_id*chunk_len 421 | end_idx = beg_idx + chunk_len 422 | junc_demands= np.array(junc_demands_store[beg_idx:end_idx, :]) 423 | group_speeds= np.array(group_speeds_store[beg_idx:end_idx, :]) 424 | pump_status = np.array(pump_status_store[beg_idx:end_idx, :]) 425 | tank_level = np.array(tank_level_store[beg_idx:end_idx, :]) 426 | boundaries = [junc_demands, group_speeds, pump_status, tank_level] 427 | 428 | junc_heads, pump_flows, tank_flows = chunk_computation(boundaries) 429 | 430 | junc_heads_store[beg_idx:end_idx, :] = junc_heads 431 | pump_flows_store[beg_idx:end_idx, :] = pump_flows 432 | tank_flows_store[beg_idx:end_idx, :] = tank_flows 433 | if n_experiment % chunk_len: 434 | beg_idx = end_idx 435 | junc_demands= np.array(junc_demands_store[beg_idx:, :]) 436 | group_speeds= np.array(group_speeds_store[beg_idx:, :]) 437 | pump_status = np.array(pump_status_store[beg_idx:, :]) 438 | tank_level = np.array(tank_level_store[beg_idx:, :]) 439 | boundaries = [junc_demands, group_speeds, pump_status, tank_level] 440 | 441 | junc_heads, pump_flows, tank_flows = chunk_computation(boundaries) 442 | 443 | junc_heads_store[beg_idx:, :] = junc_heads 444 | pump_flows_store[beg_idx:, :] = pump_flows 445 | tank_flows_store[beg_idx:, :] = tank_flows 446 | pass 447 | ray.shutdown() 448 | 449 | pump_speeds_store = root.empty( 450 | 'pump_speeds', 451 | shape = pump_status_store.shape, 452 | chunks = 
(params['chunks']['height'],params['chunks']['width']), 453 | dtype = 'f4' 454 | ) 455 | beg_idx = 0 456 | for gid in range(len(pump_groups)): 457 | for pid in range(len(pump_groups[gid])): 458 | pump_speeds_store[:, beg_idx+pid] = group_speeds_store[:, gid] 459 | beg_idx = beg_idx+pid+1 460 | del root['group_speeds'] 461 | del root['pump_status'] 462 | 463 | head_treshold = 0 464 | junc_heads = da.from_zarr( 465 | url = store, 466 | component ='junc_heads' 467 | ) 468 | min_heads = junc_heads.min(axis=1).compute() 469 | idx_ok = np.where(min_heads > head_treshold)[0] 470 | if len(idx_ok) < len(min_heads): 471 | for key in root.keys(): 472 | arr = da.from_zarr(root[key]) 473 | arr.to_zarr( 474 | url = store, 475 | component = key+'-tmp', 476 | overwrite = True, 477 | compute = True 478 | ) 479 | arr = da.from_zarr(root[key+'-tmp']) 480 | arr = arr[idx_ok, :].rechunk(scene_generator.chunks).to_zarr( 481 | url = store, 482 | component = key, 483 | overwrite = True, 484 | compute = True 485 | ) 486 | del root[key+'-tmp'] 487 | 488 | print('-----') 489 | print_store_stats(store) 490 | 491 | # ----- ----- ----- ----- ----- 492 | # Splitting 493 | # ----- ----- ----- ----- ----- 494 | vld_split = params['data']['vldSplit'] 495 | tst_split = params['data']['tstSplit'] 496 | idx_trn = int(np.floor(len(idx_ok) * (1-tst_split-vld_split))) 497 | idx_vld = int(np.floor(len(idx_ok) * (1-tst_split))) 498 | 499 | unsplit_keys= list(root.keys()) 500 | root_trn = zarr.hierarchy.group( 501 | store = store, 502 | overwrite = True, 503 | synchronizer= zarr.ThreadSynchronizer(), 504 | path = 'trn' 505 | ) 506 | root_vld = zarr.hierarchy.group( 507 | store = store, 508 | overwrite = True, 509 | synchronizer= zarr.ThreadSynchronizer(), 510 | path = 'vld' 511 | ) 512 | root_tst = zarr.hierarchy.group( 513 | store = store, 514 | overwrite = True, 515 | synchronizer= zarr.ThreadSynchronizer(), 516 | path = 'tst' 517 | ) 518 | 519 | for key in unsplit_keys: 520 | arr = da.from_zarr(root[key]) 521 | arr_avg = da.mean(arr[:idx_trn, :]).compute() 522 | arr_std = da.std(arr[:idx_trn, :]).compute() 523 | arr_min = da.min(arr[:idx_trn, :]).compute() 524 | arr_max = da.max(arr[:idx_trn, :]).compute() 525 | arr_range = arr_max - arr_min 526 | 527 | arr[:idx_trn, :].to_zarr( 528 | url = store, 529 | component = 'trn/'+key, 530 | overwrite = True, 531 | compute = True 532 | ) 533 | arr[idx_trn:idx_vld, :].rechunk(scene_generator.chunks).to_zarr( 534 | url = store, 535 | component = 'vld/'+key, 536 | overwrite = True, 537 | compute = True 538 | ) 539 | arr[idx_vld:, :].rechunk(scene_generator.chunks).to_zarr( 540 | url = store, 541 | component = 'tst/'+key, 542 | overwrite = True, 543 | compute = True 544 | ) 545 | 546 | root_trn[key].attrs['avg'] = float(arr_avg) 547 | root_trn[key].attrs['std'] = float(arr_std) 548 | root_trn[key].attrs['min'] = float(arr_min) 549 | root_trn[key].attrs['range'] = float(arr_range) 550 | del root[key] 551 | print(root.tree()) 552 | -------------------------------------------------------------------------------- /hyperopt.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import os 4 | import glob 5 | import optuna 6 | import numpy as np 7 | import pandas as pd 8 | import torch 9 | import torch.nn.functional as F 10 | from torch_geometric.data import Data, DataLoader 11 | from torch_geometric.utils import from_networkx 12 | from torch_geometric.nn import ChebConv 13 | from epynet import Network 14 | 15 | from 
utils.graph_utils import get_nx_graph 16 | from utils.DataReader import DataReader 17 | from utils.Metrics import Metrics 18 | from utils.EarlyStopping import EarlyStopping 19 | from utils.dataloader import build_dataloader 20 | 21 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 22 | torch.set_default_dtype(torch.float64) 23 | # ----- ----- ----- ----- ----- ----- 24 | # Command line arguments 25 | # ----- ----- ----- ----- ----- ----- 26 | parser = argparse.ArgumentParser() 27 | parser.add_argument('--db', 28 | default = 'doe_pumpfed_1', 29 | type = str, 30 | help = "DB.") 31 | parser.add_argument('--setmet', 32 | default = 'fixrnd', 33 | choices = ['spc', 'fixrnd', 'allrnd'], 34 | type = str, 35 | help = "How to setup the transducers.") 36 | parser.add_argument('--obsrat', 37 | default = .8, 38 | type = float, 39 | help = "Observation ratio.") 40 | parser.add_argument('--epoch', 41 | default = '2000', 42 | type = int, 43 | help = "Number of epochs.") 44 | parser.add_argument('--batch', 45 | default = '200', 46 | type = int, 47 | help = "Batch size.") 48 | parser.add_argument('--lr', 49 | default = 0.0003, 50 | type = float, 51 | help = "Learning rate.") 52 | args = parser.parse_args() 53 | 54 | # ----- ----- ----- ----- ----- ----- 55 | # Paths 56 | # ----- ----- ----- ----- ----- ----- 57 | wds_name = 'anytown' 58 | pathToRoot = os.path.dirname(os.path.realpath(__file__)) 59 | pathToDB = os.path.join(pathToRoot, 'data', 'db_' + wds_name +'_'+ args.db) 60 | pathToWDS = os.path.join('water_networks', wds_name+'.inp') 61 | pathToLog = os.path.join('experiments', 'hyperparams', 'anytown_ho.pkl') 62 | 63 | def objective(trial): 64 | # ----- ----- ----- ----- ----- ----- 65 | # Functions 66 | # ----- ----- ----- ----- ----- ----- 67 | def train_one_epoch(): 68 | model.train() 69 | total_loss = 0 70 | for batch in trn_ldr: 71 | batch = batch.to(device) 72 | optimizer.zero_grad() 73 | out = model(batch) 74 | loss = F.mse_loss(out, batch.y) 75 | loss.backward() 76 | optimizer.step() 77 | total_loss += loss.item() * batch.num_graphs 78 | return total_loss / len(trn_ldr.dataset) 79 | 80 | # ----- ----- ----- ----- ----- ----- 81 | # Loading trn and vld datasets 82 | # ----- ----- ----- ----- ----- ----- 83 | wds = Network(pathToWDS) 84 | adj_mode = trial.suggest_categorical('adjacency', ['binary', 'weighted', 'logarithmic']) 85 | G = get_nx_graph(wds, mode=adj_mode) 86 | 87 | reader = DataReader(pathToDB, n_junc=len(wds.junctions.uid), obsrat=args.obsrat, seed=8) 88 | trn_x, _, _ = reader.read_data( 89 | dataset = 'trn', 90 | varname = 'junc_heads', 91 | rescale = 'standardize', 92 | cover = True 93 | ) 94 | trn_y, bias_y, scale_y = reader.read_data( 95 | dataset = 'trn', 96 | varname = 'junc_heads', 97 | rescale = 'normalize', 98 | cover = False 99 | ) 100 | vld_x, _, _ = reader.read_data( 101 | dataset = 'vld', 102 | varname = 'junc_heads', 103 | rescale = 'standardize', 104 | cover = True 105 | ) 106 | vld_y, _, _ = reader.read_data( 107 | dataset = 'vld', 108 | varname = 'junc_heads', 109 | rescale = 'normalize', 110 | cover = False 111 | ) 112 | 113 | # ----- ----- ----- ----- ----- ----- 114 | # Model definition 115 | # ----- ----- ----- ----- ----- ----- 116 | class Net2(torch.nn.Module): 117 | def __init__(self, topo): 118 | super(Net2, self).__init__() 119 | self.conv1 = ChebConv(np.shape(trn_x)[-1], topo[0][0], K=topo[0][1]) 120 | self.conv2 = ChebConv(topo[0][0], topo[1][0], K=topo[1][1]) 121 | self.conv3 = ChebConv(topo[1][0], np.shape(trn_y)[-1], K=1, 
bias=False) 122 | 123 | def forward(self, data): 124 | x, edge_index, edge_weight = data.x, data.edge_index, data.weight 125 | x = F.silu(self.conv1(x, edge_index, edge_weight)) 126 | x = F.silu(self.conv2(x, edge_index, edge_weight)) 127 | x = self.conv3(x, edge_index, edge_weight) 128 | return torch.sigmoid(x) 129 | 130 | class Net3(torch.nn.Module): 131 | def __init__(self, topo): 132 | super(Net3, self).__init__() 133 | self.conv1 = ChebConv(np.shape(trn_x)[-1], topo[0][0], K=topo[0][1]) 134 | self.conv2 = ChebConv(topo[0][0], topo[1][0], K=topo[1][1]) 135 | self.conv3 = ChebConv(topo[1][0], topo[2][0], K=topo[2][1]) 136 | self.conv4 = ChebConv(topo[2][0], np.shape(trn_y)[-1], K=1, bias=False) 137 | 138 | def forward(self, data): 139 | x, edge_index, edge_weight = data.x, data.edge_index, data.weight 140 | x = F.silu(self.conv1(x, edge_index, edge_weight)) 141 | x = F.silu(self.conv2(x, edge_index, edge_weight)) 142 | x = F.silu(self.conv3(x, edge_index, edge_weight)) 143 | x = self.conv4(x, edge_index, edge_weight) 144 | return torch.sigmoid(x) 145 | 146 | class Net4(torch.nn.Module): 147 | def __init__(self, topo): 148 | super(Net4, self).__init__() 149 | self.conv1 = ChebConv(np.shape(trn_x)[-1], topo[0][0], K=topo[0][1]) 150 | self.conv2 = ChebConv(topo[0][0], topo[1][0], K=topo[1][1]) 151 | self.conv3 = ChebConv(topo[1][0], topo[2][0], K=topo[2][1]) 152 | self.conv4 = ChebConv(topo[2][0], topo[3][0], K=topo[3][1]) 153 | self.conv5 = ChebConv(topo[3][0], np.shape(trn_y)[-1], K=1, bias=False) 154 | 155 | def forward(self, data): 156 | x, edge_index, edge_weight = data.x, data.edge_index, data.weight 157 | x = F.silu(self.conv1(x, edge_index, edge_weight)) 158 | x = F.silu(self.conv2(x, edge_index, edge_weight)) 159 | x = F.silu(self.conv3(x, edge_index, edge_weight)) 160 | x = F.silu(self.conv4(x, edge_index, edge_weight)) 161 | x = self.conv5(x, edge_index, edge_weight) 162 | return torch.sigmoid(x) 163 | 164 | n_layers= trial.suggest_int('n_layers', 2, 4) 165 | topo = [] 166 | for i in range(n_layers): 167 | topo.append([ 168 | trial.suggest_int('n_channels_{}'.format(i), 5, 50, step=5), 169 | trial.suggest_int('filter_size_{}'.format(i), 5, 30, step=5) 170 | ]) 171 | decay = trial.suggest_float('weight_decay', 1e-6, 1e-4, log=True) 172 | if n_layers == 2: 173 | model = Net2(topo).to(device) 174 | optimizer = torch.optim.Adam([ 175 | dict(params=model.conv1.parameters(), weight_decay=decay), 176 | dict(params=model.conv2.parameters(), weight_decay=decay), 177 | dict(params=model.conv3.parameters(), weight_decay=0) 178 | ], 179 | lr = args.lr, 180 | eps = 1e-7 181 | ) 182 | elif n_layers == 3: 183 | model = Net3(topo).to(device) 184 | optimizer = torch.optim.Adam([ 185 | dict(params=model.conv1.parameters(), weight_decay=decay), 186 | dict(params=model.conv2.parameters(), weight_decay=decay), 187 | dict(params=model.conv3.parameters(), weight_decay=decay), 188 | dict(params=model.conv4.parameters(), weight_decay=0) 189 | ], 190 | lr = args.lr, 191 | eps = 1e-7 192 | ) 193 | elif n_layers == 4: 194 | model = Net4(topo).to(device) 195 | optimizer = torch.optim.Adam([ 196 | dict(params=model.conv1.parameters(), weight_decay=decay), 197 | dict(params=model.conv2.parameters(), weight_decay=decay), 198 | dict(params=model.conv3.parameters(), weight_decay=decay), 199 | dict(params=model.conv4.parameters(), weight_decay=decay), 200 | dict(params=model.conv5.parameters(), weight_decay=0) 201 | ], 202 | lr = args.lr, 203 | eps = 1e-7 204 | ) 205 | 206 | # ----- ----- ----- ----- ----- ----- 
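# NOTE (editorial sketch): the loop below mirrors train.py with the logging
# stripped to essentials -- train_one_epoch() minimizes MSE on the normalized
# heads, the validation loss is aggregated per graph, and estop.best (the
# lowest validation loss seen before early stopping) is the scalar that
# Optuna minimizes.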
207 | # Training 208 | # ----- ----- ----- ----- ----- ----- 209 | trn_ldr = build_dataloader(G, trn_x, trn_y, args.batch, shuffle=True) 210 | vld_ldr = build_dataloader(G, vld_x, vld_y, len(vld_x), shuffle=False) 211 | metrics = Metrics(bias_y, scale_y, device) 212 | estop = EarlyStopping(min_delta=.000001, patience=50) 213 | results = pd.DataFrame(columns=['trn_loss', 'vld_loss']) 214 | header = ''.join(['{:^15}'.format(colname) for colname in results.columns]) 215 | header = '{:^5}'.format('epoch') + header 216 | best_vld_loss = np.inf 217 | for epoch in range(0, args.epoch): 218 | trn_loss = train_one_epoch() 219 | model.eval() 220 | tot_vld_loss = 0 221 | for batch in vld_ldr: 222 | batch = batch.to(device) 223 | out = model(batch) 224 | vld_loss = F.mse_loss(out, batch.y) 225 | tot_vld_loss += vld_loss.item() * batch.num_graphs 226 | vld_loss = tot_vld_loss / len(vld_ldr.dataset) 227 | 228 | if estop.step(torch.tensor(vld_loss)): 229 | print('Early stopping...') 230 | break 231 | return estop.best 232 | 233 | if __name__ == '__main__': 234 | sampler = optuna.samplers.TPESampler(n_startup_trials=50, n_ei_candidates=5, multivariate=True) 235 | study = optuna.create_study(direction='minimize', 236 | study_name = 'v4', 237 | sampler = sampler, 238 | storage = 'sqlite:///experiments/hyperparams/anytown_ho-'+str(args.obsrat)+'.db' 239 | ) 240 | study.optimize(objective, n_trials=300) 241 | -------------------------------------------------------------------------------- /model/anytown.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import numpy as np 3 | import torch 4 | import torch.nn.functional as F 5 | from torch_geometric.nn import ChebConv 6 | 7 | class ChebNet(torch.nn.Module): 8 | def __init__(self, in_channels, out_channels): 9 | super(ChebNet, self).__init__() 10 | self.conv1 = ChebConv(in_channels, 14, K=39) 11 | self.conv2 = ChebConv(14, 20, K=43) 12 | self.conv3 = ChebConv(20, 27, K=45) 13 | self.conv4 = ChebConv(27, out_channels, K=1, bias=False) 14 | 15 | def forward(self, data): 16 | x, edge_index, edge_weight = data.x, data.edge_index, data.weight 17 | x = F.silu(self.conv1(x, edge_index, edge_weight)) 18 | x = F.silu(self.conv2(x, edge_index, edge_weight)) 19 | x = F.silu(self.conv3(x, edge_index, edge_weight)) 20 | x = self.conv4(x, edge_index, edge_weight) 21 | return torch.sigmoid(x) 22 | -------------------------------------------------------------------------------- /model/ctown.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import numpy as np 3 | import torch 4 | import torch.nn.functional as F 5 | from torch_geometric.nn import ChebConv 6 | 7 | class ChebNet(torch.nn.Module): 8 | def __init__(self, in_channels, out_channels): 9 | super(ChebNet, self).__init__() 10 | self.conv1 = ChebConv(in_channels, 60, K=200) 11 | self.conv2 = ChebConv(60, 60, K=200) 12 | self.conv3 = ChebConv(60, 30, K=20) 13 | self.conv4 = ChebConv(30, out_channels, K=1, bias=False) 14 | 15 | def forward(self, data): 16 | x, edge_index, edge_weight = data.x, data.edge_index, data.weight 17 | x = F.silu(self.conv1(x, edge_index, edge_weight)) 18 | x = F.silu(self.conv2(x, edge_index, edge_weight)) 19 | x = F.silu(self.conv3(x, edge_index, edge_weight)) 20 | x = self.conv4(x, edge_index, edge_weight) 21 | return torch.sigmoid(x) 22 | -------------------------------------------------------------------------------- /model/richmond.py: 
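The ChebNet variants in `model/` (anytown and ctown above, richmond below) share one architecture and differ only in channel widths and Chebyshev filter orders `K`, taken from the search in `hyperopt.py`. Below is a minimal, self-contained sketch of driving such a model on a toy graph; the two-channel feature layout is an assumption carried over from `train.py` (channel 0: masked, standardized head signal; channel 1: observation mask):

```
import torch
from torch_geometric.data import Data
from model.anytown import ChebNet

# Toy 3-node graph with 2 undirected edges; a real run builds this with
# utils.dataloader.build_dataloader from the networkx graph of the WDS.
x          = torch.tensor([[0.5, 1.0], [0.0, 0.0], [0.3, 1.0]])      # (nodes, channels)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])              # COO edge list
data       = Data(x=x, edge_index=edge_index, weight=torch.ones(4))  # forward() reads data.weight

model = ChebNet(in_channels=2, out_channels=1)  # as in train.py: Net(trn_x.shape[-1], trn_y.shape[-1])
model.eval()
with torch.no_grad():
    p_hat = model(data)  # shape (3, 1); the closing sigmoid keeps values in (0, 1), i.e. normalized heads
```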
-------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import numpy as np 3 | import torch 4 | import torch.nn.functional as F 5 | from torch_geometric.nn import ChebConv 6 | 7 | class ChebNet(torch.nn.Module): 8 | def __init__(self, in_channels, out_channels): 9 | super(ChebNet, self).__init__() 10 | self.conv1 = ChebConv(in_channels, 120, K=240) 11 | self.conv2 = ChebConv(120, 60, K=120) 12 | self.conv3 = ChebConv(60, 30, K=20) 13 | self.conv4 = ChebConv(30, out_channels, K=1, bias=False) 14 | 15 | def forward(self, data): 16 | x, edge_index, edge_weight = data.x, data.edge_index, data.weight 17 | x = F.silu(self.conv1(x, edge_index, edge_weight)) 18 | x = F.silu(self.conv2(x, edge_index, edge_weight)) 19 | x = F.silu(self.conv3(x, edge_index, edge_weight)) 20 | x = self.conv4(x, edge_index, edge_weight) 21 | return torch.sigmoid(x) 22 | -------------------------------------------------------------------------------- /test_Taylor_metrics.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import argparse 4 | import copy 5 | from csv import writer 6 | import numpy as np 7 | import dask.array as da 8 | import pandas as pd 9 | import networkx as nx 10 | import torch 11 | import torch.nn.functional as F 12 | from torch_geometric.loader import DataLoader 13 | from torch_geometric.utils import from_networkx 14 | from epynet import Network 15 | 16 | from utils.graph_utils import get_nx_graph, get_sensitivity_matrix 17 | from utils.DataReader import DataReader 18 | from utils.SensorInstaller import SensorInstaller 19 | from utils.Metrics import Metrics 20 | from utils.MeanPredictor import MeanPredictor 21 | from utils.baselines import interpolated_regularization 22 | from utils.dataloader import build_dataloader 23 | 24 | device = torch.device('cpu') 25 | 26 | # ----- ----- ----- ----- ----- ----- 27 | # Command line arguments 28 | # ----- ----- ----- ----- ----- ----- 29 | parser = argparse.ArgumentParser() 30 | parser.add_argument('--wds', 31 | default = 'anytown', 32 | type = str, 33 | help = "Water distribution system." 34 | ) 35 | parser.add_argument('--deploy', 36 | default = 'random', 37 | choices = ['random', 'dist', 'hydrodist', 'hds'], 38 | type = str, 39 | help = "Method of sensor deployment.") 40 | parser.add_argument('--obsrat', 41 | default = .05, 42 | type = float, 43 | help = "Observation ratio." 44 | ) 45 | parser.add_argument('--batch', 46 | default = 80, 47 | type = int, 48 | help = "Batch size." 49 | ) 50 | parser.add_argument('--adj', 51 | default = 'binary', 52 | choices = ['binary', 'weighted', 'logarithmic', 'pruned'], 53 | type = str, 54 | help = "Type of adjacency matrix." 55 | ) 56 | parser.add_argument('--model', 57 | default = 'orig', 58 | choices = ['orig', 'naive', 'gcn', 'interp'], 59 | type = str, 60 | help = "Model to use." 61 | ) 62 | parser.add_argument('--metricsdb', 63 | default = 'Taylor_metrics', 64 | type = str, 65 | help = "Name of the metrics database." 66 | ) 67 | parser.add_argument('--tag', 68 | default = 'def', 69 | type = str, 70 | help = "Custom tag." 71 | ) 72 | parser.add_argument('--runid', 73 | default = 1, 74 | type = int, 75 | help = "Number of the model." 
76 | ) 77 | parser.add_argument('--db', 78 | default = 'doe_pumpfed_1', 79 | type = str, 80 | help = "DB.") 81 | args = parser.parse_args() 82 | 83 | # ----- ----- ----- ----- ----- ----- 84 | # Paths 85 | # ----- ----- ----- ----- ----- ----- 86 | wds_name = args.wds 87 | pathToRoot = os.path.dirname(os.path.realpath(__file__)) 88 | pathToExps = os.path.join(pathToRoot, 'experiments') 89 | pathToLogs = os.path.join(pathToExps, 'logs') 90 | run_id = args.runid 91 | run_stamp = wds_name+'-'+args.deploy+'-'+str(args.obsrat)+'-'+args.adj+'-'+args.tag+'-' 92 | run_stamp = run_stamp + str(run_id) 93 | pathToDB = os.path.join(pathToRoot, 'data', 'db_' + wds_name +'_'+ args.db) 94 | pathToModel = os.path.join(pathToExps, 'models', run_stamp+'.pt') 95 | pathToMeta = os.path.join(pathToExps, 'models', run_stamp+'_meta.csv') 96 | pathToWDS = os.path.join('water_networks', wds_name+'.inp') 97 | pathToResults = os.path.join(pathToRoot, 'experiments', args.metricsdb + '.csv') 98 | 99 | # ----- ----- ----- ----- ----- ----- 100 | # Functions 101 | # ----- ----- ----- ----- ----- ----- 102 | def restore_real_nodal_p(dta_ldr, num_nodes, num_graphs): 103 | nodal_pressures = np.empty((num_graphs, num_nodes)) 104 | end_idx = 0 105 | for i, batch in enumerate(tst_ldr): 106 | batch.to(device) 107 | p = metrics_nrm._rescale(batch.y).reshape(-1, num_nodes).detach().cpu().numpy() 108 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = p 109 | end_idx += batch.num_graphs 110 | return da.array(nodal_pressures) 111 | 112 | def predict_nodal_p_gcn(dta_ldr, num_nodes, num_graphs): 113 | model.load_state_dict(torch.load(pathToModel, map_location=torch.device(device))) 114 | model.eval() 115 | nodal_pressures = np.empty((num_graphs, num_nodes)) 116 | end_idx = 0 117 | for i, batch in enumerate(tst_ldr): 118 | batch.to(device) 119 | p = model(batch) 120 | p = metrics_nrm._rescale(p).reshape(-1, num_nodes).detach().cpu().numpy() 121 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = p 122 | end_idx += batch.num_graphs 123 | return da.array(nodal_pressures) 124 | 125 | def predict_nodal_p_naive(dta_ldr, num_nodes, num_graphs): 126 | model = MeanPredictor(device) 127 | nodal_pressures = np.empty((num_graphs, num_nodes)) 128 | end_idx = 0 129 | for i, batch in enumerate(tst_ldr): 130 | batch.to(device) 131 | p = model.pred(batch.y, batch.x[:, -1].type(torch.bool)) 132 | p = metrics_nrm._rescale(p).reshape(-1, num_nodes).detach().cpu().numpy() 133 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = p 134 | end_idx += batch.num_graphs 135 | return da.array(nodal_pressures) 136 | 137 | def load_model(): 138 | if args.wds == 'anytown': 139 | from model.anytown import ChebNet as Net 140 | elif args.wds == 'ctown': 141 | from model.ctown import ChebNet as Net 142 | elif args.wds == 'richmond': 143 | from model.richmond import ChebNet as Net 144 | else: 145 | print('Water distribution system is unknown.\n') 146 | raise 147 | return Net 148 | 149 | def compute_metrics(p, p_hat): 150 | msec = da.multiply(p-p.mean(), p_hat-p_hat.mean()).mean() 151 | sigma = da.sqrt(da.square(p_hat-p_hat.mean()).mean()) 152 | return msec, sigma 153 | 154 | # ----- ----- ----- ----- ----- ----- 155 | # Loading datasets 156 | # ----- ----- ----- ----- ----- ----- 157 | wds = Network(pathToWDS) 158 | G = get_nx_graph(wds, mode=args.adj) 159 | L = nx.linalg.laplacianmatrix.laplacian_matrix(G).todense() 160 | seed = run_id 161 | sensor_budget = int(len(wds.junctions) * args.obsrat) 162 | print('Deploying {} 
sensors...\n'.format(sensor_budget)) 163 | 164 | sensor_shop = SensorInstaller(wds) 165 | 166 | if args.deploy == 'random': 167 | sensor_shop.deploy_by_random( 168 | sensor_budget = sensor_budget, 169 | seed = seed 170 | ) 171 | elif args.deploy == 'dist': 172 | sensor_shop.deploy_by_shortest_path( 173 | sensor_budget = sensor_budget, 174 | weight_by = 'length' 175 | ) 176 | elif args.deploy == 'hydrodist': 177 | sensor_shop.deploy_by_shortest_path( 178 | sensor_budget = sensor_budget, 179 | weight_by = 'iweight' 180 | ) 181 | elif args.deploy == 'hds': 182 | print('Calculating nodal sensitivity to demand change...\n') 183 | ptb = np.max(wds.junctions.basedemand) / 100 184 | S = get_sensitivity_matrix(wds, ptb) 185 | sensor_shop.deploy_by_shortest_path_with_sensitivity( 186 | sensor_budget = sensor_budget, 187 | sensitivity_matrix = S, 188 | weight_by = 'iweight' 189 | ) 190 | else: 191 | print('Sensor deployment technique is unknown.\n') 192 | raise 193 | 194 | reader = DataReader( 195 | pathToDB, 196 | n_junc = len(wds.junctions), 197 | signal_mask = sensor_shop.signal_mask() 198 | ) 199 | tst_x, bias_std, scale_std = reader.read_data( 200 | dataset = 'tst', 201 | varname = 'junc_heads', 202 | rescale = 'standardize', 203 | cover = True 204 | ) 205 | tst_y, bias_nrm, scale_nrm = reader.read_data( 206 | dataset = 'tst', 207 | varname = 'junc_heads', 208 | rescale = 'normalize', 209 | cover = False 210 | ) 211 | tst_ldr = build_dataloader(G, tst_x, tst_y, args.batch, shuffle=False) 212 | metrics_nrm = Metrics(bias_nrm, scale_nrm, device) 213 | num_nodes = len(wds.junctions) 214 | num_graphs = len(tst_x) 215 | 216 | # ----- ----- ----- ----- ----- ----- 217 | # Compute metrics 218 | # ----- ----- ----- ----- ----- ----- 219 | run_stamp = run_stamp+'-'+args.model 220 | print(run_stamp) 221 | p = restore_real_nodal_p(tst_ldr, num_nodes, num_graphs) 222 | 223 | if args.model == 'orig': 224 | p_hat = p 225 | elif args.model == 'naive': 226 | p_hat = predict_nodal_p_naive(tst_ldr, num_nodes, num_graphs) 227 | elif args.model == 'gcn': 228 | Net = load_model() 229 | model = Net(np.shape(tst_x)[-1], np.shape(tst_y)[-1]).to(device) 230 | p_hat = predict_nodal_p_gcn(tst_ldr, num_nodes, num_graphs) 231 | elif args.model == 'interp': 232 | p_hat = interpolated_regularization(L, tst_x) 233 | p_hat = p_hat*scale_std+bias_std 234 | p_hat = da.array(p_hat) 235 | 236 | msec, sigma = compute_metrics(p, p_hat) 237 | 238 | # ----- ----- ----- ----- ----- ----- 239 | # Write metrics 240 | # ----- ----- ----- ----- ----- ----- 241 | results = [run_stamp, msec.compute(), sigma.compute()] 242 | with open(pathToResults, 'a+') as fout: 243 | csv_writer = writer(fout) 244 | csv_writer.writerow(results) 245 | -------------------------------------------------------------------------------- /test_Taylor_metrics_for_sensor_placement.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import argparse 4 | import copy 5 | from csv import writer 6 | import numpy as np 7 | import dask.array as da 8 | import pandas as pd 9 | import networkx as nx 10 | import torch 11 | import torch.nn.functional as F 12 | from torch_geometric.loader import DataLoader 13 | from torch_geometric.utils import from_networkx 14 | from epynet import Network 15 | 16 | from utils.graph_utils import get_nx_graph, get_sensitivity_matrix 17 | from utils.DataReader import DataReader 18 | from utils.SensorInstaller import SensorInstaller 19 | from utils.Metrics import Metrics 20 
| from utils.MeanPredictor import MeanPredictor 21 | from utils.baselines import interpolated_regularization 22 | from utils.dataloader import build_dataloader 23 | 24 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 25 | 26 | # ----- ----- ----- ----- ----- ----- 27 | # Command line arguments 28 | # ----- ----- ----- ----- ----- ----- 29 | parser = argparse.ArgumentParser() 30 | parser.add_argument('--wds', 31 | default = 'anytown', 32 | type = str, 33 | help = "Water distribution system." 34 | ) 35 | parser.add_argument('--deploy', 36 | default = 'random', 37 | choices = ['master', 'dist', 'hydrodist', 'hds', 'hdvar', 'random', 'xrandom'], 38 | type = str, 39 | help = "Method of sensor deployment.") 40 | parser.add_argument('--budget', 41 | default = 1, 42 | type = int, 43 | help = "Sensor budget.") 44 | parser.add_argument('--batch', 45 | default = 80, 46 | type = int, 47 | help = "Batch size." 48 | ) 49 | parser.add_argument('--adj', 50 | default = 'binary', 51 | choices = ['binary', 'weighted', 'logarithmic', 'pruned'], 52 | type = str, 53 | help = "Type of adjacency matrix." 54 | ) 55 | parser.add_argument('--metricsdb', 56 | default = 'Taylor_metrics', 57 | type = str, 58 | help = "Name of the metrics database." 59 | ) 60 | parser.add_argument('--tag', 61 | default = 'def', 62 | type = str, 63 | help = "Custom tag." 64 | ) 65 | parser.add_argument('--runid', 66 | default = 1, 67 | type = int, 68 | help = "Number of the model." 69 | ) 70 | parser.add_argument('--db', 71 | default = 'doe_pumpfed_1', 72 | type = str, 73 | help = "DB.") 74 | args = parser.parse_args() 75 | 76 | # ----- ----- ----- ----- ----- ----- 77 | # Paths 78 | # ----- ----- ----- ----- ----- ----- 79 | wds_name = args.wds 80 | pathToRoot = os.path.dirname(os.path.realpath(__file__)) 81 | pathToExps = os.path.join(pathToRoot, 'experiments') 82 | pathToLogs = os.path.join(pathToExps, 'logs') 83 | run_id = args.runid 84 | run_stamp = wds_name+'-'+args.deploy+'-'+str(args.budget)+'-'+args.adj+'-'+args.tag+'-' 85 | run_stamp = run_stamp + str(run_id) 86 | pathToDB = os.path.join(pathToRoot, 'data', 'db_' + wds_name +'_'+ args.db) 87 | pathToModel = os.path.join(pathToExps, 'models', run_stamp+'.pt') 88 | pathToMeta = os.path.join(pathToExps, 'models', run_stamp+'_meta.csv') 89 | pathToSens = os.path.join(pathToExps, 'models', run_stamp+'_sensor_nodes.csv') 90 | pathToWDS = os.path.join('water_networks', wds_name+'.inp') 91 | pathToResults = os.path.join(pathToRoot, 'experiments', args.metricsdb + '.csv') 92 | 93 | # ----- ----- ----- ----- ----- ----- 94 | # Functions 95 | # ----- ----- ----- ----- ----- ----- 96 | def restore_real_nodal_p(dta_ldr, num_nodes, num_graphs): 97 | nodal_pressures = np.empty((num_graphs, num_nodes)) 98 | end_idx = 0 99 | for i, batch in enumerate(tst_ldr): 100 | batch.to(device) 101 | p = metrics_nrm._rescale(batch.y).reshape(-1, num_nodes).detach().cpu().numpy() 102 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = p 103 | end_idx += batch.num_graphs 104 | return da.array(nodal_pressures) 105 | 106 | def predict_nodal_p_gcn(dta_ldr, num_nodes, num_graphs): 107 | model.load_state_dict(torch.load(pathToModel, map_location=torch.device(device))) 108 | model.eval() 109 | nodal_pressures = np.empty((num_graphs, num_nodes)) 110 | end_idx = 0 111 | for i, batch in enumerate(tst_ldr): 112 | batch.to(device) 113 | p = model(batch) 114 | p = metrics_nrm._rescale(p).reshape(-1, num_nodes).detach().cpu().numpy() 115 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = 
p 116 | end_idx += batch.num_graphs 117 | return da.array(nodal_pressures) 118 | 119 | def load_model(): 120 | if args.wds == 'anytown': 121 | from model.anytown import ChebNet as Net 122 | elif args.wds == 'ctown': 123 | from model.ctown import ChebNet as Net 124 | elif args.wds == 'richmond': 125 | from model.richmond import ChebNet as Net 126 | else: 127 | print('Water distribution system is unknown.\n') 128 | raise 129 | return Net 130 | 131 | def compute_metrics(p, p_hat): 132 | msec = da.multiply(p-p.mean(), p_hat-p_hat.mean()).mean() 133 | sigma = da.sqrt(da.square(p_hat-p_hat.mean()).mean()) 134 | return msec, sigma 135 | 136 | # ----- ----- ----- ----- ----- ----- 137 | # Loading datasets 138 | # ----- ----- ----- ----- ----- ----- 139 | wds = Network(pathToWDS) 140 | G = get_nx_graph(wds, mode=args.adj) 141 | 142 | sensor_shop = SensorInstaller(wds) 143 | sensor_nodes= np.loadtxt(pathToSens, dtype=np.int32) 144 | sensor_shop.set_sensor_nodes(sensor_nodes) 145 | 146 | reader = DataReader( 147 | pathToDB, 148 | n_junc = len(wds.junctions), 149 | signal_mask = sensor_shop.signal_mask(), 150 | node_order = np.array(list(G.nodes))-1 151 | ) 152 | tst_x, bias_std, scale_std = reader.read_data( 153 | dataset = 'tst', 154 | varname = 'junc_heads', 155 | rescale = 'standardize', 156 | cover = True 157 | ) 158 | tst_y, bias_nrm, scale_nrm = reader.read_data( 159 | dataset = 'tst', 160 | varname = 'junc_heads', 161 | rescale = 'normalize', 162 | cover = False 163 | ) 164 | tst_ldr = build_dataloader(G, tst_x, tst_y, args.batch, shuffle=False) 165 | metrics_nrm = Metrics(bias_nrm, scale_nrm, device) 166 | num_nodes = len(wds.junctions) 167 | num_graphs = len(tst_x) 168 | 169 | # ----- ----- ----- ----- ----- ----- 170 | # Compute metrics 171 | # ----- ----- ----- ----- ----- ----- 172 | run_stamp = run_stamp+'-'+'gcn' 173 | print(run_stamp) 174 | p = restore_real_nodal_p(tst_ldr, num_nodes, num_graphs) 175 | 176 | Net = load_model() 177 | model = Net(np.shape(tst_x)[-1], np.shape(tst_y)[-1]).to(device) 178 | p_hat = predict_nodal_p_gcn(tst_ldr, num_nodes, num_graphs) 179 | 180 | msec, sigma = compute_metrics(p, p_hat) 181 | 182 | # ----- ----- ----- ----- ----- ----- 183 | # Write metrics 184 | # ----- ----- ----- ----- ----- ----- 185 | results = [run_stamp, msec.compute(), sigma.compute()] 186 | with open(pathToResults, 'a+') as fout: 187 | csv_writer = writer(fout) 188 | csv_writer.writerow(results) 189 | -------------------------------------------------------------------------------- /test_relative_error.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import os 3 | import copy 4 | import argparse 5 | from csv import writer 6 | import numpy as np 7 | import dask.array as da 8 | import pandas as pd 9 | import torch 10 | import torch.nn.functional as F 11 | from torch_geometric.data import Data, DataLoader 12 | from torch_geometric.utils import from_networkx 13 | from epynet import Network 14 | 15 | from utils.graph_utils import get_nx_graph 16 | from utils.DataReader import DataReader 17 | from utils.Metrics import Metrics 18 | from utils.MeanPredictor import MeanPredictor 19 | from utils.dataloader import build_dataloader 20 | 21 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 22 | 23 | # ----- ----- ----- ----- ----- ----- 24 | # Command line arguments 25 | # ----- ----- ----- ----- ----- ----- 26 | parser = argparse.ArgumentParser() 27 | parser.add_argument('--wds', 28 | default = 'anytown', 29 
| type = str, 30 | help = "Water distribution system." 31 | ) 32 | parser.add_argument('--batch', 33 | default = 80, 34 | type = int, 35 | help = "Batch size." 36 | ) 37 | parser.add_argument('--setmet', 38 | default = 'fixrnd', 39 | choices = ['spc', 'fixrnd', 'allrnd'], 40 | type = str, 41 | help = "How to setup the transducers." 42 | ) 43 | parser.add_argument('--model', 44 | default = 'orig', 45 | type = str, 46 | help = "Model to use." 47 | ) 48 | parser.add_argument('--tag', 49 | default = 'def', 50 | type = str, 51 | help = "Custom tag." 52 | ) 53 | parser.add_argument('--db', 54 | default = 'doe_pumpfed_1', 55 | type = str, 56 | help = "DB.") 57 | args = parser.parse_args() 58 | 59 | # ----- ----- ----- ----- ----- ----- 60 | # Paths 61 | # ----- ----- ----- ----- ----- ----- 62 | wds_name = args.wds 63 | pathToRoot = os.path.dirname(os.path.realpath(__file__)) 64 | pathToExps = os.path.join(pathToRoot, 'experiments') 65 | pathToLogs = os.path.join(pathToExps, 'logs') 66 | pathToDB = os.path.join(pathToRoot, 'data', 'db_' + wds_name +'_'+ args.db) 67 | pathToWDS = os.path.join('water_networks', wds_name+'.inp') 68 | pathToResults = os.path.join( 69 | pathToRoot, 'experiments', 'relative_error'+'-'+args.wds+'.csv' 70 | ) 71 | 72 | # ----- ----- ----- ----- ----- ----- 73 | # Functions 74 | # ----- ----- ----- ----- ----- ----- 75 | def restore_real_nodal_p(dta_ldr, num_nodes, num_graphs): 76 | nodal_pressures = np.empty((num_graphs, num_nodes)) 77 | end_idx = 0 78 | for i, batch in enumerate(tst_ldr): 79 | batch.to(device) 80 | p = metrics._rescale(batch.y).reshape(-1, num_nodes).detach().cpu().numpy() 81 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = p 82 | end_idx += batch.num_graphs 83 | return da.array(nodal_pressures) 84 | 85 | def predict_nodal_p_gcn(dta_ldr, num_nodes, num_graphs): 86 | model.load_state_dict(torch.load(pathToModel, map_location=torch.device(device))) 87 | model.eval() 88 | nodal_pressures = np.empty((num_graphs, num_nodes)) 89 | end_idx = 0 90 | for i, batch in enumerate(tst_ldr): 91 | batch.to(device) 92 | p = model(batch) 93 | p = metrics._rescale(p).reshape(-1, num_nodes).detach().cpu().numpy() 94 | nodal_pressures[end_idx:end_idx+batch.num_graphs, :] = p 95 | end_idx += batch.num_graphs 96 | return da.array(nodal_pressures) 97 | 98 | def load_model(): 99 | if args.wds == 'anytown': 100 | from model.anytown import ChebNet as Net 101 | elif args.wds == 'ctown': 102 | from model.ctown import ChebNet as Net 103 | elif args.wds == 'richmond': 104 | from model.richmond import ChebNet as Net 105 | else: 106 | print('Water distribution system is unknown.\n') 107 | raise 108 | return Net 109 | 110 | def compute_metrics(p, p_hat): 111 | diff = da.subtract(p, p_hat) 112 | rel_diff= da.divide(diff, p) 113 | return rel_diff 114 | 115 | # ----- ----- ----- ----- ----- ----- 116 | # Loading datasets 117 | # ----- ----- ----- ----- ----- ----- 118 | wds = Network(pathToWDS) 119 | G = get_nx_graph(wds, mode='binary') 120 | 121 | run_ids = np.arange(20)+1 122 | obsrats = [.05, .1, .2, .4, .8] 123 | df_list = [] 124 | for run_id in run_ids: 125 | print('Processing {}. 
run...'.format(run_id)) 126 | seed = run_id 127 | for obsrat in obsrats: 128 | run_stamp = wds_name+'-'+args.setmet+'-'+str(obsrat)+'-binary-'+args.tag+'-' 129 | run_stamp = run_stamp + str(run_id) 130 | pathToModel = os.path.join(pathToExps, 'models', run_stamp+'.pt') 131 | 132 | reader = DataReader(pathToDB, n_junc=len(wds.junctions.uid), obsrat=obsrat, seed=seed) 133 | tst_x, _, _ = reader.read_data( 134 | dataset = 'tst', 135 | varname = 'junc_heads', 136 | rescale = 'standardize', 137 | cover = True 138 | ) 139 | tst_y, _, _ = reader.read_data( 140 | dataset = 'tst', 141 | varname = 'junc_heads', 142 | rescale = 'normalize', 143 | cover = False 144 | ) 145 | _, bias_y, scale_y = reader.read_data( 146 | dataset = 'trn', 147 | varname = 'junc_heads', 148 | rescale = 'normalize', 149 | cover = False 150 | ) 151 | tst_ldr = build_dataloader(G, tst_x, tst_y, args.batch, shuffle=False) 152 | metrics = Metrics(bias_y, scale_y, device) 153 | num_nodes = len(wds.junctions) 154 | num_graphs = len(tst_x) 155 | 156 | Net = load_model() 157 | model = Net(np.shape(tst_x)[-1], np.shape(tst_y)[-1]).to(device) 158 | p = restore_real_nodal_p(tst_ldr, num_nodes, num_graphs) 159 | p_hat = predict_nodal_p_gcn(tst_ldr, num_nodes, num_graphs) 160 | rel_err = compute_metrics(p, p_hat) 161 | df = pd.DataFrame(rel_err.compute()).abs() 162 | df['obsrat']= obsrat 163 | df['runid'] = run_id 164 | df_list.append(df) 165 | df = pd.concat(df_list, ignore_index=True) 166 | df.to_csv(pathToResults) 167 | -------------------------------------------------------------------------------- /train.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import argparse 3 | import os 4 | import glob 5 | import numpy as np 6 | import pandas as pd 7 | import torch 8 | import torch.nn.functional as F 9 | from torch_geometric.utils import from_networkx 10 | from torch_geometric.nn import ChebConv 11 | from epynet import Network 12 | 13 | from utils.graph_utils import get_nx_graph, get_sensitivity_matrix 14 | from utils.DataReader import DataReader 15 | from utils.SensorInstaller import SensorInstaller 16 | from utils.Metrics import Metrics 17 | from utils.EarlyStopping import EarlyStopping 18 | from utils.dataloader import build_dataloader 19 | 20 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 21 | # ----- ----- ----- ----- ----- ----- 22 | # Command line arguments 23 | # ----- ----- ----- ----- ----- ----- 24 | parser = argparse.ArgumentParser() 25 | parser.add_argument('--wds', 26 | default = 'anytown', 27 | type = str, 28 | help = "Water distribution system.") 29 | parser.add_argument('--db', 30 | default = 'doe_pumpfed_1', 31 | type = str, 32 | help = "DB.") 33 | parser.add_argument('--budget', 34 | default = 1, 35 | type = int, 36 | help = "Sensor budget.") 37 | parser.add_argument('--adj', 38 | default = 'binary', 39 | choices = ['binary', 'weighted', 'logarithmic', 'pruned'], 40 | type = str, 41 | help = "Type of adjacency matrix.") 42 | parser.add_argument('--deploy', 43 | default = 'random', 44 | choices = ['master', 'dist', 'hydrodist', 'hds', 'hdvar', 'random', 'xrandom'], 45 | type = str, 46 | help = "Method of sensor deployment.") 47 | parser.add_argument('--epoch', 48 | default = 1, 49 | type = int, 50 | help = "Number of epochs.") 51 | parser.add_argument('--idx', 52 | default = None, 53 | type = int, 54 | help = "Dev function.") 55 | parser.add_argument('--batch', 56 | default = '40', 57 | type = int, 58 | help = "Batch size.") 59 
| parser.add_argument('--lr', 60 | default = 0.0003, 61 | type = float, 62 | help = "Learning rate.") 63 | parser.add_argument('--decay', 64 | default = 0.000006, 65 | type = float, 66 | help = "Weight decay.") 67 | parser.add_argument('--tag', 68 | default = 'def', 69 | type = str, 70 | help = "Custom tag.") 71 | parser.add_argument('--deterministic', 72 | action = "store_true", 73 | help = "Setting random seed for sensor placement.") 74 | args = parser.parse_args() 75 | 76 | # ----- ----- ----- ----- ----- ----- 77 | # Paths 78 | # ----- ----- ----- ----- ----- ----- 79 | wds_name = args.wds 80 | pathToRoot = os.path.dirname(os.path.realpath(__file__)) 81 | pathToDB = os.path.join(pathToRoot, 'data', 'db_' + wds_name +'_'+ args.db) 82 | pathToExps = os.path.join(pathToRoot, 'experiments') 83 | pathToLogs = os.path.join(pathToExps, 'logs') 84 | run_id = 1 85 | logs = [f for f in glob.glob(os.path.join(pathToLogs, '*.csv'))] 86 | run_stamp = wds_name+'-'+args.deploy+'-'+str(args.budget)+'-'+args.adj+'-'+args.tag+'-' 87 | while os.path.join(pathToLogs, run_stamp + str(run_id)+'.csv') in logs: 88 | run_id += 1 89 | run_stamp = run_stamp + str(run_id) 90 | pathToLog = os.path.join(pathToLogs, run_stamp+'.csv') 91 | pathToModel = os.path.join(pathToExps, 'models', run_stamp+'.pt') 92 | pathToMeta = os.path.join(pathToExps, 'models', run_stamp+'_meta.csv') 93 | pathToSens = os.path.join(pathToExps, 'models', run_stamp+'_sensor_nodes.csv') 94 | pathToWDS = os.path.join('water_networks', wds_name+'.inp') 95 | 96 | # ----- ----- ----- ----- ----- ----- 97 | # Saving hyperparams 98 | # ----- ----- ----- ----- ----- ----- 99 | hyperparams = { 100 | 'db': args.db, 101 | 'deploy': args.deploy, 102 | 'budget': args.budget, 103 | 'adj': args.adj, 104 | 'epoch': args.epoch, 105 | 'batch': args.batch, 106 | 'lr': args.lr, 107 | } 108 | hyperparams = pd.Series(hyperparams) 109 | hyperparams.to_csv(pathToMeta, header=False) 110 | 111 | # ----- ----- ----- ----- ----- ----- 112 | # Functions 113 | # ----- ----- ----- ----- ----- ----- 114 | def train_one_epoch(): 115 | model.train() 116 | total_loss = 0 117 | for batch in trn_ldr: 118 | batch = batch.to(device) 119 | optimizer.zero_grad() 120 | out = model(batch) 121 | loss = F.mse_loss(out, batch.y) 122 | loss.backward() 123 | optimizer.step() 124 | total_loss += loss.item() * batch.num_graphs 125 | return total_loss / len(trn_ldr.dataset) 126 | 127 | def eval_metrics(dataloader): 128 | model.eval() 129 | n = len(dataloader.dataset) 130 | tot_loss = 0 131 | tot_rel_err = 0 132 | tot_rel_err_obs = 0 133 | tot_rel_err_hid = 0 134 | for batch in dataloader: 135 | batch = batch.to(device) 136 | out = model(batch) 137 | loss = F.mse_loss(out, batch.y) 138 | rel_err = metrics.rel_err(out, batch.y) 139 | rel_err_obs = metrics.rel_err( 140 | out, 141 | batch.y, 142 | batch.x[:, -1].type(torch.bool) 143 | ) 144 | rel_err_hid = metrics.rel_err( 145 | out, 146 | batch.y, 147 | ~batch.x[:, -1].type(torch.bool) 148 | ) 149 | tot_loss += loss.item() * batch.num_graphs 150 | tot_rel_err += rel_err.item() * batch.num_graphs 151 | tot_rel_err_obs += rel_err_obs.item() * batch.num_graphs 152 | tot_rel_err_hid += rel_err_hid.item() * batch.num_graphs 153 | loss = tot_loss / n 154 | rel_err = tot_rel_err / n 155 | rel_err_obs = tot_rel_err_obs / n 156 | rel_err_hid = tot_rel_err_hid / n 157 | return loss, rel_err, rel_err_obs, rel_err_hid 158 | 159 | # ----- ----- ----- ----- ----- ----- 160 | # Loading trn and vld datasets 161 | # ----- ----- ----- ----- ----- ----- 162 | wds 
= Network(pathToWDS) 163 | G = get_nx_graph(wds, mode=args.adj) 164 | 165 | if args.deterministic: 166 | seeds = [1, 8, 5266, 739, 88867] 167 | seed = seeds[run_id % len(seeds)] 168 | else: 169 | seed = None 170 | 171 | sensor_budget = args.budget 172 | print('Deploying {} sensors...\n'.format(sensor_budget)) 173 | 174 | sensor_shop = SensorInstaller(wds, include_pumps_as_master=True) 175 | if args.deploy == 'master': 176 | sensor_shop.set_sensor_nodes(sensor_shop.master_nodes) 177 | elif args.deploy == 'dist': 178 | sensor_shop.deploy_by_shortest_path( 179 | sensor_budget = sensor_budget, 180 | weight_by = 'length', 181 | sensor_nodes = sensor_shop.master_nodes 182 | ) 183 | elif args.deploy == 'hydrodist': 184 | sensor_shop.deploy_by_shortest_path( 185 | sensor_budget = sensor_budget, 186 | weight_by = 'iweight', 187 | sensor_nodes = sensor_shop.master_nodes 188 | ) 189 | elif args.deploy == 'hds': 190 | print('Calculating nodal sensitivity to demand change...\n') 191 | ptb = np.max(wds.junctions.basedemand) / 100 192 | S = get_sensitivity_matrix(wds, ptb) 193 | sensor_shop.deploy_by_shortest_path_with_sensitivity( 194 | sensor_budget = sensor_budget, 195 | node_weights_arr= np.sum(np.abs(S), axis=0), 196 | weight_by = 'iweight', 197 | sensor_nodes = sensor_shop.master_nodes 198 | ) 199 | elif args.deploy == 'hdvar': 200 | print('Calculating nodal head variation...\n') 201 | reader = DataReader( 202 | pathToDB, 203 | n_junc = len(wds.junctions), 204 | node_order = np.array(list(G.nodes))-1 205 | ) 206 | heads, _, _ = reader.read_data( 207 | dataset = 'trn', 208 | varname = 'junc_heads', 209 | rescale = None, 210 | cover = False 211 | ) 212 | sensor_shop.deploy_by_shortest_path_with_sensitivity( 213 | sensor_budget = sensor_budget, 214 | node_weights_arr= heads.std(axis=0).T[0], 215 | weight_by = 'iweight', 216 | sensor_nodes = sensor_shop.master_nodes 217 | ) 218 | del reader, heads 219 | elif args.deploy == 'random': 220 | sensor_shop.deploy_by_random( 221 | sensor_budget = len(sensor_shop.master_nodes)+sensor_budget, 222 | seed = seed 223 | ) 224 | elif args.deploy == 'xrandom': 225 | sensor_shop.deploy_by_xrandom( 226 | sensor_budget = sensor_budget, 227 | seed = seed, 228 | sensor_nodes = sensor_shop.master_nodes 229 | ) 230 | else: 231 | print('Sensor deployment technique is unknown.\n') 232 | raise 233 | 234 | if args.idx: 235 | sensor_shop.set_sensor_nodes([args.idx]) 236 | 237 | np.savetxt(pathToSens, np.array(list(sensor_shop.sensor_nodes)), fmt='%d') 238 | 239 | reader = DataReader( 240 | pathToDB, 241 | n_junc = len(wds.junctions), 242 | signal_mask = sensor_shop.signal_mask(), 243 | node_order = np.array(list(G.nodes))-1 244 | ) 245 | trn_x, _, _ = reader.read_data( 246 | dataset = 'trn', 247 | varname = 'junc_heads', 248 | rescale = 'standardize', 249 | cover = True 250 | ) 251 | trn_y, bias_y, scale_y = reader.read_data( 252 | dataset = 'trn', 253 | varname = 'junc_heads', 254 | rescale = 'normalize', 255 | cover = False 256 | ) 257 | vld_x, _, _ = reader.read_data( 258 | dataset = 'vld', 259 | varname = 'junc_heads', 260 | rescale = 'standardize', 261 | cover = True 262 | ) 263 | vld_y, _, _ = reader.read_data( 264 | dataset = 'vld', 265 | varname = 'junc_heads', 266 | rescale = 'normalize', 267 | cover = False 268 | ) 269 | 270 | if args.wds == 'anytown': 271 | from model.anytown import ChebNet as Net 272 | elif args.wds == 'ctown': 273 | from model.ctown import ChebNet as Net 274 | elif args.wds == 'richmond': 275 | from model.richmond import ChebNet as Net 276 | else: 
277 | print('Water distribution system is unknown.\n') 278 | raise 279 | 280 | model = Net(np.shape(trn_x)[-1], np.shape(trn_y)[-1]).to(device) 281 | optimizer = torch.optim.Adam([ 282 | dict(params=model.conv1.parameters(), weight_decay=args.decay), 283 | dict(params=model.conv2.parameters(), weight_decay=args.decay), 284 | dict(params=model.conv3.parameters(), weight_decay=args.decay), 285 | dict(params=model.conv4.parameters(), weight_decay=0) 286 | ], 287 | lr = args.lr, 288 | eps = 1e-7 289 | ) 290 | 291 | # ----- ----- ----- ----- ----- ----- 292 | # Training 293 | # ----- ----- ----- ----- ----- ----- 294 | trn_ldr = build_dataloader(G, trn_x, trn_y, args.batch, shuffle=True) 295 | vld_ldr = build_dataloader(G, vld_x, vld_y, args.batch, shuffle=False) 296 | metrics = Metrics(bias_y, scale_y, device) 297 | estop = EarlyStopping(min_delta=.00001, patience=30) 298 | results = pd.DataFrame(columns=[ 299 | 'trn_loss', 'vld_loss', 'vld_rel_err', 'vld_rel_err_o', 'vld_rel_err_h' 300 | ]) 301 | header = ''.join(['{:^15}'.format(colname) for colname in results.columns]) 302 | header = '{:^5}'.format('epoch') + header 303 | best_vld_loss = np.inf 304 | for epoch in range(0, args.epoch): 305 | trn_loss = train_one_epoch() 306 | vld_loss, vld_rel_err, vld_rel_err_obs, vld_rel_err_hid = eval_metrics(vld_ldr) 307 | new_results = pd.Series({ 308 | 'trn_loss' : trn_loss, 309 | 'vld_loss' : vld_loss, 310 | 'vld_rel_err' : vld_rel_err, 311 | 'vld_rel_err_o' : vld_rel_err_obs, 312 | 'vld_rel_err_h' : vld_rel_err_hid 313 | }) 314 | results = results.append(new_results, ignore_index=True) 315 | if epoch % 20 == 0: 316 | print(header) 317 | values = ''.join(['{:^15.6f}'.format(value) for value in new_results.values]) 318 | print('{:^5}'.format(epoch) + values) 319 | if vld_loss < best_vld_loss: 320 | best_vld_loss = vld_loss 321 | torch.save(model.state_dict(), pathToModel) 322 | if estop.step(torch.tensor(vld_loss)): 323 | print('Early stopping...') 324 | break 325 | results.to_csv(pathToLog) 326 | 327 | # ----- ----- ----- ----- ----- ----- 328 | # Testing 329 | # ----- ----- ----- ----- ----- ----- 330 | if best_vld_loss is not np.inf: 331 | print('Testing...\n') 332 | del trn_ldr, vld_ldr, trn_x, trn_y, vld_x, vld_y 333 | tst_x, _, _ = reader.read_data( 334 | dataset = 'tst', 335 | varname = 'junc_heads', 336 | rescale = 'standardize', 337 | cover = True 338 | ) 339 | tst_y, _, _ = reader.read_data( 340 | dataset = 'tst', 341 | varname = 'junc_heads', 342 | rescale = 'normalize', 343 | cover = False 344 | ) 345 | tst_ldr = build_dataloader(G, tst_x, tst_y, args.batch, shuffle=False) 346 | model.load_state_dict(torch.load(pathToModel)) 347 | model.eval() 348 | tst_loss, tst_rel_err, tst_rel_err_obs, tst_rel_err_hid = eval_metrics(tst_ldr) 349 | results = pd.Series({ 350 | 'tst_loss' : tst_loss, 351 | 'tst_rel_err' : tst_rel_err, 352 | 'tst_rel_err_o' : tst_rel_err_obs, 353 | 'tst_rel_err_h' : tst_rel_err_hid 354 | }) 355 | results.to_csv(pathToLog[:-4]+'_tst.csv') 356 | -------------------------------------------------------------------------------- /utils/DataReader.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import dask.array as da 3 | import numpy as np 4 | import zarr 5 | 6 | class DataReader(): 7 | def __init__(self, path_to_db=None, n_junc=None, obsrat=.8, seed=None, signal_mask=None, node_order=None): 8 | self.path_to_db = path_to_db 9 | self.obsrat = obsrat 10 | self.node_order = node_order 11 | if seed: 12 | 
np.random.seed(seed) 13 | if signal_mask is None: 14 | self._set_fixed_random_setup(n_junc) 15 | else: 16 | self.obs_ind = da.from_array(signal_mask) 17 | 18 | def _set_fixed_random_setup(self, n_junc): 19 | obs_ind = np.ones(shape=(n_junc,)) 20 | obs_len = int(n_junc * (1-self.obsrat)) 21 | assert obs_len > 0 22 | hid_ind = np.random.choice( 23 | np.arange(n_junc), 24 | size = obs_len, 25 | replace = False 26 | ) 27 | obs_ind[hid_ind]= 0 28 | self.obs_ind = da.from_array(obs_ind) 29 | 30 | def read_data(self, dataset='trn', varname=None, rescale=None, cover=False): 31 | store = zarr.open(self.path_to_db, mode='r') 32 | arr = da.from_zarr(url=store[dataset+'/'+varname]) 33 | if rescale == 'normalize': 34 | bias = np.float32(store['trn/'+varname].attrs['min']) 35 | scale = np.float32(store['trn/'+varname].attrs['range']) 36 | arr = (arr-bias) / scale 37 | elif rescale == 'standardize': 38 | bias = np.float32(store['trn/'+varname].attrs['avg']) 39 | scale = np.float32(store['trn/'+varname].attrs['std']) 40 | arr = (arr-bias) / scale 41 | else: 42 | bias = np.nan 43 | scale = np.nan 44 | 45 | obs_mx = da.tile(self.obs_ind, (np.shape(arr)[0], 1)) 46 | if cover: 47 | arr = da.multiply(arr, obs_mx) 48 | arr = np.expand_dims(arr.compute(), axis=2) 49 | obs_mx = np.expand_dims(obs_mx.compute(), axis=2) 50 | arr = np.concatenate((arr, obs_mx), axis=2) 51 | else: 52 | arr = np.expand_dims(arr.compute(), axis=2) 53 | 54 | if self.node_order is not None: 55 | arr = arr[:, self.node_order, :] 56 | return arr, bias, scale 57 | -------------------------------------------------------------------------------- /utils/EarlyStopping.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Early stopping for PyTorch from Stefano Nardo. 3 | # Original version: 4 | # https://gist.github.com/stefanonardo/693d96ceb2f531fa05db530f3e21517d 5 | # . 
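A short usage sketch of the class below, following the pattern of `train.py` and `hyperopt.py`; the loss values are made up for illustration:

```
import torch
from utils.EarlyStopping import EarlyStopping

stopper = EarlyStopping(mode='min', min_delta=1e-5, patience=3)
for epoch, vld_loss in enumerate([0.9, 0.5, 0.51, 0.52, 0.53]):
    if stopper.step(torch.tensor(vld_loss)):
        print('early stop at epoch', epoch)  # three epochs without beating 0.5
        break
print('best validation loss:', float(stopper.best))  # 0.5
```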
6 | import torch 7 | 8 | class EarlyStopping(object): 9 | def __init__(self, mode='min', min_delta=0, patience=10, percentage=False): 10 | self.mode = mode 11 | self.min_delta = min_delta 12 | self.patience = patience 13 | self.best = None 14 | self.num_bad_epochs = 0 15 | self.is_better = None 16 | self._init_is_better(mode, min_delta, percentage) 17 | 18 | if patience == 0: 19 | self.is_better = lambda a, b: True 20 | self.step = lambda a: False 21 | 22 | def step(self, metrics): 23 | if self.best is None: 24 | self.best = metrics 25 | return False 26 | 27 | if torch.isnan(metrics): 28 | return True 29 | 30 | if self.is_better(metrics, self.best): 31 | self.num_bad_epochs = 0 32 | self.best = metrics 33 | else: 34 | self.num_bad_epochs += 1 35 | 36 | if self.num_bad_epochs >= self.patience: 37 | return True 38 | 39 | return False 40 | 41 | def _init_is_better(self, mode, min_delta, percentage): 42 | if mode not in {'min', 'max'}: 43 | raise ValueError('mode ' + mode + ' is unknown!') 44 | if not percentage: 45 | if mode == 'min': 46 | self.is_better = lambda a, best: a < best - min_delta 47 | if mode == 'max': 48 | self.is_better = lambda a, best: a > best + min_delta 49 | else: 50 | if mode == 'min': 51 | self.is_better = lambda a, best: a < best - ( 52 | best * min_delta / 100) 53 | if mode == 'max': 54 | self.is_better = lambda a, best: a > best + ( 55 | best * min_delta / 100) 56 | -------------------------------------------------------------------------------- /utils/MeanPredictor.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import torch 3 | 4 | class MeanPredictor(): 5 | def __init__(self, device): 6 | pass 7 | 8 | def pred(self, y_true, mask=None): 9 | if mask is None: 10 | raise NotImplementedError 11 | else: 12 | y_pred = torch.zeros_like(y_true).squeeze(dim=1) 13 | pred_val= torch.masked_select(y_true[:, 0], mask).mean() 14 | y_pred += (pred_val*~mask) 15 | y_pred += torch.multiply(y_true.squeeze(dim=1), mask) 16 | return y_pred 17 | -------------------------------------------------------------------------------- /utils/Metrics.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import torch 3 | 4 | class Metrics(): 5 | def __init__(self, bias, scale, device): 6 | self.bias = torch.tensor(bias).to(device) 7 | self.scale = torch.tensor(scale).to(device) 8 | 9 | def _rescale(self, data): 10 | return torch.add( 11 | torch.multiply( 12 | data, 13 | self.scale 14 | ), 15 | self.bias 16 | ) 17 | 18 | def rel_err(self, y_pred, y_true, mask=None): 19 | if mask is None: 20 | y_pred = y_pred[:, 0] 21 | y_true = y_true[:, 0] 22 | else: 23 | y_pred = torch.masked_select(y_pred[:, 0], mask) 24 | y_true = torch.masked_select(y_true[:, 0], mask) 25 | y_pred = self._rescale(y_pred) 26 | y_true = self._rescale(y_true) 27 | err = torch.subtract(y_true, y_pred) 28 | rel_err = torch.abs(torch.divide(err, y_true)) 29 | return torch.mean(rel_err) 30 | -------------------------------------------------------------------------------- /utils/SensorInstaller.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import networkx as nx 3 | import numpy as np 4 | import random 5 | 6 | from utils.graph_utils import get_nx_graph 7 | 8 | class SensorInstaller(): 9 | def __init__(self, wds, include_pumps_as_master=False): 10 | self.wds = wds 11 | self.G = get_nx_graph(wds, mode='weighted') 12 | self.include_pumps = 
include_pumps_as_master 13 | self.master_nodes = self._collect_master_nodes(self.wds, self.G) 14 | self.sensor_nodes = set() 15 | 16 | def _collect_master_nodes(self, wds, G): 17 | master_nodes = set() 18 | 19 | if self.include_pumps: 20 | for pump in wds.pumps: 21 | node_a = pump.to_node.index 22 | node_b = pump.from_node.index 23 | if node_a in set(G.nodes): 24 | master_nodes.add(node_a) 25 | elif node_b in set(G.nodes): 26 | master_nodes.add(node_b) 27 | else: 28 | print('Neither node {} nor {} of pump {} not found in graph.'.format( 29 | node_a, node_b, pump)) 30 | raise 31 | 32 | for tank in wds.tanks: 33 | node_a = wds.links[list(tank.links.keys())[0]].from_node.index 34 | node_b = wds.links[list(tank.links.keys())[0]].to_node.index 35 | if node_a in set(G.nodes): 36 | master_nodes.add(node_a) 37 | elif node_b in set(G.nodes): 38 | master_nodes.add(node_b) 39 | else: 40 | print('Neither node {} nor {} of tank {} not found in graph.'.format( 41 | node_a, node_b, tank)) 42 | raise 43 | 44 | for reservoir in wds.reservoirs: 45 | node_a = wds.links[list(reservoir.links.keys())[0]].from_node.index 46 | node_b = wds.links[list(reservoir.links.keys())[0]].to_node.index 47 | if node_a in set(G.nodes): 48 | master_nodes.add(node_a) 49 | elif node_b in set(G.nodes): 50 | master_nodes.add(node_b) 51 | else: 52 | print('Neither node {} nor {} of reservoir {} not found in graph.'.format( 53 | node_a, node_b, reservoir)) 54 | raise 55 | return master_nodes 56 | 57 | def get_shortest_path_lengths(self, nodes): 58 | lengths = np.zeros(shape=(len(nodes), len(nodes))) 59 | for i, source in enumerate(nodes): 60 | for j, target in enumerate(nodes): 61 | lengths[i, j] = nx.shortest_path_length(self.G, source=source, target=target) 62 | return lengths 63 | 64 | def get_shortest_paths(self, nodes): 65 | paths = [] 66 | for source in list(nodes)[:int(len(nodes)//2+len(nodes)%2)]: 67 | for target in nodes: 68 | path = nx.shortest_path(self.G, source=source, target=target, weight=None) 69 | paths.append(nx.path_graph(path)) 70 | return paths 71 | 72 | def get_shortest_path_to_node_collection(self, target, node_collection, weight=None): 73 | closest_node= target 74 | path_length = np.inf 75 | for source in node_collection: 76 | tempo = nx.shortest_path_length( 77 | self.G, 78 | source = source, 79 | target = target, 80 | weight = weight 81 | ) 82 | if tempo < path_length: 83 | closest_node= source 84 | path_length = tempo 85 | assert closest_node is not target 86 | return set(nx.shortest_path(self.G, source=closest_node, target=target)) 87 | 88 | def set_sensor_nodes(self, sensor_nodes): 89 | self.sensor_nodes = set(sensor_nodes) 90 | 91 | def deploy_by_random(self, sensor_budget, seed=None): 92 | num_nodes = len(self.G.nodes) 93 | signal_mask = np.zeros(shape=(num_nodes,), dtype=np.int8) 94 | if seed: 95 | np.random.seed(seed) 96 | observed_nodes = np.random.choice( 97 | np.arange(num_nodes), 98 | size = sensor_budget, 99 | replace = False 100 | ) 101 | signal_mask[observed_nodes] = 1 102 | self.sensor_nodes = set(self.wds.junctions.index[np.where(signal_mask)[0]]) 103 | 104 | def deploy_by_xrandom(self, sensor_budget, seed=None, sensor_nodes=None): 105 | random.seed(seed) 106 | if not sensor_nodes: 107 | sensor_nodes = set() 108 | free_nodes = set(self.G.nodes).difference(sensor_nodes) 109 | rnd_nodes = random.sample(free_nodes, sensor_budget) 110 | self.sensor_nodes = sensor_nodes.union(rnd_nodes) 111 | 112 | def deploy_by_random_deprecated(self, sensor_budget, seed=None): 113 | num_nodes = 
--------------------------------------------------------------------------------
/utils/SensorInstaller.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | import networkx as nx
3 | import numpy as np
4 | import random
5 | 
6 | from utils.graph_utils import get_nx_graph
7 | 
8 | class SensorInstaller():
9 |     def __init__(self, wds, include_pumps_as_master=False):
10 |         self.wds = wds
11 |         self.G = get_nx_graph(wds, mode='weighted')
12 |         self.include_pumps = include_pumps_as_master
13 |         self.master_nodes = self._collect_master_nodes(self.wds, self.G)
14 |         self.sensor_nodes = set()
15 | 
16 |     def _collect_master_nodes(self, wds, G):
17 |         master_nodes = set()
18 | 
19 |         if self.include_pumps:
20 |             for pump in wds.pumps:
21 |                 node_a = pump.to_node.index
22 |                 node_b = pump.from_node.index
23 |                 if node_a in set(G.nodes):
24 |                     master_nodes.add(node_a)
25 |                 elif node_b in set(G.nodes):
26 |                     master_nodes.add(node_b)
27 |                 else:
28 |                     print('Neither node {} nor {} of pump {} found in the graph.'.format(
29 |                         node_a, node_b, pump))
30 |                     raise KeyError('Pump connection missing from the graph.')
31 | 
32 |         for tank in wds.tanks:
33 |             node_a = wds.links[list(tank.links.keys())[0]].from_node.index
34 |             node_b = wds.links[list(tank.links.keys())[0]].to_node.index
35 |             if node_a in set(G.nodes):
36 |                 master_nodes.add(node_a)
37 |             elif node_b in set(G.nodes):
38 |                 master_nodes.add(node_b)
39 |             else:
40 |                 print('Neither node {} nor {} of tank {} found in the graph.'.format(
41 |                     node_a, node_b, tank))
42 |                 raise KeyError('Tank connection missing from the graph.')
43 | 
44 |         for reservoir in wds.reservoirs:
45 |             node_a = wds.links[list(reservoir.links.keys())[0]].from_node.index
46 |             node_b = wds.links[list(reservoir.links.keys())[0]].to_node.index
47 |             if node_a in set(G.nodes):
48 |                 master_nodes.add(node_a)
49 |             elif node_b in set(G.nodes):
50 |                 master_nodes.add(node_b)
51 |             else:
52 |                 print('Neither node {} nor {} of reservoir {} found in the graph.'.format(
53 |                     node_a, node_b, reservoir))
54 |                 raise KeyError('Reservoir connection missing from the graph.')
55 |         return master_nodes
56 | 
57 |     def get_shortest_path_lengths(self, nodes):
58 |         lengths = np.zeros(shape=(len(nodes), len(nodes)))
59 |         for i, source in enumerate(nodes):
60 |             for j, target in enumerate(nodes):
61 |                 lengths[i, j] = nx.shortest_path_length(self.G, source=source, target=target)
62 |         return lengths
63 | 
64 |     def get_shortest_paths(self, nodes):
65 |         paths = []
66 |         for source in list(nodes)[:int(len(nodes)//2+len(nodes)%2)]:
67 |             for target in nodes:
68 |                 path = nx.shortest_path(self.G, source=source, target=target, weight=None)
69 |                 paths.append(nx.path_graph(path))
70 |         return paths
71 | 
72 |     def get_shortest_path_to_node_collection(self, target, node_collection, weight=None):
73 |         closest_node = target
74 |         path_length = np.inf
75 |         for source in node_collection:
76 |             tempo = nx.shortest_path_length(
77 |                 self.G,
78 |                 source = source,
79 |                 target = target,
80 |                 weight = weight
81 |             )
82 |             if tempo < path_length:
83 |                 closest_node = source
84 |                 path_length = tempo
85 |         assert closest_node != target
86 |         return set(nx.shortest_path(self.G, source=closest_node, target=target))
87 | 
88 |     def set_sensor_nodes(self, sensor_nodes):
89 |         self.sensor_nodes = set(sensor_nodes)
90 | 
91 |     def deploy_by_random(self, sensor_budget, seed=None):
92 |         num_nodes = len(self.G.nodes)
93 |         signal_mask = np.zeros(shape=(num_nodes,), dtype=np.int8)
94 |         if seed is not None:
95 |             np.random.seed(seed)
96 |         observed_nodes = np.random.choice(
97 |             np.arange(num_nodes),
98 |             size = sensor_budget,
99 |             replace = False
100 |         )
101 |         signal_mask[observed_nodes] = 1
102 |         self.sensor_nodes = set(self.wds.junctions.index[np.where(signal_mask)[0]])
103 | 
104 |     def deploy_by_xrandom(self, sensor_budget, seed=None, sensor_nodes=None):
105 |         random.seed(seed)
106 |         if not sensor_nodes:
107 |             sensor_nodes = set()
108 |         free_nodes = set(self.G.nodes).difference(sensor_nodes)
109 |         rnd_nodes = random.sample(sorted(free_nodes), sensor_budget)
110 |         self.sensor_nodes = sensor_nodes.union(rnd_nodes)
111 | 
112 |     def deploy_by_random_deprecated(self, sensor_budget, seed=None):
113 |         num_nodes = len(self.G.nodes)
114 |         signal_mask = np.ones(shape=(num_nodes,), dtype=np.int8)
115 |         if seed is not None:
116 |             np.random.seed(seed)
117 |         unobserved_nodes = np.random.choice(
118 |             np.arange(num_nodes),
119 |             size = num_nodes-sensor_budget,
120 |             replace = False
121 |         )
122 |         signal_mask[unobserved_nodes] = 0
123 |         self.sensor_nodes = set(self.wds.junctions.index[np.where(signal_mask)[0]])
124 | 
125 |     def deploy_by_shortest_path(self, sensor_budget, weight_by=None, sensor_nodes=None):
126 |         if not sensor_nodes:
127 |             sensor_nodes = set()
128 |         forbidden_nodes = self.master_nodes.union(sensor_nodes)
129 |         for _ in range(sensor_budget):
130 |             path_lengths = dict()
131 |             for node in forbidden_nodes:
132 |                 path_lengths[node] = 0
133 |             for node in set(self.G.nodes).difference(forbidden_nodes):
134 |                 path_lengths[node] = np.inf
135 | 
136 |             for node in forbidden_nodes:
137 |                 tempo = nx.shortest_path_length(
138 |                     self.G,
139 |                     source = node,
140 |                     weight = weight_by
141 |                 )
142 |                 for key, value in tempo.items():
143 |                     if (key not in forbidden_nodes) and (path_lengths[key] > value):
144 |                         path_lengths[key] = value
145 | 
146 |             sensor_node = [candidate for candidate, path_length in path_lengths.items()
147 |                 if path_length == np.max(list(path_lengths.values()))][0]
148 |             sensor_nodes.add(sensor_node)
149 |             forbidden_nodes.add(sensor_node)
150 |         self.sensor_nodes = sensor_nodes
151 | 
152 |     def deploy_by_shortest_path_v2(self, sensor_budget, weight_by=None):
153 |         """Computationally inefficient implementation."""
154 |         sensor_nodes = set()
155 |         forbidden_nodes = self.master_nodes
156 |         for _ in range(sensor_budget):
157 |             path_lengths = dict()
158 |             for node in forbidden_nodes:
159 |                 path_lengths[node] = 0
160 |             for node in set(self.G.nodes).difference(forbidden_nodes):
161 |                 path_lengths[node] = np.inf
162 | 
163 |             for node in forbidden_nodes:
164 |                 tempo = nx.shortest_path_length(
165 |                     self.G,
166 |                     source = node,
167 |                     weight = weight_by
168 |                 )
169 |                 for key, value in tempo.items():
170 |                     if (key not in forbidden_nodes) and (path_lengths[key] > value):
171 |                         path_lengths[key] = value
172 | 
173 |             sensor_node = [candidate for candidate, path_length in path_lengths.items()
174 |                 if path_length == np.max(list(path_lengths.values()))][0]
175 |             branch = self.get_shortest_path_to_node_collection(
176 |                 sensor_node,
177 |                 forbidden_nodes,
178 |                 weight = weight_by
179 |             )
180 |             sensor_nodes.add(sensor_node)
181 |             forbidden_nodes = forbidden_nodes.union(branch)
182 |         self.sensor_nodes = sensor_nodes
183 | 
184 |     def deploy_by_shortest_path_with_sensitivity(
185 |             self, sensor_budget, node_weights_arr, weight_by=None, aversion=0, sensor_nodes=None):
186 |         assert aversion >= 0
187 |         if not sensor_nodes:
188 |             sensor_nodes = set()
189 |         forbidden_nodes = self.master_nodes.union(sensor_nodes)
190 |         node_weights = dict()
191 |         for i, junc in enumerate(self.wds.junctions):
192 |             node_weights[junc.index] = node_weights_arr[i]
193 | 
194 |         for _ in range(sensor_budget):
195 |             path_lengths = dict()
196 |             for node in forbidden_nodes:
197 |                 path_lengths[node] = 0
198 |             for node in set(self.G.nodes).difference(forbidden_nodes):
199 |                 path_lengths[node] = np.inf
200 | 
201 |             for node in self.master_nodes.union(sensor_nodes):
202 |                 tempo = nx.shortest_path_length(
203 |                     self.G,
204 |                     source = node,
205 |                     weight = weight_by
206 |                 )
207 |                 for key, value in tempo.items():
208 |                     if (key not in forbidden_nodes) and (path_lengths[key] > node_weights[key]*value):
209 |                         path_lengths[key] = node_weights[key]*value
210 |             sensor_node = [candidate for candidate, path_length in path_lengths.items()
211 |                 if path_length == np.max(list(path_lengths.values()))][0]
212 |             sensor_nodes.add(sensor_node)
213 | 
214 |             forbidden_nodes = forbidden_nodes.union(set(
215 |                 nx.algorithms.shortest_paths.single_source_shortest_path(
216 |                     self.G, sensor_node, cutoff=aversion
217 |                 ).keys()
218 |             ))
219 |         self.sensor_nodes = sensor_nodes
220 | 
221 |     def deploy_by_shortest_path_with_sensitivity_v2(
222 |             self, sensor_budget, sensitivity_matrix, weight_by=None, aversion=0):
223 |         if aversion > 0:
224 |             print('Aversion is not implemented in this method.')
225 |         sensor_nodes = set()
226 |         forbidden_nodes = self.master_nodes
227 |         node_weights = dict()
228 |         nodal_sensitivities = np.sum(np.abs(sensitivity_matrix), axis=0)
229 |         for i, junc in enumerate(self.wds.junctions):
230 |             node_weights[junc.index] = nodal_sensitivities[i]
231 | 
232 |         for _ in range(sensor_budget):
233 |             path_lengths = dict()
234 |             for node in forbidden_nodes:
235 |                 path_lengths[node] = 0
236 |             for node in set(self.G.nodes).difference(forbidden_nodes):
237 |                 path_lengths[node] = np.inf
238 | 
239 |             for node in self.master_nodes.union(sensor_nodes):
240 |                 tempo = nx.shortest_path_length(
241 |                     self.G,
242 |                     source = node,
243 |                     weight = weight_by
244 |                 )
245 |                 for key, value in tempo.items():
246 |                     if (key not in forbidden_nodes) and (path_lengths[key] > node_weights[key]*value):
247 |                         path_lengths[key] = node_weights[key]*value
248 |             sensor_node = [candidate for candidate, path_length in path_lengths.items()
249 |                 if path_length == np.max(list(path_lengths.values()))][0]
250 |             branch = self.get_shortest_path_to_node_collection(
251 |                 sensor_node,
252 |                 forbidden_nodes,
253 |                 weight = weight_by
254 |             )
255 |             sensor_nodes.add(sensor_node)
256 |             forbidden_nodes = forbidden_nodes.union(branch)
257 |         self.sensor_nodes = sensor_nodes
258 | 
259 |     def master_node_mask(self):
260 |         mask = np.zeros(shape=(len(self.wds.junctions),), dtype=np.float32)
261 |         if self.master_nodes:
262 |             for index in list(self.master_nodes):
263 |                 mask[np.where(self.wds.junctions.index.values == index)[0][0]] = 1.
264 |         else:
265 |             print('There are no master nodes in the system.')
266 |         return mask
267 | 
268 |     def signal_mask(self):
269 |         signal_mask = np.zeros(shape=(len(self.wds.junctions),), dtype=np.float32)
270 |         if self.sensor_nodes:
271 |             for index in list(self.sensor_nodes):
272 |                 signal_mask[
273 |                     np.where(self.wds.junctions.index.values == index)[0][0]
274 |                 ] = 1.
275 |         else:
276 |             print('Sensors are not installed.')
277 |         return signal_mask
278 | 
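A minimal deployment sketch for the installer above (illustrative; it assumes the repository root as working directory, the epynet package from the pip requirements, and an arbitrary budget of 5 sensors):

```python
from epynet import Network
from utils.SensorInstaller import SensorInstaller

wds = Network('water_networks/ctown.inp')           # EPANET model shipped with the repo
installer = SensorInstaller(wds)
installer.deploy_by_shortest_path(sensor_budget=5)  # greedy: always pick the farthest free node
print(installer.sensor_nodes)                       # junction indices of the installed sensors
print(installer.signal_mask().sum())                # 5.0 -> binary mask over the junctions
```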
--------------------------------------------------------------------------------
/utils/baselines.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | import numpy as np
3 | import networkx as nx
4 | 
5 | def interpolated_regularization(S, X):
6 |     """ Interpolated regularization as introduced in
7 |     Belkin et al.: Regularization and Semi-Supervised Learning on Large Graphs,
8 |     DOI: 10.1007/978-3-540-27819-1_43.
9 |     """
10 |     idx_on = np.where(X[0,:,1] == 1)[0]
11 |     idx_off = np.array(list(set(np.arange(X.shape[1]))-set(idx_on)), dtype=int)
12 |     idx_off.sort()
13 | 
14 |     F = X[:, idx_on, 0].copy()
15 |     S2 = S[idx_on, :][:, idx_off]
16 |     S3 = S[idx_off, :][:, idx_off]
17 |     try:
18 |         S3_inv = np.linalg.inv(S3)
19 |     except np.linalg.LinAlgError:
20 |         S3_inv = np.linalg.pinv(S3)  # fall back if the unobserved block is singular
21 | 
22 |     S32 = np.dot(S3_inv, S2.T)
23 |     mu = - np.sum(np.dot(S32, F.T), axis=0) /\
24 |         np.sum(np.dot(S32, np.ones_like(F.T)), axis=0)
25 |     F_tilde = np.dot(S32, F.T+mu).T
26 | 
27 |     X_hat = X[:, :, 0].copy()
28 |     X_hat[:, idx_off] = F_tilde
29 |     return X_hat
30 | 
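A small, self-contained sketch of calling the baseline above (illustrative; the 4-node path graph and its Laplacian are made up to show the expected shapes: channel 0 of `X` holds the nodal values, channel 1 flags the observed nodes):

```python
import numpy as np
import networkx as nx
from utils.baselines import interpolated_regularization

G = nx.path_graph(4)                                # toy 4-node network
S = nx.laplacian_matrix(G).toarray().astype(float)  # smoothing matrix of the regularizer
X = np.zeros((1, 4, 2))                             # (n_scenes, n_nodes, 2)
X[0, :, 0] = [1., 0., 0., 4.]                       # nodal values (unobserved entries ignored)
X[0, [0, 3], 1] = 1.                                # nodes 0 and 3 carry sensors
X_hat = interpolated_regularization(S, X)
print(X_hat.shape)                                  # (1, 4): observed values kept, the rest interpolated
```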
--------------------------------------------------------------------------------
/utils/dataloader.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | import copy
3 | import torch
4 | from torch_geometric.utils import from_networkx
5 | from torch_geometric.loader import DataLoader
6 | 
7 | def build_dataloader(G, set_x, set_y, batch_size, shuffle):
8 |     data = []
9 |     master_graph = from_networkx(G)
10 |     for x, y in zip(set_x, set_y):
11 |         graph = copy.deepcopy(master_graph)
12 |         graph.x = torch.Tensor(x)
13 |         graph.y = torch.Tensor(y)
14 |         data.append(graph)
15 |     loader = DataLoader(data, batch_size=batch_size, shuffle=shuffle)
16 |     return loader
17 | 
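A shape-level sketch of feeding the loader above (illustrative; the toy graph and random node features stand in for the generated scenes):

```python
import numpy as np
import networkx as nx
from utils.dataloader import build_dataloader

G = nx.path_graph(4)                     # toy topology, 4 nodes
set_x = np.random.rand(8, 4, 2)          # 8 scenes, 2 input channels per node
set_y = np.random.rand(8, 4, 1)          # 1 target channel per node
loader = build_dataloader(G, set_x, set_y, batch_size=4, shuffle=True)
for batch in loader:                     # torch_geometric mini-batches
    print(batch.x.shape, batch.y.shape)  # torch.Size([16, 2]) torch.Size([16, 1])
```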
--------------------------------------------------------------------------------
/utils/envs/conda_env-cpu.yml:
--------------------------------------------------------------------------------
1 | name: graco
2 | channels:
3 |   - pytorch
4 |   - pyg
5 |   - conda-forge
6 | dependencies:
7 |   - python
8 |   - pytorch
9 |   - torchvision
10 |   - torchaudio
11 |   - pyg
12 |   - cpuonly
13 |   - dask
14 |   - jupyter
15 |   - networkx
16 |   - matplotlib
17 |   - pandas
18 |   - pip
19 |   - pyyaml
20 |   - wheel
21 |   - zarr
22 |   - pip:
23 |     - epynet
24 |     - optuna
25 |     - pyDOE2
26 |     - ray
27 |     - torchinfo
28 | 
--------------------------------------------------------------------------------
/utils/envs/conda_env-cuda.yml:
--------------------------------------------------------------------------------
1 | name: graco
2 | channels:
3 |   - pytorch
4 |   - pyg
5 |   - conda-forge
6 | dependencies:
7 |   - python
8 |   - pytorch
9 |   - torchvision
10 |   - torchaudio
11 |   - pyg
12 |   - cudatoolkit=11.3
13 |   - dask
14 |   - jupyter
15 |   - networkx
16 |   - matplotlib
17 |   - pandas
18 |   - pip
19 |   - pyyaml
20 |   - wheel
21 |   - zarr
22 |   - pip:
23 |     - epynet
24 |     - optuna
25 |     - pyDOE2
26 |     - ray
27 |     - torchinfo
28 | 
--------------------------------------------------------------------------------
/utils/graph_utils.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | import numpy as np
3 | import networkx as nx
4 | 
5 | def get_nx_graph(wds, mode='binary'):
6 |     junc_list = []
7 |     for junction in wds.junctions:
8 |         junc_list.append(junction.index)
9 |     G = nx.Graph()
10 |     if mode == 'binary':
11 |         for pipe in wds.pipes:
12 |             if (pipe.from_node.index in junc_list) and (pipe.to_node.index in junc_list):
13 |                 G.add_edge(pipe.from_node.index, pipe.to_node.index, weight=1., length=pipe.length)
14 |         for pump in wds.pumps:
15 |             if (pump.from_node.index in junc_list) and (pump.to_node.index in junc_list):
16 |                 G.add_edge(pump.from_node.index, pump.to_node.index, weight=1., length=0.)
17 |         for valve in wds.valves:
18 |             if (valve.from_node.index in junc_list) and (valve.to_node.index in junc_list):
19 |                 G.add_edge(valve.from_node.index, valve.to_node.index, weight=1., length=0.)
20 |     elif mode == 'weighted':
21 |         max_weight = 0
22 |         max_iweight = 0
23 |         for pipe in wds.pipes:
24 |             if (pipe.from_node.index in junc_list) and (pipe.to_node.index in junc_list):
25 |                 weight = ((pipe.diameter*3.281)**4.871 * pipe.roughness**1.852) / (4.727*pipe.length*3.281)  # pipe conductance from the Hazen-Williams formula
26 |                 iweight = weight**-1
27 |                 G.add_edge(pipe.from_node.index, pipe.to_node.index,
28 |                     weight = weight,
29 |                     iweight = iweight,
30 |                     length = pipe.length
31 |                 )
32 |                 if weight > max_weight:
33 |                     max_weight = weight
34 |                 if iweight > max_iweight:
35 |                     max_iweight = iweight
36 |         for (_,_,d) in G.edges(data=True):
37 |             d['weight'] /= max_weight
38 |             d['iweight'] /= max_iweight
39 |         for pump in wds.pumps:
40 |             if (pump.from_node.index in junc_list) and (pump.to_node.index in junc_list):
41 |                 G.add_edge(pump.from_node.index, pump.to_node.index, weight=1., iweight=1., length=0.)
42 |         for valve in wds.valves:
43 |             if (valve.from_node.index in junc_list) and (valve.to_node.index in junc_list):
44 |                 G.add_edge(valve.from_node.index, valve.to_node.index, weight=1., iweight=1., length=0.)
45 |     elif mode == 'logarithmic':
46 |         max_weight = 0
47 |         for pipe in wds.pipes:
48 |             if (pipe.from_node.index in junc_list) and (pipe.to_node.index in junc_list):
49 |                 weight = np.log10(((pipe.diameter*3.281)**4.871 * pipe.roughness**1.852) / (4.727*pipe.length*3.281))
50 |                 G.add_edge(pipe.from_node.index, pipe.to_node.index, weight=float(weight), length=pipe.length)
51 |                 if weight > max_weight:
52 |                     max_weight = weight
53 |         for (_,_,d) in G.edges(data=True):
54 |             d['weight'] /= max_weight
55 |         for pump in wds.pumps:
56 |             if (pump.from_node.index in junc_list) and (pump.to_node.index in junc_list):
57 |                 G.add_edge(pump.from_node.index, pump.to_node.index, weight=1., length=0.)
58 |         for valve in wds.valves:
59 |             if (valve.from_node.index in junc_list) and (valve.to_node.index in junc_list):
60 |                 G.add_edge(valve.from_node.index, valve.to_node.index, weight=1., length=0.)
61 |     elif mode == 'pruned':
62 |         for pipe in wds.pipes:
63 |             if (pipe.from_node.index in junc_list) and (pipe.to_node.index in junc_list):
64 |                 G.add_edge(pipe.from_node.index, pipe.to_node.index, weight=0., length=pipe.length)
65 |         for pump in wds.pumps:
66 |             if (pump.from_node.index in junc_list) and (pump.to_node.index in junc_list):
67 |                 G.add_edge(pump.from_node.index, pump.to_node.index, weight=0., length=0.)
68 |         for valve in wds.valves:
69 |             if (valve.from_node.index in junc_list) and (valve.to_node.index in junc_list):
70 |                 G.add_edge(valve.from_node.index, valve.to_node.index, weight=0., length=0.)
71 |     return G
72 | 
73 | def get_sensitivity_matrix(wds, perturbance):
74 |     wds.solve()
75 |     base_demands = wds.junctions.basedemand
76 |     base_heads = wds.junctions.head
77 |     S = np.empty(shape=(len(wds.junctions), len(wds.junctions)), dtype=np.float64)
78 |     for i, junc in enumerate(wds.junctions):
79 |         wds.junctions.basedemand = base_demands  # reset all demands before perturbing the next junction
80 |         junc.basedemand += perturbance
81 |         wds.solve()
82 |         S[i, :] = (wds.junctions.head-base_heads) / base_heads
83 |     wds.junctions.basedemand = base_demands  # undo the last perturbation
84 |     return S
85 | 
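A short sketch of the two helpers above (illustrative; it assumes the repository root as working directory and uses the bundled anytown model; note that building the sensitivity matrix runs one hydraulic solve per junction):

```python
from epynet import Network
from utils.graph_utils import get_nx_graph, get_sensitivity_matrix

wds = Network('water_networks/anytown.inp')
G = get_nx_graph(wds, mode='weighted')           # junctions only; pumps/valves get unit weight
print(G.number_of_nodes(), G.number_of_edges())
S = get_sensitivity_matrix(wds, perturbance=.1)  # head response to a small extra demand at each junction
print(S.shape)                                   # (n_junctions, n_junctions)
```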
--------------------------------------------------------------------------------
/water_networks/anytown.inp:
--------------------------------------------------------------------------------
1 | [TITLE]
2 | Anytown master file for rl-wds
3 | 
4 | [JUNCTIONS]
5 | ;ID    Elev       Demand       Pattern
6 |  1     6.0960     31.545834    1    ;
7 |  2     15.2400    12.618333    1    ;
8 |  3     15.2400    12.618333    1    ;
9 |  4     15.2400    37.855000    1    ;
10 |  5     24.3840    37.855000    1    ;
11 |  6     24.3840    37.855000    1    ;
12 |  7     24.3840    37.855000    1    ;
13 |  8     24.3840    25.236666    1    ;
14 |  9     36.5760    25.236666    1    ;
15 |  10    36.5760    25.236666    1    ;
16 |  11    36.5760    25.236666    1    ;
17 |  12    15.2400    31.545834    1    ;
18 |  13    15.2400    31.545834    1    ;
19 |  14    15.2400    31.545834    1    ;
20 |  15    15.2400    31.545834    1    ;
21 |  16    36.5760    25.236666    1    ;
22 |  17    36.5760    63.091668    1    ;
23 |  18    15.2400    31.545834    1    ;
24 |  19    15.2400    63.091668    1    ;
25 |  20    6.0960     0.000000     1    ;
26 |  21    15.2400    0.000000     1    ;
27 |  22    36.5760    0.000000     1    ;
28 | 
29 | [RESERVOIRS]
30 | ;ID    Head      Pattern
31 |  40    3.0480    ;
32 | 
33 | [TANKS]
34 | ;ID    Elevation    InitLevel    MinLevel    MaxLevel    Diameter    MinVol      VolCurve
35 |  41    66.5350      3.0510       3.0480      10.6680     9.9532      237.0800    ;
36 |  42    66.5350      3.0510       3.0480      10.6680     9.9532      237.0800    ;
37 | 
38 | [PIPES]
39 | ;ID    Node1    Node2    Length       Diameter    Roughness    MinorLoss    Status
40 |  1     1        2        3657.6001    304.8000    120.0000     0.0000       Open    ;Res
41 |  2     1        12       3657.6001    304.8000    70.0000      0.0000       Open    ;city
42 |  3     1        13       3657.6001    406.4000    70.0000      0.0000       Open    ;city
43 |  4     1        20       30.4800      762.0000    130.0000     0.0000       Open    ;city
44 |  5     2        3        1828.8000    254.0000    120.0000     0.0000       Open    ;Res
45 |  6     2        4        2743.2000    254.0000    120.0000     0.0000       Open    ;Res
46 |  7     2        13       2743.2000    304.8000    70.0000      0.0000       Open    ;Res
47 |  8     2        14       1828.8000    254.0000    120.0000     0.0000       Open    ;Res
48 |  9     3        4        1828.8000    254.0000    120.0000     0.0000       Open    ;Res
49 |  11    4        8        3657.6001    203.2000    120.0000     0.0000       Open    ;Res
50 |  12    4        15       1828.8000    254.0000    120.0000     0.0000       Open    ;Res
51 |  17    8        9        3657.6001    203.2000    120.0000     0.0000       Open    ;Res
52 |  18    8        15       1828.8000    254.0000    120.0000     0.0000       Open    ;Res
53 |  19    8        16       1828.8000    203.2000    120.0000     0.0000       Open    ;Res
54 |  20    8        17       1828.8000    203.2000    120.0000     0.0000       Open    ;Res
55 |  21    9        10       1828.8000    203.2000    120.0000     0.0000       Open    ;Res
56 |  22    10       11       1828.8000    203.2000    120.0000     0.0000       Open    ;Res
57 |  23    10       17       1828.8000    254.0000    120.0000     0.0000       Open    ;Res
58 |  24    11       12       1828.8000    203.2000    120.0000     0.0000       Open    ;Res
59 |  26    12       17       1828.8000    254.0000    120.0000     0.0000       Open    ;Res
60 |  27    12       18       1828.8000    203.2000    70.0000      0.0000       Open    ;city
61 |  28    13       14       1828.8000    304.8000    70.0000      0.0000       Open    ;city
62 |  29    13       18       1828.8000    304.8000    70.0000      0.0000       Open    ;city
63 |  30    13       19       1828.8000    254.0000    70.0000      0.0000       Open    ;city
64 |  31    14       15       1828.8000    304.8000    70.0000      0.0000       Open    ;city
65 |  32    14       19       1828.8000    254.0000    70.0000      0.0000       Open    ;city
66 |  33    14       21       30.4800      304.8000    120.0000     0.0000       Open    ;city
67 |  34    15       16       1828.8000    254.0000    70.0000      0.0000       Open    ;city
68 |  35    15       19       1828.8000    254.0000    70.0000      0.0000       Open    ;city
69 |  36    16       17       1828.8000    203.2000    120.0000     0.0000       Open    ;Res
70 |  37    16       18       1828.8000    304.8000    70.0000      0.0000       Open    ;city
71 |  38     16    19    1828.8000    254.0000    70.0000     0.0000    Open    ;city
72 |  39     17    18    1828.8000    203.2000    120.0000    0.0000    Open    ;Res
73 |  40     17    22    30.4800      304.8000    120.0000    0.0000    Open    ;Res
74 |  41     18    19    1828.8000    254.0000    70.0000     0.0000    Open    ;city
75 |  142    21    41    0.3048       304.8000    120.0000    0.0000    Open    ;city
76 |  143    22    42    0.3048       304.8000    120.0000    0.0000    Open    ;Res
77 |  110    4     5     1828.8000    203.2000    130.0000    0.0000    Open    ;New, original value: 0.0001
78 |  113    5     6     1828.8000    203.2000    130.0000    0.0000    Open    ;New, original value: 0.0001
79 |  114    6     7     1828.8000    203.2000    130.0000    0.0000    Open    ;New, original value: 0.0001
80 |  115    6     8     1828.8000    203.2000    130.0000    0.0000    Open    ;New, original value: 0.0001
81 |  116    7     8     1828.8000    203.2000    130.0000    0.0000    Open    ;New, original value: 0.0001
82 |  125    11    17    2743.2000    203.2000    130.0000    0.0000    Open    ;New, original value: 0.0001
83 | 
84 | [PUMPS]
85 | ;ID      Node1    Node2    Parameters
86 |  78g1    40       20       HEAD H1    ;
87 |  79g1    40       20       HEAD H1    ;
88 | 
89 | [VALVES]
90 | ;ID    Node1    Node2    Diameter    Type    Setting    MinorLoss
91 | 
92 | [TAGS]
93 | 
94 | [DEMANDS]
95 | ;Junction    Demand    Pattern    Category
96 | 
97 | [STATUS]
98 | ;ID    Status/Setting
99 | 
100 | [PATTERNS]
101 | ;ID    Multipliers
102 | ;
103 |  1    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000
104 | ;
105 |  4    0.0000    0.0000    0.0000    0.0000    0.0000    0.0000
106 | 
107 | [CURVES]
108 | ;ID    X-Value     Y-Value
109 | ;PUMP:
110 |  H1    0.0000      91.4400
111 |  H1    126.1833    89.0000
112 |  H1    252.3667    82.2960
113 |  H1    378.5500    70.1040
114 |  H1    504.7333    55.1690
115 |  H1    805.5556    0.0000
116 | ;PUMP:
117 |  E1    0.0000      0.0000
118 |  E1    126.1833    0.5000
119 |  E1    252.3667    0.6500
120 |  E1    378.5500    0.5500
121 |  E1    504.7333    0.4000
122 |  E1    805.5556    0.0000
123 | 
124 | [CONTROLS]
125 | 
126 | 
127 | [RULES]
128 | 
129 | 
130 | [ENERGY]
131 |  Global Efficiency      75.0000
132 |  Global Price           0
133 |  Demand Charge          0.0000
134 | 
135 | [EMITTERS]
136 | ;Junction    Coefficient
137 | 
138 | [QUALITY]
139 | ;Node    InitQual
140 | 
141 | [SOURCES]
142 | ;Node    Type    Quality    Pattern
143 | 
144 | [REACTIONS]
145 | ;Type    Pipe/Tank    Coefficient
146 | 
147 | 
148 | [REACTIONS]
149 |  Order Bulk             1.00
150 |  Order Tank             1.00
151 |  Order Wall             1
152 |  Global Bulk            0.000000
153 |  Global Wall            0.000000
154 |  Limiting Potential     0
155 |  Roughness Correlation  0
156 | 
157 | [MIXING]
158 | ;Tank    Model
159 | 
160 | [TIMES]
161 |  Duration            0:00
162 |  Hydraulic Timestep  0:01
163 |  Quality Timestep    0:01
164 |  Pattern Timestep    1:00
165 |  Pattern Start       0:00
166 |  Report Timestep     1:00
167 |  Report Start        0:00
168 |  Start ClockTime     18:00:00
169 |  Statistic           NONE
170 | 
171 | [REPORT]
172 |  Status              No
173 |  Summary             No
174 |  Page                0
175 | 
176 | [OPTIONS]
177 |  Units               LPS
178 |  Headloss            H-W
179 |  Specific Gravity    1.000000
180 |  Viscosity           1.000000
181 |  Trials              200
182 |  Accuracy            0.00100000
183 |  CHECKFREQ           2
184 |  MAXCHECK            10
185 |  DAMPLIMIT           0.00000000
186 |  Unbalanced          STOP
187 |  Pattern             1
188 |  Demand Multiplier   1.0000
189 |  Emitter Exponent    0.5000
190 |  Quality             NONE mg/L
191 |  Diffusivity         1.000000
192 |  Tolerance           0.01000000
193 | 
194 | [COORDINATES]
195 | ;Node    X-Coord    Y-Coord
196 |  1       7682.33    3371.15
197 |  2       7633.71    5737.44
198 |  3       7520.26    7293.35
199 |  4       6175.04    7568.88
200 |  5       5591.57    8460.29
201 |  6       4100.49    8071.31
202 |  7       3192.87    7763.37
203 |  8       3321.19    6696.55
204 |  9       1305.62    4850.97
205 |  10      2317.45    3588.20
206 |  11      3701.63    2892.06
207 |  12      4846.03    3354.94
208 |  13      6450.57    4424.64
209 |  14      6466.77    5769.85
210 |  15      5094.45    6482.09
211 |  16      4017.86    5615.96
212 |  17      3005.49    4292.43
213 |  18      4846.03    4440.84
214 |  19      5332.25    5332.25
215 |  20      8544.18    3371.15
216 |  21      6094.00    6499.19
217 |  22      2422.68    4519.09
218 |  40      8700.60    3371.15
219 |  41      6094.00    6600.19
220 |  42      2422.68    4700.09
221 | 
222 | [VERTICES]
223 | ;Link    X-Coord    Y-Coord
224 | 
225 | [LABELS]
226 | ;X-Coord    Y-Coord    Label & Anchor Node
227 | 
228 | [BACKDROP]
229 |  DIMENSIONS    935.87    2613.65    9070.35    8738.70
230 |  UNITS         None
231 |  FILE
232 |  OFFSET        0.00      0.00
233 | 
234 | [END]
235 | 
--------------------------------------------------------------------------------