├── .gitignore ├── LICENSE ├── README.md ├── code ├── builder.py ├── check_functions.py ├── class_commerciallink.py ├── class_country.py ├── class_firm.py ├── class_households.py ├── class_observer.py ├── class_transport_network.py ├── export_functions.py ├── functions.py ├── mainBase.py └── simulations.py ├── data_requirements.md ├── parameter ├── filepaths.py ├── filepaths_default.py ├── parameters.py └── parameters_default.py └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | input/* 2 | output/* 3 | tmp/* 4 | code/__pycache__/ 5 | parameter/__pycache__/ 6 | parameter/parameters.py -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Attribution-NonCommercial-NoDerivatives 4.0 International 2 | 3 | ======================================================================= 4 | 5 | Creative Commons Corporation ("Creative Commons") is not a law firm and 6 | does not provide legal services or legal advice. Distribution of 7 | Creative Commons public licenses does not create a lawyer-client or 8 | other relationship. Creative Commons makes its licenses and related 9 | information available on an "as-is" basis. Creative Commons gives no 10 | warranties regarding its licenses, any material licensed under their 11 | terms and conditions, or any related information. Creative Commons 12 | disclaims all liability for damages resulting from their use to the 13 | fullest extent possible. 14 | 15 | Using Creative Commons Public Licenses 16 | 17 | Creative Commons public licenses provide a standard set of terms and 18 | conditions that creators and other rights holders may use to share 19 | original works of authorship and other material subject to copyright 20 | and certain other rights specified in the public license below. 
The 21 | following considerations are for informational purposes only, are not 22 | exhaustive, and do not form part of our licenses. 23 | 24 | Considerations for licensors: Our public licenses are 25 | intended for use by those authorized to give the public 26 | permission to use material in ways otherwise restricted by 27 | copyright and certain other rights. Our licenses are 28 | irrevocable. Licensors should read and understand the terms 29 | and conditions of the license they choose before applying it. 30 | Licensors should also secure all rights necessary before 31 | applying our licenses so that the public can reuse the 32 | material as expected. Licensors should clearly mark any 33 | material not subject to the license. This includes other CC- 34 | licensed material, or material used under an exception or 35 | limitation to copyright. More considerations for licensors: 36 | wiki.creativecommons.org/Considerations_for_licensors 37 | 38 | Considerations for the public: By using one of our public 39 | licenses, a licensor grants the public permission to use the 40 | licensed material under specified terms and conditions. If 41 | the licensor's permission is not necessary for any reason--for 42 | example, because of any applicable exception or limitation to 43 | copyright--then that use is not regulated by the license. Our 44 | licenses grant only permissions under copyright and certain 45 | other rights that a licensor has authority to grant. Use of 46 | the licensed material may still be restricted for other 47 | reasons, including because others have copyright or other 48 | rights in the material. A licensor may make special requests, 49 | such as asking that all changes be marked or described. 50 | Although not required by our licenses, you are encouraged to 51 | respect those requests where reasonable. 
More considerations 52 | for the public: 53 | wiki.creativecommons.org/Considerations_for_licensees 54 | 55 | ======================================================================= 56 | 57 | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 58 | International Public License 59 | 60 | By exercising the Licensed Rights (defined below), You accept and agree 61 | to be bound by the terms and conditions of this Creative Commons 62 | Attribution-NonCommercial-NoDerivatives 4.0 International Public 63 | License ("Public License"). To the extent this Public License may be 64 | interpreted as a contract, You are granted the Licensed Rights in 65 | consideration of Your acceptance of these terms and conditions, and the 66 | Licensor grants You such rights in consideration of benefits the 67 | Licensor receives from making the Licensed Material available under 68 | these terms and conditions. 69 | 70 | 71 | Section 1 -- Definitions. 72 | 73 | a. Adapted Material means material subject to Copyright and Similar 74 | Rights that is derived from or based upon the Licensed Material 75 | and in which the Licensed Material is translated, altered, 76 | arranged, transformed, or otherwise modified in a manner requiring 77 | permission under the Copyright and Similar Rights held by the 78 | Licensor. For purposes of this Public License, where the Licensed 79 | Material is a musical work, performance, or sound recording, 80 | Adapted Material is always produced where the Licensed Material is 81 | synched in timed relation with a moving image. 82 | 83 | b. Copyright and Similar Rights means copyright and/or similar rights 84 | closely related to copyright including, without limitation, 85 | performance, broadcast, sound recording, and Sui Generis Database 86 | Rights, without regard to how the rights are labeled or 87 | categorized. For purposes of this Public License, the rights 88 | specified in Section 2(b)(1)-(2) are not Copyright and Similar 89 | Rights. 90 | 91 | c. 
Effective Technological Measures means those measures that, in the 92 | absence of proper authority, may not be circumvented under laws 93 | fulfilling obligations under Article 11 of the WIPO Copyright 94 | Treaty adopted on December 20, 1996, and/or similar international 95 | agreements. 96 | 97 | d. Exceptions and Limitations means fair use, fair dealing, and/or 98 | any other exception or limitation to Copyright and Similar Rights 99 | that applies to Your use of the Licensed Material. 100 | 101 | e. Licensed Material means the artistic or literary work, database, 102 | or other material to which the Licensor applied this Public 103 | License. 104 | 105 | f. Licensed Rights means the rights granted to You subject to the 106 | terms and conditions of this Public License, which are limited to 107 | all Copyright and Similar Rights that apply to Your use of the 108 | Licensed Material and that the Licensor has authority to license. 109 | 110 | g. Licensor means the individual(s) or entity(ies) granting rights 111 | under this Public License. 112 | 113 | h. NonCommercial means not primarily intended for or directed towards 114 | commercial advantage or monetary compensation. For purposes of 115 | this Public License, the exchange of the Licensed Material for 116 | other material subject to Copyright and Similar Rights by digital 117 | file-sharing or similar means is NonCommercial provided there is 118 | no payment of monetary compensation in connection with the 119 | exchange. 120 | 121 | i. Share means to provide material to the public by any means or 122 | process that requires permission under the Licensed Rights, such 123 | as reproduction, public display, public performance, distribution, 124 | dissemination, communication, or importation, and to make material 125 | available to the public including in ways that members of the 126 | public may access the material from a place and at a time 127 | individually chosen by them. 128 | 129 | j. 
Sui Generis Database Rights means rights other than copyright 130 | resulting from Directive 96/9/EC of the European Parliament and of 131 | the Council of 11 March 1996 on the legal protection of databases, 132 | as amended and/or succeeded, as well as other essentially 133 | equivalent rights anywhere in the world. 134 | 135 | k. You means the individual or entity exercising the Licensed Rights 136 | under this Public License. Your has a corresponding meaning. 137 | 138 | 139 | Section 2 -- Scope. 140 | 141 | a. License grant. 142 | 143 | 1. Subject to the terms and conditions of this Public License, 144 | the Licensor hereby grants You a worldwide, royalty-free, 145 | non-sublicensable, non-exclusive, irrevocable license to 146 | exercise the Licensed Rights in the Licensed Material to: 147 | 148 | a. reproduce and Share the Licensed Material, in whole or 149 | in part, for NonCommercial purposes only; and 150 | 151 | b. produce and reproduce, but not Share, Adapted Material 152 | for NonCommercial purposes only. 153 | 154 | 2. Exceptions and Limitations. For the avoidance of doubt, where 155 | Exceptions and Limitations apply to Your use, this Public 156 | License does not apply, and You do not need to comply with 157 | its terms and conditions. 158 | 159 | 3. Term. The term of this Public License is specified in Section 160 | 6(a). 161 | 162 | 4. Media and formats; technical modifications allowed. The 163 | Licensor authorizes You to exercise the Licensed Rights in 164 | all media and formats whether now known or hereafter created, 165 | and to make technical modifications necessary to do so. The 166 | Licensor waives and/or agrees not to assert any right or 167 | authority to forbid You from making technical modifications 168 | necessary to exercise the Licensed Rights, including 169 | technical modifications necessary to circumvent Effective 170 | Technological Measures. 
For purposes of this Public License, 171 | simply making modifications authorized by this Section 2(a) 172 | (4) never produces Adapted Material. 173 | 174 | 5. Downstream recipients. 175 | 176 | a. Offer from the Licensor -- Licensed Material. Every 177 | recipient of the Licensed Material automatically 178 | receives an offer from the Licensor to exercise the 179 | Licensed Rights under the terms and conditions of this 180 | Public License. 181 | 182 | b. No downstream restrictions. You may not offer or impose 183 | any additional or different terms or conditions on, or 184 | apply any Effective Technological Measures to, the 185 | Licensed Material if doing so restricts exercise of the 186 | Licensed Rights by any recipient of the Licensed 187 | Material. 188 | 189 | 6. No endorsement. Nothing in this Public License constitutes or 190 | may be construed as permission to assert or imply that You 191 | are, or that Your use of the Licensed Material is, connected 192 | with, or sponsored, endorsed, or granted official status by, 193 | the Licensor or others designated to receive attribution as 194 | provided in Section 3(a)(1)(A)(i). 195 | 196 | b. Other rights. 197 | 198 | 1. Moral rights, such as the right of integrity, are not 199 | licensed under this Public License, nor are publicity, 200 | privacy, and/or other similar personality rights; however, to 201 | the extent possible, the Licensor waives and/or agrees not to 202 | assert any such rights held by the Licensor to the limited 203 | extent necessary to allow You to exercise the Licensed 204 | Rights, but not otherwise. 205 | 206 | 2. Patent and trademark rights are not licensed under this 207 | Public License. 208 | 209 | 3. To the extent possible, the Licensor waives any right to 210 | collect royalties from You for the exercise of the Licensed 211 | Rights, whether directly or through a collecting society 212 | under any voluntary or waivable statutory or compulsory 213 | licensing scheme. 
In all other cases the Licensor expressly 214 | reserves any right to collect such royalties, including when 215 | the Licensed Material is used other than for NonCommercial 216 | purposes. 217 | 218 | 219 | Section 3 -- License Conditions. 220 | 221 | Your exercise of the Licensed Rights is expressly made subject to the 222 | following conditions. 223 | 224 | a. Attribution. 225 | 226 | 1. If You Share the Licensed Material, You must: 227 | 228 | a. retain the following if it is supplied by the Licensor 229 | with the Licensed Material: 230 | 231 | i. identification of the creator(s) of the Licensed 232 | Material and any others designated to receive 233 | attribution, in any reasonable manner requested by 234 | the Licensor (including by pseudonym if 235 | designated); 236 | 237 | ii. a copyright notice; 238 | 239 | iii. a notice that refers to this Public License; 240 | 241 | iv. a notice that refers to the disclaimer of 242 | warranties; 243 | 244 | v. a URI or hyperlink to the Licensed Material to the 245 | extent reasonably practicable; 246 | 247 | b. indicate if You modified the Licensed Material and 248 | retain an indication of any previous modifications; and 249 | 250 | c. indicate the Licensed Material is licensed under this 251 | Public License, and include the text of, or the URI or 252 | hyperlink to, this Public License. 253 | 254 | For the avoidance of doubt, You do not have permission under 255 | this Public License to Share Adapted Material. 256 | 257 | 2. You may satisfy the conditions in Section 3(a)(1) in any 258 | reasonable manner based on the medium, means, and context in 259 | which You Share the Licensed Material. For example, it may be 260 | reasonable to satisfy the conditions by providing a URI or 261 | hyperlink to a resource that includes the required 262 | information. 263 | 264 | 3. If requested by the Licensor, You must remove any of the 265 | information required by Section 3(a)(1)(A) to the extent 266 | reasonably practicable. 
267 | 268 | 269 | Section 4 -- Sui Generis Database Rights. 270 | 271 | Where the Licensed Rights include Sui Generis Database Rights that 272 | apply to Your use of the Licensed Material: 273 | 274 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right 275 | to extract, reuse, reproduce, and Share all or a substantial 276 | portion of the contents of the database for NonCommercial purposes 277 | only and provided You do not Share Adapted Material; 278 | 279 | b. if You include all or a substantial portion of the database 280 | contents in a database in which You have Sui Generis Database 281 | Rights, then the database in which You have Sui Generis Database 282 | Rights (but not its individual contents) is Adapted Material; and 283 | 284 | c. You must comply with the conditions in Section 3(a) if You Share 285 | all or a substantial portion of the contents of the database. 286 | 287 | For the avoidance of doubt, this Section 4 supplements and does not 288 | replace Your obligations under this Public License where the Licensed 289 | Rights include other Copyright and Similar Rights. 290 | 291 | 292 | Section 5 -- Disclaimer of Warranties and Limitation of Liability. 293 | 294 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE 295 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS 296 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF 297 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, 298 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, 299 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR 300 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, 301 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT 302 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT 303 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. 304 | 305 | b. 
TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE 306 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, 307 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, 308 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, 309 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR 310 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN 311 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR 312 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR 313 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. 314 | 315 | c. The disclaimer of warranties and limitation of liability provided 316 | above shall be interpreted in a manner that, to the extent 317 | possible, most closely approximates an absolute disclaimer and 318 | waiver of all liability. 319 | 320 | 321 | Section 6 -- Term and Termination. 322 | 323 | a. This Public License applies for the term of the Copyright and 324 | Similar Rights licensed here. However, if You fail to comply with 325 | this Public License, then Your rights under this Public License 326 | terminate automatically. 327 | 328 | b. Where Your right to use the Licensed Material has terminated under 329 | Section 6(a), it reinstates: 330 | 331 | 1. automatically as of the date the violation is cured, provided 332 | it is cured within 30 days of Your discovery of the 333 | violation; or 334 | 335 | 2. upon express reinstatement by the Licensor. 336 | 337 | For the avoidance of doubt, this Section 6(b) does not affect any 338 | right the Licensor may have to seek remedies for Your violations 339 | of this Public License. 340 | 341 | c. For the avoidance of doubt, the Licensor may also offer the 342 | Licensed Material under separate terms or conditions or stop 343 | distributing the Licensed Material at any time; however, doing so 344 | will not terminate this Public License. 345 | 346 | d. 
Sections 1, 5, 6, 7, and 8 survive termination of this Public 347 | License. 348 | 349 | 350 | Section 7 -- Other Terms and Conditions. 351 | 352 | a. The Licensor shall not be bound by any additional or different 353 | terms or conditions communicated by You unless expressly agreed. 354 | 355 | b. Any arrangements, understandings, or agreements regarding the 356 | Licensed Material not stated herein are separate from and 357 | independent of the terms and conditions of this Public License. 358 | 359 | 360 | Section 8 -- Interpretation. 361 | 362 | a. For the avoidance of doubt, this Public License does not, and 363 | shall not be interpreted to, reduce, limit, restrict, or impose 364 | conditions on any use of the Licensed Material that could lawfully 365 | be made without permission under this Public License. 366 | 367 | b. To the extent possible, if any provision of this Public License is 368 | deemed unenforceable, it shall be automatically reformed to the 369 | minimum extent necessary to make it enforceable. If the provision 370 | cannot be reformed, it shall be severed from this Public License 371 | without affecting the enforceability of the remaining terms and 372 | conditions. 373 | 374 | c. No term or condition of this Public License will be waived and no 375 | failure to comply consented to unless expressly agreed to by the 376 | Licensor. 377 | 378 | d. Nothing in this Public License constitutes or may be interpreted 379 | as a limitation upon, or waiver of, any privileges and immunities 380 | that apply to the Licensor or You, including from the legal 381 | processes of any jurisdiction or authority. 382 | 383 | ======================================================================= 384 | 385 | Creative Commons is not a party to its public 386 | licenses. 
Notwithstanding, Creative Commons may elect to apply one of 387 | its public licenses to material it publishes and in those instances 388 | will be considered the “Licensor.” The text of the Creative Commons 389 | public licenses is dedicated to the public domain under the CC0 Public 390 | Domain Dedication. Except for the limited purpose of indicating that 391 | material is shared under a Creative Commons public license or as 392 | otherwise permitted by the Creative Commons policies published at 393 | creativecommons.org/policies, Creative Commons does not authorize the 394 | use of the trademark "Creative Commons" or any other trademark or logo 395 | of Creative Commons without its prior written consent including, 396 | without limitation, in connection with any unauthorized modifications 397 | to any of its public licenses or any other arrangements, 398 | understandings, or agreements concerning use of licensed material. For 399 | the avoidance of doubt, this paragraph does not form part of the 400 | public licenses. 401 | 402 | Creative Commons may be contacted at creativecommons.org. 
403 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # This repo is no longer updated 2 | 3 | For the new model, see https://github.com/ccolon/dsc 4 | 5 | # DisruptSupplyChain Model 6 | 7 | For the code related to the [Nature Sustainability paper (2020)](https://www.nature.com/articles/s41893-020-00649-4) or the [World Bank report (2019)](https://openknowledge.worldbank.org/handle/10986/31909) on Tanzania, please see the following Zenodo record [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7401053.svg)](https://doi.org/10.5281/zenodo.7401053) 8 | -------------------------------------------------------------------------------- /code/check_functions.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import pandas as pd 3 | 4 | def compareProductionPurchasePlans(firm_list, country_list, households): 5 | # Create dictionary to map firm id and country id into sector 6 | dic_agent_id_to_sector = {firm.pid: firm.sector for firm in firm_list} 7 | for country in country_list: 8 | dic_agent_id_to_sector[country.pid] = "IMP" 9 | 10 | # Evaluate purchase plans 11 | ## of firms 12 | df = pd.DataFrame({firm.pid: firm.purchase_plan for firm in firm_list}) 13 | df["tot_purchase_planned_by_firms"] = df.sum(axis=1) 14 | df['input_sector'] = df.index.map(dic_agent_id_to_sector) 15 | df_firms = df.groupby('input_sector')["tot_purchase_planned_by_firms"].sum() 16 | 17 | ## of countries 18 | df = pd.DataFrame({country.pid: country.purchase_plan for country in country_list}) 19 | df["tot_purchase_planned_by_countries"] = df.sum(axis=1) 20 | df['input_sector'] = df.index.map(dic_agent_id_to_sector) 21 | df_countries = df.groupby('input_sector')["tot_purchase_planned_by_countries"].sum() 22 | 23 | ## of households 24 | df = pd.DataFrame({"tot_purchase_planned_by_households": households.purchase_plan}) 25 | 
df['input_sector'] = df.index.map(dic_agent_id_to_sector) 26 | df_households = df.groupby('input_sector')["tot_purchase_planned_by_households"].sum() 27 | 28 | ## concat 29 | df_purchase_plan = pd.concat([df_firms, df_countries, df_households], axis=1, sort=True) 30 | 31 | # Evaluate productions/sales 32 | ## of firms 33 | df = pd.DataFrame({"tot_production_per_firm": 34 | {firm.pid: firm.production for firm in firm_list} 35 | }) 36 | df['sector'] = df.index.map(dic_agent_id_to_sector) 37 | df_firms = df.groupby('sector')["tot_production_per_firm"].sum() 38 | 39 | ## of countries 40 | df = pd.DataFrame({"tot_production_per_country": 41 | {country.pid: country.qty_sold for country in country_list} 42 | }) 43 | df['sector'] = df.index.map(dic_agent_id_to_sector) 44 | df_countries = df.groupby('sector')["tot_production_per_country"].sum() 45 | 46 | ## concat 47 | df_sales = pd.concat([df_firms, df_countries], axis=1, sort=True) 48 | 49 | # Compare 50 | res = pd.concat([df_purchase_plan, df_sales], axis=1, sort=True) 51 | res['dif'] = res["tot_purchase_planned_by_firms"] \ 52 | + res["tot_purchase_planned_by_countries"] \ 53 | + res["tot_purchase_planned_by_households"] \ 54 | - res['tot_production_per_firm'] - res['tot_production_per_country'] 55 | boolindex_unbalanced = res['dif'] > 1e-6 56 | if boolindex_unbalanced.sum() > 0: 57 | logging.warning("Sales do not equal purchases for sectors: "+ 58 | str(res.index[boolindex_unbalanced].tolist())) 59 | 60 | 61 | def compareDeliveredVsReceived(firm_list, households, G): 62 | # not finished 63 | qty_delivered_by_firm_per_sector = {} 64 | for firm in firm_list: 65 | if firm.sector not in qty_delivered_by_firm_per_sector.keys(): 66 | qty_delivered_by_firm_per_sector[firm.sector] = 0 67 | 68 | for edge in G.out_edges(firm): 69 | qty_delivered_by_firm_per_sector[firm.sector] += \ 70 | G[firm][edge[1]]['object'].delivery 71 | 72 | qty_bought_by_household_per_sector = {} 73 | for edge in G.in_edges(households): 74 | if edge[0].sector not in qty_bought_by_household_per_sector.keys(): 75 | qty_bought_by_household_per_sector[edge[0].sector] = 0 76 | qty_bought_by_household_per_sector[edge[0].sector] += \ 77 | G[edge[0]][households]['object'].delivery 78 | 79 | qty_ordered_by_firm_per_sector = {} 80 | for firm in firm_list: 81 | if firm.sector not in qty_ordered_by_firm_per_sector.keys(): 82 | qty_ordered_by_firm_per_sector[firm.sector] = 0 83 | -------------------------------------------------------------------------------- /code/class_commerciallink.py: -------------------------------------------------------------------------------- 1 | class CommercialLink(object): 2 | 3 | def __init__(self, pid=None, supplier_id=None, buyer_id=None, product=None, 4 | product_type=None, category=None, order=0, delivery=0, payment=0, 5 | route=None): 6 | # Parameter 7 | self.pid = pid 8 | self.product = product # sector of producing firm 9 | self.product_type = product_type # service, manufacturing, etc. (=sector_type) 10 | self.category = category # import, export, domestic_B2B, transit 11 | self.route = route or [] # node_id path of the transport network, as 12 | # [(node1, ), (node1, node2), (node2, ), (node2, node3), (node3, )] 13 | self.route_length = 1 14 | self.route_time_cost = 0 15 | self.route_cost_per_ton = 0 16 | self.route_mode = "road" 17 | self.supplier_id = supplier_id 18 | self.buyer_id = buyer_id 19 | self.eq_price = 1 20 | self.possible_transport_modes = "roads" 21 | 22 | # Variable 23 | self.current_route = 'main' 24 | self.order = order # flows upstream 25 | self.delivery = delivery # flows downstream. 
What is supposed to be delivered (if no transport pb) 26 | self.realized_delivery = delivery 27 | self.payment = payment # flows upstream 28 | self.alternative_route = [] 29 | self.alternative_route_length = 1 30 | self.alternative_route_time_cost = 0 31 | self.alternative_route_cost_per_ton = 0 32 | self.alternative_route_mode = None 33 | self.price = 1 34 | 35 | def print_info(self): 36 | # print("\nCommercial Link from "+str(self.supplier_id)+" to "+str(self.buyer_id)+":") 37 | # print("route:", self.route) 38 | # print("alternative route:", self.alternative_route) 39 | # print("product:", self.product) 40 | # print("order:", self.order) 41 | # print("delivery:", self.delivery) 42 | # print("payment:", self.payment) 43 | attribute_to_print = [a for a in dir(self) if not a.startswith('__') and not callable(getattr(self, a))] 44 | for attribute in attribute_to_print: 45 | print(attribute+": "+str(getattr(self, attribute))) 46 | print("\n") 47 | 48 | 49 | 50 | def reset_variables(self): 51 | # Variable 52 | self.current_route = 'main' 53 | self.order = 0 # flows upstream 54 | self.delivery = 0 # flows downstream 55 | self.payment = 0 # flows upstream 56 | self.alternative_route = [] 57 | self.alternative_route_time_cost = 0 58 | self.alternative_route_cost_per_ton = 0 59 | self.price = 1 60 | 61 | 62 | def storeRouteInformation(self, route, transport_mode, 63 | main_or_alternative, transport_network): 64 | 65 | distance, route_time_cost, cost_per_ton = transport_network.giveRouteCaracteristics(route) 66 | 67 | if main_or_alternative == "main": 68 | self.route = route 69 | self.route_mode = transport_mode 70 | self.route_length = distance 71 | self.route_time_cost = route_time_cost 72 | self.route_cost_per_ton = cost_per_ton 73 | 74 | elif main_or_alternative == "alternative": 75 | self.alternative_route = route 76 | self.alternative_route_mode = transport_mode 77 | self.alternative_route_length = distance 78 | self.alternative_route_time_cost = route_time_cost 79 | 
self.alternative_route_cost_per_ton = cost_per_ton 80 | 81 | switching_cost = 0.05 82 | if self.alternative_route_mode != self.route_mode: 83 | self.alternative_route_cost_per_ton *= (1 + switching_cost) # apply the mode-switching penalty 84 | 85 | else: 86 | raise ValueError("'main_or_alternative' is not in ['main', 'alternative']") 87 | -------------------------------------------------------------------------------- /code/class_country.py: -------------------------------------------------------------------------------- 1 | import random 2 | import pandas as pd 3 | import math 4 | import logging 5 | 6 | from class_firm import Firm 7 | from class_commerciallink import CommercialLink 8 | from functions import rescale_values, \ 9 | generate_weights_from_list, \ 10 | determine_suppliers_and_weights,\ 11 | identify_firms_in_each_sector,\ 12 | identify_special_transport_nodes,\ 13 | transformUSDtoTons 14 | 15 | class Country(object): 16 | 17 | def __init__(self, pid=None, qty_sold=None, qty_purchased=None, odpoint=None, 18 | purchase_plan=None, transit_from=None, transit_to=None, supply_importance=None, 19 | usd_per_ton=None): 20 | # Intrinsic parameters 21 | self.pid = pid 22 | self.usd_per_ton = usd_per_ton 23 | self.odpoint = odpoint 24 | 25 | # Parameter based on data 26 | # self.entry_points = entry_points or [] 27 | self.transit_from = transit_from or {} 28 | self.transit_to = transit_to or {} 29 | self.supply_importance = supply_importance 30 | 31 | # Parameters depending on supplier-buyer network 32 | self.clients = {} 33 | self.purchase_plan = purchase_plan or {} 34 | self.qty_sold = qty_sold or {} 35 | self.qty_purchased = qty_purchased or {} 36 | self.qty_purchased_perfirm = {} 37 | 38 | # Variable 39 | self.generalized_transport_cost = 0 40 | self.usd_transported = 0 41 | self.tons_transported = 0 42 | self.tonkm_transported = 0 43 | self.extra_spending = 0 44 | self.consumption_loss = 0 45 | 46 | def reset_variables(self): 47 | self.generalized_transport_cost = 0 48 | self.usd_transported = 
0 49 | self.tons_transported = 0 50 | self.tonkm_transported = 0 51 | self.extra_spending = 0 52 | self.consumption_loss = 0 53 | 54 | 55 | def create_transit_links(self, graph, country_list): 56 | for selling_country_pid, quantity in self.transit_from.items(): 57 | selling_country_object = [country for country in country_list if country.pid==selling_country_pid][0] 58 | graph.add_edge(selling_country_object, self, 59 | object=CommercialLink( 60 | pid=str(selling_country_pid)+'to'+str(self.pid), 61 | product='transit', 62 | product_type="transit", # assume transit flows are non-service, material goods 63 | category="transit", 64 | supplier_id=selling_country_pid, 65 | buyer_id=self.pid)) 66 | graph[selling_country_object][self]['weight'] = 1 67 | self.purchase_plan[selling_country_pid] = quantity 68 | selling_country_object.clients[self.pid] = {'sector':self.pid, 'share':0} 69 | 70 | 71 | def select_suppliers(self, graph, firm_list, country_list, sector_table, transport_nodes): 72 | # Select other countries as suppliers: transit flows 73 | self.create_transit_links(graph, country_list) 74 | 75 | # Select Tanzanian suppliers 76 | ## Identify firms in each sector 77 | dic_sector_to_firmid = identify_firms_in_each_sector(firm_list) 78 | share_exporting_firms = sector_table.set_index('sector')['share_exporting_firms'].to_dict() 79 | ## Identify odpoints which export 80 | export_odpoints = identify_special_transport_nodes(transport_nodes, "export") 81 | ## Identify sectors to buy from 82 | present_sectors = list(set(list(dic_sector_to_firmid.keys()))) 83 | sectors_to_buy_from = list(self.qty_purchased.keys()) 84 | present_sectors_to_buy_from = list(set(present_sectors) & set(sectors_to_buy_from)) 85 | ## For each one of these sectors, select suppliers 86 | supplier_selection_mode = { 87 | "importance_export": { 88 | "export_odpoints": export_odpoints, 89 | "bonus": 10 90 | } 91 | } 92 | for sector in present_sectors_to_buy_from: #only select suppliers from sectors 
that are present 93 | # Identify potential suppliers 94 | potential_supplier_pid = dic_sector_to_firmid[sector] 95 | # Evaluate how many to select 96 | nb_selected_suppliers = math.ceil( 97 | len(dic_sector_to_firmid[sector])*share_exporting_firms[sector] 98 | ) 99 | # Select suppliers and their weights 100 | selected_supplier_ids, supplier_weights = determine_suppliers_and_weights( 101 | potential_supplier_pid, 102 | nb_selected_suppliers, 103 | firm_list, 104 | mode=supplier_selection_mode) 105 | # Materialize the link 106 | for supplier_id in selected_supplier_ids: 107 | # For each supplier, create an edge in the economic network 108 | graph.add_edge(firm_list[supplier_id], self, 109 | object=CommercialLink( 110 | pid=str(supplier_id)+'to'+str(self.pid), 111 | product=sector, 112 | product_type=firm_list[supplier_id].sector_type, 113 | category="export", 114 | supplier_id=supplier_id, 115 | buyer_id=self.pid)) 116 | # Associate a weight 117 | weight = supplier_weights.pop(0) 118 | graph[firm_list[supplier_id]][self]['weight'] = weight 119 | # The country saves the supplier's sector and weight, and adds the supplier to its purchase plan 120 | self.qty_purchased_perfirm[supplier_id] = { 121 | 'sector': sector, 122 | 'weight': weight, 123 | 'amount': self.qty_purchased[sector] * weight 124 | } 125 | self.purchase_plan[supplier_id] = self.qty_purchased[sector] * weight 126 | # The supplier saves the fact that it exports to this country.
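The supplier-selection step above (keep a share of each sector's firms, then derive normalized link weights) can be sketched as follows. `determine_suppliers_and_weights` is defined elsewhere in this repository; the standalone version below, with an importance bonus for firms at export odpoints mirroring the `importance_export` mode, is an illustrative assumption, not the model's actual implementation.

```python
import math

def sketch_suppliers_and_weights(candidate_ids, importances, share_exporting,
                                 export_odpoints=(), bonus=10):
    """Illustrative stand-in for determine_suppliers_and_weights (assumption).

    Keeps ceil(share_exporting * n) candidates, boosting the importance
    score of firms located at an export odpoint, and returns weights that
    sum to one over the selected suppliers.
    """
    nb_selected = math.ceil(len(candidate_ids) * share_exporting)
    scores = {pid: importances[pid] * (bonus if pid in export_odpoints else 1)
              for pid in candidate_ids}
    # keep the highest-scoring candidates (the model may instead draw randomly)
    selected = sorted(scores, key=scores.get, reverse=True)[:nb_selected]
    total = sum(scores[pid] for pid in selected)
    weights = [scores[pid] / total for pid in selected]
    return selected, weights

selected, weights = sketch_suppliers_and_weights(
    candidate_ids=[0, 1, 2, 3],
    importances={0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0},
    share_exporting=0.5,
    export_odpoints={3},
)
print(selected, weights)  # firm 3 is boosted by the export bonus, so it is selected first
```

The returned weights are then consumed one by one (as with `supplier_weights.pop(0)` above) when the commercial links are materialized.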
127 | # The share of sales cannot be calculated now, we put 0 for the moment 128 | firm_list[supplier_id].clients[self.pid] = {'sector':self.pid, 'share':0} 129 | 130 | 131 | def send_purchase_orders(self, graph): 132 | for edge in graph.in_edges(self): 133 | try: 134 | quantity_to_buy = self.purchase_plan[edge[0].pid] 135 | except KeyError: 136 | print("Country "+self.pid+": No purchase plan for supplier", edge[0].pid) 137 | quantity_to_buy = 0 138 | graph[edge[0]][self]['object'].order = quantity_to_buy 139 | 140 | 141 | def choose_route(self, transport_network, 142 | origin_node, destination_node, 143 | possible_transport_modes): 144 | 145 | # If possible_transport_modes is "roads", then simply pick the shortest road route 146 | if possible_transport_modes == "roads": 147 | route = transport_network.provide_shortest_route(origin_node, 148 | destination_node, route_weight="road_weight") 149 | return route, "roads" 150 | 151 | # If possible_transport_modes is "intl_multimodes", choose one mode among several candidates 152 | capacity_burden = 1e5 153 | if possible_transport_modes == "intl_multimodes": 154 | # pick a route for each mode 155 | modes = ['intl_road_shv', 'intl_road_vnm', 'intl_rail', 'intl_river'] 156 | routes = { 157 | mode: transport_network.provide_shortest_route(origin_node, 158 | destination_node, route_weight=mode+"_weight") 159 | for mode in modes 160 | } 161 | # compute associated weight and capacity_weight 162 | modes_weight = { 163 | mode: { 164 | mode+"_weight": transport_network.sum_indicator_on_route(route, mode+"_weight"), 165 | "weight": transport_network.sum_indicator_on_route(route, "weight", detail_type=False), 166 | "capacity_weight": transport_network.sum_indicator_on_route(route, "capacity_weight") 167 | } 168 | for mode, route in routes.items() 169 | } 171 | # remove any mode which is over capacity (where capacity_weight > capacity_burden) 183 | modes_weight = { 184 | mode: weight_dic['weight'] 185 | for mode, weight_dic in modes_weight.items() 186 | if weight_dic['capacity_weight'] < capacity_burden 187 | } 188 | if len(modes_weight) == 0: 189 | logging.warning("All transport modes are over capacity, no route selected!") 190 | return None, None 191 | # then select one route by weighted random choice: cheaper modes get larger selection weights 192 | selection_weights = rescale_values(list(modes_weight.values()), minimum=0, maximum=0.5) 193 | selection_weights = [1-w for w in selection_weights] 194 | selected_mode = random.choices( 195 | list(modes_weight.keys()), 196 | weights=selection_weights, 197 | k=1 198 | )[0] 199 | route = routes[selected_mode] 203 | return route, selected_mode 204 | 205 | raise ValueError("The transport_mode attribute of the commercial link does not belong to ('roads', 'intl_multimodes')") 207 | 208 | 209 | def decide_initial_routes(self, graph, transport_network, transport_modes, 210 | account_capacity, monetary_unit_flow): 211 | for edge in graph.out_edges(self): 212 | if edge[1].pid == -1: # we do not create route for households 213 | continue 214 | elif edge[1].odpoint == -1: # we do not create route for service firms if explicit_service_firms = False 215 | continue 216 | else: 217 | # Get the id of the origin and destination node 218 | origin_node = self.odpoint 219 | destination_node = edge[1].odpoint 220 | # Define the type of transport
mode to use 221 | cond_from = (transport_modes['from'] == self.pid) #self is a country 222 | if isinstance(edge[1], Firm): #see what is the other end 223 | cond_to = (transport_modes['to'] == "domestic") 224 | else: 225 | cond_to = (transport_modes['to'] == edge[1].pid) 226 | # we have not implemented the "sector" condition 227 | transport_mode = transport_modes.loc[cond_from & cond_to, "transport_mode"].iloc[0] 228 | graph[self][edge[1]]['object'].transport_mode = transport_mode 229 | route, selected_mode = self.choose_route( 230 | transport_network=transport_network, 231 | origin_node=origin_node, 232 | destination_node=destination_node, 233 | possible_transport_modes=transport_mode 234 | ) 235 | # Store it into commercial link object 236 | graph[self][edge[1]]['object'].storeRouteInformation( 237 | route=route, 238 | transport_mode=selected_mode, 239 | main_or_alternative="main", 240 | transport_network=transport_network 241 | ) 242 | # Update the "current load" on the transport network 243 | # if current_load exceeds burden, then add burden to the weight 244 | if account_capacity: 245 | new_load_in_usd = graph[self][edge[1]]['object'].order 246 | new_load_in_tons = transformUSDtoTons(new_load_in_usd, monetary_unit_flow, self.usd_per_ton) 247 | transport_network.update_load_on_route(route, new_load_in_tons) 248 | 249 | 250 | def deliver_products(self, graph, transport_network, 251 | monetary_unit_transport_cost, monetary_unit_flow, 252 | cost_repercussion_mode, explicit_service_firm): 253 | """ The quantity to be delivered is the quantity that was ordered (no rationing takes place) 254 | """ 255 | self.generalized_transport_cost = 0 256 | self.usd_transported = 0 257 | self.tons_transported = 0 258 | self.tonkm_transported = 0 259 | self.qty_sold = 0 260 | for edge in graph.out_edges(self): 261 | graph[self][edge[1]]['object'].delivery = graph[self][edge[1]]['object'].order 262 | graph[self][edge[1]]['object'].delivery_in_tons = \ 263 | 
transformUSDtoTons(graph[self][edge[1]]['object'].order, monetary_unit_flow, self.usd_per_ton) 264 | 266 | if explicit_service_firm: 267 | # Services are not sent through the transport network 268 | if graph[self][edge[1]]['object'].product_type in ['utility', 'transport', 'services']: 269 | graph[self][edge[1]]['object'].price = graph[self][edge[1]]['object'].eq_price 270 | self.qty_sold += graph[self][edge[1]]['object'].delivery 271 | # Otherwise, send shipment through transportation network 272 | else: 273 | self.send_shipment( 274 | graph[self][edge[1]]['object'], 275 | transport_network, 276 | monetary_unit_transport_cost, 277 | monetary_unit_flow, 278 | cost_repercussion_mode 279 | ) 280 | else: 281 | if (edge[1].odpoint != -1): # to non service firms, send shipment through transportation network 282 | self.send_shipment( 283 | graph[self][edge[1]]['object'], 284 | transport_network, 285 | monetary_unit_transport_cost, 286 | monetary_unit_flow, 287 | cost_repercussion_mode 288 | ) 289 | else: # if it sends to service firms, nothing to do.
price is equilibrium price 290 | graph[self][edge[1]]['object'].price = graph[self][edge[1]]['object'].eq_price 291 | self.qty_sold += graph[self][edge[1]]['object'].delivery 292 | 293 | 294 | def send_shipment(self, commercial_link, transport_network, 295 | monetary_unit_transport_cost, monetary_unit_flow, cost_repercussion_mode): 296 | """Only applies to B2B flows 297 | """ 298 | monetary_unit_factor = { 299 | "mUSD": 1e6, 300 | "kUSD": 1e3, 301 | "USD": 1 302 | } 303 | factor = monetary_unit_factor[monetary_unit_flow] 304 | 305 | if len(commercial_link.route)==0: 306 | raise ValueError("Country "+str(self.pid)+ 307 | ": commercial link "+str(commercial_link.pid)+ 308 | " is not associated with any route; cannot send shipment to client "+ 309 | str(commercial_link.buyer_id)) 310 | 311 | if self.check_route_avaibility(commercial_link, transport_network, 'main') == 'available': 312 | # If the normal route is available, we can send the shipment as usual and pay the usual price 313 | commercial_link.current_route = 'main' 314 | commercial_link.price = commercial_link.eq_price 315 | transport_network.transport_shipment(commercial_link) 316 | 317 | self.generalized_transport_cost += commercial_link.route_time_cost \ 318 | + commercial_link.delivery_in_tons * commercial_link.route_cost_per_ton 319 | self.usd_transported += commercial_link.delivery 320 | self.tons_transported += commercial_link.delivery_in_tons 321 | self.tonkm_transported += commercial_link.delivery_in_tons * commercial_link.route_length 322 | self.qty_sold += commercial_link.delivery 323 | return 0 324 | 325 | # If there is a disruption, we try the alternative route, if there is any 326 | if (len(commercial_link.alternative_route)>0) and (self.check_route_avaibility(commercial_link, transport_network, 'alternative') == 'available'): 327 | commercial_link.current_route = 'alternative' 328 | route = commercial_link.alternative_route 329 | # Otherwise we have to find a new one 330 | else: 331 | origin_node = 
self.odpoint 332 | destination_node = commercial_link.route[-1][0] 333 | route, selected_mode = self.choose_route( 334 | transport_network=transport_network.get_undisrupted_network(), 335 | origin_node=origin_node, 336 | destination_node=destination_node, 337 | possible_transport_modes=commercial_link.possible_transport_modes 338 | ) 339 | # We evaluate the cost of this new route 340 | if route is not None: 341 | commercial_link.storeRouteInformation( 342 | route=route, 343 | transport_mode=selected_mode, 344 | main_or_alternative="alternative", 345 | transport_network=transport_network 346 | ) 347 | 348 | # If the alternative route is available, or if we discovered one, we proceed 349 | if route is not None: 350 | commercial_link.current_route = 'alternative' 351 | # Calculate contribution to generalized transport cost, to usd/tons/tonkms transported 352 | self.generalized_transport_cost += commercial_link.alternative_route_time_cost \ 353 | + commercial_link.delivery_in_tons * commercial_link.alternative_route_cost_per_ton 354 | self.usd_transported += commercial_link.delivery 355 | self.tons_transported += commercial_link.delivery_in_tons 356 | self.tonkm_transported += commercial_link.delivery_in_tons * commercial_link.alternative_route_length 357 | self.qty_sold += commercial_link.delivery 358 | 359 | if cost_repercussion_mode == "type1": #relative cost change with actual bill 360 | # Calculate relative increase in routing cost 361 | new_transport_bill = commercial_link.delivery_in_tons * commercial_link.alternative_route_cost_per_ton 362 | normal_transport_bill = commercial_link.delivery_in_tons * commercial_link.route_cost_per_ton 363 | relative_cost_change = max(new_transport_bill - normal_transport_bill, 0)/normal_transport_bill 364 | # Translate that into an increase in transport costs in the balance sheet 365 | relative_price_change_transport = 0.2 * relative_cost_change 366 | total_relative_price_change = relative_price_change_transport 367 | 
commercial_link.price = commercial_link.eq_price * (1 + total_relative_price_change) 368 | 369 | elif cost_repercussion_mode == "type2": # pass the actual transport bill increase on to the price 370 | added_costUSD_per_ton = max(commercial_link.alternative_route_cost_per_ton - commercial_link.route_cost_per_ton, 0) 371 | added_costUSD_per_mUSD = added_costUSD_per_ton / (self.usd_per_ton/factor) 372 | added_costmUSD_per_mUSD = added_costUSD_per_mUSD/factor 373 | commercial_link.price = commercial_link.eq_price + added_costmUSD_per_mUSD 374 | relative_price_change_transport = commercial_link.price / commercial_link.eq_price - 1 375 | 376 | elif cost_repercussion_mode == "type3": 377 | # We translate this real time cost into a relative transport cost change 378 | relative_cost_change = (commercial_link.alternative_route_time_cost - commercial_link.route_time_cost)/commercial_link.route_time_cost 379 | relative_price_change_transport = 0.2 * relative_cost_change 380 | # With that, we deliver the shipment 381 | total_relative_price_change = relative_price_change_transport 382 | commercial_link.price = commercial_link.eq_price * (1 + total_relative_price_change) 383 | 384 | transport_network.transport_shipment(commercial_link) 385 | # Print information 386 | logging.debug("Country "+str(self.pid)+": found an alternative route to client "+ 387 | str(commercial_link.buyer_id)+", it is costlier by "+ 388 | '{:.0f}'.format(100*relative_price_change_transport)+"%, price is "+ 389 | '{:.4f}'.format(commercial_link.price)+" instead of "+ 390 | '{:.4f}'.format(commercial_link.eq_price)) 391 | 392 | else: 393 | logging.debug("Country "+str(self.pid)+": because of disruption, there is "+ 394 | "no route between me and client "+str(commercial_link.buyer_id)) 395 | # We do not write how the input price would have changed 396 | commercial_link.price = commercial_link.eq_price 397 | # We do not pay the transporter, so we don't increment the transport cost 398 | 399 | 400 | def check_route_avaibility(self, commercial_link, 
transport_network, which_route='main'): 401 | 402 | if which_route=='main': 403 | route_to_check = commercial_link.route 404 | elif which_route=='alternative': 405 | route_to_check = commercial_link.alternative_route 406 | else: 407 | raise KeyError("Wrong value for parameter which_route, admissible values are 'main' and 'alternative'") 408 | 409 | res = 'available' 410 | for route_segment in route_to_check: 411 | if len(route_segment) == 2: 412 | if transport_network[route_segment[0]][route_segment[1]]['disruption_duration'] > 0: 413 | res = 'disrupted' 414 | break 415 | if len(route_segment) == 1: 416 | if transport_network.node[route_segment[0]]['disruption_duration'] > 0: 417 | res = 'disrupted' 418 | break 419 | return res 420 | 421 | 422 | 423 | def receive_products_and_pay(self, graph, transport_network): 424 | self.extra_spending = 0 425 | self.consumption_loss = 0 426 | for edge in graph.in_edges(self): 427 | if (edge[0].odpoint == -1): # if buys service, get directly from commercial link 428 | self.receive_service_and_pay(graph[edge[0]][self]['object']) 429 | else: # else collect through transport network 430 | self.receive_shipment_and_pay(graph[edge[0]][self]['object'], transport_network) 431 | 432 | 433 | def receive_service_and_pay(self, commercial_link): 434 | quantity_delivered = commercial_link.delivery 435 | commercial_link.payment = quantity_delivered * commercial_link.price 436 | self.extra_spending += quantity_delivered * (commercial_link.price - commercial_link.eq_price) 437 | 438 | 439 | def receive_shipment_and_pay(self, commercial_link, transport_network): 440 | """The agent looks for shipments at the transport node where it is located. 441 | It takes those which correspond to the commercial link, 442 | receives them, thereby removing them from the transport network, 443 | then pays the corresponding supplier along the commercial link. 444 | """ 446 | quantity_delivered = 0 447 | price = 1 448 | if 
commercial_link.pid in transport_network.node[self.odpoint]['shipments'].keys(): 449 | quantity_delivered += transport_network.node[self.odpoint]['shipments'][commercial_link.pid]['quantity'] 450 | price = transport_network.node[self.odpoint]['shipments'][commercial_link.pid]['price'] 451 | transport_network.remove_shipment(commercial_link) 452 | # Increment extra spending 453 | self.extra_spending += quantity_delivered * (price - commercial_link.eq_price) 454 | # Increment consumption loss 455 | self.consumption_loss += commercial_link.delivery - quantity_delivered 456 | # Log if quantity received does not match order 457 | if abs(commercial_link.delivery - quantity_delivered) > 1e-6: 458 | logging.debug("Agent "+str(self.pid)+": quantity delivered by "+ 459 | str(commercial_link.supplier_id)+" is "+str(quantity_delivered)+ 460 | ". It was supposed to be "+str(commercial_link.delivery)+".") 461 | # Make payment 462 | commercial_link.payment = quantity_delivered * price 463 | 464 | 465 | 466 | 467 | def evaluate_commercial_balance(self, graph): 468 | exports = sum([graph[self][edge[1]]['object'].payment for edge in graph.out_edges(self)]) 469 | imports = sum([graph[edge[0]][self]['object'].payment for edge in graph.in_edges(self)]) 470 | print("Country "+self.pid+": imports "+str(imports)+" from Tanzania and exports "+str(exports)+" to Tanzania") 471 | 472 | 473 | def add_congestion_malus2(self, graph, transport_network): 474 | """Congestion costs are perceived costs felt by firms; they do not affect the prices paid to transporters, hence do not change prices 475 | """ 476 | if len(transport_network.congestionned_edges) > 0: 477 | # for each client 478 | for edge in graph.out_edges(self): 479 | if graph[self][edge[1]]['object'].current_route == 'main': 480 | route_to_check = graph[self][edge[1]]['object'].route 481 | elif graph[self][edge[1]]['object'].current_route == 'alternative': 482 | route_to_check = graph[self][edge[1]]['object'].alternative_route 483 | 
else: 484 | continue 485 | # check whether the route currently used is congested 486 | if len(set(route_to_check) & set(transport_network.congestionned_edges)) > 0: 487 | # if it is, we add its cost to the generalized cost model 488 | self.generalized_transport_cost += transport_network.giveCongestionCostOfTime(route_to_check) -------------------------------------------------------------------------------- /code/class_households.py: -------------------------------------------------------------------------------- 1 | import random 2 | import pandas as pd 3 | import logging 4 | 5 | from class_commerciallink import CommercialLink 6 | 7 | 8 | class Households(object): 9 | 10 | def __init__(self, purchase_plan=None, final_demand_per_sector=None): 11 | # Intrinsic parameters 12 | self.pid = -1 13 | self.odpoint = -1 14 | # Parameters depending on data and network 15 | self.final_demand_per_sector = final_demand_per_sector or {} 16 | self.purchase_plan = purchase_plan or {} 17 | self.retailers = {} 18 | self.budget = 1 19 | # Variables reset and updated at each time step 20 | self.consumption = {} 21 | self.tot_consumption = 0 22 | self.consumption_per_sector = {} 23 | self.spending = {} 24 | self.tot_spending = 0 25 | self.spending_per_sector = {} 26 | # Cumulated variables reset at beginning and updated at each time step 27 | self.consumption_loss = 0 28 | self.extra_spending = 0 29 | 30 | 31 | def reset_variables(self): 32 | self.budget = 1 33 | self.consumption = {} 34 | self.tot_consumption = 0 35 | self.spending = {} 36 | self.tot_spending = 0 37 | self.extra_spending = 0 38 | self.consumption_loss = 0 39 | self.consumption_loss_per_sector = {key: 0 for key in self.final_demand_per_sector.keys()} 40 | 41 | 42 | def initialize_var_on_purchase_plan(self): 43 | if len(self.purchase_plan) == 0: 44 | logging.warning("Households initialize variables based on purchase plan, " 45 | +"but it is empty.") 46 | 47 | self.consumption = self.purchase_plan 48 | 
self.tot_consumption = sum(list(self.purchase_plan.values())) 49 | self.spending = self.consumption 50 | self.tot_spending = self.tot_consumption 51 | self.extra_spending_per_sector = { 52 | sector: 0 for sector in self.final_demand_per_sector.keys() 53 | } 54 | 55 | 56 | def select_suppliers(self, graph, firm_list, mode='inputed'): 57 | if mode=='equal': 58 | for firm in firm_list: 59 | graph.add_edge(firm, self, 60 | object=CommercialLink( 61 | pid=str(firm.pid)+'to'+str(self.pid), 62 | product=firm.sector, 63 | category="domestic_B2C", 64 | supplier_id=firm.pid, 65 | buyer_id=self.pid) 66 | ) 67 | self.purchase_plan = {firm.pid: self.budget/len(firm_list) for firm in firm_list} 68 | 69 | elif mode=='selected_retailers': 70 | firm_id_each_sector = pd.DataFrame({ 71 | 'firm': [firm.pid for firm in firm_list], 72 | 'sector': [firm.sector for firm in firm_list]}) 73 | dic_sector_to_nbfirms = firm_id_each_sector.groupby('sector')['firm'].count().to_dict() 74 | sectors = firm_id_each_sector['sector'].unique() 75 | consumption_each_sector_quantity = {sector: self.budget/len(sectors) for sector in sectors} 76 | 77 | for sector in sectors: 78 | nb_retailers = random.randint(1,dic_sector_to_nbfirms[sector]) 79 | retailers = random.sample(firm_id_each_sector.loc[firm_id_each_sector['sector']==sector, 'firm'].tolist(), nb_retailers) 80 | weight = 1 / nb_retailers 81 | for retailer_id in retailers: 82 | # For each supplier, create an edge in the economic network 83 | graph.add_edge(firm_list[retailer_id], self, 84 | object=CommercialLink( 85 | pid=str(retailer_id)+'to'+str(self.pid), 86 | product=sector, 87 | supplier_id=retailer_id, 88 | buyer_id=self.pid) 89 | ) 90 | # Associate a weight 91 | graph[firm_list[retailer_id]][self]['weight'] = weight 92 | # The household saves the retailer's sector and weight, and adds the retailer to its purchase plan 93 | self.retailers[retailer_id] = {'sector':self.pid, 'weight':weight} 94 | self.purchase_plan[retailer_id] = 
consumption_each_sector_quantity[sector] * weight 95 | # The retailer saves the fact that it supplies to households. The share of sales cannot be calculated now. 96 | firm_list[retailer_id].clients[self.pid] = {'sector':self.pid, 'share':0} 97 | 98 | 99 | elif mode == 'inputed': 100 | if len(self.purchase_plan)==0: 101 | raise KeyError('Households: mode==inputed but no purchase plan') 102 | elif len(self.final_demand_per_sector)==0: 103 | raise KeyError('Households: mode==inputed but no final_demand_per_sector') 104 | else: 105 | for retailer_id, purchase_this_retailer in self.purchase_plan.items(): 106 | if purchase_this_retailer > 0: 107 | sector = firm_list[retailer_id].sector 108 | graph.add_edge(firm_list[retailer_id], self, 109 | object=CommercialLink( 110 | pid=str(retailer_id)+'to'+str(self.pid), 111 | product=sector, 112 | supplier_id=retailer_id, 113 | buyer_id=self.pid) 114 | ) 115 | weight = purchase_this_retailer / self.final_demand_per_sector[sector] 116 | graph[firm_list[retailer_id]][self]['weight'] = weight 117 | self.retailers[retailer_id] = {'sector':self.pid, 'weight':weight} 118 | firm_list[retailer_id].clients[self.pid] = {'sector':self.pid, 'share':0} 119 | 120 | else: 121 | raise ValueError("Households: Wrong mode chosen") 122 | 123 | 124 | def send_purchase_orders(self, graph): 125 | for edge in graph.in_edges(self): 126 | try: 127 | quantity_to_buy = self.purchase_plan[edge[0].pid] 128 | except KeyError: 129 | print("Households: No purchase plan for supplier", edge[0].pid) 130 | quantity_to_buy = 0 131 | graph[edge[0]][self]['object'].order = quantity_to_buy 132 | 133 | 134 | def receive_products_and_pay(self, graph): 135 | # Re-initialize values that reset at each time step 136 | self.consumption = {} 137 | self.tot_consumption = 0 #quantity 138 | self.spending = {} 139 | self.tot_spending = 0 #money 140 | 141 | for edge in graph.in_edges(self): 142 | # For each retailer, get the products 143 | quantity_delivered = 
graph[edge[0]][self]['object'].delivery 144 | self.consumption[edge[0].pid] = quantity_delivered 145 | self.tot_consumption += quantity_delivered 146 | # Update the price and pay 147 | price = graph[edge[0]][self]['object'].price 148 | graph[edge[0]][self]['object'].payment = quantity_delivered * price 149 | # Measure spending 150 | self.spending[edge[0].pid] = quantity_delivered * price 151 | self.tot_spending += quantity_delivered * price 152 | # Increment extra spending and consumption loss 153 | self.extra_spending += quantity_delivered * \ 154 | (price - graph[edge[0]][self]['object'].eq_price) 155 | # Increment consumption loss 156 | consum_loss = (self.purchase_plan[edge[0].pid] - quantity_delivered) * \ 157 | (graph[edge[0]][self]['object'].eq_price) 158 | self.consumption_loss += consum_loss 159 | # Log if goods are undelivered 160 | if consum_loss >= 1e-6: 161 | logging.debug("Household: Firm "+str(edge[0].pid)+" was supposed to deliver "+ 162 | str(self.purchase_plan[edge[0].pid])+ " but delivered "+ 163 | str(quantity_delivered)) 164 | 165 | 166 | 167 | def print_info(self): 168 | print("\nHouseholds:") 169 | print("final_demand_per_sector:", str(self.final_demand_per_sector)) 170 | print("purchase_plan:", self.purchase_plan) 171 | print("consumption:", self.consumption) 172 | -------------------------------------------------------------------------------- /code/class_observer.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | import pandas as pd 4 | import networkx as nx 5 | import geopandas as gpd 6 | import logging 7 | 8 | from functions import rescale_values 9 | 10 | # Observer is an object that collects data while the simulation is running 11 | # It does not export anything. 
Export functions are in export_functions.py 12 | class Observer(object): 13 | 14 | def __init__(self, firm_list, Tfinal=0): 15 | self.firms = {} 16 | self.households = {} 17 | self.countries = {} 18 | sector_list = list(set([firm.sector for firm in firm_list])) 19 | self.disruption_time = 1 20 | self.production = pd.DataFrame(index=range(0,Tfinal+1), 21 | columns=["firm_"+str(firm.pid) for firm in firm_list]+['total'], 22 | data=0) 23 | self.delta_price = pd.DataFrame(index=range(0,Tfinal+1), 24 | columns=["firm_"+str(firm.pid) for firm in firm_list]+['average'], 25 | data=0) 26 | self.profit = pd.DataFrame(index=range(0,Tfinal+1), 27 | columns=["firm_"+str(firm.pid) for firm in firm_list]+['total'], 28 | data=0) 29 | self.consumption = pd.DataFrame(index=range(0,Tfinal+1), 30 | columns=["sector_"+str(sector_id) for sector_id in sector_list]+['total'], 31 | data=0) 32 | self.spending = pd.DataFrame(index=range(0,Tfinal+1), 33 | columns=["sector_"+str(sector_id) for sector_id in sector_list]+['total'], 34 | data=0) 35 | self.av_inventory_duration = pd.DataFrame(index=range(0,Tfinal+1), 36 | columns=["firm_"+str(firm.pid) for firm in firm_list]+['average'], 37 | data=0) 38 | self.flows_snapshot = {} 39 | self.shipments_snapshot = {} 40 | self.disrupted_nodes = {} 41 | self.disrupted_edges = {} 42 | self.households_extra_spending = 0 43 | self.households_extra_local = 0 44 | self.households_extra_spending_per_firm = {} 45 | self.spending_recovered = True 46 | self.households_consumption_loss = 0 47 | self.households_consumption_loss_local = 0 48 | self.households_consumption_loss_per_firm = {} 49 | self.consumption_recovered = True 50 | self.firm_to_sector = {firm.pid: firm.sector for firm in firm_list} 51 | self.generalized_cost_normal = 0 52 | self.generalized_cost_disruption = 0 53 | self.generalized_cost_country_normal = 0 54 | self.generalized_cost_country_disruption = 0 55 | self.usd_transported_normal = 0 56 | self.usd_transported_disruption = 0 57 | 
self.tons_transported_normal = 0 58 | self.tons_transported_disruption = 0 59 | self.tonkm_transported_normal = 0 60 | self.tonkm_transported_disruption = 0 61 | 62 | 63 | def collect_agent_data(self, firm_list, households, country_list, time_step): 64 | self.firms[time_step] = {firm.pid: { 65 | 'production': firm.production, 66 | 'profit': firm.profit, 67 | 'transport_cost': firm.finance['costs']['transport'], 68 | 'input_cost': firm.finance['costs']['input'], 69 | 'other_cost': firm.finance['costs']['other'], 70 | 'inventory_duration': firm.current_inventory_duration, 71 | 'generalized_transport_cost': firm.generalized_transport_cost, 72 | 'usd_transported': firm.usd_transported, 73 | 'tons_transported': firm.tons_transported, 74 | 'tonkm_transported': firm.tonkm_transported 75 | } for firm in firm_list} 76 | self.countries[time_step] = {country.pid: { 77 | 'generalized_transport_cost': country.generalized_transport_cost, 78 | 'usd_transported': country.usd_transported, 79 | 'tons_transported': country.tons_transported, 80 | 'tonkm_transported': country.tonkm_transported, 81 | 'extra_spending': country.extra_spending, 82 | 'consumption_loss': country.consumption_loss, 83 | 'spending': sum(list(country.qty_purchased.values())) 84 | } for country in country_list} 85 | self.households[time_step] = { 86 | 'spending': households.spending, 87 | 'consumption': households.consumption 88 | } 89 | 90 | 91 | def collect_transport_flows(self, transport_network, time_step, flow_types=None, 92 | collect_shipments=False): 93 | """ 94 | Store the transport flow at that time step. 95 | 96 | See TransportNetwork.compute_flow_per_segment() for details on the flow types. 
97 | 98 | Parameters 99 | ---------- 100 | transport_network : TransportNetwork 101 | Transport network 102 | time_step : int 103 | The time step to index these data 104 | flow_types : list of string 105 | See TransportNetwork.compute_flow_per_segment() for details 106 | collect_shipments : Boolean 107 | Whether or not to store all individual shipments 108 | 109 | Returns 110 | ------- 111 | Nothing 112 | """ 113 | 114 | # Get disrupted nodes and edges 115 | self.disrupted_nodes[time_step] = [ 116 | node 117 | for node, val in nx.get_node_attributes(transport_network, "disruption_duration").items() 118 | if val > 0 119 | ] 120 | self.disrupted_edges[time_step] = [ 121 | nx.get_edge_attributes(transport_network, "id")[edge] 122 | for edge, val in nx.get_edge_attributes(transport_network, "disruption_duration").items() 123 | if val > 0 124 | ] 125 | 126 | # Store flow 127 | flow_types = flow_types or ['total'] 128 | self.flows_snapshot[time_step] = { 129 | str(transport_network[edge[0]][edge[1]]['id']): { 130 | str(flow_type): transport_network[edge[0]][edge[1]]["flow_"+str(flow_type)] 131 | for flow_type in flow_types 132 | } 133 | for edge in transport_network.edges 134 | if transport_network[edge[0]][edge[1]]['type'] != 'virtual' #deprecated, no virtual edges anymore 135 | } 136 | for edge in transport_network.edges: 137 | edge_id = transport_network[edge[0]][edge[1]]['id'] 138 | self.flows_snapshot[time_step][str(edge_id)]['total_tons'] = \ 139 | transport_network[edge[0]][edge[1]]["current_load"] 140 | 141 | # Store shipments 142 | if collect_shipments: 143 | self.shipments_snapshot[time_step] = { 144 | transport_network[edge[0]][edge[1]]['id']: transport_network[edge[0]][edge[1]]["shipments"] 145 | for edge in transport_network.edges 146 | } 147 | 148 | 149 | def collect_specific_flows(self, transport_network): 150 | self.specific_flows = {} 151 | multimodal_flows_to_collect = [ 152 | 'roads-maritime-shv', 153 | 'roads-maritime-vnm', 154 | 'railways-maritime', 
155 | 'waterways-maritime', 156 | ] 157 | special_flows_to_collect = [ 158 | 'poipet', 159 | 'bavet' 160 | ] 161 | flow_types = ['import', 'export', 'transit'] 162 | for edge in transport_network.edges: 163 | if transport_network[edge[0]][edge[1]]['multimodes'] in multimodal_flows_to_collect: 164 | self.specific_flows[transport_network[edge[0]][edge[1]]['multimodes']] = { 165 | flow_type: transport_network[edge[0]][edge[1]]["flow_"+str(flow_type)] 166 | for flow_type in flow_types 167 | } 168 | if transport_network[edge[0]][edge[1]]['special']: 169 | for special in special_flows_to_collect: 170 | if special in transport_network[edge[0]][edge[1]]['special']: 171 | self.specific_flows[special] = { 172 | flow_type: transport_network[edge[0]][edge[1]]["flow_"+str(flow_type)] 173 | for flow_type in flow_types 174 | } 175 | 176 | # Display key metrics on multimodal international flows 177 | df = pd.DataFrame(self.specific_flows).transpose() 179 | total_import = df.loc[multimodal_flows_to_collect, 'import'].sum() 181 | total_export = df.loc[multimodal_flows_to_collect, 'export'].sum() 182 | total = total_import + total_export 184 | res = {} 185 | for flow in multimodal_flows_to_collect: 188 | res[flow] = { 189 | "import": df.loc[flow, "import"] / total_import, 190 | "export": df.loc[flow, "export"] / total_export, 191 | "total": df.loc[flow, ["import", "export"]].sum() / total 192 | } 193 | res["total"] = { 194 | "import": total_import, 195 | "export": total_export, 196 | "total": total 197 | } 198 | print(pd.DataFrame(res).transpose()) 199 | 200 | 201 | def compute_sectoral_IO_table(self, graph): 202 | for edge in graph.edges: 203 | graph[edge[0]][edge[1]]['object'].delivery 204 | 205 | 206 | def get_ts_feature_agg_all_agents(self, feature, agent_type): 207 | if agent_type=='firm': 208 | return 
pd.Series(index=list(self.firms.keys()), data=[sum([val[feature] for firm_id, val in self.firms[t].items()]) for t in self.firms.keys()]) 209 | elif agent_type=='country': 210 | return pd.Series(index=list(self.countries.keys()), data=[sum([val[feature] for country_id, val in self.countries[t].items()]) for t in self.countries.keys()]) 211 | elif agent_type=='firm+country': 212 | tot_firm = pd.Series(index=list(self.firms.keys()), data=[sum([val[feature] for firm_id, val in self.firms[t].items()]) for t in self.firms.keys()]) 213 | tot_country = pd.Series(index=list(self.countries.keys()), data=[sum([val[feature] for country_id, val in self.countries[t].items()]) for t in self.countries.keys()]) 214 | return tot_firm+tot_country 215 | else: 216 | raise ValueError("'agent_type' should be 'firm', 'country', or 'firm+country'") 217 | 218 | 219 | def evaluate_results(self, transport_network, households, 220 | disruption, disruption_duration, per_firm=False): 221 | 222 | initial_time_step = 0 223 | 224 | # Compute indirect cost 225 | self.households_extra_spending = households.extra_spending 226 | logging.info("Impact of price change on households: "+'{:.4f}'.format(self.households_extra_spending)) 227 | tot_spending_ts = pd.Series({t: sum(val['spending'].values()) for t, val in self.households.items()}) 228 | self.spending_recovered = (tot_spending_ts.iloc[-1] - tot_spending_ts.loc[initial_time_step]) < 1e-6 229 | if self.spending_recovered: 230 | logging.info("Households spending has recovered") 231 | else: 232 | logging.info("Households spending has not recovered") 233 | 234 | self.households_consumption_loss = households.consumption_loss 235 | logging.info("Impact of shortages on households: "+'{:.4f}'.format(self.households_consumption_loss)) 236 | tot_consumption_ts = pd.Series({t: sum(val['consumption'].values()) for t, val in self.households.items()}) 237 | self.consumption_recovered = (tot_consumption_ts.iloc[-1] - tot_consumption_ts.loc[initial_time_step]) < 
1e-6 238 | if self.consumption_recovered: 239 | logging.info("Households consumption has recovered") 240 | else: 241 | logging.info("Households consumption has not recovered") 242 | 243 | # Compute other indicators 244 | tot_ts = self.get_ts_feature_agg_all_agents('generalized_transport_cost', 'firm+country') 245 | self.generalized_cost_normal = tot_ts.loc[initial_time_step] 246 | self.generalized_cost_disruption = tot_ts.loc[self.disruption_time] 247 | 248 | tot_ts = self.get_ts_feature_agg_all_agents('generalized_transport_cost', 'country') 249 | self.generalized_cost_country_normal = tot_ts.loc[initial_time_step] 250 | self.generalized_cost_country_disruption = tot_ts.loc[self.disruption_time] 251 | 252 | tot_ts = self.get_ts_feature_agg_all_agents('usd_transported', 'firm+country') 253 | self.usd_transported_normal = tot_ts.loc[initial_time_step] 254 | self.usd_transported_disruption = tot_ts.loc[self.disruption_time] 255 | 256 | tot_ts = self.get_ts_feature_agg_all_agents('tons_transported', 'firm+country') 257 | self.tons_transported_normal = tot_ts.loc[initial_time_step] 258 | self.tons_transported_disruption = tot_ts.loc[self.disruption_time] 259 | 260 | tot_ts = self.get_ts_feature_agg_all_agents('tonkm_transported', 'firm+country') 261 | self.tonkm_transported_normal = tot_ts.loc[initial_time_step] 262 | self.tonkm_transported_disruption = tot_ts.loc[self.disruption_time] 263 | 264 | tot_ts = self.get_ts_feature_agg_all_agents('extra_spending', 'country') 265 | self.countries_extra_spending = tot_ts.sum() 266 | 267 | tot_ts = self.get_ts_feature_agg_all_agents('consumption_loss', 'country') 268 | self.countries_consumption_loss = tot_ts.sum() 269 | 270 | # Measure impact per firm 271 | firm_id_in_disrupted_nodes = [ 272 | firm_id for disrupted_node in disruption['node'] 273 | for firm_id in transport_network.node[disrupted_node]['firms_there'] 274 | ] 275 | 276 | extra_spending_per_firm = pd.DataFrame({t: val['spending'] for t, val in 
self.households.items()}).transpose() 277 | extra_spending_per_firm = extra_spending_per_firm.sum() - extra_spending_per_firm.shape[0]*extra_spending_per_firm.iloc[0,:] 278 | self.households_extra_spending_local = extra_spending_per_firm[firm_id_in_disrupted_nodes].sum() 279 | 280 | consumption_loss_per_firm = pd.DataFrame({t: val['consumption'] for t, val in self.households.items()}).transpose() 281 | consumption_loss_per_firm = -(consumption_loss_per_firm.sum() - consumption_loss_per_firm.shape[0]*consumption_loss_per_firm.iloc[0,:]) 282 | self.households_consumption_loss_local = consumption_loss_per_firm[firm_id_in_disrupted_nodes].sum() 283 | 284 | if per_firm: 285 | self.households_extra_spending_per_firm = extra_spending_per_firm 286 | self.households_extra_spending_per_firm[self.households_extra_spending_per_firm<1e-6] = 0 287 | self.households_consumption_loss_per_firm = consumption_loss_per_firm 288 | self.households_consumption_loss_per_firm[self.households_consumption_loss_per_firm<1e-6] = 0 289 | 290 | # if export_folder is not None: 291 | # pd.DataFrame(index=["value", "recovered"], data={ 292 | # "agg_spending":[self.households_extra_spending, self.spending_recovered], 293 | # "agg_consumption":[self.households_consumption_loss, self.consumption_recovered] 294 | # }).to_csv(os.path.join(export_folder, "results.csv")) 295 | 296 | 297 | 298 | 299 | 300 | @staticmethod 301 | def agg_per_sector(df, mapping, fun='sum'): 302 | tt = df.transpose().copy() 303 | tt['cat'] = tt.index.map(mapping) 304 | if fun=='sum': 305 | tt = tt.groupby('cat').sum().transpose() 306 | elif fun=='mean': 307 | tt = tt.groupby('cat').mean().transpose() 308 | else: 309 | raise ValueError('Fun should be sum or mean') 310 | return tt 311 | 312 | 313 | 314 | -------------------------------------------------------------------------------- /code/class_transport_network.py: -------------------------------------------------------------------------------- 1 | 2 | import networkx as nx 3 | 
import pandas as pd 4 | import geopandas as gpd 5 | import logging 6 | 7 | from functions import rescale_values, congestion_function, transformUSDtoTons 8 | 9 | 10 | class TransportNetwork(nx.Graph): 11 | 12 | def add_transport_node(self, node_id, all_nodes_data): #used in add_transport_edge_with_nodes 13 | node_attributes = ["id", "geometry", "special", "name"] 14 | node_data = all_nodes_data.loc[node_id, node_attributes].to_dict() 15 | node_data['shipments'] = {} 16 | node_data['disruption_duration'] = 0 17 | node_data['firms_there'] = [] 18 | node_data['type'] = 'road' 19 | self.add_node(node_id, **node_data) 20 | 21 | 22 | def add_transport_edge_with_nodes(self, edge_id, all_edges_data, all_nodes_data): 23 | # Selecting data 24 | edge_attributes = ['id', "type", 'surface', "geometry", "class", "km", 'special', "name", 25 | "multimodes", "capacity", 26 | "cost_per_ton", "travel_time", "time_cost", 'cost_travel_time', 'cost_variability', 'agg_cost'] 27 | edge_data = all_edges_data.loc[edge_id, edge_attributes].to_dict() 28 | end_ids = all_edges_data.loc[edge_id, ["end1", "end2"]].tolist() 29 | # Creating the start and end nodes 30 | if end_ids[0] not in self.nodes: 31 | self.add_transport_node(end_ids[0], all_nodes_data) 32 | if end_ids[1] not in self.nodes: 33 | self.add_transport_node(end_ids[1], all_nodes_data) 34 | # Creating the edge 35 | self.add_edge(end_ids[0], end_ids[1], **edge_data) 36 | # print("edge id:", edge_id, "| end1:", end_ids[0], "| end2:", end_ids[1], "| nb edges:", len(self.edges)) 37 | # print(self.edges) 38 | self[end_ids[0]][end_ids[1]]['node_tuple'] = (end_ids[0], end_ids[1]) 39 | self[end_ids[0]][end_ids[1]]['shipments'] = {} 40 | self[end_ids[0]][end_ids[1]]['disruption_duration'] = 0 41 | self[end_ids[0]][end_ids[1]]['current_load'] = 0 42 | 43 | 44 | # def connect_country(self, country): 45 | # self.add_node(country.pid, **{'type':'virtual'}) 46 | # for entry_point in country.entry_points: #ATT so far works for road only 47 | # 
self.add_edge(entry_point, country.pid, 48 | # **{'type':'virtual', 'time_cost':1000} 49 | # ) # high time cost to avoid that algo goes through countries 50 | 51 | 52 | # def remove_countries(self, country_list): 53 | # country_node_to_remove = list(set(self.nodes) & set([country.pid for country in country_list])) 54 | # for country in country_node_to_remove: 55 | # self.remove_node(country) 56 | 57 | 58 | # def giveRouteCost(self, route): 59 | # time_cost = 1 #cost cannot be 0 60 | # for segment in route: 61 | # if len(segment) == 2: #only edges have costs 62 | # if self[segment[0]][segment[1]]['type'] != 'virtual': 63 | # time_cost += self[segment[0]][segment[1]]['time_cost'] 64 | # return time_cost 65 | 66 | 67 | # def giveRouteCostAndTransportUnitCost(self, route): 68 | # time_cost = 1 #cost cannot be 0 69 | # cost_per_ton = 0 70 | # for segment in route: 71 | # if len(segment) == 2: #only edges have costs 72 | # if self[segment[0]][segment[1]]['type'] != 'virtual': 73 | # time_cost += self[segment[0]][segment[1]]['time_cost'] 74 | # cost_per_ton += (surface=='paved')*self.graph['unit_cost']['road']['paved']+\ 75 | # (surface=='unpaved')*self.graph['unit_cost']['road']['unpaved'] 76 | 77 | # return time_cost, cost_per_ton 78 | 79 | 80 | def giveRouteCaracteristicsOld(self, route): 81 | distance = 0 # km 82 | time_cost = 1 #USD, cost cannot be 0 83 | cost_per_ton = 0 #USD/ton 84 | for segment in route: 85 | if len(segment) == 2: #only edges have costs 86 | if self[segment[0]][segment[1]]['type'] != 'virtual': 87 | distance += self[segment[0]][segment[1]]['km'] 88 | time_cost += self[segment[0]][segment[1]]['time_cost'] 89 | surface = self[segment[0]][segment[1]]['surface'] 90 | cost_per_ton += (surface=='paved')*self.graph['unit_cost']['roads']['paved']+\ 91 | (surface=='unpaved')*self.graph['unit_cost']['roads']['unpaved'] 92 | 93 | return distance, time_cost, cost_per_ton 94 | 95 | 96 | def giveRouteCaracteristics(self, route): 97 | distance = 0 # km 98 | 
time_cost = 1 #USD, cost cannot be 0 99 | cost_per_ton = 0 #USD/ton 100 | for segment in route: 101 | if len(segment) == 2: #only edges have costs 102 | if self[segment[0]][segment[1]]['type'] != 'virtual': 103 | distance += self[segment[0]][segment[1]]['km'] 104 | time_cost += self[segment[0]][segment[1]]['time_cost'] 105 | cost_per_ton += self[segment[0]][segment[1]]['cost_per_ton'] 106 | 107 | return distance, time_cost, cost_per_ton 108 | 109 | 110 | def giveRouteCostWithCongestion(self, route): 111 | time_cost = 1 #cost cannot be 0 112 | for segment in route: 113 | if len(segment) == 2: #only edges have costs 114 | if self[segment[0]][segment[1]]['type'] != 'virtual': 115 | time_cost += self[segment[0]][segment[1]]['cost_variability'] + self[segment[0]][segment[1]]['cost_travel_time'] * (1 + self[segment[0]][segment[1]]['congestion']) 116 | return time_cost 117 | 118 | 119 | def giveCongestionCostOfTime(self, route): 120 | congestion_time_cost = 0 121 | for segment in route: 122 | if len(segment) == 2: #only edges have costs 123 | if self[segment[0]][segment[1]]['type'] != 'virtual': 124 | congestion_time_cost += self[segment[0]][segment[1]]['cost_travel_time'] * self[segment[0]][segment[1]]['congestion'] 125 | return congestion_time_cost 126 | 127 | 128 | def defineWeights(self, route_optimization_weight): 129 | '''Define the edge weights used by firms and countries to decide routes 130 | 131 | There are 3 types of weights: 132 | - weight: the indicator chosen by the 'route_optimization_weight' parameter 133 | - capacity_weight: same as weight, but we add capacity_burden when loads 134 | are over the threshold 135 | - mode_weight: we generate different weights for the different modes of transportation 136 | defined in the commercial links. 
So far, we defined: 137 | - road_weight: domestic roads (used between national firms) 138 | - intl_road_shv_weight: international route using primarily roads via shv (maritime + roads) 139 | - intl_road_vnm_weight: international route using primarily roads via vnm (maritime + roads) 140 | - intl_rail_weight: international route using primarily rails (maritime + rails + roads) 141 | - intl_river_weight: international route using primarily waterways (maritime + river + roads) 142 | 143 | The idea is to weight different parts of the network more or less heavily 144 | to "force" agents to choose one mode or the other. 145 | Since roads always need to be taken, we define a smaller burden for them. 146 | 147 | We start with the "route_optimization_weight" chosen as parameter (e.g., cost_per_ton, travel_time). 148 | Then we add a huge burden to the edges we want agents to avoid, 149 | or we set the weight to 0 if we want to favor them. 150 | ''' 151 | road_burden = 1e6 152 | other_mode_burden = 1e10 153 | self.mode_weights = ['road_weight', 'intl_road_shv_weight', 154 | 'intl_road_vnm_weight', 'intl_rail_weight', 'intl_river_weight'] 155 | for edge in self.edges: 156 | self[edge[0]][edge[1]]['weight'] = self[edge[0]][edge[1]][route_optimization_weight] 157 | self[edge[0]][edge[1]]['capacity_weight'] = self[edge[0]][edge[1]][route_optimization_weight] 158 | self[edge[0]][edge[1]]['road_weight'] = self[edge[0]][edge[1]][route_optimization_weight] 159 | self[edge[0]][edge[1]]['intl_road_shv_weight'] = self[edge[0]][edge[1]][route_optimization_weight] 160 | self[edge[0]][edge[1]]['intl_road_vnm_weight'] = self[edge[0]][edge[1]][route_optimization_weight] 161 | self[edge[0]][edge[1]]['intl_rail_weight'] = self[edge[0]][edge[1]][route_optimization_weight] 162 | self[edge[0]][edge[1]]['intl_river_weight'] = self[edge[0]][edge[1]][route_optimization_weight] 163 | 164 | # for each mode weight, burden the multimodal links to exclude 165 | multimodal_links_to_exclude_dic = { 166 | 
'railways-maritime', 168 | 'waterways-maritime', 169 | 'roads-railways', 170 | 'roads-waterways', 171 | 'roads-maritime-shv', 172 | 'roads-maritime-vnm' 173 | ], 174 | "intl_road_shv_weight": [ 175 | 'railways-maritime', 176 | 'waterways-maritime', 177 | 'roads-railways', 178 | 'roads-waterways', 179 | 'roads-maritime-vnm' 180 | ], 181 | "intl_road_vnm_weight": [ 182 | 'railways-maritime', 183 | 'waterways-maritime', 184 | 'roads-railways', 185 | 'roads-waterways', 186 | 'roads-maritime-shv' 187 | ], 188 | "intl_rail_weight": [ 189 | 'waterways-maritime', 190 | 'roads-maritime-shv', 191 | 'roads-maritime-vnm', 192 | 'roads-waterways' 193 | ], 194 | "intl_river_weight": [ 195 | 'railways-maritime', 196 | 'roads-maritime-shv', 197 | 'roads-maritime-vnm', 198 | 'roads-railways' 199 | ], 200 | } 201 | for mode, multimodal_links_to_exclude in multimodal_links_to_exclude_dic.items(): 202 | if self[edge[0]][edge[1]]['multimodes'] in multimodal_links_to_exclude: 203 | self[edge[0]][edge[1]][mode] = other_mode_burden 204 | 205 | 206 | 207 | def locate_firms_on_nodes(self, firm_list): 208 | for node_id in self.nodes: 209 | self.node[node_id]['firms_there'] = [] 210 | for firm in firm_list: 211 | if firm.odpoint != -1: 212 | try: 213 | self.node[firm.odpoint]['firms_there'].append(firm.pid) 214 | except KeyError: 215 | logging.error('Transport network has no node numbered: '+str(firm.odpoint)) 216 | 217 | 218 | def provide_shortest_route(self, origin_node, destination_node, route_weight): 219 | ''' 220 | nx.shortest_path returns path as list of nodes 221 | we transform it into a route, which contains nodes and edges: 222 | [(1,), (1,5), (5,), (5,8), (8,)] 223 | ''' 224 | if (origin_node not in self.nodes): 225 | logging.warning("Origin node "+str(origin_node)+" not in the available transport network") 226 | return None 227 | 228 | elif (destination_node not in self.nodes): 229 | logging.warning("Destination node "+str(destination_node)+" not in the available transport 
network") 230 | return None 231 | 232 | elif nx.has_path(self, origin_node, destination_node): 233 | sp = nx.shortest_path(self, origin_node, destination_node, weight=route_weight) 234 | route = [[(sp[0],)]] + [[(sp[i], sp[i+1]), (sp[i+1],)] for i in range(0,len(sp)-1)] 235 | route = [item for item_tuple in route for item in item_tuple] 236 | return route 237 | 238 | else: 239 | logging.warning("There is no path between "+str(origin_node)+" and "+str(destination_node)) 240 | return None 241 | 242 | 243 | def sum_indicator_on_route(self, route, indicator, detail_type=False): 244 | total_indicator = 0 245 | all_edges = [item for item in route if len(item) == 2] 246 | # if indicator == "intl_rail_weight": 247 | # print("self[2586][2579]['intl_rail_weight']",self[2586][2579]["intl_rail_weight"]) 248 | for edge in all_edges: 249 | # if indicator == "intl_rail_weight": 250 | # print(edge, self[edge[0]][edge[1]][indicator]) 251 | total_indicator += self[edge[0]][edge[1]][indicator] 252 | 253 | # If detail_type == True, we print the indicator per edge categories 254 | details = [] 255 | if detail_type: 256 | for edge in all_edges: 257 | new_edge = {} 258 | new_edge['id'] = self[edge[0]][edge[1]]['id'] 259 | new_edge['type'] = self[edge[0]][edge[1]]['type'] 260 | new_edge['multimodes'] = self[edge[0]][edge[1]]['multimodes'] 261 | new_edge['special'] = self[edge[0]][edge[1]]['special'] 262 | new_edge[indicator] = self[edge[0]][edge[1]][indicator] 263 | details += [new_edge] 264 | details = pd.DataFrame(details).fillna('N/A') 265 | # print(details) 266 | detail_per_cat = details.groupby(['type', 'multimodes', 'special'])[indicator].sum() 267 | print(detail_per_cat) 268 | return total_indicator 269 | 270 | 271 | 272 | def get_undisrupted_network(self): 273 | available_nodes = [node for node in self.nodes if self.node[node]['disruption_duration']==0] 274 | available_subgraph = self.subgraph(available_nodes) 275 | available_edges = [edge for edge in self.edges if 
self[edge[0]][edge[1]]['disruption_duration']==0] 276 | available_subgraph = available_subgraph.edge_subgraph(available_edges) 277 | return TransportNetwork(available_subgraph) 278 | 279 | 280 | # def get_available_network(self): 281 | # available_edges = [ 282 | # edge 283 | # for edge in self.edges 284 | # if self[edge[0]][edge[1]]['current_load'] < self[edge[0]][edge[1]]['capacity'] 285 | # ] 286 | # available_subgraph = self.edge_subgraph(available_edges) 287 | # return TransportNetwork(available_subgraph) 288 | 289 | 290 | def disrupt_roads(self, disruption): 291 | # Disrupting nodes 292 | for node_id in disruption['node']: 293 | logging.info('Road node '+str(node_id)+ 294 | ' gets disrupted for '+str(disruption['duration'])+ ' time steps') 295 | self.node[node_id]['disruption_duration'] = disruption['duration'] 296 | # Disrupting edges 297 | for edge in self.edges: 298 | if self[edge[0]][edge[1]]['type'] == 'virtual': 299 | continue 300 | else: 301 | if self[edge[0]][edge[1]]['id'] in disruption['edge']: 302 | logging.info('Road edge '+str(self[edge[0]][edge[1]]['id'])+ 303 | ' gets disrupted for '+str(disruption['duration'])+ ' time steps') 304 | self[edge[0]][edge[1]]['disruption_duration'] = disruption['duration'] 305 | 306 | 307 | def update_road_state(self): 308 | for node in self.nodes: 309 | if self.node[node]['disruption_duration'] > 0: 310 | self.node[node]['disruption_duration'] -= 1 311 | for edge in self.edges: 312 | if self[edge[0]][edge[1]]['disruption_duration'] > 0: 313 | self[edge[0]][edge[1]]['disruption_duration'] -= 1 314 | #return subset of self 315 | 316 | 317 | def transport_shipment(self, commercial_link): 318 | # Select the route to transport the shipment: main or alternative 319 | if commercial_link.current_route == 'main': 320 | route_to_take = commercial_link.route 321 | elif commercial_link.current_route == 'alternative': 322 | route_to_take = commercial_link.alternative_route 323 | else: 324 | route_to_take = [] 325 | 326 | # 
Propagate the shipments 327 | for route_segment in route_to_take: 328 | if len(route_segment) == 2: #pass shipments to edges 329 | self[route_segment[0]][route_segment[1]]['shipments'][commercial_link.pid] = { 330 | "from": commercial_link.supplier_id, 331 | "to": commercial_link.buyer_id, 332 | "quantity": commercial_link.delivery, 333 | "tons": commercial_link.delivery_in_tons, 334 | "product_type": commercial_link.product, 335 | "flow_category": commercial_link.category, 336 | "price": commercial_link.price 337 | } 338 | elif len(route_segment) == 1: #pass shipments to nodes 339 | self.node[route_segment[0]]['shipments'][commercial_link.pid] = { 340 | "from": commercial_link.supplier_id, 341 | "to": commercial_link.buyer_id, 342 | "quantity": commercial_link.delivery, 343 | "tons": commercial_link.delivery_in_tons, 344 | "product_type": commercial_link.product, 345 | "flow_category": commercial_link.category, 346 | "price": commercial_link.price 347 | } 348 | 349 | # Propagate the load 350 | self.update_load_on_route(route_to_take, commercial_link.delivery_in_tons) 351 | 352 | 353 | def update_load_on_route(self, route, load): 354 | 355 | '''Assign a load to a route 356 | 357 | The current_load attribute of each edge in the route will be increased by the new load. 358 | A load is typically expressed in tons. 359 | 360 | If the current_load exceeds the capacity, then capacity_burden is added to both the 361 | mode_weight and the capacity_weight. 
362 | This will prevent firms from choosing this route 363 | ''' 364 | # logging.info("Edge (2610, 2589): current_load "+str(self[2610][2589]['current_load'])) 365 | capacity_burden = 1e5 366 | all_edges = [item for item in route if len(item) == 2] 367 | # if 'railways' in self.give_route_mode(route): 368 | # print("self[2586][2579]['current_load']", self[2586][2579]['current_load']) 369 | for edge in all_edges: 370 | # if (edge[0] == 2610) & (edge[1] == 2589): 371 | # logging.info('Edge '+str(edge)+": current_load "+str(self[edge[0]][edge[1]]['current_load'])) 372 | # check that the edge to be loaded is not already over capacity 373 | # if (self[edge[0]][edge[1]]['current_load'] > self[edge[0]][edge[1]]['capacity']): 374 | # logging.info('Edge '+str(edge)+" ("+self[edge[0]][edge[1]]['type']\ 375 | # +") is already over capacity and will be loaded more!") 376 | # Add the load 377 | self[edge[0]][edge[1]]['current_load'] += load 378 | # If it exceeds capacity, add the capacity_burden to both the mode_weight and the capacity_weight 379 | if (self[edge[0]][edge[1]]['current_load'] > self[edge[0]][edge[1]]['capacity']): 380 | logging.info('Edge '+str(edge)+" ("+self[edge[0]][edge[1]]['type']\ 381 | +") has exceeded its capacity") 382 | self[edge[0]][edge[1]]["capacity_weight"] += capacity_burden 383 | for mode_weight in self.mode_weights: 384 | self[edge[0]][edge[1]][mode_weight] += capacity_burden 385 | # print("self[edge[0]][edge[1]][current_load]", self[edge[0]][edge[1]]['current_load']) 386 | 387 | 388 | def reset_current_loads(self, route_optimization_weight): 389 | '''Reset current_load to 0 390 | 391 | If an edge was burdened due to capacity exceedance, we remove the burden 392 | ''' 393 | for edge in self.edges: 394 | self[edge[0]][edge[1]]['current_load'] = 0 395 | 396 | self.defineWeights(route_optimization_weight) 397 | 398 | 399 | def give_route_mode(self, route): 400 | '''Which mode is used on the route? 
401 | 402 | Return the list of transport modes used along the route 403 | ''' 404 | modes = [] 405 | all_edges = [item for item in route if len(item) == 2] 406 | for edge in all_edges: 407 | modes += [self[edge[0]][edge[1]]['type']] 408 | return list(dict.fromkeys(modes)) 409 | 410 | 411 | def check_edge_in_route(self, route, searched_edge): 412 | all_edges = [item for item in route if len(item) == 2] 413 | for edge in all_edges: 414 | if (searched_edge[0] == edge[0]) and (searched_edge[1] == edge[1]): 415 | return True 416 | return False 417 | 418 | 419 | def remove_shipment(self, commercial_link): 420 | """Look for the shipment corresponding to the commercial link 421 | in any edges and nodes of the main and alternative route, 422 | and remove it 423 | """ 424 | route_to_take = commercial_link.route + commercial_link.alternative_route 425 | for route_segment in route_to_take: 426 | if len(route_segment) == 2: #segment is an edge 427 | if commercial_link.pid in self[route_segment[0]][route_segment[1]]['shipments'].keys(): 428 | del self[route_segment[0]][route_segment[1]]['shipments'][commercial_link.pid] 429 | elif len(route_segment) == 1: #segment is a node 430 | if commercial_link.pid in self.node[route_segment[0]]['shipments'].keys(): 431 | del self.node[route_segment[0]]['shipments'][commercial_link.pid] 432 | 433 | 434 | def compute_flow_per_segment(self, flow_types=['total']): 435 | """ 436 | Sum all flows of each 'flow_type' per transport edge 437 | 438 | The flow types are given as a list in the flow_types argument. 
439 | It can correspond to: 440 | - "total": sum of all flows 441 | - one of the CommercialLink.category, i.e., 'domestic_B2B', 442 | 'domestic_B2C', 'import', 'export' 443 | - one of the CommercialLink.product, i.e., the sectors 444 | 445 | Parameters 446 | ---------- 447 | flow_types : list of string 448 | Flow type to evaluate 449 | 450 | Returns 451 | ------- 452 | Nothing 453 | """ 454 | for edge in self.edges(): 455 | if self[edge[0]][edge[1]]['type'] != 'virtual': 456 | for flow_type in flow_types: 457 | if flow_type == 'total': 458 | self[edge[0]][edge[1]]['flow_'+flow_type] = sum([ 459 | shipment['quantity'] 460 | for shipment in self[edge[0]][edge[1]]["shipments"].values() 461 | ]) 462 | elif flow_type in ['domestic_B2B', 'import', 'export', 'transit']: 463 | self[edge[0]][edge[1]]['flow_'+flow_type] = sum([ 464 | shipment['quantity'] 465 | for shipment in self[edge[0]][edge[1]]["shipments"].values() 466 | if shipment['flow_category'] == flow_type 467 | ]) 468 | else: 469 | self[edge[0]][edge[1]]['flow_'+flow_type] = sum([ 470 | shipment['quantity'] 471 | for shipment in self[edge[0]][edge[1]]["shipments"].values() 472 | if shipment['product_type'] == flow_type 473 | ]) 474 | 475 | 476 | def evaluate_normal_traffic(self, sectorId_to_volumeCoef=None): 477 | self.evaluate_traffic(sectorId_to_volumeCoef) 478 | self.congestionned_edges = [] 479 | for edge in self.edges(): 480 | if self[edge[0]][edge[1]]['type'] == 'virtual': 481 | continue 482 | self[edge[0]][edge[1]]['traffic_normal'] = self[edge[0]][edge[1]]['traffic_current'] 483 | self[edge[0]][edge[1]]['congestion'] = 0 484 | 485 | 486 | def evaluate_congestion(self, sectorId_to_volumeCoef=None): 487 | self.evaluate_traffic(sectorId_to_volumeCoef) 488 | self.congestionned_edges = [] 489 | for edge in self.edges(): 490 | if self[edge[0]][edge[1]]['type'] == 'virtual': 491 | continue 492 | self[edge[0]][edge[1]]['congestion'] = congestion_function( 493 | self[edge[0]][edge[1]]['traffic_current'], 494 | 
self[edge[0]][edge[1]]['traffic_normal'] 495 | ) 496 | if self[edge[0]][edge[1]]['congestion'] > 1e-6: 497 | self.congestionned_edges += [edge] 498 | 499 | 500 | def evaluate_traffic(self, sectorId_to_volumeCoef=None): 501 | # If we have a correspondence of sector monetary flows to volumes, 502 | # we identify the sectors that generate volume 503 | if sectorId_to_volumeCoef is not None: 504 | sectors_causing_congestion = [ 505 | sector 506 | for sector, coefficient in sectorId_to_volumeCoef.items() 507 | if coefficient > 0 508 | ] 509 | 510 | for edge in self.edges(): 511 | if self[edge[0]][edge[1]]['type'] == 'virtual': 512 | continue 513 | # If we have a correspondence of sector monetary flows to volumes, 514 | # we use volume 515 | if sectorId_to_volumeCoef is not None: 516 | volume = 0 517 | for sector_id in sectors_causing_congestion: 518 | list_monetary_flows = [ 519 | shipment['quantity'] 520 | for shipment in self[edge[0]][edge[1]]["shipments"].values() 521 | if shipment['product_type'] == sector_id 522 | ] 523 | volume += sectorId_to_volumeCoef[sector_id] * sum(list_monetary_flows) 524 | self[edge[0]][edge[1]]['traffic_current'] = volume 525 | # Otherwise we use monetary flows directly 526 | else: 527 | monetary_value_of_flows = sum([ 528 | shipment['quantity'] 529 | for shipment in self[edge[0]][edge[1]]["shipments"].values() 530 | ]) 531 | self[edge[0]][edge[1]]['traffic_current'] = monetary_value_of_flows 532 | 533 | 534 | def reinitialize_flows_and_disruptions(self): 535 | for node in self.nodes: 536 | self.nodes[node]['disruption_duration'] = 0 537 | self.nodes[node]['shipments'] = {} 538 | for edge in self.edges: 539 | self[edge[0]][edge[1]]['disruption_duration'] = 0 540 | self[edge[0]][edge[1]]['shipments'] = {} 541 | self[edge[0]][edge[1]]['congestion'] = 0 542 | self[edge[0]][edge[1]]['current_load'] = 0 543 | -------------------------------------------------------------------------------- /code/export_functions.py: 
-------------------------------------------------------------------------------- 1 | import os 2 | import logging 3 | import json 4 | import pandas as pd 5 | import geopandas as gpd 6 | from builder import extractEdgeList 7 | 8 | 9 | def exportFirmODPointTable(firm_list, firm_table, odpoint_table, filepath_road_nodes, 10 | export_firm_table, export_odpoint_table, export_folder): 11 | 12 | imports = pd.Series({firm.pid: sum([val for key, val in firm.purchase_plan.items() if str(key)[0]=="C"]) for firm in firm_list}, name='imports') 13 | production_table = pd.Series({firm.pid: firm.production_target for firm in firm_list}, name='total_production') 14 | b2c_share = pd.Series({firm.pid: firm.clients[-1]['share'] if -1 in firm.clients.keys() else 0 for firm in firm_list}, name='b2c_share') 15 | export_share = pd.Series({firm.pid: sum([firm.clients[x]['share'] for x in firm.clients.keys() if isinstance(x, str)]) for firm in firm_list}, name='export_share') 16 | production_table = pd.concat([production_table, b2c_share, export_share, imports], axis=1) 17 | production_table['production_toC'] = production_table['total_production']*production_table['b2c_share'] 18 | production_table['production_toB'] = production_table['total_production']-production_table['production_toC'] 19 | production_table['production_exported'] = production_table['total_production']*production_table['export_share'] 20 | production_table.index.name = 'id' 21 | firm_table = firm_table.merge(production_table.reset_index(), on="id", how="left") 22 | prod_per_sector_ODpoint_table = firm_table.groupby(['odpoint', 'sector'])['total_production'].sum().unstack().fillna(0).reset_index() 23 | odpoint_table = odpoint_table.merge(prod_per_sector_ODpoint_table, on='odpoint', how="left") 24 | odpoint_table = odpoint_table.merge( 25 | firm_table.groupby('odpoint', as_index=False)[ 26 | ['total_production', 'production_toC', 'production_toB', 'production_exported'] 27 | ].sum(), 28 | on="odpoint", 29 | 
how="left").fillna(0) 30 | 31 | if export_firm_table: 32 | firm_table.to_csv(os.path.join(export_folder, 'firm_table.csv'), index=False) 33 | 34 | if export_odpoint_table: 35 | odpoint_table.to_csv(os.path.join(export_folder, 'odpoint_production_table.csv'), index=False) 36 | exportODPointLayer(odpoint_table, filepath_road_nodes, export_folder) 37 | 38 | 39 | def exportODPointLayer(odpoint_table, filepath_road_nodes, export_folder): 40 | road_nodes = gpd.read_file(filepath_road_nodes) 41 | res = road_nodes[['id', 'geometry']].merge(odpoint_table, left_on='id', right_on="odpoint", how='right') 42 | res.to_file(os.path.join(export_folder, 'odpoints_output.geojson'), driver='GeoJSON') 43 | 44 | 45 | 46 | def exportDistrictSectorTable(filtered_district_sector_table, export_folder): 47 | filtered_district_sector_table.to_csv(os.path.join(export_folder, 'filtered_district_sector_table.csv'), index=False) 48 | 49 | 50 | def exportCountryTable(country_list, export_folder): 51 | country_table = pd.DataFrame({ 52 | 'country':[country.pid for country in country_list], 53 | 'purchases':[sum(country.purchase_plan.values()) for country in country_list], 54 | 'purchases_from_countries':[sum([value if isinstance(key, str) else 0 for key, value in country.purchase_plan.items()]) for country in country_list], 55 | 'purchases_from_firms':[sum([value if isinstance(key, int) else 0 for key, value in country.purchase_plan.items()]) for country in country_list] 56 | }) 57 | # country_table['purchases_from_firms'] = \ 58 | # country_table['purchases'] - country_table['purchases_from_countries'] 59 | country_table.to_csv(os.path.join(export_folder, 'country_table.csv'), index=False) 60 | 61 | 62 | def exportEdgelistTable(supply_chain_network, export_folder): 63 | # Concerns only Firm-Firm relationships, not Countries, not Households 64 | edgelist_table = pd.DataFrame(extractEdgeList(supply_chain_network)) 65 | edgelist_table.to_csv(os.path.join(export_folder, 'edgelist_table.csv'), 
index=False) 66 | 67 | # Log some key indicators, taking into account all firms 68 | av_km_btw_supplier_buyer = edgelist_table['distance'].mean() 69 | logging.info("Average distance between supplier and buyer: "+ 70 | "{:.01f}".format(av_km_btw_supplier_buyer) + " km") 71 | av_km_btw_supplier_buyer_weighted_by_flow = \ 72 | (edgelist_table['distance'] * edgelist_table['flow']).sum() / \ 73 | edgelist_table['flow'].sum() 74 | logging.info("Average distance between supplier and buyer, weighted by the traded quantity: "+ 75 | "{:.01f}".format(av_km_btw_supplier_buyer_weighted_by_flow) + " km") 76 | 77 | # Log some key indicators, excluding service firms 78 | boolindex = (edgelist_table['supplier_odpoint'] !=-1 ) & (edgelist_table['buyer_odpoint'] != -1) 79 | av_km_btw_supplier_buyer_no_service = edgelist_table.loc[boolindex, 'distance'].mean() 80 | logging.info("Average distance between supplier and buyer, excluding service firms: "+ 81 | "{:.01f}".format(av_km_btw_supplier_buyer_no_service) + " km") 82 | av_km_btw_supplier_buyer_weighted_by_flow_no_service = \ 83 | (edgelist_table.loc[boolindex, 'distance'] * edgelist_table.loc[boolindex, 'flow']).sum() / \ 84 | edgelist_table.loc[boolindex, 'flow'].sum() 85 | logging.info("Average distance between supplier and buyer, excluding service firms"+ 86 | ", weighted by the traded quantity: "+ 87 | "{:.01f}".format(av_km_btw_supplier_buyer_weighted_by_flow_no_service) + " km") 88 | 89 | 90 | def exportInventories(firm_list, export_folder): 91 | # Evaluate total inventory per good type 92 | inventories = {} 93 | for firm in firm_list: 94 | for input_id, inventory in firm.inventory.items(): 95 | if input_id not in inventories.keys(): 96 | inventories[input_id] = inventory 97 | else: 98 | inventories[input_id] += inventory 99 | 100 | pd.Series(inventories).to_csv(os.path.join(export_folder, 'inventories.csv')) 101 | 102 | 103 | def exportTransportFlows(observer, export_folder): 104 | with open(os.path.join(export_folder, 
'transport_flows.json'), 'w') as jsonfile: 105 | json.dump(observer.flows_snapshot, jsonfile) 106 | 107 | 108 | def exportSpecificFlows(observer, export_folder): 109 | export_filename = os.path.join(export_folder, 'specific_flows.csv') 110 | pd.DataFrame(observer.specific_flows).transpose().to_csv(export_filename) 111 | 112 | 113 | def exportTransportFlowsLayer(observer, export_folder, time_step, transport_edges): 114 | #extract flows of the desired time step 115 | flow_table = pd.DataFrame(observer.flows_snapshot[time_step]).transpose() 116 | flow_table['id'] = flow_table.index.astype(int) 117 | # print(flow_table) 118 | transport_edges_with_flows = transport_edges.merge(flow_table, on='id', how='left') 119 | transport_edges_with_flows.to_file( 120 | os.path.join(export_folder, 'flow_table_'+str(time_step)+'.geojson'), 121 | driver='GeoJSON' 122 | ) 123 | 124 | def exportShipmentsLayer(observer, export_folder, time_step, transport_edges): 125 | #extract flows of the desired time step 126 | output_layer = transport_edges.copy() 127 | output_layer['shipments'] = output_layer['id'].map(observer.shipments_snapshot[time_step]) 128 | output_layer.to_file( 129 | os.path.join(export_folder, 'shipments_'+str(time_step)+'.geojson'), 130 | driver='GeoJSON' 131 | ) 132 | 133 | def exportAgentData(observer, export_folder): 134 | agent_data = { 135 | 'firms': observer.firms, 136 | 'countries': observer.countries, 137 | 'households': observer.households 138 | } 139 | with open(os.path.join(export_folder, 'agent_data.json'), 'w') as jsonfile: 140 | json.dump(agent_data, jsonfile) 141 | 142 | 143 | def analyzeSupplyChainFlows(sc_network, firm_list, export_folder): 144 | # Collect all flows 145 | io_flows = [[sc_network[edge[0]][edge[1]]['object'].delivery, sc_network[edge[0]][edge[1]]['object'].supplier_id, sc_network[edge[0]][edge[1]]['object'].buyer_id] for edge in sc_network.edges] 146 | io_flows = pd.DataFrame(columns=['quantity', 'supplier_id', 'buyer_id'], data=io_flows) 147 
| 148 | # Analyze domestic B2C flows 149 | domestic_flows = io_flows[(io_flows['supplier_id'].apply(lambda x: isinstance(x, int))) & (io_flows['buyer_id'].apply(lambda x: isinstance(x, int)))] 150 | dic_firmid_to_sectorid = {firm.pid: firm.sector for firm in firm_list} 151 | domestic_b2c_flows = domestic_flows[domestic_flows['buyer_id'] == -1].copy() 152 | domestic_b2c_flows['from_sector'] = domestic_b2c_flows['supplier_id'].map(dic_firmid_to_sectorid) 153 | domestic_b2c_flows_per_sector = domestic_b2c_flows.groupby('from_sector')['quantity'].sum().reset_index() 154 | 155 | # Analyze domestic B2B flows 156 | domestic_b2b_flows = domestic_flows[domestic_flows['buyer_id'] >= 0].copy() 157 | domestic_b2b_flows['from_sector'] = domestic_b2b_flows['supplier_id'].map(dic_firmid_to_sectorid) 158 | domestic_b2b_flows['to_sector'] = domestic_b2b_flows['buyer_id'].map(dic_firmid_to_sectorid) 159 | domestic_b2b_flows_per_sector = domestic_b2b_flows.groupby(['from_sector', 'to_sector'])['quantity'].sum().reset_index() 160 | 161 | # Produce B2B io sector-to-sector matrix 162 | domestic_sectors = list(domestic_b2c_flows_per_sector['from_sector'].sort_values()) 163 | observed_io_matrix = pd.DataFrame(index=domestic_sectors, columns=domestic_sectors, data=0) 164 | for i in range(domestic_b2b_flows_per_sector.shape[0]): 165 | observed_io_matrix.loc[domestic_b2b_flows_per_sector['from_sector'].iloc[i], domestic_b2b_flows_per_sector['to_sector'].iloc[i]] = domestic_b2b_flows_per_sector['quantity'].iloc[i] 166 | 167 | # Analyze import B2B flows 168 | import_flows = io_flows[(io_flows['supplier_id'].apply(lambda x: isinstance(x, str))) & (io_flows['buyer_id'].apply(lambda x: isinstance(x, int)))] 169 | import_b2b_flows = import_flows[import_flows['buyer_id'] >= 0].copy() 170 | import_b2b_flows_per_country = import_b2b_flows.groupby('supplier_id')['quantity'].sum().reset_index() 171 | import_b2b_flows['to_sector'] = import_b2b_flows['buyer_id'].map(dic_firmid_to_sectorid) 172 |
import_b2b_flows_per_sector = import_b2b_flows.groupby('to_sector')['quantity'].sum().reset_index() 173 | 174 | # Analyze import B2C flows 175 | import_b2c_flows = import_flows[import_flows['buyer_id'] == -1].copy() 176 | 177 | # Analyze export flows 178 | export_flows = io_flows[(io_flows['supplier_id'].apply(lambda x: isinstance(x, int))) & (io_flows['buyer_id'].apply(lambda x: isinstance(x, str)))].copy() 179 | export_flows_per_country = export_flows.groupby('buyer_id')['quantity'].sum().reset_index() 180 | export_flows['from_sector'] = export_flows['supplier_id'].map(dic_firmid_to_sectorid) 181 | export_flows_per_sector = export_flows.groupby('from_sector')['quantity'].sum().reset_index() 182 | 183 | # Analyze transit flows 184 | transit_flows = io_flows[(io_flows['supplier_id'].apply(lambda x: isinstance(x, str))) & (io_flows['buyer_id'].apply(lambda x: isinstance(x, str)))].copy() 185 | transit_countries = pd.Series(list(set(transit_flows['supplier_id']) | set(transit_flows['buyer_id']))).sort_values().tolist() 186 | country_to_country_transit_matrix = pd.DataFrame(index=transit_countries, columns=transit_countries, data=0) 187 | for i in range(transit_flows.shape[0]): 188 | country_to_country_transit_matrix.loc[transit_flows['supplier_id'].iloc[i], transit_flows['buyer_id'].iloc[i]] = transit_flows['quantity'].iloc[i] 189 | 190 | # Form final consumption. Note: DataFrame.append was removed in pandas 2.0, so we use pd.concat 191 | final_consumption = pd.concat([domestic_b2c_flows_per_sector, pd.DataFrame(index=['import'], data={'from_sector':'import', 'quantity':import_b2c_flows['quantity'].sum()})]) 192 | 193 | # Enrich io matrix with import and export flows 194 | observed_io_matrix = pd.concat([ 195 | pd.concat([ 196 | observed_io_matrix, 197 | import_b2b_flows_per_sector.set_index('to_sector').rename(columns={'quantity':'total'}).transpose() 198 | ], axis=0, sort=True), 199 | export_flows_per_sector.set_index('from_sector').rename(columns={'quantity':'total'})], 200 | axis=1, sort=True).fillna(0) 201 | 202 | # Regional io matrix
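The loop-based assembly of the sector-to-sector and country-to-country matrices above can equivalently be expressed with a pivot. A sketch on hypothetical sector flows (the upstream `groupby` already makes each pair unique, so summing in the pivot matches the loop's assignment):

```python
import pandas as pd

# Hypothetical sector-to-sector flows
flows = pd.DataFrame({
    'from_sector': ['AGR', 'AGR', 'MAN'],
    'to_sector':   ['MAN', 'AGR', 'MAN'],
    'quantity':    [10.0, 2.0, 5.0],
})

# One cell per (from_sector, to_sector) pair, zeros where no flow was observed
io_matrix = flows.pivot_table(index='from_sector', columns='to_sector',
                              values='quantity', aggfunc='sum', fill_value=0)
```

The pivot also guarantees that rows and columns cover exactly the sectors that appear in the flows, which the explicit loop has to arrange by hand via the `index=`/`columns=` lists.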
203 | # legacy, should be removed; this kind of analysis should be done outside of the core model 204 | if False: 205 | dic_firmid_to_region = {firm.pid: dic_odpoint_to_region[firm.odpoint] for firm in firm_list} 206 | 207 | domestic_b2b_flows['from_region'] = domestic_b2b_flows['supplier_id'].map(dic_firmid_to_region) 208 | domestic_b2b_flows['to_region'] = domestic_b2b_flows['buyer_id'].map(dic_firmid_to_region) 209 | 210 | domestic_b2b_flows_per_region = domestic_b2b_flows.groupby(['from_region', 'to_region'])['quantity'].sum().reset_index() 211 | 212 | regions = pd.Series(list(set(dic_odpoint_to_region.values()))).sort_values().tolist() 213 | region_to_region_io_matrix = pd.DataFrame(index=regions, columns=regions, data=0) 214 | for i in range(domestic_b2b_flows_per_region.shape[0]): 215 | region_to_region_io_matrix.loc[domestic_b2b_flows_per_region['from_region'].iloc[i], domestic_b2b_flows_per_region['to_region'].iloc[i]] = domestic_b2b_flows_per_region['quantity'].iloc[i] 216 | 217 | # Export Report 218 | final_consumption.to_csv(os.path.join(export_folder, 'initial_final_consumption.csv'), index=False) 219 | observed_io_matrix.to_csv(os.path.join(export_folder, 'initial_sector_io_matrix.csv'), index=True) 220 | if False: # legacy, should be removed; this kind of analysis should be done outside of the core model 221 | region_to_region_io_matrix.to_csv(os.path.join(export_folder, 'initial_region_to_region_io_matrix.csv'), index=True) 222 | country_to_country_transit_matrix.to_csv(os.path.join(export_folder, 'initial_transit_matrix.csv'), index=True) 223 | import_b2b_flows_per_country.to_csv(os.path.join(export_folder, 'initial_import_b2b_flows_per_country.csv'), index=False) 224 | export_flows_per_country.to_csv(os.path.join(export_folder, 'initial_export_flows_per_country.csv'), index=False) 225 | 226 | 227 | def exportFirmTS(what, observer, export_folder): 228 | ts = pd.DataFrame( 229 | {t: \ 230 | {firm_id: val[what] for firm_id, val in
observer.firms[t].items()} 231 | for t in observer.firms.keys()} 232 | ).transpose() 233 | ts.to_csv(os.path.join(export_folder, 'firm_'+what+'_ts.csv'), sep=',') 234 | return ts 235 | 236 | 237 | def exportCountriesTS(what, observer, export_folder): 238 | ts = pd.DataFrame( 239 | {t: \ 240 | {country_id: val[what] for country_id, val in observer.countries[t].items()} 241 | for t in observer.countries.keys()} 242 | ).transpose() 243 | ts.to_csv(os.path.join(export_folder, 'countries_'+what+'_ts.csv'), sep=',') 244 | return ts 245 | 246 | 247 | def exportHouseholdsTS(what, observer, export_folder): 248 | ts = pd.DataFrame( 249 | {t: val[what] for t, val in observer.households.items()} 250 | ).transpose() 251 | ts.to_csv(os.path.join(export_folder, 'households_'+what+'_ts.csv'), sep=',') 252 | return ts 253 | 254 | 255 | def exportTimeSeries(observer, export_folder): 256 | # Export time series per agent 257 | firm_production_ts = exportFirmTS('production', observer, export_folder) 258 | firm_profit_ts = exportFirmTS('profit', observer, export_folder) 259 | firm_transportcost_ts = exportFirmTS('transport_cost', observer, export_folder) 260 | firm_inputcost_ts = exportFirmTS('input_cost', observer, export_folder) 261 | firm_othercost_ts = exportFirmTS('other_cost', observer, export_folder) 262 | 263 | firm_avinventoryduration_ts = pd.DataFrame( 264 | {t: \ 265 | {firm_id: sum(val['inventory_duration'].values())/len(val['inventory_duration'].values()) for firm_id, val in observer.firms[t].items()} 266 | for t in observer.firms.keys()} 267 | ).transpose() 268 | firm_avinventoryduration_ts.to_csv(os.path.join(export_folder, 'firm_avinventoryduration_ts.csv'), sep=',') 269 | 270 | households_consumption_ts = exportHouseholdsTS('consumption', observer, export_folder) 271 | households_spending_ts = exportHouseholdsTS('spending', observer, export_folder) 272 | 273 | countries_spending_ts = exportCountriesTS('spending', observer, export_folder) 274 | 275 | # Export aggregated time
series 276 | agg_df = pd.DataFrame({ 277 | 'firm_production': firm_production_ts.sum(axis=1), 278 | 'firm_profit': firm_profit_ts.sum(axis=1), 279 | 'firm_transportcost': firm_transportcost_ts.mean(axis=1), 280 | 'firm_avinventoryduration': firm_avinventoryduration_ts.mean(axis=1), 281 | 'households_consumption': households_consumption_ts.sum(axis=1), 282 | 'households_spending': households_spending_ts.sum(axis=1), 283 | 'countries_spending': countries_spending_ts.sum(axis=1) 284 | }) 285 | agg_df.to_csv(os.path.join(export_folder, 'aggregate_ts.csv'), sep=',', index=False) 286 | 287 | 288 | def initializeCriticalityExportFile(export_folder): 289 | criticality_export_file = open(os.path.join(export_folder, 'criticality.csv'), "w") 290 | criticality_export_file.write( 291 | 'disrupted_node' \ 292 | + ',' + 'disrupted_edge' \ 293 | + ',' + 'disruption_duration' \ 294 | + ',' + 'households_extra_spending' \ 295 | + ',' + 'households_extra_spending_local' \ 296 | + ',' + 'spending_recovered' \ 297 | + ',' + 'countries_extra_spending' \ 298 | + ',' + 'countries_consumption_loss' \ 299 | + ',' + 'households_consumption_loss' \ 300 | + ',' + 'households_consumption_loss_local' \ 301 | + ',' + 'consumption_recovered' \ 302 | + ',' + 'generalized_cost_normal' \ 303 | + ',' + 'generalized_cost_disruption' \ 304 | + ',' + 'generalized_cost_country_normal' \ 305 | + ',' + 'generalized_cost_country_disruption' \ 306 | + ',' + 'usd_transported_normal' \ 307 | + ',' + 'usd_transported_disruption' \ 308 | + ',' + 'tons_transported_normal' \ 309 | + ',' + 'tons_transported_disruption' \ 310 | + ',' + 'tonkm_transported_normal' \ 311 | + ',' + 'tonkm_transported_disruption' \ 312 | + ',' + 'computing_time' \ 313 | + "\n") 314 | return criticality_export_file 315 | 316 | 317 | def initializeResPerFirmExportFile(export_folder, firm_list): 318 | extra_spending_export_file = open(os.path.join(export_folder, 'hh_extra_spending_per_firm.csv'), 'w') 319 | missing_consumption_export_file =
open(os.path.join(export_folder, 'hh_missing_consumption_per_firm.csv'), 'w') 320 | list_firm_id = [str(firm.pid) for firm in firm_list] 321 | extra_spending_export_file.write('disrupted_node,disrupted_edge,'+','.join(list_firm_id)+"\n") 322 | missing_consumption_export_file.write('disrupted_node,disrupted_edge,'+','.join(list_firm_id)+"\n") 323 | return extra_spending_export_file, missing_consumption_export_file 324 | 325 | 326 | def writeCriticalityResults(criticality_export_file, observer, disruption, 327 | disruption_duration, computation_time): 328 | criticality_export_file.write(str(disruption['node']) \ 329 | + ',' + str(disruption['edge']) \ 330 | + ',' + str(disruption_duration) \ 331 | + ',' + str(observer.households_extra_spending) \ 332 | + ',' + str(observer.households_extra_spending_local) \ 333 | + ',' + str(observer.spending_recovered) \ 334 | + ',' + str(observer.countries_extra_spending) \ 335 | + ',' + str(observer.countries_consumption_loss) \ 336 | + ',' + str(observer.households_consumption_loss) \ 337 | + ',' + str(observer.households_consumption_loss_local) \ 338 | + ',' + str(observer.consumption_recovered) \ 339 | + ',' + str(observer.generalized_cost_normal) \ 340 | + ',' + str(observer.generalized_cost_disruption) \ 341 | + ',' + str(observer.generalized_cost_country_normal) \ 342 | + ',' + str(observer.generalized_cost_country_disruption) \ 343 | + ',' + str(observer.usd_transported_normal) \ 344 | + ',' + str(observer.usd_transported_disruption) \ 345 | + ',' + str(observer.tons_transported_normal) \ 346 | + ',' + str(observer.tons_transported_disruption) \ 347 | + ',' + str(observer.tonkm_transported_normal) \ 348 | + ',' + str(observer.tonkm_transported_disruption) \ 349 | + ',' + str(computation_time/60) \ 350 | + "\n") 351 | 352 | def writeResPerFirmResults(extra_spending_export_file, missing_consumption_export_file, 353 | observer, disruption): 354 | val = [str(x) for x in observer.households_extra_spending_per_firm.tolist()] 355
| extra_spending_export_file.write( 356 | str(disruption['node'])+','+str(disruption['edge'])+','+",".join(val)+"\n" 357 | ) 358 | val = [str(x) for x in observer.households_consumption_loss_per_firm.tolist()] 359 | missing_consumption_export_file.write( 360 | str(disruption['node'])+','+str(disruption['edge'])+','+",".join(val)+"\n" 361 | ) 362 | # pd.DataFrame({str(disruption): obs.households_extra_spending_per_firm}).transpose().to_csv(extra_spending_export_file, header=False, mode='a') 363 | # pd.DataFrame({str(disruption): obs.households_consumption_loss_per_firm}).transpose().to_csv(f, header=False) 364 | 365 | 366 | def exportParameters(exp_folder): 367 | copy_command = { 368 | 'nt': 'copy', 369 | 'posix': "cp" 370 | } 371 | os.system(copy_command[os.name]+" "+ 372 | os.path.join('parameter', "parameters.py")+" "+ 373 | os.path.join(exp_folder, "parameters.txt") 374 | ) 375 | logging.info("Parameters exported in "+os.path.join(exp_folder, "parameters.txt")) -------------------------------------------------------------------------------- /code/functions.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import networkx as nx 3 | import pandas as pd 4 | import geopandas as gpd 5 | import math 6 | import json 7 | import os 8 | import random 9 | 10 | def identify_special_transport_nodes(transport_nodes, special): 11 | res = transport_nodes.dropna(subset=['special']) 12 | res = res.loc[res['special'].str.contains(special), "id"].tolist() 13 | return res 14 | 15 | 16 | def congestion_function(current_traffic, normal_traffic): 17 | if (current_traffic==0) & (normal_traffic==0): 18 | return 0 19 | 20 | elif (current_traffic>0) & (normal_traffic==0): 21 | return 0.5 22 | 23 | elif (current_traffic==0) & (normal_traffic>0): 24 | return 0 25 | 26 | elif (current_traffic < normal_traffic): 27 | return 0 28 | 29 | elif (current_traffic < 1.5*normal_traffic): 30 | return 0 31 | else: 32 | excess_traffic = 
current_traffic - 1.5*normal_traffic 33 | return 4 * (1 - math.exp(-(excess_traffic))) 34 | 35 | 36 | def production_function(inputs, input_mix, function_type="Leontief"): 37 | # Leontief 38 | if function_type == "Leontief": 39 | try: 40 | return min([inputs[input_id] / val for input_id, val in input_mix.items()]) 41 | except KeyError: 42 | return 0 43 | 44 | else: 45 | raise ValueError("Wrong mode selected") 46 | 47 | 48 | def purchase_planning_function(estimated_need, inventory, inventory_duration_target=1, reactivity_rate=1): 49 | """Decide the quantity of each input to buy according to a dynamical rule 50 | """ 51 | target_inventory = (1 + inventory_duration_target) * estimated_need 52 | if inventory >= target_inventory + estimated_need: 53 | return 0 54 | elif inventory >= target_inventory: 55 | return target_inventory + estimated_need - inventory 56 | else: 57 | return (1 - reactivity_rate) * estimated_need + reactivity_rate * (estimated_need + target_inventory - inventory) 58 | 59 | 60 | def evaluate_inventory_duration(estimated_need, inventory): 61 | if estimated_need == 0: 62 | return None 63 | else: 64 | return inventory / estimated_need - 1 65 | 66 | 67 | def set_initial_conditions(graph, firm_list, households, country_list, mode="equilibrium"): 68 | households.reset_variables() 69 | for edge in graph.in_edges(households): 70 | graph[edge[0]][households]['object'].reset_variables() 71 | for firm in firm_list: 72 | firm.reset_variables() 73 | for edge in graph.in_edges(firm): 74 | graph[edge[0]][firm]['object'].reset_variables() 75 | for country in country_list: 76 | country.reset_variables() 77 | for edge in graph.in_edges(country): 78 | graph[edge[0]][country]['object'].reset_variables() 79 | 80 | if mode == "equilibrium": 81 | initilize_at_equilibrium(graph, firm_list, households, country_list) 82 | elif mode == "dynamic": 83 | initializeFirmsHouseholds(graph, firm_list, households) 84 | else: 85 | raise ValueError("Wrong mode chosen") 86 | 87 | 88 | 89
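The `equilibrium` mode of `set_initial_conditions` relies on solving the input--output system described in `initilize_at_equilibrium`, D + E + AX = X + I, which reduces to solving (I − A)X = D for the production vector once exports and imports are folded into final demand and the coefficient matrix. A minimal numpy sketch with hypothetical coefficients (not taken from the model's data):

```python
import numpy as np

# Tiny 2-firm example: A[i, j] is the amount of firm i's good
# that firm j needs per unit of its own output
A = np.array([[0.1, 0.3],
              [0.2, 0.0]])
final_demand = np.array([[10.0], [5.0]])

# Solve (I - A) X = D for the equilibrium production vector X
X = np.linalg.solve(np.eye(2) - A, final_demand)
```

At the solution, production exactly covers intermediate plus final demand: X = AX + D, the same fixed point the model's `np.linalg.solve` call computes on the firm connectivity matrix.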
| def buildFinalDemandVector(households, country_list, firm_list): 90 | '''Create a numpy.Array of the final demand per firm, including exports 91 | 92 | Households and countries should already have set their purchase plan 93 | 94 | Returns 95 | ------- 96 | numpy.Array of dimension (len(firm_list), 1) 97 | ''' 98 | if len(households.purchase_plan) == 0: 99 | raise ValueError('Households purchase plan is empty') 100 | 101 | if (len(country_list)>0) \ 102 | & (sum([len(country.purchase_plan) for country in country_list]) == 0): 103 | raise ValueError('The purchase plan of all countries is empty') 104 | 105 | final_demand_vector = np.zeros((len(firm_list), 1)) 106 | # Collect households final demand. They buy only from firms. 107 | for firm_id, quantity in households.purchase_plan.items(): 108 | final_demand_vector[(firm_id,0)] += quantity 109 | # Collect country final demand. They buy from firms and countries. 110 | # We need to filter the demand directed to firms only. 111 | for country in country_list: 112 | for supplier_id, quantity in country.purchase_plan.items(): 113 | if isinstance(supplier_id, int): # we only consider purchase from firms 114 | final_demand_vector[(supplier_id,0)] += quantity 115 | 116 | return final_demand_vector 117 | 118 | 119 | def initilize_at_equilibrium(graph, firm_list, households, country_list): 120 | """Initialize the supply chain network at the input--output equilibrium 121 | 122 | We will use the matrix forms to solve the following equation for X (production): 123 | D + E + AX = X + I 124 | where: 125 | D: final demand from households 126 | E: exports 127 | I: imports 128 | X: firm productions 129 | A: the input-output matrix 130 | These vectors and matrices are in the firm-and-country space. 131 | 132 | Parameters 133 | ---------- 134 | graph : NetworkX.DiGraph 135 | The supply chain network. Nodes are firms, countries, or households. 136 | Edges are commercial links. 
137 | 138 | firm_list : list of Firms 139 | List of firms 140 | 141 | households : Households 142 | households 143 | 144 | country_list : list of Countries 145 | List of countries 146 | 147 | Returns 148 | ------- 149 | Nothing 150 | """ 151 | 152 | # Get the weighted connectivity matrix. 153 | # Weight is the sectoral technical coefficient if there is only one supplier for the input. 154 | # If there are several, the technical coefficient is multiplied by the share of this input 155 | # that the firm buys from this supplier. 156 | firm_connectivity_matrix = nx.adjacency_matrix( 157 | graph.subgraph(list(graph.nodes)[:-1]), 158 | weight='weight', 159 | nodelist=firm_list 160 | ).todense() 161 | # Imports are considered as "a sector". We get the weight per firm for these inputs. 162 | # !!! aren't I computing the same thing as the IMP tech coef? 163 | import_weight_per_firm = [ 164 | sum([ 165 | graph[supply_edge[0]][supply_edge[1]]['weight'] 166 | for supply_edge in graph.in_edges(firm) 167 | if graph[supply_edge[0]][supply_edge[1]]['object'].category == 'import' 168 | ]) 169 | for firm in firm_list 170 | ] 171 | n = len(firm_list) 172 | 173 | # Build final demand vector per firm, of length n 174 | # Exports are considered as final demand 175 | final_demand_vector = buildFinalDemandVector(households, country_list, firm_list) 176 | 177 | # Solve the input--output equation 178 | eq_production_vector = np.linalg.solve( 179 | np.eye(n) - firm_connectivity_matrix, 180 | final_demand_vector 181 | ) 182 | 183 | # Initialize households variables 184 | households.initialize_var_on_purchase_plan() 185 | 186 | # Compute costs 187 | ## Input costs 188 | domestic_input_cost_vector = np.multiply( 189 | firm_connectivity_matrix.sum(axis=0).reshape((n,1)), 190 | eq_production_vector 191 | ) 192 | import_input_cost_vector = np.multiply( 193 | np.array(import_weight_per_firm).reshape((n,1)), 194 | eq_production_vector 195 | ) 196 | input_cost_vector =
domestic_input_cost_vector + import_input_cost_vector 197 | ## Transport costs 198 | proportion_of_transport_cost_vector = 0.2*np.ones((n,1)) 199 | transport_cost_vector = np.multiply(eq_production_vector, proportion_of_transport_cost_vector) 200 | ## Compute other costs based on margin 201 | margin = np.array([firm.target_margin for firm in firm_list]).reshape((n,1)) 202 | other_cost_vector = np.multiply(eq_production_vector, (1-margin))\ 203 | - input_cost_vector - transport_cost_vector 204 | 205 | # Based on these calculations, update agent variables 206 | ## Firm operational variables 207 | for firm in firm_list: 208 | firm.initialize_ope_var_using_eq_production( 209 | eq_production=eq_production_vector[(firm.pid, 0)] 210 | ) 211 | ## Firm financial variables 212 | for firm in firm_list: 213 | firm.initialize_fin_var_using_eq_cost( 214 | eq_production=eq_production_vector[(firm.pid, 0)], 215 | eq_input_cost=input_cost_vector[(firm.pid,0)], 216 | eq_transport_cost=transport_cost_vector[(firm.pid,0)], 217 | eq_other_cost=other_cost_vector[(firm.pid,0)] 218 | ) 219 | ## Commercial links: agents set their order 220 | households.send_purchase_orders(graph) 221 | for country in country_list: 222 | country.send_purchase_orders(graph) 223 | for firm in firm_list: 224 | firm.send_purchase_orders(graph) 225 | ## The following is run only once, to set the share of sales of each client 226 | for firm in firm_list: 227 | firm.retrieve_orders(graph) 228 | firm.aggregate_orders() 229 | firm.eq_total_order = firm.total_order 230 | firm.calculate_client_share_in_sales() 231 | 232 | ## Set price to 1 233 | reset_prices(graph) 234 | 235 | 236 | def reset_prices(graph): 237 | # set prices to 1 238 | for edge in graph.edges: 239 | graph[edge[0]][edge[1]]['object'].price = 1 240 | 241 | 242 | def generate_weights(nb_values): 243 | rdm_values = np.random.uniform(0,1, size=nb_values) 244 | return list(rdm_values / sum(rdm_values)) 245 | 246 | def generate_weights_from_list(list_nb):
247 | sum_list = sum(list_nb) 248 | return [nb/sum_list for nb in list_nb] 249 | 250 | 251 | def determine_suppliers_and_weights(potential_supplier_pid, 252 | nb_selected_suppliers, firm_list, mode): 253 | 254 | # Get importance for each of them 255 | if "importance_export" in mode.keys(): 256 | importance_of_each = rescale_values([ 257 | firm_list[firm_pid].importance * mode['importance_export']['bonus'] 258 | if firm_list[firm_pid].odpoint in mode['importance_export']['export_odpoints'] 259 | else firm_list[firm_pid].importance 260 | for firm_pid in potential_supplier_pid 261 | ]) 262 | elif "importance" in mode.keys(): 263 | importance_of_each = rescale_values([ 264 | firm_list[firm_pid].importance 265 | for firm_pid in potential_supplier_pid 266 | ]) 267 | else: raise ValueError("mode must contain 'importance' or 'importance_export'") 268 | # Select supplier 269 | prob_to_be_selected = np.array(importance_of_each) 270 | prob_to_be_selected /= prob_to_be_selected.sum() 271 | selected_supplier_ids = np.random.choice(potential_supplier_pid, 272 | p=prob_to_be_selected, size=nb_selected_suppliers, replace=False).tolist() 273 | 274 | # Compute weights, based on importance only 275 | supplier_weights = generate_weights_from_list([ 276 | firm_list[firm_pid].importance 277 | for firm_pid in selected_supplier_ids 278 | ]) 279 | 280 | return selected_supplier_ids, supplier_weights 281 | 282 | 283 | def identify_firms_in_each_sector(firm_list): 284 | firm_id_each_sector = pd.DataFrame({ 285 | 'firm': [firm.pid for firm in firm_list], 286 | 'sector': [firm.sector for firm in firm_list]}) 287 | dic_sector_to_firmid = firm_id_each_sector\ 288 | .groupby('sector')['firm']\ 289 | .apply(lambda x: list(x))\ 290 | .to_dict() 291 | return dic_sector_to_firmid 292 | 293 | 294 | def allFirmsSendPurchaseOrders(G, firm_list): 295 | for firm in firm_list: 296 | firm.send_purchase_orders(G) 297 | 298 | 299 | def allAgentsSendPurchaseOrders(G, firm_list, households, country_list): 300 | households.send_purchase_orders(G) 301 | for firm in firm_list: 302 |
firm.send_purchase_orders(G) 303 | for country in country_list: 304 | country.send_purchase_orders(G) 305 | 306 | 307 | def allFirmsRetrieveOrders(G, firm_list): 308 | for firm in firm_list: 309 | firm.retrieve_orders(G) 310 | 311 | def allFirmsPlanProduction(firm_list, graph, price_fct_input=True): 312 | for firm in firm_list: 313 | firm.aggregate_orders() 314 | firm.decide_production_plan() 315 | if price_fct_input: 316 | firm.calculate_price(graph, firm_list) 317 | 318 | def allFirmsPlanPurchase(firm_list): 319 | for firm in firm_list: 320 | firm.evaluate_input_needs() 321 | firm.decide_purchase_plan() #mode="reactive" 322 | 323 | 324 | def initializeFirmsHouseholds(G, firm_list, households, country_list=()): 325 | '''For dynamic initialization. country_list defaults to empty for backward compatibility.''' 326 | # Initialize dictionary 327 | for firm in firm_list: 328 | firm.inventory = {input_id: 0 for input_id, mix in firm.input_mix.items()} 329 | firm.input_needs = firm.inventory.copy() # copy, so that input_needs does not alias inventory 330 | # Initialize orders 331 | households.send_purchase_orders(G) 332 | allFirmsRetrieveOrders(G, firm_list) 333 | allFirmsPlanProduction(firm_list, G) 334 | allFirmsPlanPurchase(firm_list) 335 | for i in range(10): 336 | allAgentsSendPurchaseOrders(G, firm_list, households, country_list) 337 | allFirmsRetrieveOrders(G, firm_list) 338 | allFirmsPlanProduction(firm_list, G) 339 | allFirmsPlanPurchase(firm_list) 340 | # Initialize inventories 341 | for firm in firm_list: 342 | firm.inventory = firm.input_needs 343 | # Initialize production plan 344 | for firm in firm_list: 345 | firm.production_target = firm.total_order 346 | 347 | 348 | def allFirmsProduce(firm_list): 349 | for firm in firm_list: 350 | firm.produce() 351 | 352 | # def allFirmsDeliver(G, firm_list, T, rationing_mode, route_optimization_weight): 353 | # for firm in firm_list: 354 | # firm.deliver_products(G, T, rationing_mode, route_optimization_weight) 355 | 356 | def allAgentsDeliver(G, firm_list, country_list, T, rationing_mode, explicit_service_firm, 357 | monetary_unit_transport_cost="USD",
monetary_unit_flow="mUSD", cost_repercussion_mode="type1"): 358 | for country in country_list: 359 | country.deliver_products(G, T, 360 | monetary_unit_transport_cost="USD", monetary_unit_flow="mUSD", 361 | cost_repercussion_mode=cost_repercussion_mode, 362 | explicit_service_firm=explicit_service_firm) 363 | for firm in firm_list: 364 | firm.deliver_products(G, T, rationing_mode, 365 | monetary_unit_transport_cost="USD", monetary_unit_flow="mUSD", 366 | cost_repercussion_mode=cost_repercussion_mode, 367 | explicit_service_firm=explicit_service_firm) 368 | 369 | 370 | def allAgentsReceiveProducts(G, firm_list, households, country_list, T): 371 | for firm in firm_list: 372 | firm.receive_products_and_pay(G, T) 373 | households.receive_products_and_pay(G) 374 | for country in country_list: 375 | country.receive_products_and_pay(G, T) 376 | for firm in firm_list: 377 | firm.evaluate_profit(G) 378 | 379 | 380 | def allAgentsPrintInfo(firm_list, households): 381 | for firm in firm_list: 382 | firm.print_info() 383 | households.print_info() 384 | 385 | 386 | def rescale_values(input_list, minimum=0.1, maximum=1, max_val=None, alpha=1): 387 | max_val = max_val or max(input_list) 388 | min_val = min(input_list) 389 | if max_val == min_val: 390 | return [0.5 * maximum] * len(input_list) 391 | else: 392 | return [minimum + (((val - min_val) / (max_val - min_val))**alpha) * (maximum - minimum) for val in input_list] 393 | 394 | 395 | def compute_distance(x0, y0, x1, y1): 396 | return math.sqrt((x1-x0)**2+(y1-y0)**2) 397 | 398 | 399 | def compute_distance_from_arcmin(x0, y0, x1, y1): 400 | EW_dist = (x1-x0)*112.5 401 | NS_dist = (y1-y0)*111 402 | return math.sqrt(EW_dist**2+NS_dist**2) 403 | 404 | 405 | def evaluate_sectoral_shock(firm_table, disrupted_node): 406 | disrupted_sectoral_production = firm_table[firm_table['odpoint'].isin(disrupted_node)].groupby('sector_id')['total_production'].sum() 407 | normal_sectoral_production = 
firm_table.groupby('sector_id')['total_production'].sum() 408 | consolidated_table = pd.concat([disrupted_sectoral_production, normal_sectoral_production], axis=1).fillna(0) 409 | return consolidated_table.iloc[:,0] / consolidated_table.iloc[:,1] 410 | 411 | 412 | def apply_sectoral_shocks(sectoral_shock, firm_list): 413 | for firm in firm_list: 414 | if firm.sector in sectoral_shock.index: 415 | firm.production_capacity = firm.production_target * (1 - sectoral_shock[firm.sector]) 416 | 417 | 418 | def recover_from_sectoral_shocks(firm_list): 419 | for firm in firm_list: 420 | firm.production_capacity = firm.eq_production_capacity 421 | 422 | 423 | def transformUSDtoTons(monetary_flow, monetary_unit, usd_per_ton): 424 | # Load monetary units 425 | monetary_unit_factor = { 426 | "mUSD": 1e6, 427 | "kUSD": 1e3, 428 | "USD": 1 429 | } 430 | factor = monetary_unit_factor[monetary_unit] 431 | 432 | #sector_to_usdPerTon = sector_table.set_index('sector')['usd_per_ton'] 433 | 434 | return monetary_flow / (usd_per_ton/factor) -------------------------------------------------------------------------------- /code/mainBase.py: -------------------------------------------------------------------------------- 1 | # Check that the module is used correctly 2 | import sys 3 | if len(sys.argv) <= 1: 4 | raise ValueError('Usage: python code/mainBase.py <reuse_data: 0 or 1>') 5 | 6 | # Import modules 7 | import os 8 | import networkx as nx 9 | import pandas as pd 10 | import time 11 | import random 12 | import logging 13 | import json 14 | import yaml 15 | from datetime import datetime 16 | import importlib 17 | import geopandas as gpd 18 | import pickle 19 | 20 | # Import functions and classes 21 | from builder import * 22 | from functions import * 23 | from simulations import * 24 | from export_functions import * 25 | from class_firm import Firm 26 | from class_observer import Observer 27 | from class_transport_network import TransportNetwork 28 | 29 | # Import parameters.
The user files must be imported after the defaults, in this specific order. 30 | project_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 31 | sys.path.insert(1, project_path) 32 | from parameter.parameters_default import * 33 | from parameter.parameters import * 34 | from parameter.filepaths_default import * 35 | from parameter.filepaths import * 36 | 37 | # Start run 38 | t0 = time.time() 39 | timestamp = datetime.now().strftime('%Y%m%d_%H%M%S') 40 | 41 | # If there is something to export, create the output folder 42 | if any(list(export.values())): 43 | exp_folder = os.path.join('output', input_folder, timestamp) 44 | if not os.path.isdir(os.path.join('output', input_folder)): 45 | os.mkdir(os.path.join('output', input_folder)) 46 | os.mkdir(exp_folder) 47 | exportParameters(exp_folder) 48 | else: 49 | exp_folder = None 50 | 51 | # Set logging parameters 52 | if export['log']: 53 | log_filename = os.path.join(exp_folder, 'exp.log') 54 | importlib.reload(logging) 55 | logging.basicConfig( 56 | filename=log_filename, 57 | level=logging_level, 58 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s' 59 | ) 60 | logging.getLogger().addHandler(logging.StreamHandler()) 61 | else: 62 | importlib.reload(logging) 63 | logging.basicConfig( 64 | level=logging_level, 65 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s' 66 | ) 67 | 68 | logging.info('Simulation '+timestamp+' starting using '+input_folder+' input data.') 69 | 70 | # Create transport network 71 | with open(filepaths['transport_parameters'], "r") as yamlfile: 72 | transport_params = yaml.load(yamlfile, Loader=yaml.FullLoader) 73 | 74 | ## Creating the transport network takes time 75 | ## To accelerate, we store the transport network as a pickle for later reuse 76 | ## With new input data, run the model with first arg = 0 to generate the pickles 77 | ## Then use first arg = 1 to skip network building and load the pickles directly 78 | pickle_suffix = '_'.join([mode[:3] for mode in 
transport_modes])+'_pickle' 79 | extra_road_log = "" 80 | if extra_roads: 81 | pickle_suffix = '_'.join([mode[:3] for mode in transport_modes])+'ext_pickle' 82 | extra_road_log = " with extra roads" 83 | pickle_transNet_filename = "transNet_"+pickle_suffix 84 | pickle_transEdg_filename = "transEdg_"+pickle_suffix 85 | pickle_transNod_filename = "transNod_"+pickle_suffix 86 | if sys.argv[1] == "0": 87 | logging.info('Creating transport network'+extra_road_log) 88 | T, transport_nodes, transport_edges = createTransportNetwork(transport_modes, filepaths, transport_params) 89 | logging.info('Transport network'+extra_road_log+' created.') 90 | pickle.dump(T, open(os.path.join('tmp', pickle_transNet_filename), 'wb')) 91 | pickle.dump(transport_edges, open(os.path.join('tmp', pickle_transEdg_filename), 'wb')) 92 | pickle.dump(transport_nodes, open(os.path.join('tmp', pickle_transNod_filename), 'wb')) 93 | logging.info('Transport network saved in tmp folder: '+pickle_transNet_filename) 94 | else: 95 | T = pickle.load(open(os.path.join('tmp', pickle_transNet_filename), 'rb')) 96 | transport_edges = pickle.load(open(os.path.join('tmp', pickle_transEdg_filename), 'rb')) 97 | transport_nodes = pickle.load(open(os.path.join('tmp', pickle_transNod_filename), 'rb')) 98 | logging.info('Transport network'+extra_road_log+' loaded from tmp folder.') 99 | # Generate weight 100 | logging.info('Generating shortest-path weights on transport network') 101 | T.defineWeights(route_optimization_weight) 102 | # Log km per mode 103 | km_per_mode = pd.DataFrame({"km": nx.get_edge_attributes(T, "km"), "type": nx.get_edge_attributes(T, "type")}) 104 | km_per_mode = km_per_mode.groupby('type')['km'].sum().to_dict() 105 | logging.info("Total length of transport network is: "+ 106 | "{:.0f} km".format(sum(km_per_mode.values()))) 107 | for mode, km in km_per_mode.items(): 108 | logging.info(mode+": {:.0f} km".format(km)) 109 | logging.info('Nb of nodes: '+str(len(T.nodes))+', Nb of 
edges: '+str(len(T.edges))) 110 | # Export transport network 111 | transport_nodes.to_file(os.path.join(exp_folder, "transport_nodes.geojson"), driver='GeoJSON') 112 | transport_edges.to_file(os.path.join(exp_folder, "transport_edges.geojson"), driver='GeoJSON') 113 | 114 | ### Filter sectors 115 | logging.info('Filtering the sectors based on their output. '+ 116 | "Cutoff type is "+cutoff_sector_output['type']+ 117 | ", cutoff value is "+str(cutoff_sector_output['value'])) 118 | sector_table = pd.read_csv(filepaths['sector_table']) 119 | filtered_sectors = filterSector(sector_table, cutoff=cutoff_sector_output['value'], 120 | cutoff_type=cutoff_sector_output['type'], 121 | sectors_to_include=sectors_to_include) 122 | logging.info('The filtered sectors are: '+str(filtered_sectors)) 123 | 124 | ### Create firms 125 | # Filter district sector combination 126 | logging.info('Generating the firm table. '+ 127 | 'Districts included: '+str(districts_to_include)+ 128 | ', district sector cutoff: '+str(district_sector_cutoff)) 129 | firm_table, odpoint_table, filtered_district_sector_table = \ 130 | rescaleNbFirms(filepaths['district_sector_importance'], sector_table, 131 | transport_nodes, district_sector_cutoff, nb_top_district_per_sector, 132 | explicit_service_firm=explicit_service_firm, 133 | sectors_to_include=filtered_sectors, districts_to_include=districts_to_include) 134 | #firm_table.to_csv(os.path.join("output", "Test", 'firm_table.csv')) 135 | logging.info('Firm and OD tables generated') 136 | 137 | # Creating the firms 138 | nb_firms = 'all' 139 | logging.info('Creating firm_list. 
nb_firms: '+str(nb_firms)+ 140 | ' reactivity_rate: '+str(reactivity_rate)+ 141 | ' utilization_rate: '+str(utilization_rate)) 142 | firm_list = createFirms(firm_table, nb_firms, reactivity_rate, utilization_rate) 143 | n = len(firm_list) 144 | present_sectors = list(set([firm.sector for firm in firm_list])) 145 | present_sectors.sort() 146 | flow_types_to_export = present_sectors+['domestic_B2B', 'transit', 'import', 'export', 'total'] 147 | logging.info('Firm_list created, size is: '+str(n)) 148 | logging.info('Sectors present are: '+str(present_sectors)) 149 | 150 | # Loading the technical coefficients 151 | import_code = sector_table.loc[sector_table['type']=='imports', 'sector'].iloc[0] 152 | firm_list = loadTechnicalCoefficients(firm_list, filepaths['tech_coef'], io_cutoff, import_code) 153 | logging.info('Technical coefficients loaded. io_cutoff: '+str(io_cutoff)) 154 | 155 | # Loading the inventories 156 | firm_list = loadInventories(firm_list, inventory_duration_target=inventory_duration_target, 157 | filepath_inventory_duration_targets=filepaths['inventory_duration_targets'], 158 | extra_inventory_target=extra_inventory_target, 159 | inputs_with_extra_inventories=inputs_with_extra_inventories, 160 | buying_sectors_with_extra_inventories=buying_sectors_with_extra_inventories, 161 | min_inventory=1) 162 | logging.info('Inventory duration targets loaded, inventory_duration_target: '+str(inventory_duration_target)) 163 | if extra_inventory_target: 164 | logging.info("Extra inventory duration: "+str(extra_inventory_target)+\ 165 | " for inputs "+str(inputs_with_extra_inventories)+\ 166 | " for buying sectors "+str(buying_sectors_with_extra_inventories)) 167 | 168 | # inventories = {sec:{} for sec in present_sectors} 169 | # for firm in firm_list: 170 | # for input_id, inventory in firm.inventory_duration_target.items(): 171 | # if input_id not in inventories[firm.sector].keys(): 172 | # inventories[firm.sector][input_id] = inventory 173 | # else: 174 | # 
inventories[firm.sector][input_id] = inventory 175 | # with open(os.path.join("output", "Test", "inventories.json"), 'w') as f: 176 | # json.dump(inventories, f) 177 | 178 | # Adding the firms onto the nodes of the transport network 179 | T.locate_firms_on_nodes(firm_list) 180 | logging.info('Firms located on the transport network') 181 | 182 | 183 | ### Create agents: Countries 184 | logging.info('Creating country_list. Countries included: '+str(countries_to_include)) 185 | country_list = createCountries(filepaths['imports'], filepaths['exports'], 186 | filepaths['transit_matrix'], transport_nodes, 187 | present_sectors, countries_to_include=countries_to_include, 188 | time_resolution=time_resolution, 189 | target_units=monetary_units_in_model, input_units=monetary_units_inputed) 190 | logging.info('Country_list created: '+str([country.pid for country in country_list])) 191 | # [Deprecated] Linking the countries to the transport network via their transit point. 192 | # This creates "virtual nodes" in the transport network that correspond to the countries. 193 | # We create a copy of the transport network without such nodes; it is used for plotting purposes 194 | # for country in country_list: 195 | # T.connect_country(country) 196 | 197 | ### Specify the weight of a unit worth of goods, which may differ by sector, or even by firm or country 198 | # Note that for imports, i.e. for the goods delivered by a country, and for transit flows, we do not disentangle sectors 199 | # In this case, we use an average. 200 | firm_list, country_list, sector_to_usdPerTon = loadTonUsdEquivalence(sector_table, firm_list, country_list) 201 | 202 | ### Create agents: Households 203 | logging.info('Defining the final demand to each firm. 
time_resolution: '+str(time_resolution)) 204 | firm_table = defineFinalDemand(firm_table, odpoint_table, 205 | filepath_population=filepaths['population'], filepath_final_demand=filepaths['final_demand'], 206 | time_resolution=time_resolution, 207 | target_units=monetary_units_in_model, input_units=monetary_units_inputed) 208 | logging.info('Creating households and loading their purchase plans') 209 | households = createHouseholds(firm_table) 210 | logging.info('Households created') 211 | 212 | 213 | ### Create supply chain network 214 | logging.info('The supply chain graph is being created. nb_suppliers_per_input: '+str(nb_suppliers_per_input)) 215 | G = nx.DiGraph() 216 | 217 | logging.info('Tanzanian households are selecting their Tanzanian retailers (domestic B2C flows)') 218 | households.select_suppliers(G, firm_list, mode='inputed') 219 | 220 | logging.info('Tanzanian exporters are being selected by purchasing countries (export B2B flows)') 221 | logging.info('and trading countries are being connected (transit flows)') 222 | for country in country_list: 223 | country.select_suppliers(G, firm_list, country_list, sector_table, transport_nodes) 224 | 225 | logging.info('Tanzanian firms are selecting their international suppliers (import B2B flows) and their Tanzanian suppliers (domestic B2B flows). 
Weight localization is '+str(weight_localization)) 226 | for firm in firm_list: 227 | firm.select_suppliers(G, firm_list, country_list, nb_suppliers_per_input, weight_localization, 228 | import_code=import_code) 229 | 230 | logging.info('The nodes and edges of the supplier--buyer network have been created') 231 | if export['sc_network_summary']: 232 | exportSupplyChainNetworkSummary(G, firm_list, exp_folder) 233 | 234 | logging.info('Computing the orders on each supplier--buyer link') 235 | setInitialSCConditions(T, G, firm_list, 236 | country_list, households, initialization_mode="equilibrium") 237 | 238 | ### Coupling transportation network T and production network G 239 | logging.info('The supplier--buyer graph is being connected to the transport network') 240 | logging.info('Each B2B and transit edge is being linked to a route of the transport network') 241 | transport_modes_table = pd.read_csv(filepaths['transport_modes'])  # distinct name so as not to shadow the transport_modes list from the parameters 242 | logging.info('Routes for transit flows and import flows are being selected by trading countries finding routes to their clients') 243 | for country in country_list: 244 | country.decide_initial_routes(G, T, transport_modes_table, account_capacity, monetary_units_in_model) 245 | logging.info('Routes for export flows and B2B domestic flows are being selected by Tanzanian firms finding routes to their clients') 246 | for firm in firm_list: 247 | if firm.sector_type not in ['services', 'utility', 'transport']: 248 | firm.decide_initial_routes(G, T, transport_modes_table, account_capacity, monetary_units_in_model) 249 | logging.info('The supplier--buyer graph is now connected to the transport network') 250 | 251 | logging.info("Initialization completed, "+str((time.time()-t0)/60)+" min") 252 | 253 | 254 | if disruption_analysis is None: 255 | logging.info("No disruption. 
Simulation of the initial state") 256 | t0 = time.time() 257 | 258 | # comments: not sure if the other initialization mode is (i) working and (ii) useful 259 | logging.info("Calculating the equilibrium") 260 | setInitialSCConditions(transport_network=T, sc_network=G, firm_list=firm_list, 261 | country_list=country_list, households=households, initialization_mode="equilibrium") 262 | 263 | obs = Observer(firm_list, 0) 264 | 265 | if export['district_sector_table']: 266 | exportDistrictSectorTable(filtered_district_sector_table, export_folder=exp_folder) 267 | 268 | if export['firm_table'] or export['odpoint_table']: 269 | exportFirmODPointTable(firm_list, firm_table, odpoint_table, filepaths['roads_nodes'], 270 | export_firm_table=export['firm_table'], export_odpoint_table=export['odpoint_table'], 271 | export_folder=exp_folder) 272 | 273 | if export['country_table']: 274 | exportCountryTable(country_list, export_folder=exp_folder) 275 | 276 | if export['edgelist_table']: 277 | exportEdgelistTable(supply_chain_network=G, export_folder=exp_folder) 278 | 279 | if export['inventories']: 280 | exportInventories(firm_list, export_folder=exp_folder) 281 | 282 | ### Run the simulation at the initial state 283 | logging.info("Simulating the initial state") 284 | runOneTimeStep(transport_network=T, sc_network=G, firm_list=firm_list, 285 | country_list=country_list, households=households, 286 | disruption=None, 287 | congestion=congestion, 288 | route_optimization_weight=route_optimization_weight, 289 | explicit_service_firm=explicit_service_firm, 290 | propagate_input_price_change=propagate_input_price_change, 291 | rationing_mode=rationing_mode, 292 | observer=obs, 293 | time_step=0, 294 | export_folder=exp_folder, 295 | export_flows=export['flows'], 296 | flow_types_to_export = flow_types_to_export, 297 | transport_edges = transport_edges, 298 | export_sc_flow_analysis=export['sc_flow_analysis'], 299 | monetary_unit_transport_cost="USD", 300 | 
monetary_unit_flow=monetary_units_in_model, 301 | cost_repercussion_mode=cost_repercussion_mode) 302 | 303 | if export['agent_data']: 304 | exportAgentData(obs, exp_folder) 305 | 306 | logging.info("Simulation completed, "+str((time.time()-t0)/60)+" min") 307 | 308 | 309 | else: 310 | logging.info("Criticality analysis. Defining the list of disruptions") 311 | disruption_list = defineDisruptionList(disruption_analysis, transport_network=T, 312 | nodes=transport_nodes, edges=transport_edges, 313 | nodeedge_tested_topn=nodeedge_tested_topn, nodeedge_tested_skipn=nodeedge_tested_skipn) 314 | logging.info(str(len(disruption_list))+" disruptions to simulate.") 315 | 316 | if export['criticality']: 317 | criticality_export_file = initializeCriticalityExportFile(export_folder=exp_folder) 318 | 319 | if export['impact_per_firm']: 320 | extra_spending_export_file, missing_consumption_export_file = \ 321 | initializeResPerFirmExportFile(exp_folder, firm_list) 322 | 323 | ### Disruption Loop 324 | for disruption in disruption_list: 325 | logging.info("Simulating disruption "+str(disruption)) 326 | t0 = time.time() 327 | 328 | ### Set initial conditions and create observer 329 | logging.info("Calculating the equilibrium") 330 | setInitialSCConditions(transport_network=T, sc_network=G, firm_list=firm_list, 331 | country_list=country_list, households=households, initialization_mode="equilibrium") 332 | 333 | Tfinal = duration_dic[disruption['duration']] 334 | obs = Observer(firm_list, Tfinal) 335 | 336 | if disruption == disruption_list[0]: 337 | export_flows = export['flows'] 338 | else: 339 | export_flows = False 340 | 341 | logging.info("Simulating the initial state") 342 | runOneTimeStep(transport_network=T, sc_network=G, firm_list=firm_list, 343 | country_list=country_list, households=households, 344 | disruption=None, 345 | congestion=congestion, 346 | route_optimization_weight=route_optimization_weight, 347 | explicit_service_firm=explicit_service_firm, 348 | 
propagate_input_price_change=propagate_input_price_change, 349 | rationing_mode=rationing_mode, 350 | observer=obs, 351 | time_step=0, 352 | export_folder=exp_folder, 353 | export_flows=export_flows, 354 | flow_types_to_export = flow_types_to_export, 355 | transport_edges = transport_edges, 356 | export_sc_flow_analysis=export['sc_flow_analysis'], 357 | monetary_unit_transport_cost="USD", 358 | monetary_unit_flow=monetary_units_in_model, 359 | cost_repercussion_mode=cost_repercussion_mode) 360 | 361 | if disruption == disruption_list[0]: 362 | if export['district_sector_table']: 363 | exportDistrictSectorTable(filtered_district_sector_table, export_folder=exp_folder) 364 | 365 | if export['firm_table'] or export['odpoint_table']: 366 | exportFirmODPointTable(firm_list, firm_table, odpoint_table, filepaths['roads_nodes'], 367 | export_firm_table=export['firm_table'], export_odpoint_table=export['odpoint_table'], 368 | export_folder=exp_folder) 369 | 370 | if export['country_table']: 371 | exportCountryTable(country_list, export_folder=exp_folder) 372 | 373 | if export['edgelist_table']: 374 | exportEdgelistTable(supply_chain_network=G, export_folder=exp_folder) 375 | 376 | if export['inventories']: 377 | exportInventories(firm_list, export_folder=exp_folder) 378 | 379 | obs.disruption_time = 1 380 | logging.info('Simulation will last '+str(Tfinal)+' time steps.') 381 | logging.info('A disruption will occur at time 1, it will affect '+ 382 | str(len(disruption['node']))+' nodes and '+ 383 | str(len(disruption['edge']))+' edges for '+ 384 | str(disruption['duration']) +' time steps.') 385 | 386 | logging.info("Starting time loop") 387 | for t in range(1, Tfinal+1): 388 | logging.info('Time t='+str(t)) 389 | runOneTimeStep(transport_network=T, sc_network=G, firm_list=firm_list, 390 | country_list=country_list, households=households, 391 | disruption=disruption, 392 | congestion=congestion, 393 | route_optimization_weight=route_optimization_weight, 394 | 
explicit_service_firm=explicit_service_firm, 395 | propagate_input_price_change=propagate_input_price_change, 396 | rationing_mode=rationing_mode, 397 | observer=obs, 398 | time_step=t, 399 | export_folder=exp_folder, 400 | export_flows=export_flows, 401 | flow_types_to_export=flow_types_to_export, 402 | transport_edges = transport_edges, 403 | export_sc_flow_analysis=False, 404 | monetary_unit_transport_cost="USD", 405 | monetary_unit_flow=monetary_units_in_model, 406 | cost_repercussion_mode=cost_repercussion_mode) 407 | logging.debug('End of t='+str(t)) 408 | 409 | if (t > 1) and epsilon_stop_condition: 410 | if (households.extra_spending <= epsilon_stop_condition) & \ 411 | (households.consumption_loss <= epsilon_stop_condition): 412 | logging.info('Household extra spending and consumption loss are back at their pre-disruption values. '\ 413 | +"Simulation stops.") 414 | break 415 | 416 | computation_time = time.time()-t0 417 | logging.info("Time loop completed, {:.02f} min".format(computation_time/60)) 418 | 419 | 420 | obs.evaluate_results(T, households, disruption, disruption_analysis['duration'], 421 | per_firm=export['impact_per_firm']) 422 | 423 | if export['time_series']: 424 | exportTimeSeries(obs, exp_folder) 425 | 426 | if export['criticality']: 427 | writeCriticalityResults(criticality_export_file, obs, disruption, 428 | disruption_analysis['duration'], computation_time) 429 | 430 | if export['impact_per_firm']: 431 | writeResPerFirmResults(extra_spending_export_file, 432 | missing_consumption_export_file, obs, disruption) 433 | 434 | # if export['agent_data']: 435 | # exportAgentData(obs, export_folder=exp_folder) 436 | 437 | del obs 438 | 439 | if export['criticality']: 440 | criticality_export_file.close() 441 | 442 | logging.info("End of simulation") 443 | -------------------------------------------------------------------------------- /code/simulations.py: -------------------------------------------------------------------------------- 1 | # Core function
of the simulation loop 2 | 3 | import logging 4 | from functions import * 5 | from export_functions import * 6 | from check_functions import * 7 | 8 | def setInitialSCConditions(transport_network, sc_network, firm_list, 9 | country_list, households, initialization_mode="equilibrium"): 10 | """ 11 | Set the initial supply chain conditions and reinitialize the transport network at 0. 12 | 13 | Parameters 14 | ---------- 15 | transport_network : TransportNetwork 16 | Transport network graph 17 | sc_network : networkx.DiGraph 18 | Supply chain network graph 19 | firm_list : list of Firms 20 | List of firms 21 | country_list : list of Countries 22 | List of countries 23 | households : Households 24 | Households 25 | 26 | Returns 27 | ------- 28 | Nothing 29 | """ 30 | logging.info("Setting initial supply-chain conditions") 31 | transport_network.reinitialize_flows_and_disruptions() 32 | set_initial_conditions(sc_network, firm_list, households, country_list, 33 | initialization_mode) 34 | logging.info("Initial supply-chain conditions set") 35 | 36 | 37 | def runOneTimeStep(transport_network, sc_network, firm_list, 38 | country_list, households, observer, 39 | disruption=None, 40 | congestion=False, 41 | route_optimization_weight="cost_per_ton", 42 | explicit_service_firm=True, 43 | propagate_input_price_change=True, 44 | rationing_mode="household_first", 45 | time_step=0, 46 | export_folder=None, 47 | export_flows=False, 48 | flow_types_to_export=['total'], 49 | transport_edges=None, 50 | export_sc_flow_analysis=False, 51 | monetary_unit_transport_cost="USD", 52 | monetary_unit_flow="mUSD", 53 | cost_repercussion_mode="type1"): 54 | """ 55 | Run one time step 56 | 57 | Parameters 58 | ---------- 59 | transport_network : TransportNetwork 60 | Transport network graph 61 | sc_network : networkx.DiGraph 62 | Supply chain network graph 63 | firm_list : list of Firms 64 | List of firms 65 | country_list : list of Countries 66 | List of countries 67 | households : Households 68 
| Households 69 | observer : Observer 70 | Observer 71 | disruption : dict 72 | Dictionary {'node': disrupted node ids, 'edge': disrupted edge ids, 'duration': disruption duration} 73 | congestion : Boolean 74 | Whether or not to measure congestion 75 | propagate_input_price_change : Boolean 76 | Whether or not firms should readjust their prices to changes in input prices 77 | rationing_mode : string 78 | How firms ration their clients if they cannot meet all demand. Possible values: 79 | - 'equal': all clients are equally rationed in proportion of their order 80 | - 'household_first': if the firm sells to both households and other firms, 81 | then households are served first 82 | time_step : int 83 | Used by observer to know the time of its observation 84 | export_folder : string or None 85 | Where outputs are exported, if any 86 | 87 | 88 | Returns 89 | ------- 90 | Nothing 91 | """ 92 | transport_network.reset_current_loads(route_optimization_weight) 93 | 94 | if (disruption is not None) and (time_step == 1): 95 | transport_network.disrupt_roads(disruption) 96 | 97 | allFirmsRetrieveOrders(sc_network, firm_list) 98 | 99 | allFirmsPlanProduction(firm_list, sc_network, price_fct_input=propagate_input_price_change) 100 | 101 | allFirmsPlanPurchase(firm_list) 102 | 103 | allAgentsSendPurchaseOrders(sc_network, firm_list, households, country_list) 104 | 105 | allFirmsProduce(firm_list) 106 | 107 | allAgentsDeliver(sc_network, firm_list, country_list, transport_network, 108 | rationing_mode, explicit_service_firm, 109 | monetary_unit_transport_cost="USD", monetary_unit_flow="mUSD", 110 | cost_repercussion_mode=cost_repercussion_mode) 111 | 112 | if congestion: 113 | if (time_step == 0): 114 | transport_network.evaluate_normal_traffic() 115 | else: 116 | transport_network.evaluate_congestion() 117 | if len(transport_network.congestionned_edges) > 0: 118 | logging.info("Nb of congestionned segments: "+ 119 | str(len(transport_network.congestionned_edges))) 120 | for firm in 
firm_list: 121 | firm.add_congestion_malus2(sc_network, transport_network) 122 | for country in country_list: 123 | country.add_congestion_malus2(sc_network, transport_network) 124 | 125 | if (time_step in [0,1,2]) and (export_flows): #should be done at this stage, while the goods are on their way 126 | collect_shipments = True 127 | transport_network.compute_flow_per_segment(flow_types_to_export) 128 | observer.collect_transport_flows(transport_network, 129 | time_step=time_step, flow_types=flow_types_to_export, 130 | collect_shipments=collect_shipments) 131 | exportTransportFlows(observer, export_folder) 132 | exportTransportFlowsLayer(observer, export_folder, time_step=time_step, 133 | transport_edges=transport_edges) 134 | collect_specific_flows = True 135 | if collect_specific_flows: 136 | observer.collect_specific_flows(transport_network) 137 | exportSpecificFlows(observer, export_folder) 138 | if collect_shipments: 139 | exportShipmentsLayer(observer, export_folder, time_step=time_step, 140 | transport_edges=transport_edges) 141 | 142 | if (time_step == 0) and (export_sc_flow_analysis): #should be done at this stage, while the goods are on their way 143 | analyzeSupplyChainFlows(sc_network, firm_list, export_folder) 144 | 145 | 146 | allAgentsReceiveProducts(sc_network, firm_list, households, 147 | country_list, transport_network) 148 | 149 | transport_network.update_road_state() 150 | 151 | observer.collect_agent_data(firm_list, households, country_list, 152 | time_step=time_step) 153 | 154 | compareProductionPurchasePlans(firm_list, country_list, households) 155 | -------------------------------------------------------------------------------- /data_requirements.md: -------------------------------------------------------------------------------- 1 | # Data requirements for the DisruptSupplyChain model 2 | 3 | ## Transport 4 | 5 | - Road network 6 | - The ideal structure is one shapefile for the links, another for the nodes. 
7 | - For each link: 8 | - road class (primary, secondary, etc.) 9 | - road condition (e.g., paved or unpaved) 10 | - nb of lanes (optional) 11 | - whether it is a bridge or not (optional) 12 | - Transport cost and capacity: 13 | - average transport price per km and per ton or $-worth of goods 14 | - speed per type of road 15 | - capacity per type of road 16 | - some measure of road unreliability (variation of travel time) 17 | - If available, a selection of the "main" nodes, i.e., those with the most traffic 18 | 19 | - If applicable, the same information for other transport networks (e.g., railways, waterways) 20 | 21 | - International transport nodes (ports, airports, border crossings) 22 | 23 | 24 | ## Firm data and inventories 25 | 26 | - Business census / firm registry, and, for each firm: 27 | - its sector, coded in a standard classification (e.g., ISIC), 28 | - its location (GPS coordinates or administrative unit), 29 | - its size (nb of employees or turnover) 30 | If not available, then total production per sector and per administrative unit (the more disaggregated, the better supply chain flows can be mapped onto the transport network) 31 | - Amount of inventories per sector (e.g., from a survey) 32 | - Agricultural / fishing / logging production per km2 or administrative unit. These activities (fishing, logging, agriculture in particular) are often not registered in any firm census. 33 | 34 | 35 | ## Trade & national accounts 36 | 37 | - Input-output table (e.g., GTAP, produced by the country) 38 | - International trade data (e.g., GTAP, UN Comtrade), in particular detailed trade flows that go through the transport system of the modeled country. 
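Before a run, the tabular inputs listed above can be sanity-checked for the fields they are expected to carry. The sketch below is illustrative only: apart from `sector`, `type`, and `usd_per_ton`, which the model code reads from the sector table, the column names are hypothetical placeholders, not the model's actual schema.

```python
# Minimal header check for the input tables described above.
# Column sets are assumptions for illustration, except 'sector', 'type'
# and 'usd_per_ton', which appear in the model code for sector_table.csv.
REQUIRED_COLUMNS = {
    "sector_table.csv": {"sector", "type", "output", "usd_per_ton"},
    "import_table.csv": {"country", "sector", "value"},  # hypothetical
}

def missing_columns(table_name: str, header: list) -> set:
    """Return the required columns absent from a table's header row."""
    return REQUIRED_COLUMNS[table_name] - set(header)

if __name__ == "__main__":
    # e.g., a header row read with csv.reader from the sector table
    header = ["sector", "type", "output"]
    print(missing_columns("sector_table.csv", header))  # {'usd_per_ton'}
```

Such a check catches missing fields at load time rather than deep inside the model initialization.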
39 | -------------------------------------------------------------------------------- /parameter/filepaths.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ccolon/disrupt-supply-chain-model/e683315095cc60c1c7e5f22fca0904e2b961c0ea/parameter/filepaths.py -------------------------------------------------------------------------------- /parameter/filepaths_default.py: -------------------------------------------------------------------------------- 1 | from . import parameters_default 2 | from . import parameters 3 | 4 | if hasattr(parameters, "input_folder"): 5 | input_folder = parameters.input_folder 6 | else: 7 | input_folder = parameters_default.input_folder 8 | 9 | 10 | # Filepaths 11 | import os 12 | ## Transport 13 | filepaths = { 14 | "transport_parameters": os.path.join('input', input_folder, 'Transport', 'transport_parameters.yaml'), 15 | "transport_modes": os.path.join('input', input_folder, 'Transport', 'transport_modes.csv'), 16 | # "roads_nodes": os.path.join('input', input_folder, 'Transport', 'Roads', 'roads_nodes.shp'), 17 | # "roads_edges": os.path.join('input', input_folder, 'Transport', 'Roads', 'roads_edges.shp'), 18 | # "railways_nodes": os.path.join('input', input_folder, 'Transport', 'Railways', 'railways_nodes.shp'), 19 | # "railways_edges": os.path.join('input', input_folder, 'Transport', 'Railways', 'railways_edges.shp'), 20 | # "waterways_nodes": os.path.join('input', input_folder, 'Transport', 'Waterways', 'waterways_nodes.shp'), 21 | # "waterways_edges": os.path.join('input', input_folder, 'Transport', 'Waterways', 'waterways_edges.shp'), 22 | # "multimodal_edges": os.path.join('input', input_folder, 'Transport', 'Multimodal', 'multimodal_edges.shp'), 23 | # "maritime_nodes": os.path.join('input', input_folder, 'Transport', 'Maritime', 'maritime_nodes.shp'), 24 | # "maritime_edges": os.path.join('input', input_folder, 'Transport', 'Maritime', 'maritime_edges.shp'), 25 | 
# "extra_roads_edges": os.path.join('input', input_folder, 'Transport', 'Roads', "road_edges_extra.shp"), 26 | "roads_nodes": os.path.join('input', input_folder, 'Transport', 'roads_nodes.geojson'), 27 | "roads_edges": os.path.join('input', input_folder, 'Transport', 'roads_edges.geojson'), 28 | "railways_nodes": os.path.join('input', input_folder, 'Transport', 'railways_nodes.geojson'), 29 | "railways_edges": os.path.join('input', input_folder, 'Transport', 'railways_edges.geojson'), 30 | "waterways_nodes": os.path.join('input', input_folder, 'Transport', 'waterways_nodes.geojson'), 31 | "waterways_edges": os.path.join('input', input_folder, 'Transport', 'waterways_edges.geojson'), 32 | "multimodal_edges": os.path.join('input', input_folder, 'Transport', 'multimodal_edges.geojson'), 33 | "maritime_nodes": os.path.join('input', input_folder, 'Transport', 'maritime_nodes.geojson'), 34 | "maritime_edges": os.path.join('input', input_folder, 'Transport', 'maritime_edges.geojson'), 35 | "extra_roads_edges": os.path.join('input', input_folder, 'Transport', "road_edges_extra.geojson"), 36 | ## Supply 37 | "district_sector_importance": os.path.join('input', input_folder, 'Supply', 'district_sector_importance.csv'), 38 | "sector_table": os.path.join('input', input_folder, 'Supply', "sector_table.csv"), 39 | "tech_coef": os.path.join('input', input_folder, 'Supply', "tech_coef_matrix.csv"), 40 | "inventory_duration_targets": os.path.join('input', input_folder, 'Supply', "inventory_duration_targets.csv"), 41 | ## Trade 42 | # "entry_points": os.path.join('input', input_folder, 'Trade', "entry_points.csv"), 43 | "imports": os.path.join('input', input_folder, 'Trade', "import_table.csv"), 44 | "exports": os.path.join('input', input_folder, 'Trade', "export_table.csv"), 45 | "transit_matrix": os.path.join('input', input_folder, 'Trade', "transit_matrix.csv"), 46 | ## Demand 47 | "population": os.path.join('input', input_folder, 'Demand', "population.csv"), 48 | "final_demand": 
os.path.join('input', input_folder, 'Demand', "final_demand.csv") 49 | } 50 | -------------------------------------------------------------------------------- /parameter/parameters.py: -------------------------------------------------------------------------------- 1 | from parameter.parameters_default import * 2 | import logging 3 | import os 4 | 5 | input_folder = "Cambodia" 6 | inventory_duration_target = "inputed" 7 | 8 | countries_to_include = ['AFR', 'AME', 'ASI', 'EUR', 'OCE', 'THA', 'VNM'] 9 | 10 | # top_10_nodes = [2608, 2404, 2386, 2380, 2379, 2376, 2373, 2366, 2363, 2361] 11 | top_10_nodes = [1473, 1619, 992, 1832, 1269, 428, 224] 12 | floodable_road_battambang = 3170 13 | tsubasa_bridge = 2001 14 | disruption_analysis = { 15 | "disrupt_nodes_or_edges": "edges", 16 | # "nodeedge_tested": ["primary", "trunk"], 17 | # "nodeedge_tested": [1487, 1462, 1525, 1424], 18 | "nodeedge_tested": [tsubasa_bridge], 19 | # "nodeedge_tested": os.path.join('input', input_folder, 'top_hh_loss_nodes.csv'), 20 | # "nodeedge_tested": ["Sihanoukville international port"], 21 | # "identified_by": "name", 22 | "identified_by": "id", 23 | # "identified_by": "class", 24 | "duration": 1 25 | } 26 | disruption_analysis = None 27 | congestion = True 28 | 29 | district_sector_cutoff = 0.003 30 | cutoff_sector_output = { 31 | 'type': 'percentage', 32 | 'value': 0.01 33 | } 34 | io_cutoff = 0.02 35 | transport_modes = ['roads', 'railways', 'waterways', 'maritime'] 36 | 37 | route_optimization_weight = "agg_cost" #cost_per_ton time_cost agg_cost 38 | 39 | export = {key: True for key in export.keys()} 40 | 41 | cost_repercussion_mode = "type1" 42 | 43 | # logging_level = logging.DEBUG 44 | -------------------------------------------------------------------------------- /parameter/parameters_default.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | # Indicate the subfolder of the input folder that contains the input files 4 | 
input_folder = "Tanzania" 5 | 6 | export = { 7 | # Save a log file in the output folder, called "exp.log" 8 | "log": True, 9 | 10 | # Save the main result in a "criticality.csv" file in the output folder 11 | # Each line is a simulation; it saves what is disrupted and for how long, along with aggregate observables 12 | "criticality": True, 13 | 14 | # Save the amount of goods flowing on each transport segment 15 | # It saves a flows.json file in the output folder 16 | # The structure is a dict: {"timestep": {"transport_link_id": {"sector_id": flow_quantity}}} 17 | # Can be True or False 18 | "flows": False, 19 | 20 | # Export information on aggregate supply chain flow at initial conditions 21 | # Used only if disruption_analysis is None 22 | # See analyzeSupplyChainFlows function for details 23 | "sc_flow_analysis": False, 24 | 25 | # Whether or not to export data for each agent at each time step 26 | # See exportAgentData function for details. 27 | "agent_data": False, 28 | 29 | # Save firm-level impact results 30 | # It creates an "extra_spending.csv" file and an "extra_consumption.csv" file in the output folder 31 | # Each line is a simulation; it saves what was disrupted and the corresponding impact for each firm 32 | "impact_per_firm": False, 33 | 34 | # Save aggregated time series 35 | # It creates an "aggregate_ts.csv" file in the output folder 36 | # Each column is a time series 37 | # Exports: 38 | # - aggregate production, 39 | # - total profit, 40 | # - household consumption, 41 | # - household expenditure, 42 | # - total transport costs, 43 | # - average inventories.
44 | "time_series": False, 45 | 46 | # Save the firm table 47 | # It creates a "firm_table.xlsx" file in the output folder 48 | # It gives the properties of each firm, along with production, sales to households, to other firms, exports 49 | "firm_table": True, 50 | 51 | 52 | # Save the OD point table 53 | # It creates an "odpoint_table.xlsx" file in the output folder 54 | # It gives the properties of each OD point, along with production, sales to households, to other firms, exports 55 | "odpoint_table": True, 56 | 57 | # Save the country table 58 | # It creates a "country_table.xlsx" file in the output folder 59 | # It gives the trade profile of each country 60 | "country_table": True, 61 | 62 | # Save the edgelist table 63 | # It creates an "edgelist_table.xlsx" file in the output folder 64 | # It gives, for each supplier-buyer link, the distance and the amount of goods that flows 65 | "edgelist_table": True, 66 | 67 | # Save inventories per sector 68 | # It creates an "inventories.xlsx" file in the output folder 69 | "inventories": False, 70 | 71 | # Save the district-sector combinations that are above the cutoff values 72 | # It creates a "filtered_district_sector.xlsx" file in the output folder 73 | "district_sector_table": False, 74 | 75 | # Whether or not to export a csv summarizing some topological characteristics of the supply chain network 76 | "sc_network_summary": True 77 | } 78 | 79 | # Logging level 80 | logging_level = logging.INFO 81 | 82 | # List of transport modes to include 83 | transport_modes = ['roads'] 84 | 85 | # Monetary units to use in the model. 'USD', 'kUSD', 'mUSD' 86 | monetary_units_in_model = "mUSD" 87 | 88 | # Monetary units in input files. 'USD', 'kUSD', 'mUSD' 89 | monetary_units_inputed = "USD" 90 | 91 | # Whether or not to model congestion 92 | congestion = True 93 | 94 | # Whether or not firms should readjust their prices in response to changes in input prices 95 | propagate_input_price_change = True 96 | 97 | # Which sectors to include.
Possible values: 98 | # - 'all': all the sectors are kept 99 | # - list of sectors 100 | sectors_to_include = "all" 101 | 102 | # Filter out sectors whose output is below this cutoff value 103 | # - if 'type' is 'percentage', test the cutoff against the sector's relative output 104 | # - if 'type' is 'absolute', test the cutoff against the sector's absolute output 105 | cutoff_sector_output = { 106 | 'type': 'percentage', 107 | 'value': 0.01 108 | } 109 | 110 | # Which districts to include. Possible values: 111 | # - 'all': all the districts are kept 112 | # - list of districts 113 | districts_to_include = "all" 114 | 115 | # Which countries to include. Possible values: 116 | # - 'all': all the countries are kept 117 | # - list of countries 118 | countries_to_include = "all" 119 | 120 | # Any sector in a district that has an importance lower than this value is discarded 121 | # Two exceptions apply: 122 | # - for each sector, the most important district is kept, even if its importance is lower. 123 | # This avoids having no firm at all for a sector 124 | # - for agriculture, the cutoff value is halved. If we applied the same cutoff value as for the other sectors, 125 | # all districts would be filtered out. This is because agriculture is generally spread over 126 | # the country, so importance values are low and rather uniformly distributed. 127 | district_sector_cutoff = 0.003 128 | 129 | # For each sector, how many of the most important districts will be kept, whatever the 'district_sector_cutoff' value 130 | # Possible values: 131 | # - None: no extra district will be kept using this method 132 | # - Integer > 0: number of top districts to keep 133 | nb_top_district_per_sector = 1 134 | 135 | # If True, the service firms (utility, transport, service) are modeled like the nonservice firms 136 | # Their output will not use the transport network, but their inputs, if they come from nonservice firms, will.
137 | # If False, then two firms per service sector will be modeled for the whole country 138 | # Neither their inputs nor their output will use the transport network. 139 | explicit_service_firm = True 140 | 141 | # Duration target for the firm inventory 142 | # When a firm holds inventories meeting this target, it can keep meeting its pre-disruption 143 | # production targets without ordering more inputs for the specified number of time steps. 144 | # Two types of value are possible: 145 | # - an integer value: all firms have the same inventory duration target for all inputs 146 | # - 'inputed': values are provided in an external file, which assigns a specific duration target to each input-sector, buying-firm-sector combination 147 | inventory_duration_target = 2 148 | 149 | # Extend the inventory duration targets of all firms, uniformly 150 | # Possible values: 151 | # - integer, which represents the number of time steps to add 152 | # - None, no extra inventory 153 | # The type of inputs for which the inventory is to be extended is determined by the inputs_with_extra_inventories parameter 154 | extra_inventory_target = None 155 | 156 | # List of inputs, identified by their producing sector, for which an extra inventory duration target is given 157 | # Possible values: 158 | # - 'all': all types of inputs 159 | # - list of sector ids, e.g., ['AGR','FOR','TRD','IMP'] 160 | inputs_with_extra_inventories = "all" 161 | 162 | # List of sectors for which an extra inventory duration target is given 163 | # Possible values: 164 | # - 'all': all sectors 165 | # - list of sector ids, e.g., ['AGR','FOR','TRD','IMP'] 166 | buying_sectors_with_extra_inventories = "all" 167 | 168 | # Determines the speed at which firms try to reach their inventory duration target 169 | # See Henriet, Hallegatte, and Tabourier 2011 for the formulas 170 | # Too large a value leads to dynamic instabilities, known as the bullwhip effect 171 | reactivity_rate = 0.1 172 | 173 | # Determines the initial
utilization rate of firms 174 | # It is used to determine, based on the production at the input-output equilibrium, the production capacity of the firms 175 | # E.g., if a firm produces 80 and has a 0.8 utilization rate, then its production capacity is set to 100. 176 | # It applies uniformly to all firms. 177 | utilization_rate = 0.8 178 | 179 | # Determines which inputs will be kept in the firms' Leontief production function 180 | # It sets to 0 the elements of the technical coefficient IO matrix that are below this cutoff 181 | # E.g., if sector A uses 0.3 input of sector B and 0.005 input of sector C to produce 1 unit of output (data from 182 | # the technical coefficient matrix), then, with an io_cutoff of 0.01, firms of sector A will only source inputs from 183 | # sector B firms. 184 | io_cutoff = 0.01 185 | 186 | # Determines the way firms ration their clients if they cannot meet all demand 187 | # Possible values are: 188 | # - 'equal': all clients are equally rationed in proportion to their orders 189 | # - 'household_first': if the firm sells to both households and other firms, then households are served first 190 | rationing_mode = "household_first" 191 | 192 | # Set the number of suppliers that firms have for each type of input 193 | # Possible values are: 194 | # - 1: each firm selects one supplier per input 195 | # - 2: each firm selects two suppliers per input 196 | # - a decimal number between 1 and 2: firms choose either one or two suppliers per input, such that, 197 | # on average, the number of suppliers per input is equal to the specified number.
198 | nb_suppliers_per_input = 1 199 | 200 | # Determines how important it is for firms to choose suppliers close to them 201 | # The chance of choosing a firm as a supplier for an input depends on its importance score and on its distance 202 | # It is (importance) / (distance)^w where w is the weight_localization parameter 203 | weight_localization = 1 204 | 205 | # Determines the type of disruption analysis to be done 206 | # Possible values are: 207 | # - None: no disruption is made. The model is only initialized at t=0, and stops. 208 | # Metrics describing the initial state are exported. 209 | # - a dict: { 210 | # "disrupt_nodes_or_edges": "nodes" or "edges", 211 | # "nodeedge_tested": object 212 | # } 213 | # - "disrupt_nodes_or_edges" determines whether nodes or edges are to be disrupted. 214 | # - "nodeedge_tested" determines what is disrupted. Possible values are: 215 | # - integer: a single node or edge ID. The time series are exported. 216 | # - a list of node/edge ids. Criticality exports. 217 | # - "all": test all nodes or edges, ordered by their ID. Criticality exports. 218 | # - a path to a file listing nodes or edges. Criticality exports. 219 | # - "duration" determines the duration of a disruption 220 | disruption_analysis = None 221 | 222 | # What time interval a time step represents 223 | # Possible values are: 'day', 'week', 'month', 'year' 224 | time_resolution = 'week' 225 | 226 | # Number of nodes or edges to test 227 | # It will take only the first N in the list of nodes or edges of the criticality loop 228 | # Possible values are: 229 | # - None: all elements are tested 230 | # - an integer N: the first N are tested 231 | nodeedge_tested_topn = None 232 | 233 | # Skip the first elements in the list of nodes or edges to disrupt in the criticality loop 234 | # Possible values are None or integer values. It should be lower than nodeedge_tested_topn, if such a value is given.
235 | # If nodeedge_tested_topn is N and nodeedge_tested_skipn is M, then we test list[M:N] 236 | nodeedge_tested_skipn = None 237 | 238 | # Run the model in the "Aggregate IO mode" 239 | # Instead of disrupting the transport network, we evaluate how much production would be blocked if all the firms 240 | # located in the disrupted nodes were unable to produce. Then we uniformly distribute this drop of production over 241 | # all the firms of the corresponding sector. 242 | model_IO = False 243 | 244 | # Provides the default simulation duration Tfinal for different disruption durations 245 | duration_dic = { 246 | 1: 4, 247 | 2: 8, 248 | 3: 11, 249 | 4: 14 250 | } 251 | 252 | # Whether or not to load extra roads in the model 253 | extra_roads = False 254 | 255 | # Value of the household extra spending and consumption loss under which we stop the simulation 256 | # The unit is monetary_units_in_model 257 | # If None, then we use the default simulation duration Tfinal, see duration_dic 258 | epsilon_stop_condition = 1e-3 259 | 260 | # Characteristic of the transport edges that the paths chosen by firms to 261 | # deliver to their clients should minimize 262 | # Possible values are: 263 | # - time_cost 264 | # - cost_per_ton 265 | route_optimization_weight = "time_cost" 266 | 267 | # How to translate an increase in transport cost into an increase in prices 268 | cost_repercussion_mode = "type1" 269 | 270 | # Whether or not to account for transport network capacity 271 | # If True, then each shipment adds a "load" on transport edges 272 | # If the load exceeds the capacity, then the edge cannot be used anymore 273 | account_capacity = True -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | Fiona==1.8.4 2 | geopandas==0.3.0 3 | networkx==2.1 4 | numpy==1.15.4 5 | pandas==0.23.0 6 | Shapely==1.6.4.post2 7 | scipy==1.1.0 8 | PyYAML==5.3.1
--------------------------------------------------------------------------------
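A few of the conventions documented in the comments of `parameters_default.py` can be illustrated with a short sketch. The numbers below reuse the examples given in the comments themselves (the `utilization_rate` example, the `io_cutoff` example with sectors A, B, C, and the `list[M:N]` slicing of the criticality loop); the node IDs are taken from a commented-out example in `parameters.py` and the `topn`/`skipn` values are hypothetical, chosen only for illustration.

```python
# Sketch of three numeric conventions from parameters_default.py.
# All values are illustrative, not real model data.

# utilization_rate: a firm producing 80 at a 0.8 utilization rate
# is assigned a production capacity of 100
utilization_rate = 0.8
production = 80
capacity = production / utilization_rate
assert capacity == 100

# io_cutoff: technical coefficients below the cutoff are set to 0, so
# sector A (0.3 from B, 0.005 from C) keeps only sector B as a supplier
io_cutoff = 0.01
tech_coef_row = {"B": 0.3, "C": 0.005}
kept_inputs = {sector: coef for sector, coef in tech_coef_row.items()
               if coef >= io_cutoff}
assert kept_inputs == {"B": 0.3}

# nodeedge_tested_topn / nodeedge_tested_skipn: with topn = N and
# skipn = M, the criticality loop tests list[M:N]
nodeedge_list = [1487, 1462, 1525, 1424]
topn, skipn = 3, 1  # hypothetical values
tested = nodeedge_list[skipn:topn]
assert tested == [1462, 1525]
```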