├── LICENSE ├── PD.py ├── README.md ├── ensight.py ├── materials.py └── requirements.txt /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 
39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. -------------------------------------------------------------------------------- /PD.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | 5 | # Copyright 2018-2014 John T. Foster 6 | # 7 | # Licensed under the Apache License, Version 2.0 (the "License"); 8 | # you may not use this file except in compliance with the License. 9 | # You may obtain a copy of the License at 10 | # 11 | # http://www.apache.org/licenses/LICENSE-2.0 12 | # 13 | # Unless required by applicable law or agreed to in writing, software 14 | # distributed under the License is distributed on an "AS IS" BASIS, 15 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16 | # See the License for the specific language governing permissions and 17 | # limitations under the License.
18 | import math 19 | import sys 20 | 21 | import numpy as np 22 | import numpy.ma as ma 23 | import scipy.spatial 24 | import matplotlib.path as path 25 | 26 | from progressbar import ProgressBar 27 | from progressbar import Percentage 28 | from progressbar import Bar 29 | from progressbar import RotatingMarker 30 | from ensight import Ensight 31 | import materials 32 | 33 | from PyTrilinos import Epetra 34 | from PyTrilinos import Teuchos 35 | from PyTrilinos import Isorropia 36 | 37 | comm = Epetra.PyComm() 38 | rank = comm.MyPID() 39 | size = comm.NumProc() 40 | 41 | ### Peridynamic functions ### 42 | 43 | # Internal force calculation 44 | def compute_internal_force(force_x, force_y, pos_x, pos_y, disp_x, disp_y, 45 | families, ref_mag_state, volumes, youngs_modulus, poisson_ratio, 46 | influence_state, num_owned): 47 | """ Computes the peridynamic internal force due to deformations.""" 48 | 49 | #Compute the deformed positions of the nodes 50 | def_x = pos_x + disp_x 51 | def_y = pos_y + disp_y 52 | 53 | #Compute deformation state 54 | def_state_x = ma.masked_array(def_x[families] - def_x[:num_owned,None], 55 | mask=families.mask) 56 | def_state_y = ma.masked_array(def_y[families] - def_y[:num_owned,None], 57 | mask=families.mask) 58 | 59 | #Compute deformation magnitude state 60 | def_mag_state = (def_state_x * def_state_x + 61 | def_state_y * def_state_y) ** 0.5 62 | 63 | #Compute deformation unit state 64 | def_unit_state_x = def_state_x / def_mag_state 65 | def_unit_state_y = def_state_y / def_mag_state 66 | 67 | #Compute scalar extension state 68 | exten_state = def_mag_state - ref_mag_state 69 | 70 | #Apply a critical stretch damage model 71 | influence_state[exten_state > 0.005] = 0.0 72 | 73 | #Compute dilatation 74 | dilatation = (3.0 * weighted_volume[:,None] * influence_state * 75 | ref_mag_state * exten_state * volumes[families]).sum(axis=1) 76 | 77 | #Compute scalar force state 78 | #scalar_force_state = materials.elastic_material(youngs_modulus, 
79 | #poisson_ratio, dilatation, exten_state, ref_mag_state, 80 | #weighted_volume, influence_state) 81 | scalar_force_state = materials.bond_based_elastic_material(exten_state, 82 | weighted_volume, youngs_modulus, poisson_ratio, influence_state) 83 | 84 | #Compute the force state 85 | force_state_x = scalar_force_state * def_unit_state_x 86 | force_state_y = scalar_force_state * def_unit_state_y 87 | 88 | #Integrate nodal forces 89 | #Sum the force contribution from j nodes to i node 90 | force_x[:num_owned] += (force_state_x * volumes[families]).sum(axis=1) 91 | force_y[:num_owned] += (force_state_y * volumes[families]).sum(axis=1) 92 | 93 | #Subtract the force contribution from i nodes from j, the bincount() 94 | #operation is a trick to keep it fast in Numpy. See: 95 | # for details 97 | tmp_x = np.bincount(families.compressed(), (force_state_x * 98 | volumes[:num_owned,None]).compressed()) 99 | tmp_y = np.bincount(families.compressed(), (force_state_y * 100 | volumes[:num_owned,None]).compressed()) 101 | force_x[:len(tmp_x)] -= tmp_x 102 | force_y[:len(tmp_y)] -= tmp_y 103 | 104 | return 105 | 106 | #Compute stable time step function 107 | def compute_stable_time_step(families, ref_mag_state, volumes, num_nodes, 108 | bulk_modulus,rho,horizon): 109 | 110 | spring_constant = 18.0 * bulk_modulus / math.pi / horizon**4.0 111 | 112 | crit_time_step_denom = np.array([spring_constant * volumes[families[i]] / 113 | ref_mag_state[i] for i in range(num_nodes)])**0.5 114 | 115 | critical_time_steps = np.sqrt(2.0 * rho) / crit_time_step_denom 116 | 117 | nodal_min_time_step = [ np.amin(item) for item in critical_time_steps ] 118 | 119 | return np.amin(nodal_min_time_step) 120 | 121 | #Helper function that tests to see if two lines intersect 122 | def test_line_seg_intersect(line1,line2): 123 | """Tests to see if two lines segments intersect. 
The lines are defined as: 124 | 125 | line1 = [ p0_x, p0_y, p1_x, p1_y ] 126 | line2 = [ p2_x, p2_y, p3_x, p3_y ] 127 | 128 | """ 129 | #See http://stackoverflow.com/questions/563198/how-do-you-detect-where- 130 | #two-line-segments-intersect for algorithm details 131 | 132 | #Read in individual point x,y positions from arguments 133 | p0_x, p0_y, p1_x, p1_y = line1 134 | p2_x, p2_y, p3_x, p3_y = line2 135 | 136 | s1_x = p1_x - p0_x 137 | s1_y = p1_y - p0_y 138 | s2_x = p3_x - p2_x 139 | s2_y = p3_y - p2_y 140 | 141 | denom = (-s2_x * s1_y + s1_x * s2_y) 142 | 143 | num_s = (-s1_y * (p0_x - p2_x) + s1_x * (p0_y - p2_y)) 144 | num_t = ( s2_x * (p0_y - p2_y) - s2_y * (p0_x - p2_x)) 145 | 146 | #Detect if lines are parallel or coincident 147 | if -1e-10 < denom < 1e-10: 148 | if abs(num_s) < 1e-5 and abs(num_t) < 1e-5: 149 | #Lines are coincident 150 | return True 151 | else: 152 | #Lines are parallel, but not coincident 153 | return False 154 | 155 | s = num_s / denom 156 | t = num_t / denom 157 | 158 | if 0.0 <= s <= 1.0 and 0.0 <= t <= 1.0: 159 | #Lines intersect (or meet at endpoints) 160 | return True 161 | else: 162 | #Lines do not intersect 163 | return False 164 | 165 | #Inserts a crack by removing neighbors from family lists 166 | def insert_crack(crack, tree, horizon, x_pos, y_pos, families): 167 | """ 168 | Inserts a crack by breaking bonds that cross the crack path, i.e. the 169 | corresponding family entries are set to -1.
170 | """ 171 | 172 | #Read in crack endpoints 173 | min_x, min_y, max_x, max_y = crack 174 | #Calculate crack length 175 | crack_length_x = max_x - min_x 176 | crack_length_y = max_y - min_y 177 | crack_length = np.sqrt(crack_length_x ** 2.0 + crack_length_y ** 2.0) 178 | 179 | #Number of discrete points along crack length 180 | number_points_along_crack = int(math.ceil(crack_length / horizon * 4.0)) 181 | 182 | #Find slope of crack line 183 | slope_denom = max_x - min_x 184 | j = np.complex(0,1) 185 | if -1e-10 < slope_denom < 1e-10: 186 | #Crack line is vertical, discrete points along crack path 187 | x_points = [min_x for _ in range(number_points_along_crack)] 188 | y_points = np.r_[min_y:max_y:number_points_along_crack*j] 189 | else: 190 | slope = (max_y - min_y) / slope_denom 191 | line_eqn = lambda x: slope * (x - min_x) + min_y 192 | #Find the discrete points along crack path 193 | x_points = np.r_[min_x:max_x:number_points_along_crack*j] 194 | y_points = [ line_eqn(x) for x in x_points ] 195 | 196 | #Create a tuple required by the scipy nearest neighbor search 197 | points_along_crack = zip(x_points,y_points) 198 | 199 | #Find all nodes that could possibly have bonds that cross crack path 200 | _, nodes_near_crack = tree.query(points_along_crack, 201 | k=MAX_NEIGHBORS_RETURNED, eps=0.0, p=2, 202 | distance_upper_bound=2.0*horizon) 203 | 204 | #The search above will produce duplicate neighbor nodes, make them into a 205 | #unique 1-dimensional list 206 | nodes_near_crack_flat = np.array(np.unique(nodes_near_crack),dtype=np.int) 207 | 208 | #Remove the dummy entries 209 | nodes_near_crack_flat = nodes_near_crack_flat[nodes_near_crack_flat != 210 | tree.n] 211 | 212 | #Loop over nodes near the crack to see if any bonds in the nodes family 213 | #cross the crack path 214 | for node_index in nodes_near_crack_flat: 215 | #Loop over node family 216 | node_family = families[node_index][families[node_index] != -1] 217 | for bond_index,end_point_index in 
enumerate(node_family): 218 | #Define the bond line segment as the line between the node and its 219 | #endpoint. 220 | bond_line_seg = [ x_pos[node_index], y_pos[node_index], 221 | x_pos[end_point_index], y_pos[end_point_index] ] 222 | #Test for intersection 223 | if test_line_seg_intersect(crack,bond_line_seg): 224 | #If we got to here that means we need to ``break'' the bond 225 | families[node_index][bond_index] = -1 226 | 227 | return 228 | 229 | 230 | def boundary_condition_set(vertices,nodes,unbalanced_map): 231 | """Finds nodes enclosed by the polygon described with vertices""" 232 | 233 | #Create a polygon object with a list of vertices, vertices must be tuples 234 | polygon = path.Path(vertices,codes=None) 235 | #Returns an array with value True if point is inside polygon, False if not 236 | bool_arr = polygon.contains_points(nodes,radius=1.e-10) 237 | #List of the local node indices 238 | node_indices = np.arange(unbalanced_map.NumMyElements(),dtype=np.int) 239 | #Returns local node indices that are inside the polygon 240 | return node_indices[bool_arr] 241 | 242 | def influence_function(ref_mag_state,horizon): 243 | """Returns an influence state that has the form 1 - zeta/delta""" 244 | return 1. - ref_mag_state/horizon 245 | 246 | #This line begins the main program. The guard is not necessary, but can be 247 | #helpful if we want to load this file as a module from another Python script 248 | if __name__ == "__main__": 249 | 250 | ##################### 251 | ### Main Program #### 252 | ##################### 253 | #INPUTS 254 | GRIDSIZE = 100 255 | HORIZON = 3.015 256 | TIME_STEP = 1.e-5 257 | #TIME_STEP = None 258 | YOUNGS_MODULUS = 200.0e9 259 | POISSON_RATIO = 0.29 260 | RHO = 7800 261 | SAFTEY_FACTOR = 0.5 262 | MAX_ITER = 4000 263 | PLOT_DUMP_FREQ = 100 264 | VERBOSE = False 265 | if sys.argv[-1] == '-v': 266 | VERBOSE = True 267 | CRACKS = [[GRIDSIZE/2., -1., GRIDSIZE/2., GRIDSIZE/10.],[GRIDSIZE/2., 268 | 9*GRIDSIZE/10.
, GRIDSIZE/2., GRIDSIZE+1.],[5.,10,20.,40.]] 269 | MAX_NEIGHBORS_RETURNED = 300 270 | BC1_POLYGON = [(0.0,0.0),(HORIZON,0.0),(HORIZON,GRIDSIZE), 271 | (0.0,GRIDSIZE),(0.0,0.0)] 272 | BC2_POLYGON = [(GRIDSIZE-HORIZON,0.0),(GRIDSIZE,0.0),(GRIDSIZE,GRIDSIZE), 273 | (GRIDSIZE-HORIZON,GRIDSIZE),(GRIDSIZE-HORIZON,0.0)] 274 | BC1_VALUE = -5. 275 | BC2_VALUE = 5. 276 | VIZ_PATH='/Applications/paraview.app/Contents/MacOS/paraview' 277 | 278 | #Print version statement 279 | if rank == 0: print("PD.py version 0.4.0\n") 280 | 281 | #Set up the grid 282 | global_number_of_nodes = GRIDSIZE*GRIDSIZE 283 | 284 | #Populate the grid on the rank 0 processor only 285 | if rank == 0: 286 | #Create grid 287 | grid = np.mgrid[0:GRIDSIZE:1.,0:GRIDSIZE:1] 288 | 289 | #Create x,y tuple of node positions 290 | nodes = np.array(zip(grid[0].ravel(), grid[1].ravel()), 291 | dtype=np.double) 292 | 293 | #Create a kdtree to do nearest neighbor search 294 | tree = scipy.spatial.cKDTree(nodes) 295 | 296 | #Get all families 297 | _, families = tree.query(nodes, k=100, eps=0.0, p=2, 298 | distance_upper_bound=HORIZON) 299 | #Replace the default integers at the end of the arrays with -1's 300 | families = np.delete(np.where(families == tree.n, -1, families),0,1) 301 | #Find the maximum length of any family, we will use this to recreate 302 | #the families array such that it minimizes masked entries. 303 | max_family_length = np.max((families != -1).sum(axis=1)) 304 | #Recast the families array to be of minimum size possible 305 | families = families[:,:max_family_length] 306 | 307 | #insert cracks 308 | if len(CRACKS) != 0: 309 | print("Inserting precracks...\n") 310 | for crack in CRACKS: 311 | #Loop over and insert precracks. The 1e-10 term is there to reduce the 312 | #chance of the crack directly intersecting any node and should not affect 313 | #the results at all for grid spacings on the order of 1. 
314 | insert_crack(np.array(crack)+1e-10, tree, HORIZON, nodes[:,0], 315 | nodes[:,1], families) 316 | 317 | else: 318 | #Setup empty data on other ranks 319 | max_family_length = 0 320 | nodes = np.array([],dtype=np.double) 321 | families = np.array([],dtype=np.double) 322 | 323 | #Create node map with all the data on the rank 0 processor 324 | unbalanced_map = Epetra.Map(global_number_of_nodes, len(nodes), 0, comm) 325 | 326 | #Create and populate a distributed Epetra vector to hold the unbalanced 327 | #data. 328 | my_nodes = Epetra.MultiVector(unbalanced_map, 2) 329 | my_nodes[:] = nodes.T 330 | #Create and populate an Epetra multivector to store the families data 331 | max_family_length = comm.MaxAll(max_family_length) 332 | my_families = Epetra.MultiVector(unbalanced_map, max_family_length) 333 | my_families[:] = families.T 334 | 335 | #Load balance 336 | if rank == 0: print("Load balancing...\n") 337 | #Create Teuchos parameter list to pass parameters to ZOLTAN for load 338 | #balancing 339 | parameter_list = Teuchos.ParameterList() 340 | parameter_list.set("Partitioning Method","RCB") 341 | if not VERBOSE: 342 | parameter_sublist = parameter_list.sublist("ZOLTAN") 343 | parameter_sublist.set("DEBUG_LEVEL", "0") 344 | #Create a partitioner to load balance the grid 345 | partitioner = Isorropia.Epetra.Partitioner(my_nodes, parameter_list) 346 | #And a redistributer 347 | redistributer = Isorropia.Epetra.Redistributor(partitioner) 348 | #Redistribute nodes 349 | my_nodes_balanced = redistributer.redistribute(my_nodes) 350 | #The new load balanced map 351 | balanced_map = my_nodes_balanced.Map() 352 | #Create importer and exporters to move data between balanced and 353 | #unbalanced maps 354 | importer = Epetra.Import(balanced_map, unbalanced_map) 355 | exporter = Epetra.Export(balanced_map, unbalanced_map) 356 | #Create distributed vectors to store the balanced node positions 357 | my_x = Epetra.Vector(balanced_map) 358 | my_y = Epetra.Vector(balanced_map)
359 | my_families_balanced = Epetra.MultiVector(balanced_map, max_family_length) 360 | #Import the balanced node positions and family information 361 | my_x.Import(my_nodes[0],importer, Epetra.Insert) 362 | my_y.Import(my_nodes[1],importer, Epetra.Insert) 363 | my_families_balanced.Import(my_families,importer, Epetra.Insert) 364 | #Convert to integer data type for indexing purposes later 365 | my_families = np.array(my_families_balanced.T, dtype=np.int32) 366 | #Create a flattened list of all family global indices (locally owned 367 | #+ ghosts) 368 | my_global_ids_required = np.unique(my_families[my_families != -1]) 369 | #Create a list of locally owned global ids 370 | my_owned_ids = np.array(balanced_map.MyGlobalElements()) 371 | #And its length 372 | my_num_owned = len(my_owned_ids) 373 | #The ghost indices required by the local processor is the relative complement 374 | #of my_global_ids_required and my_owned_ids 375 | my_ghost_ids = np.setdiff1d(my_global_ids_required, my_owned_ids) 376 | #And its length 377 | my_num_ghosts = len(my_ghost_ids) 378 | #Get total length of worker array, this is len(owned) + len(ghosts) 379 | #summed over all processors 380 | length_of_global_worker_arr = comm.SumAll(len(my_owned_ids) 381 | + len(my_ghost_ids)) 382 | #Worker ids 383 | my_worker_ids = np.concatenate((my_owned_ids, my_ghost_ids)) 384 | ##Create the map that will be used by worker vectors 385 | my_worker_map = Epetra.Map(length_of_global_worker_arr, 386 | my_worker_ids, 0, comm) 387 | #Create the worker import/export operators to move data between the grid 388 | #data and the worker data 389 | worker_importer = Epetra.Import(my_worker_map, balanced_map) 390 | worker_exporter = Epetra.Export(my_worker_map, balanced_map) 391 | #Create worker vectors (owned + ghosts) 392 | my_x_worker = Epetra.Vector(my_worker_map) 393 | my_y_worker = Epetra.Vector(my_worker_map) 394 | #Import the needed components for local operations 395 | my_x_worker.Import(my_x, worker_importer, 
Epetra.Insert) 396 | my_y_worker.Import(my_y, worker_importer, Epetra.Insert) 397 | #Convert the global node ids in the family array to local ids 398 | my_families_local = np.array([my_worker_map.LID(i) 399 | for i in my_families.flatten()]) 400 | #Mask local family array 401 | my_families_local.shape = (len(my_families),-1) 402 | my_families_local = ma.masked_equal(my_families_local, -1) 403 | my_families_local.harden_mask() 404 | 405 | #Compute reference position state of all nodes 406 | my_ref_pos_state_x = ma.masked_array(my_x_worker[[my_families_local]] - 407 | my_x_worker[:my_num_owned,None], mask=my_families_local.mask) 408 | my_ref_pos_state_y = ma.masked_array(my_y_worker[[my_families_local]] - 409 | my_y_worker[:my_num_owned,None], mask=my_families_local.mask) 410 | 411 | #Compute reference magnitude state of all nodes 412 | my_ref_mag_state = (my_ref_pos_state_x * my_ref_pos_state_x + 413 | my_ref_pos_state_y * my_ref_pos_state_y) ** 0.5 414 | 415 | #Initialize influence state 416 | my_influence_state = influence_function(my_ref_mag_state,HORIZON) 417 | my_influence_state.harden_mask() 418 | #Create a reference copy, used for normalizing damage to a reference state 419 | my_ref_influence_state = my_influence_state.copy() 420 | 421 | #Initialize the dummy volumes 422 | my_volumes = np.ones_like(my_x_worker,dtype=np.double) 423 | 424 | #Compute weighted volume 425 | weighted_volume = (my_influence_state * my_ref_mag_state * 426 | my_ref_mag_state * my_volumes[my_families_local]).sum(axis=1) 427 | 428 | #Create distributed vectors (owned only) 429 | my_disp_x = Epetra.Vector(balanced_map) 430 | my_disp_y = Epetra.Vector(balanced_map) 431 | my_force_x = Epetra.Vector(balanced_map) 432 | my_force_y = Epetra.Vector(balanced_map) 433 | 434 | #Create distributed worker vectors (owned + ghosts) 435 | my_disp_x_worker = Epetra.Vector(my_worker_map) 436 | my_disp_y_worker = Epetra.Vector(my_worker_map) 437 | my_force_x_worker = Epetra.Vector(my_worker_map) 438 | 
my_force_y_worker = Epetra.Vector(my_worker_map) 439 | 440 | #Temporary arrays 441 | my_velocity_x = np.zeros_like(my_disp_x) 442 | my_velocity_y = np.zeros_like(my_disp_y) 443 | my_accel_x = np.zeros_like(my_disp_x) 444 | my_accel_y = np.zeros_like(my_disp_y) 445 | my_damage = np.zeros_like(my_x) 446 | 447 | #Initialize output files 448 | vector_variables = ['displacement'] 449 | scalar_variables = ['damage'] 450 | #Instantiate output file object 451 | outfile = Ensight('output', vector_variables, scalar_variables, comm, 452 | viz_path=VIZ_PATH) 453 | #Print the requested output variables 454 | if rank == 0: 455 | print("Output variables requested:\n") 456 | for item in vector_variables: 457 | print(" " + item) 458 | for item in scalar_variables: 459 | print(" " + item) 460 | print(" ") 461 | 462 | #Find local nodes where boundary conditions should be applied 463 | bc1_local_node_set = boundary_condition_set(BC1_POLYGON,zip(my_x.ravel(),my_y.ravel()),balanced_map) 464 | bc2_local_node_set = boundary_condition_set(BC2_POLYGON,zip(my_x.ravel(),my_y.ravel()),balanced_map) 465 | 466 | #Calculate a stable time step or use the user-defined value 467 | if TIME_STEP is None: 468 | time_step = SAFTEY_FACTOR*compute_stable_time_step(my_families, 469 | my_ref_mag_state, my_volumes, my_num_owned, YOUNGS_MODULUS, 470 | RHO, HORIZON) 471 | else: 472 | time_step = TIME_STEP 473 | 474 | #Begin the main explicit time stepping loop, the VERBOSE variable sets either 475 | #a progress bar or verbose output here.
476 | if rank == 0: 477 | if VERBOSE: 478 | print("Running...") 479 | else: 480 | #Set up the progress bar 481 | widgets = ['Running: ', Percentage(), ' ', Bar(marker=RotatingMarker())] 482 | progress = ProgressBar(widgets=widgets, maxval=MAX_ITER).start() 483 | 484 | #Time stepping loop 485 | for iteration in range(MAX_ITER): 486 | 487 | #Set current time 488 | time = iteration * time_step 489 | 490 | #Print an information line 491 | if VERBOSE and rank == 0: 492 | print("iter = " + str(iteration) + " , time step = " + 493 | str(time_step) + " , sim time = " + str(time)) 494 | 495 | #Enforce boundary conditions 496 | my_disp_x[[bc1_local_node_set]] = time * BC1_VALUE 497 | my_disp_y[[bc1_local_node_set]] = 0.0 498 | my_velocity_x[[bc1_local_node_set]] = BC1_VALUE 499 | my_velocity_y[[bc1_local_node_set]] = 0.0 500 | # 501 | my_disp_x[[bc2_local_node_set]] = time * BC2_VALUE 502 | my_disp_y[[bc2_local_node_set]] = 0.0 503 | my_velocity_x[[bc2_local_node_set]] = BC2_VALUE 504 | my_velocity_y[[bc2_local_node_set]] = 0.0 505 | 506 | #Clear the internal force vectors 507 | my_force_x[:] = 0.0 508 | my_force_y[:] = 0.0 509 | my_force_x_worker[:] = 0.0 510 | my_force_y_worker[:] = 0.0 511 | 512 | #Communicate the displacements (previous or boundary condition imposed) 513 | #to the worker vectors to be used in the internal force computation 514 | my_disp_x_worker.Import(my_disp_x, worker_importer, Epetra.Insert) 515 | my_disp_y_worker.Import(my_disp_y, worker_importer, Epetra.Insert) 516 | 517 | #Compute the internal force 518 | compute_internal_force(my_force_x_worker, my_force_y_worker, 519 | my_x_worker, my_y_worker, my_disp_x_worker, my_disp_y_worker, 520 | my_families_local, my_ref_mag_state, my_volumes, YOUNGS_MODULUS, 521 | POISSON_RATIO, my_influence_state, my_num_owned) 522 | 523 | #Communicate values from worker vectors (owned + ghosts) back to owned only 524 | my_force_x.Export(my_force_x_worker, worker_exporter, Epetra.Add) 525 | my_force_y.Export(my_force_y_worker, worker_exporter,
Epetra.Add) 526 | 527 | #Compute the nodal acceleration 528 | my_accel_x_old = my_accel_x.copy() 529 | my_accel_y_old = my_accel_y.copy() 530 | my_accel_x = my_force_x / RHO 531 | my_accel_y = my_force_y / RHO 532 | 533 | #Compute the nodal velocity 534 | my_velocity_x += 0.5 * (my_accel_x_old + my_accel_x) * time_step 535 | my_velocity_y += 0.5 * (my_accel_y_old + my_accel_y) * time_step 536 | 537 | #Compute the new displacements 538 | my_disp_x += my_velocity_x * time_step + (0.5 * my_accel_x * 539 | time_step * time_step) 540 | my_disp_y += my_velocity_y * time_step + (0.5 * my_accel_y * 541 | time_step * time_step) 542 | 543 | #Compute stable time step 544 | #time_step = compute_stable_time_step(my_x, my_y, my_disp_x, my_disp_y, 545 | #my_families, my_ref_mag_state, my_weighted_volume, my_volumes, 546 | #my_number_of_nodes, BULK_MODULUS,RHO) 547 | 548 | #Dump plots 549 | if iteration % PLOT_DUMP_FREQ == 0 or iteration == (MAX_ITER-1): 550 | if VERBOSE and rank == 0: 551 | print("Writing plot file...")
552 | 553 | #Compute the damage 554 | my_damage = 1.0 - ma.mean(my_influence_state / 555 | my_ref_influence_state, axis=1) 556 | 557 | outfile.write_geometry_file_time_step(my_x, my_y) 558 | outfile.write_vector_variable_time_step('displacement', 559 | [my_disp_x, my_disp_y], time) 560 | outfile.write_scalar_variable_time_step('damage', 561 | my_damage, time) 562 | outfile.append_time_step(time) 563 | outfile.write_case_file(comm) 564 | 565 | #Update the progress bar 566 | if not VERBOSE and rank == 0: 567 | progress.update(iteration + 1) 568 | 569 | #Finalize plotfiles 570 | outfile.finalize() 571 | #Wrap up the progress bar printing 572 | if not VERBOSE and rank == 0: 573 | progress.finish() 574 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | _PD.py_ peridynamics example code 2 | =============================== 3 | 4 | _PD.py_ is a 2D explicit-time integration code that serves as an example 5 | of how one might write a peridynamics code in parallel with PyTrilinos. The 6 | code is heavily commented to provide as much insight as possible. 7 | 8 | To clone the repo: 9 | 10 | ```` 11 | git clone https://github.com/johntfoster/PDpy.git 12 | ```` 13 | 14 | ### External dependencies ### 15 | [PyTrilinos](http://trilinos.sandia.gov/packages/pytrilinos/) 16 | 17 | #### Python packages available via `pip` #### 18 | NumPy, SciPy, matplotlib, progressbar, mpi4py (required by PyTrilinos) 19 | 20 | These can be installed using the `requirements.txt` file: 21 | 22 | ```` 23 | pip install -r requirements.txt 24 | ```` 25 | 26 | To run the code: 27 | 28 | ```` 29 | mpiexec -np 4 python PD.py 30 | ```` 31 | 32 | where `4` can be replaced with an arbitrary number of processors.
33 | 34 | The results can be viewed in parallel with [ParaView](http://www.paraview.org/). 35 | 36 | -------------------------------------------------------------------------------- /ensight.py: -------------------------------------------------------------------------------- 1 | 2 | import os 3 | import socket 4 | 5 | class Ensight: 6 | 7 | def __init__(self, filename='output', vector_var_names=None, 8 | scalar_var_names=None,comm=None,viz_path=None): 9 | 10 | if comm != None and comm.NumProc() != 1: 11 | 12 | rank = comm.MyPID() 13 | size = comm.NumProc() 14 | 15 | directory = './ensight_files/' 16 | if not os.path.exists(directory): 17 | os.makedirs(directory) 18 | 19 | self.__geo_file = open(directory+filename+'.'+str(rank)+'.geo','w') 20 | 21 | self.__vv_names = vector_var_names 22 | self.__sv_names = scalar_var_names 23 | self.__fname = filename 24 | self.times = [] 25 | 26 | if self.__vv_names != None: 27 | self.__vector_var_files = [ open(directory+afilename+'.'+str(rank)+'.vec','w') 28 | for afilename in self.__vv_names ] 29 | 30 | if self.__sv_names != None: 31 | self.__scalar_var_files = [ open(directory+afilename+'.'+str(rank)+'.scl','w') 32 | for afilename in self.__sv_names ] 33 | 34 | self.__write_sos_file(comm,viz_path) 35 | 36 | else: 37 | 38 | directory = './ensight_files/' 39 | if not os.path.exists(directory): 40 | os.makedirs(directory) 41 | 42 | self.__geo_file = open(directory+filename+'.geo','w') 43 | 44 | self.__vv_names = vector_var_names 45 | self.__sv_names = scalar_var_names 46 | self.__fname = filename 47 | self.times = [] 48 | 49 | if self.__vv_names != None: 50 | self.__vector_var_files = [ open(directory+afilename+'.vec','w') 51 | for afilename in self.__vv_names ] 52 | 53 | if self.__sv_names != None: 54 | self.__scalar_var_files = [ open(directory+afilename+'.scl','w') 55 | for afilename in self.__sv_names ] 56 | 57 | 58 | return 59 | 60 | 61 | def write_case_file(self,comm=None): 62 | """Initialize Ensight case file""" 63 | 64 | if 
comm != None and comm.NumProc() != 1: 65 | 66 | rank = comm.MyPID() 67 | size = comm.NumProc() 68 | 69 | 70 | directory = './ensight_files/' 71 | self.__case_file = open(directory+self.__fname+'.'+str(rank)+'.case','w') 72 | 73 | print >> self.__case_file, 'FORMAT' 74 | print >> self.__case_file, 'type: ensight gold' 75 | print >> self.__case_file, 'GEOMETRY' 76 | print >> self.__case_file, 'model: 1 1 ' + self.__fname+'.'+str(rank)+'.geo' 77 | print >> self.__case_file, 'VARIABLE' 78 | 79 | if self.__vv_names != None: 80 | for item in self.__vv_names: 81 | print >> self.__case_file, ('vector per node: 1 1 ' + 82 | item + ' ' + item +'.'+str(rank)+'.vec') 83 | 84 | if self.__sv_names != None: 85 | for item in self.__sv_names: 86 | print >> self.__case_file, ('scalar per node: 1 1 ' + 87 | item + ' ' + item +'.'+str(rank)+'.scl') 88 | 89 | print >> self.__case_file, 'TIME' 90 | print >> self.__case_file, 'time set: 1' 91 | print >> self.__case_file, 'number of steps: ' + str(len(self.times)) 92 | print >> self.__case_file, 'time values: ' 93 | for item in self.times: 94 | print >> self.__case_file, item 95 | print >> self.__case_file, 'FILE' 96 | print >> self.__case_file, 'file set: 1' 97 | print >> self.__case_file, 'number of steps: ' + str(len(self.times)) 98 | 99 | self.__case_file.close() 100 | 101 | else: 102 | 103 | directory = './ensight_files/' 104 | self.__case_file = open(directory+self.__fname+'.case','w') 105 | 106 | print >> self.__case_file, 'FORMAT' 107 | print >> self.__case_file, 'type: ensight gold' 108 | print >> self.__case_file, 'GEOMETRY' 109 | print >> self.__case_file, 'model: 1 1 ' + self.__fname + '.geo' 110 | print >> self.__case_file, 'VARIABLE' 111 | 112 | if self.__vv_names != None: 113 | for item in self.__vv_names: 114 | print >> self.__case_file, ('vector per node: 1 1 ' + 115 | item + ' ' + item +'.vec') 116 | 117 | if self.__sv_names != None: 118 | for item in self.__sv_names: 119 | print >> self.__case_file, ('scalar per node: 1 
1 ' + 120 | item + ' ' + item +'.scl') 121 | 122 | print >> self.__case_file, 'TIME' 123 | print >> self.__case_file, 'time set: 1' 124 | print >> self.__case_file, 'number of steps: ' + str(len(self.times)) 125 | print >> self.__case_file, 'time values: ' 126 | for item in self.times: 127 | print >> self.__case_file, item 128 | print >> self.__case_file, 'FILE' 129 | print >> self.__case_file, 'file set: 1' 130 | print >> self.__case_file, 'number of steps: ' + str(len(self.times)) 131 | 132 | self.__case_file.close() 133 | 134 | return 135 | 136 | #Create Ensight Format geometry file 137 | def write_geometry_file_time_step(self, x, y): 138 | """ Initialize Ensight geometry file""" 139 | 140 | print >> self.__geo_file, 'BEGIN TIME STEP' 141 | print >> self.__geo_file, 'Ensight Gold geometry file\n' 142 | print >> self.__geo_file, 'node id off' 143 | print >> self.__geo_file, 'element id off' 144 | print >> self.__geo_file, 'part' 145 | print >> self.__geo_file, '1' 146 | print >> self.__geo_file, 'grid' 147 | print >> self.__geo_file, 'coordinates' 148 | print >> self.__geo_file, len(x) 149 | for item in x: 150 | print >> self.__geo_file, item 151 | for item in y: 152 | print >> self.__geo_file, item 153 | for item in range(len(x)): 154 | print >> self.__geo_file, 0.0 155 | print >> self.__geo_file, 'point' 156 | print >> self.__geo_file, len(x) 157 | for item in range(len(x)): 158 | print >> self.__geo_file, item + 1 159 | print >> self.__geo_file, 'END TIME STEP' 160 | 161 | return 162 | 163 | def write_vector_variable_time_step(self, variable_name, variable, time): 164 | 165 | write_index = None 166 | for index,aname in enumerate(self.__vv_names): 167 | if variable_name == aname: 168 | write_index = index 169 | break 170 | 171 | print >> self.__vector_var_files[write_index], 'BEGIN TIME STEP' 172 | print >> self.__vector_var_files[write_index], 'time = ', time 173 | print >> self.__vector_var_files[write_index], 'part' 174 | print >> 
self.__vector_var_files[write_index], '1' 175 | print >> self.__vector_var_files[write_index], 'coordinates' 176 | for xyz in variable: 177 | for item in xyz: 178 | print >> self.__vector_var_files[write_index], item 179 | 180 | print >> self.__vector_var_files[write_index], 'END TIME STEP' 181 | 182 | return 183 | 184 | 185 | def write_scalar_variable_time_step(self, variable_name, variable, time): 186 | 187 | write_index = None 188 | for index,aname in enumerate(self.__sv_names): 189 | if variable_name == aname: 190 | write_index = index 191 | break 192 | 193 | print >> self.__scalar_var_files[write_index], 'BEGIN TIME STEP' 194 | print >> self.__scalar_var_files[write_index], 'time = ', time 195 | print >> self.__scalar_var_files[write_index], 'part' 196 | print >> self.__scalar_var_files[write_index], '1' 197 | print >> self.__scalar_var_files[write_index], 'coordinates' 198 | for item in variable: 199 | print >> self.__scalar_var_files[write_index], item 200 | 201 | print >> self.__scalar_var_files[write_index], 'END TIME STEP' 202 | 203 | return 204 | 205 | 206 | def append_time_step(self,time): 207 | 208 | self.times.append(time) 209 | 210 | return 211 | 212 | 213 | def finalize(self): 214 | 215 | self.__geo_file.close() 216 | 217 | if self.__vv_names != None: 218 | for item in self.__vector_var_files: 219 | item.close() 220 | 221 | if self.__sv_names != None: 222 | for item in self.__scalar_var_files: 223 | item.close() 224 | 225 | return 226 | 227 | def __write_sos_file(self,comm=None,viz_path=None): 228 | 229 | if comm != None: 230 | 231 | rank = comm.MyPID() 232 | size = comm.NumProc() 233 | 234 | directory = './ensight_files/' 235 | 236 | if rank == 0: 237 | with open(directory+self.__fname+'.sos','w') as ff: 238 | 239 | print >> ff, "FORMAT" 240 | print >> ff, "type: master_server gold" 241 | print >> ff, "SERVERS" 242 | print >> ff, "number of servers: " + str(size) 243 | 244 | for server_number in range(size): 245 | 246 | print >> ff, "#Server " + 
str(server_number) 247 | print >> ff, "machine id: " + socket.gethostname() 248 | 249 | if viz_path != None: 250 | print >> ff, "executable: " + viz_path 251 | else: 252 | print >> ff, "executable: paraview" 253 | print >> ff, ("casefile: " + self.__fname + '.' 254 | + str(server_number) + '.case') 255 | -------------------------------------------------------------------------------- /materials.py: -------------------------------------------------------------------------------- 1 | 2 | # Simple constitutive model 3 | def bond_based_elastic_material(exten_state, weighted_volume, youngs_modulus, 4 | poisson_ratio, influence_state): 5 | """Computes the scalar force state. This is the state-based version of a 6 | bond based material.""" 7 | bulk_modulus = youngs_modulus / (3.0 * (1.0 - 2.0 * poisson_ratio)) 8 | 9 | #Return the force 10 | return (9.0 * bulk_modulus * influence_state / weighted_volume[:,None] * 11 | exten_state) 12 | 13 | # Linear peridynamic solid model 14 | def elastic_material(youngs_modulus, poisson_ratio, dilatation, exten_state, 15 | ref_mag_state, weighted_volume, influence_state): 16 | """Computes the force scalar state for a linear peridynamic solid.""" 17 | #Convert the elastic constants 18 | bulk_modulus = youngs_modulus / (3.0 * (1.0 - 2.0 * poisson_ratio)) 19 | shear_modulus = youngs_modulus / (2.0 * (1.0 + poisson_ratio)) 20 | 21 | #Compute the pressure 22 | pressure = -bulk_modulus * dilatation 23 | 24 | #Compute the deviatoric extension state 25 | dev_exten_state = (exten_state - dilatation[:,None] * ref_mag_state / 3.0) 26 | 27 | #Compute the peridynamic shear constant 28 | alpha = 15.0 * shear_modulus / weighted_volume 29 | 30 | #Compute the isotropic and deviatoric components of the force scalar state 31 | iso_force_state = (-3.0 * pressure[:,None] / weighted_volume[:,None] * 32 | influence_state * ref_mag_state) 33 | dev_force_state = alpha[:,None] * influence_state * dev_exten_state 34 | 35 | #Return the force scalar-state 36 | return iso_force_state + dev_force_state 37 | 38 | 
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy==1.7.1 2 | scipy==0.12.0 3 | matplotlib==1.2.1 4 | mpi4py==1.3 5 | progressbar==2.3 6 | --------------------------------------------------------------------------------
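Note: the update sequence in PD.py's time-stepping loop (dump lines 527-541: new acceleration from the internal force density, velocity from the averaged old and new accelerations, then displacement) is a velocity-Verlet scheme. A minimal standalone sketch of one step is below; the function name `explicit_time_step` and the plain NumPy array arguments are illustrative stand-ins for the Epetra vectors used in PD.py, not part of the repository.

```python
import numpy as np

def explicit_time_step(disp, vel, accel_old, force, rho, dt):
    """One explicit (velocity-Verlet) update mirroring the loop body in PD.py.

    disp, vel, accel_old : nodal arrays at time t
    force                : internal force density at time t + dt
    rho                  : mass density
    dt                   : time step
    """
    accel = force / rho                            # a(t+dt) = f / rho
    vel = vel + 0.5 * (accel_old + accel) * dt     # trapezoidal velocity update
    disp = disp + vel * dt + 0.5 * accel * dt**2   # displacement update
    return disp, vel, accel
```

Called once per iteration with the freshly exported force vectors, this reproduces the per-node arithmetic of the main loop.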