├── Figure1-1.png
├── PIPN_elasticity.png
├── README.md
├── PIPN_Elasticity.py
└── ElasticityDataGeneration.m
/Figure1-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ali-Stanford/PhysicsInformedPointNetElasticity/HEAD/Figure1-1.png
--------------------------------------------------------------------------------
/PIPN_elasticity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ali-Stanford/PhysicsInformedPointNetElasticity/HEAD/PIPN_elasticity.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Physics-Informed PointNet (PIPN) for Linear Elasticity
2 | 
3 |
4 | 
5 |
6 | **Author:** Ali Kashefi (kashefi@stanford.edu)
7 |
8 | **Description:** Implementation of Physics-Informed PointNet (PIPN) for weakly-supervised learning of 2D linear elasticity (plane stress) on multiple sets of irregular geometries
9 |
10 | **Version:** 1.0
9 |
10 | **Citations**
11 | If you use the code, please cite the following journal papers:
12 |
13 | #1 **[Physics-informed PointNet: On how many irregular geometries can it solve an inverse problem simultaneously? Application to linear elasticity](https://doi.org/10.1615/JMachLearnModelComput.2023050011)**
14 |
15 | ```bibtex
16 | @article{kashefi2023PIPNelasticity,
17 |   title={Physics-informed PointNet: On how many irregular geometries can it solve an inverse problem simultaneously? Application to linear elasticity},
18 |   author={Kashefi, Ali and Guibas, Leonidas J. and Mukerji, Tapan},
19 |   journal={Journal of Machine Learning for Modeling and Computing},
20 |   volume={4},
21 |   number={4},
22 |   year={2023},
23 |   publisher={Begell House Inc.}}
24 | ```
25 |
24 | #2 **[Physics-informed PointNet: A deep learning solver for steady-state incompressible flows and thermal fields on multiple sets of irregular geometries](https://doi.org/10.1016/j.jcp.2022.111510)**
25 |
26 | ```bibtex
27 | @article{Kashefi2022PIPN,
28 |   title = {Physics-informed PointNet: A deep learning solver for steady-state incompressible flows and thermal fields on multiple sets of irregular geometries},
29 |   journal = {Journal of Computational Physics},
30 |   volume = {468},
31 |   pages = {111510},
32 |   year = {2022},
33 |   issn = {0021-9991},
34 |   author = {Ali Kashefi and Tapan Mukerji}}
35 | ```
36 |
35 | **Abstract**
36 |
37 | Regular physics-informed neural networks (PINNs) predict the solution of partial differential equations using sparse labeled data but only over a single domain. On the other hand, fully supervised learning models are first trained usually over a few thousand domains with known solutions (i.e., labeled data), and then predict the solution over a few hundred unseen domains. Physics-informed PointNet (PIPN) is primarily designed to fill this gap between PINNs (as weakly supervised learning models) and fully supervised learning models. In this article, we demonstrate that PIPN predicts the solution of desired partial differential equations over a few hundred domains simultaneously, while it only uses sparse labeled data. This framework benefits fast geometric designs in the industry when only sparse labeled data are available. Particularly, we show that PIPN predicts the solution of a plane stress problem over more than 500 domains with different geometries, simultaneously. Moreover, we pioneer implementing the concept of remarkable batch size (i.e., the number of geometries fed into PIPN at each sub-epoch) into PIPN. Specifically, we try batch sizes of 7, 14, 19, 38, 76, and 133. Additionally, the effect of the PIPN size, symmetric function in the PIPN architecture, and static and dynamic weights for the component of the sparse labeled data in the loss function are investigated.
38 |
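**Loss function (as implemented in this repository)**
As a rough sketch of what `PIPN_Elasticity.py` minimizes (see `ComputeCost_SE` in the code): the loss combines the mean squared residuals of the plane-stress equilibrium equations with a weighted misfit on the sparse sensor data,

$$\mathcal{L} = \frac{1}{N}\sum_{i} \left(r_{1,i}^2 + r_{2,i}^2\right) + \lambda\,\frac{1}{M}\sum_{j} \left[(u_j - u_j^{\mathrm{sensor}})^2 + (v_j - v_j^{\mathrm{sensor}})^2\right],$$

where $N$ and $M$ are the numbers of interior and sensor points in a mini-batch, $r_1$ and $r_2$ are the residuals of the two equilibrium equations, and $\lambda$ is the sparse-data weight (static and equal to 50 in this version of the code).
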
39 | **Physics-informed PointNet on Wikipedia**
40 | A general description of physics-informed neural networks (PINNs) and their variants, such as PIPN, can be found on the following Wikipedia page:
41 | [Physics-informed PointNet (PIPN) for multiple sets of irregular geometries](https://en.wikipedia.org/wiki/Physics-informed_neural_networks#Physics-informed_PointNet_%28PIPN%29_for_multiple_sets_of_irregular_geometries)
42 |
43 | **Physics-informed PointNet Presentation in Machine Learning + X seminar 2022 at Brown University**
44 | If you are interested, you can watch the recorded machine learning seminar on PIPN at Brown University via the following links:
45 | [Video Presentation of PIPN at Brown University](https://www.dropbox.com/s/oafbjl6xaihotqa/GMT20220325-155140_Recording_2560x1440.mp4?dl=0)
46 | [YouTube Video](https://www.youtube.com/watch?v=faeHARnPSVE)
47 |
48 | **Questions?**
49 | If you have any questions or need assistance, please do not hesitate to contact Ali Kashefi (kashefi@stanford.edu) via email.
50 |
51 | **About the Author**
52 | Please see the author's website: [Ali Kashefi](https://web.stanford.edu/~kashefi/)
53 |
--------------------------------------------------------------------------------
/PIPN_Elasticity.py:
--------------------------------------------------------------------------------
1 | ##### In The Name of God #####
2 | ##### Physics-informed PointNet (PIPN) for weakly-supervised learning of 2D linear elasticity on multiple sets of irregular geometries #####
3 |
4 | #Author: Ali Kashefi (kashefi@stanford.edu)
5 |
6 | #Citations:
7 | #If you use the code, please cite the following journal papers:
8 |
9 |
10 | #@article{kashefi2023PIPNelasticity,
11 | # title={Physics-informed PointNet: On how many irregular geometries can it solve an inverse problem simultaneously? Application to linear elasticity},
12 | # author={Kashefi, Ali and Guibas, Leonidas J and Mukerji, Tapan},
13 | # journal={Journal of Machine Learning for Modeling and Computing},
14 | # volume={4},
15 | # number={4},
16 | # year={2023},
17 | # publisher={Begell House Inc.}}
18 |
19 |
20 | #@article{Kashefi2022PIPN,
21 | #title = {Physics-informed PointNet: A deep learning solver for steady-state incompressible flows and thermal fields on multiple sets of irregular geometries},
22 | #journal = {Journal of Computational Physics},
23 | #volume = {468},
24 | #pages = {111510},
25 | #year = {2022},
26 | #issn = {0021-9991},
27 | #author = {Ali Kashefi and Tapan Mukerji}}
28 |
29 | #Version: 1.0
30 |
31 | #Import Libraries
32 | import math
33 | from timeit import default_timer as timer
34 | import numpy as np
41 | import matplotlib
42 | matplotlib.use('Agg')
43 | import matplotlib.pyplot as plt
44 | plt.rcParams['font.family'] = 'serif'
45 | plt.rcParams['font.serif'] = ['Times New Roman'] + plt.rcParams['font.serif']
46 | plt.rcParams['font.size'] = '12'
49 | import tensorflow as tf
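#Note: this script uses the TensorFlow 1.x graph API (tf.placeholder, tf.Session).
#To run it under TensorFlow 2.x, a compatibility shim is needed, e.g.:
# import tensorflow.compat.v1 as tf
# tf.disable_v2_behavior()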
50 | from tensorflow.python.keras import optimizers
51 | from tensorflow.python.keras import backend
52 | from tensorflow.python.keras.models import Model
53 | from tensorflow.python.keras.layers import Input, Dense, Reshape
54 | from tensorflow.python.keras.layers import Convolution1D, MaxPooling1D, AveragePooling1D
55 | from tensorflow.python.keras.layers import Lambda, concatenate
56 | from tensorflow.python.keras import initializers
57 | from tensorflow import keras
61 | #import h5py
62 |
63 |
64 | #Global Variables
65 | variation = 3
66 | var_ori = 2
67 | data_square = int(variation*int(360/4)/var_ori) - 1 # 134
68 | data_pentagon = int(variation*int(360/5)/var_ori) - 1 # 107
69 | data_heptagon = int(variation*int(360/7)/var_ori) # 76
70 | data_octagon = int(variation*int(360/8)/var_ori) - 1 # 66
71 | data_nonagon = int(variation*int(360/9)/var_ori) - 1 # 59
72 | data_hexagon = int(variation*int(360/6)/var_ori) # 90
73 |
74 | #total number of domains: 134 + 107 + 76 + 66 + 59 + 90 = 532 = 2*2*7*19
75 | #(536 without the four "- 1" adjustments)
76 |
77 | #number of domains
78 |
79 | data = data_square + data_pentagon + data_hexagon + data_heptagon + data_octagon + data_nonagon
80 |
81 | Nd = 2 #dimension of problems, usually 2 or 3
82 | N_boundary = 1 #placeholder; reset from the data below
83 | num_points = 1100 #placeholder; reset to 1204 below
84 | category = 2 #number of variables, displacement_x and displacement_y
85 | full_list = [] #indices of all points in a point cloud
86 | BC_list = [] #indices of boundary points
87 | interior_list = [] #indices of interior points
88 |
89 | #Training parameters
90 | J_Loss = 0.00001
91 | LR = 0.0003 #learning rate
92 | Np = 3000 #5000 #Number of epochs
93 | Nb = 28 # 2,4,7,14,19,28,38 #batch size, note: Nb should be less than data
94 | Ns = 1.0 # 4,2,1,0.5,0.25,0.125 scaling the network
95 | pointer = np.zeros(shape=[Nb],dtype=int) #to save indices of batch numbers
96 |
97 | #Material properties (plane stress)
98 |
99 | E = 1.0
100 | nu = 0.3
101 | G = E/(2*(1+nu))
102 | Ko = 1.0
103 | alpha = 1.0
104 |
105 | c11 = E/(1.0-nu*nu)
106 | c12 = c11*nu
107 | c22 = c11
108 | c66 = G
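#The coefficients above define the plane-stress constitutive law enforced by the PDE loss:
#  sigma_xx = c11*du/dx + c12*dv/dy
#  sigma_yy = c12*du/dx + c22*dv/dy
#  sigma_xy = c66*(du/dy + dv/dx)
#with c11 = c22 = E/(1 - nu^2), c12 = nu*c11, and c66 = G = E/(2*(1 + nu)).
#Thermal loading enters through body-force terms fx, fy = -E*alpha/(1 - nu)*dT/dx(y) below.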
109 |
110 | #Some functions
111 | def TANHscale(b):
112 | return tf.tanh(1.0*b)
113 |
114 | def mat_mul(AA, BB):
115 | return tf.matmul(AA, BB)
116 |
117 | def exp_dim(global_feature, num_points):
118 | return tf.tile(global_feature, [1, num_points, 1])
119 |
120 | def compute_u(Y):
121 | return Y[0][:,:,0]
122 |
123 | def compute_v(Y):
124 | return Y[0][:,:,1]
125 |
126 | def compute_p(Y):
127 | return Y[0][:,:,2]
128 |
129 | def compute_dp_dx(X,Y):
130 | return backend.gradients(Y[0][:,:,2], X)[0][:,:,0]
131 |
132 | def compute_dp_dy(X,Y):
133 | return backend.gradients(Y[0][:,:,2], X)[0][:,:,1]
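#Note: compute_p, compute_dp_dx, and compute_dp_dy read output channel 2, which does not
#exist here (category = 2, channels 0 and 1 only); they appear to be unused leftovers from
#the flow/thermal version of PIPN and are kept for reference.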
134 |
135 | def map(index):
136 | return X_train[0][index][0], X_train[0][index][1]
137 |
138 | def find(x_i,y_i,data_number,find_value):
139 | call = -1
140 | for index in range(num_points):
141 | if np.sqrt(np.power(X_train[data_number][index][0]-x_i,2.0) + np.power(X_train[data_number][index][1]-y_i,2.0)) < np.power(10.0,find_value): #np.power(10.0,-4.0):
142 | call = index
143 | break
144 | return call
145 |
146 | def plotCost(Y,name,title):
147 | plt.plot(Y)
148 | plt.yscale('log')
149 | plt.xlabel('iteration')
150 | plt.ylabel('loss')
151 | plt.title(title)
152 | plt.savefig(name+'.png',dpi = 300,bbox_inches='tight')
153 | plt.savefig(name+'.eps',bbox_inches='tight')
154 | plt.clf()
155 | #plt.show()
156 |
157 | def plotGeometry2DPointCloud(X,name,i):
158 | x_p = X[i,:,0]
159 | y_p = X[i,:,1]
160 | plt.scatter(x_p, y_p)
161 | plt.xlabel('x')
162 | plt.ylabel('y')
163 | plt.gca().set_aspect('equal', adjustable='box')
164 | plt.savefig(name+'.png',dpi=300)
165 | #plt.savefig(name+'.eps')
166 | plt.clf()
167 | #plt.show()
168 |
169 | def plotSolutions2DPointCloud(S,index,name,flag,title,coefficient):
170 | U = np.zeros(num_points,dtype=float)
171 | if flag==False:
172 | for i in range(num_points):
173 | U[i] = S[index][i]
174 | if flag == True:
175 | U = S
176 | x_p = X_train[index,:,0]
177 | y_p = X_train[index,:,1]
178 | marker_size= 1.0
179 |
180 | plt.scatter(x_p, y_p, marker_size, U, cmap='jet')
181 | cbar= plt.colorbar()
182 | plt.locator_params(axis="x", nbins=6)
183 | plt.locator_params(axis="y", nbins=6)
184 | plt.title(title)
185 | plt.xlabel('x (m)')
186 | plt.ylabel('y (m)')
187 |
188 | plt.xticks(np.arange(min(x_p), max(x_p)+0.2, 0.2))
189 | plt.yticks(np.arange(min(y_p), max(y_p)+0.2, 0.2))
190 | plt.xticks(rotation = 45)
191 |
192 | #plt.title(name)
193 | plt.gca().set_aspect('equal', adjustable='box')
194 | plt.savefig(name+'.png',bbox_inches="tight",dpi=300)
195 | #plt.savefig(name+'.eps')
196 | plt.clf()
197 | #plt.show()
198 |
199 | def plotErrors2DPointCloud(Uexact,Upredict,index,name,title, coef):
200 | Up = np.zeros(num_points,dtype=float)
201 | for i in range(num_points):
202 | Up[i] = Upredict[index][i]
203 |
204 | x_p = X_train[index,:,0]
205 | y_p = X_train[index,:,1]
206 | marker_size= 1.0
207 |
208 | plt.scatter(x_p, y_p, marker_size, np.absolute(Uexact-Up), cmap='jet')
209 | cbar= plt.colorbar()
210 | plt.locator_params(axis="x", nbins=6)
211 | plt.locator_params(axis="y", nbins=6)
212 | plt.title(title)
213 | plt.xlabel('x (m)')
214 | plt.ylabel('y (m)')
215 | #plt.title(name)
216 | plt.xticks(np.arange(min(x_p), max(x_p)+0.2, 0.2))
217 | plt.yticks(np.arange(min(y_p), max(y_p)+0.2, 0.2))
218 | plt.xticks(rotation = 45)
219 |
220 |
221 | plt.gca().set_aspect('equal', adjustable='box')
222 | plt.savefig(name+'.png',bbox_inches="tight",dpi=300)
223 | #plt.savefig(name+'.eps')
224 | plt.clf()
225 | #plt.show()
226 |
227 | def computeRMSE(Uexact,Upredict,index):
228 |
229 | Up = np.zeros(num_points,dtype=float)
230 | for i in range(num_points):
231 | Up[i] = Upredict[index][i]
232 | rmse_value = np.sqrt((1.0/num_points)*(np.sum(np.square(Uexact-Up))))
233 | return rmse_value
234 |
235 | def computeRelativeL2(Uexact,Upredict,index):
236 |
237 | Up = np.zeros(num_points,dtype=float)
238 | for i in range(num_points):
239 | Up[i] = Upredict[index][i]
240 |
241 | sum1=0
242 | sum2=0
243 | for i in range(num_points):
244 | sum1 += np.square(Up[i]-Uexact[i])
245 | sum2 += np.square(Uexact[i])
246 |
247 | return np.sqrt(sum1/sum2)
248 |
249 | #Reading Data
250 | num_gross = 6000
251 | Gross_train = np.zeros(shape=(data, num_gross, Nd),dtype=float)
252 | num_point_train = np.zeros(shape=(data),dtype=int)
253 | Gross_train_CFD = np.zeros(shape=(data, num_gross, category),dtype=float) #change 4 in the future
254 |
255 |
256 | x_fire = np.zeros(shape=(num_gross),dtype=float) + 100 #100 marks unread entries
257 | y_fire = np.zeros(shape=(num_gross),dtype=float) + 100
258 | u_fire = np.zeros(shape=(num_gross),dtype=float) + 100
259 | v_fire = np.zeros(shape=(num_gross),dtype=float) + 100
260 | T_fire = np.zeros(shape=(num_gross),dtype=float) + 100
261 | dTdx_fire = np.zeros(shape=(num_gross),dtype=float) + 100
262 | dTdy_fire = np.zeros(shape=(num_gross),dtype=float) + 100
263 |
264 | def readFire(number,name):
265 |     #read one geometry's data files into the global buffers
266 |     #(coordinates x/y, displacements u/v, temperature T, and its gradients)
267 |     fields = [('x', x_fire), ('y', y_fire), ('u', u_fire), ('v', v_fire),
268 |               ('dTdx', dTdx_fire), ('dTdy', dTdy_fire), ('T', T_fire)]
269 |     for suffix, buf in fields:
270 |         with open('/scratch/users/kashefi/PIPNSolid/data20/'+name+suffix+str(number)+'.txt', 'r') as f:
271 |             for coord, line in enumerate(f):
272 |                 buf[coord] = float(line.split()[0])
314 |
315 |
316 | x_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
317 | y_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
318 | u_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
319 | v_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
320 | T_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
321 | dTdx_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
322 | dTdy_fire_load = np.zeros(shape=(data,num_gross),dtype=float) + 100
323 |
324 | #Reading data
325 |
326 | list_data = [data_square, data_pentagon, data_hexagon, data_heptagon, data_octagon, data_nonagon]
327 | list_name = ['square', 'pentagon', 'hexagon', 'heptagon', 'octagon', 'nanogan'] #'nanogan' [sic] kept as-is to match the data file names
328 |
329 | counter = 0
330 | for k in range(len(list_data)):
331 | for i in range(list_data[k]):
332 | readFire(2*i+1,list_name[k])
333 | for j in range(num_gross):
334 |
335 | x_fire_load[counter][j] = x_fire[j]
336 | y_fire_load[counter][j] = y_fire[j]
337 | u_fire_load[counter][j] = u_fire[j]
338 | v_fire_load[counter][j] = v_fire[j]
339 | T_fire_load[counter][j] = T_fire[j]
340 | dTdx_fire_load[counter][j] = dTdx_fire[j]
341 | dTdy_fire_load[counter][j] = dTdy_fire[j]
342 | counter += 1
343 |
344 |
345 | plt.scatter(x_fire,y_fire,s=1.0)
346 | plt.xlabel('x (m)')
347 | plt.ylabel('y (m)')
348 | plt.title('Point cloud of the last loaded geometry')
349 | plt.gca().set_aspect('equal', adjustable='box')
350 | plt.savefig('fire.png',dpi=300)
351 | plt.clf()
352 |
353 | #determination of boundary
354 | x1 = -1
355 | x2 = -1
356 | x3 = 1
357 | x4 = 1
358 |
359 | y1 = -1
360 | y2 = 1
361 | y3 = -1
362 | y4 = 1
363 |
364 | car_bound = 0
365 | for i in range(len(T_fire)):
366 | if (T_fire[i]==0 or T_fire[i]==1):
367 | car_bound += 1
368 |
369 | index_bound = np.zeros(shape=(data,car_bound),dtype=int)
370 |
371 | for j in range(data):
372 | bound = 0
373 | for i in range(num_gross):
374 | if (T_fire_load[j][i]==0 or T_fire_load[j][i]==1):
375 | if(bound==car_bound):
376 | continue
377 | index_bound[j][bound] = i
378 | bound += 1
379 |
380 | print('index bound')
381 | print(car_bound)
382 |
383 | N_boundary = car_bound #number of boundary points per geometry
384 | num_points = 1204 #memory sensitive
385 |
386 | interior_point = num_points - N_boundary
387 | X_train = np.random.normal(size=(data, num_points, Nd))
388 | Fire_train = np.random.normal(size=(data, num_points, category + 2))
389 | X_train_mini = np.random.normal(size=(Nb, num_points, Nd))
390 |
391 | for i in range(data):
392 | for k in range(N_boundary):
393 | X_train[i][k][0] = x_fire_load[i][index_bound[i][k]]
394 | X_train[i][k][1] = y_fire_load[i][index_bound[i][k]]
395 | Fire_train[i][k][0] = u_fire_load[i][index_bound[i][k]]
396 | Fire_train[i][k][1] = v_fire_load[i][index_bound[i][k]]
397 | Fire_train[i][k][2] = dTdx_fire_load[i][index_bound[i][k]]
398 | Fire_train[i][k][3] = dTdy_fire_load[i][index_bound[i][k]]
399 |
400 | index_rest = np.zeros(shape=(interior_point),dtype=int)
401 | cum = 0
402 | flag = True
403 | for k in range(num_points):
404 | for j in range(car_bound):
405 | if k == index_bound[i][j]:
406 | flag = False
407 | if (flag==True):
408 | if (cum==interior_point):
409 | continue
410 | index_rest[cum] = k
411 | cum += 1
412 | flag = True
413 |
414 | for k in range(N_boundary,num_points):
415 | X_train[i][k][0] = x_fire_load[i][index_rest[k-N_boundary]]
416 | X_train[i][k][1] = y_fire_load[i][index_rest[k-N_boundary]]
417 |
418 | Fire_train[i][k][0] = u_fire_load[i][index_rest[k-N_boundary]]
419 | Fire_train[i][k][1] = v_fire_load[i][index_rest[k-N_boundary]]
420 |
421 | Fire_train[i][k][2] = dTdx_fire_load[i][index_rest[k-N_boundary]]
422 | Fire_train[i][k][3] = dTdy_fire_load[i][index_rest[k-N_boundary]]
423 |
424 | #plotting
425 |
426 | plt.scatter(X_train[0,:,0],X_train[0,:,1],s=1.0)
427 | plt.xlabel('x (m)')
428 | plt.ylabel('y (m)')
429 | plt.title('Point cloud of geometry 0')
430 | plt.gca().set_aspect('equal', adjustable='box')
431 | plt.savefig('pre_sparse.png',dpi=300)
432 | plt.clf()
433 |
434 | #sensor setting
435 | k_x = 9
436 | k_y = 9
437 |
438 | sparse_n = int(k_x*k_y) # 81=9*9
439 | sparse_list = [[-1 for i in range(sparse_n)] for j in range(data)]
440 |
441 | for nn in range(data):
442 | x1 = np.min(X_train[nn,:,0])
443 | x2 = np.max(X_train[nn,:,0])
444 | y1 = np.min(X_train[nn,:,1])
445 | y2 = np.max(X_train[nn,:,1])
446 | Lx = np.absolute(x2-x1)
447 | Ly = np.absolute(y2-y1)
451 | deltax = Lx/(k_x-1.0) #grid spacing for a k_x-by-k_y sensor grid over the bounding box
452 | deltay = Ly/(k_y-1.0)
453 |
454 | counting = 0
455 | x_pre_sparse = np.random.normal(size=(k_x*k_y))
456 | y_pre_sparse = np.random.normal(size=(k_x*k_y))
457 | for i in range(k_x):
458 | for j in range(k_y):
459 | x_pre_sparse[counting] = i*deltax + x1
460 | y_pre_sparse[counting] = j*deltay + y1
461 | counting += 1
462 |
463 | for i in range(k_x*k_y):
464 | x_i = x_pre_sparse[i]
465 | y_i = y_pre_sparse[i]
466 | di = np.random.normal(size=(num_points,2))
467 | for index in range(num_points):
468 | di[index][0] = 1.0*index
469 | di[index][1] = np.sqrt(np.power(X_train[nn][index][0]-x_i,2.0) + np.power(X_train[nn][index][1]-y_i,2.0))
470 | di = di[np.argsort(di[:, 1])]
471 | sparse_list[nn][i] = int(di[0][0])
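#Sensor selection: a k_x-by-k_y uniform grid is laid over each geometry's bounding box, and
#every grid node is snapped to the nearest point of that geometry's point cloud; the resulting
#point indices in sparse_list[nn] act as the virtual sensors providing sparse labeled data.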
472 |
473 | print('number of sensors')
474 | print(sparse_n)
475 |
476 | print('number of data')
477 | print(data)
478 |
479 | #plot sensor locations
480 | sparse_list_q = np.array(sparse_list).astype(int)
481 | for s in range(data):
482 | plt.scatter(X_train[s,:,0],X_train[s,:,1],s=1.0)
483 | plt.scatter(X_train[s,sparse_list_q[s,:],0],X_train[s,sparse_list_q[s,:],1],s=20.0,color='red',marker='<')
484 | plt.xlabel('x (m)')
485 | plt.ylabel('y (m)')
486 | plt.xticks(np.arange(min(X_train[s,:,0]), max(X_train[s,:,0])+0.2, 0.2))
487 | plt.yticks(np.arange(min(X_train[s,:,1]), max(X_train[s,:,1])+0.2, 0.2))
488 | plt.xticks(rotation = 45)
489 | plt.title('Sensor locations')
490 | plt.gca().set_aspect('equal', adjustable='box')
491 | plt.savefig('sensor'+str(s)+'.png',bbox_inches="tight",dpi=300)
492 | plt.clf()
493 |
494 | def problemSet():
495 |
496 |     #boundary points occupy the first N_boundary slots of each point cloud
497 |     for k in range(N_boundary):
498 |         BC_list.append(k)
499 |
500 |     for i in range(num_points):
501 |         full_list.append(i)
502 |
503 |     #the remaining slots hold the interior points
504 |     for i in range(N_boundary, num_points):
505 |         interior_list.append(i)
507 |
508 | problemSet()
509 |
510 | u_sparse = np.random.normal(size=(data, sparse_n))
511 | v_sparse = np.random.normal(size=(data, sparse_n))
512 | x_sparse = np.random.normal(size=(data, sparse_n))
513 | y_sparse = np.random.normal(size=(data, sparse_n))
514 |
515 | for i in range(data):
516 | for k in range(sparse_n):
517 | u_sparse[i][k] = Fire_train[i][sparse_list[i][k]][0]
518 | v_sparse[i][k] = Fire_train[i][sparse_list[i][k]][1]
519 |
520 | x_sparse[i][k] = X_train[i][sparse_list[i][k]][0]
521 | y_sparse[i][k] = X_train[i][sparse_list[i][k]][1]
522 |
523 | fire_u = np.zeros(data*num_points)
524 | fire_v = np.zeros(data*num_points)
525 | fire_dTdx = np.zeros(data*num_points)
526 | fire_dTdy = np.zeros(data*num_points)
527 |
528 | counter = 0
529 | for j in range(data):
530 | for i in range(num_points):
531 | fire_u[counter] = Fire_train[j][i][0]
532 | fire_v[counter] = Fire_train[j][i][1]
533 | fire_dTdx[counter] = Fire_train[j][i][2]
534 | fire_dTdy[counter] = Fire_train[j][i][3]
535 | counter += 1
536 |
537 | def CFDsolution_u(index):
538 | return Fire_train[index,:,0]
539 |
540 | def CFDsolution_v(index):
541 | return Fire_train[index,:,1]
542 |
543 | #PointNet
544 | input_points = Input(shape=(num_points, Nd))
545 | g = Convolution1D(int(64*Ns), 1, input_shape=(num_points, Nd), activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(input_points) #input has Nd=2 channels (PointNet originally used 3)
546 | g = Convolution1D(int(64*Ns), 1, input_shape=(num_points, Nd), activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(g)
547 |
548 | seg_part1 = g
549 | g = Convolution1D(int(64*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(g)
550 | g = Convolution1D(int(128*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(g)
551 | g = Convolution1D(int(1024*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(g)
552 |
553 | # global_feature
554 | global_feature = MaxPooling1D(pool_size=num_points)(g)
555 | #global_feature = AveragePooling1D(pool_size=num_points)(g)
556 | global_feature = Lambda(exp_dim, arguments={'num_points': num_points})(global_feature)
557 |
558 | # point_net_seg
559 | c = concatenate([seg_part1, global_feature])
560 | c = Convolution1D(int(512*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(c)
561 | c = Convolution1D(int(256*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(c)
562 | c = Convolution1D(int(128*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(c)
563 | c = Convolution1D(int(128*Ns), 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(c)
564 | prediction = Convolution1D(category, 1, activation='tanh',kernel_initializer=initializers.RandomNormal(stddev=0.01), bias_initializer=initializers.Zeros())(c)
565 | model = Model(inputs=input_points, outputs=prediction)
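#Architecture summary: the kernel-size-1 Convolution1D layers act as shared MLPs applied to
#each point independently; MaxPooling1D over all points is the symmetric function that yields
#a permutation-invariant global feature, which is tiled (exp_dim) and concatenated with the
#per-point features (seg_part1) before the segmentation head predicts (u, v) at every point.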
566 |
567 | weight = np.zeros(Np)
568 | def set_weight():
569 | for i in range(len(weight)):
570 | weight[i] = 50
571 |
572 | set_weight()
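#A static weight of 50 scales the sparse-data term of the loss at every epoch; the dynamic
#weighting strategies studied in the paper can be implemented by varying weight[i] here.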
573 |
574 | pose_weight = tf.placeholder(tf.float32, None)
575 |
576 | cost_BC = tf.placeholder(tf.float32, None)
577 | cost_sparse = tf.placeholder(tf.float32, None)
578 | cost_interior = tf.placeholder(tf.float32, None)
579 |
580 | pose_BC = tf.placeholder(tf.int32, None) #Taken from truth
581 | pose_sparse = tf.placeholder(tf.int32, None) #Taken from truth
582 | pose_interior = tf.placeholder(tf.int32, None) #Taken from truth
583 |
584 | pose_BC_p = tf.placeholder(tf.int32, None) #Taken from prediction
585 | pose_sparse_p = tf.placeholder(tf.int32, None) #Taken from prediction
586 | pose_interior_p = tf.placeholder(tf.int32, None) #Taken from prediction
587 |
588 | def ComputeCost_PDE(X,Y):
589 |
590 | u_in = tf.gather(tf.reshape(Y[0][:,:,0],[-1]),pose_interior_p)
591 | v_in = tf.gather(tf.reshape(Y[0][:,:,1],[-1]),pose_interior_p)
592 | du_dx_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,0], X)[0][:,:,0],[-1]),pose_interior_p) #du/dx in domain
593 | d2u_dx2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,0], X)[0][:,:,0], X)[0][:,:,0],[-1]),pose_interior_p) #d2u/dx2 in domain
594 | du_dy_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,0], X)[0][:,:,1],[-1]),pose_interior_p) #du/dy in domain
595 | d2u_dy2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,0], X)[0][:,:,1], X)[0][:,:,1], [-1]),pose_interior_p) #d2u/dy2 in domain
596 | dv_dx_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,1], X)[0][:,:,0],[-1]),pose_interior_p) #dv/dx in domain
597 | d2v_dx2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,1], X)[0][:,:,0], X)[0][:,:,0], [-1]),pose_interior_p) #d2v/dx2 in domain
598 | dv_dy_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,1], X)[0][:,:,1],[-1]),pose_interior_p) #dv/dy in domain
599 | d2v_dy2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,1], X)[0][:,:,1], X)[0][:,:,1], [-1]),pose_interior_p) #d2v/dy2 in domain
600 |
601 | dv_dx_dy_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,1], X)[0][:,:,0], X)[0][:,:,1], [-1]),pose_interior_p)
602 | du_dx_dy_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,0], X)[0][:,:,0], X)[0][:,:,1], [-1]),pose_interior_p)
603 |
604 | dT_dx_truth = tf.gather(fire_dTdx, pose_interior)
605 | dT_dx_truth = tf.cast(dT_dx_truth, dtype='float32')
606 |
607 | dT_dy_truth = tf.gather(fire_dTdy, pose_interior)
608 | dT_dy_truth = tf.cast(dT_dy_truth, dtype='float32')
609 |
610 | fx_in = -E*alpha/(1-nu)*dT_dx_truth
611 | fy_in = -E*alpha/(1-nu)*dT_dy_truth
612 |
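#r1 and r2 below are the residuals of the displacement-form (Navier) equilibrium equations
#for plane stress with thermally induced body forces; both vanish at the exact solution.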
613 | r1 = (-c11*d2u_dx2_in - c12*dv_dx_dy_in - c66*d2u_dy2_in - c66*dv_dx_dy_in) + (fx_in)
614 | r2 = (-c66*du_dx_dy_in - c66*d2v_dx2_in - c12*du_dx_dy_in - c22*d2v_dy2_in) + (fy_in)
615 |
616 | pde_loss = tf.reduce_mean(tf.square(r1)+tf.square(r2))
617 |
618 | return (1.0*pde_loss)
619 |
620 | def ComputeCost_SE(X,Y):
621 |
622 | u_in = tf.gather(tf.reshape(Y[0][:,:,0],[-1]),pose_interior_p)
623 | v_in = tf.gather(tf.reshape(Y[0][:,:,1],[-1]),pose_interior_p)
624 | du_dx_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,0], X)[0][:,:,0],[-1]),pose_interior_p) #du/dx in domain
625 | d2u_dx2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,0], X)[0][:,:,0], X)[0][:,:,0],[-1]),pose_interior_p) #d2u/dx2 in domain
626 | du_dy_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,0], X)[0][:,:,1],[-1]),pose_interior_p) #du/dy in domain
627 | d2u_dy2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,0], X)[0][:,:,1], X)[0][:,:,1], [-1]),pose_interior_p) #d2u/dy2 in domain
628 | dv_dx_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,1], X)[0][:,:,0],[-1]),pose_interior_p) #dv/dx in domain
629 | d2v_dx2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,1], X)[0][:,:,0], X)[0][:,:,0], [-1]),pose_interior_p) #d2v/dx2 in domain
630 | dv_dy_in = tf.gather(tf.reshape(backend.gradients(Y[0][:,:,1], X)[0][:,:,1],[-1]),pose_interior_p) #dv/dy in domain
631 | d2v_dy2_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,1], X)[0][:,:,1], X)[0][:,:,1], [-1]),pose_interior_p) #d2v/dy2 in domain
632 |
633 | dv_dx_dy_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,1], X)[0][:,:,0], X)[0][:,:,1], [-1]),pose_interior_p)
634 | du_dx_dy_in = tf.gather(tf.reshape(backend.gradients(backend.gradients(Y[0][:,:,0], X)[0][:,:,0], X)[0][:,:,1], [-1]),pose_interior_p)
635 |
636 | dT_dx_truth = tf.gather(fire_dTdx, pose_interior)
637 | dT_dx_truth = tf.cast(dT_dx_truth, dtype='float32')
638 |
639 | dT_dy_truth = tf.gather(fire_dTdy, pose_interior)
640 | dT_dy_truth = tf.cast(dT_dy_truth, dtype='float32')
641 |
642 | fx_in = -E*alpha/(1-nu)*dT_dx_truth
643 | fy_in = -E*alpha/(1-nu)*dT_dy_truth
644 |
645 | r1 = (-c11*d2u_dx2_in - c12*dv_dx_dy_in - c66*d2u_dy2_in - c66*dv_dx_dy_in) + (fx_in)
646 | r2 = (-c66*du_dx_dy_in - c66*d2v_dx2_in - c12*du_dx_dy_in - c22*d2v_dy2_in) + (fy_in)
647 |
648 | u_boundary = tf.gather(tf.reshape(Y[0][:,:,0], [-1]), pose_BC_p)
649 | u_sparse = tf.gather(tf.reshape(Y[0][:,:,0], [-1]), pose_sparse_p)
650 | v_boundary = tf.gather(tf.reshape(Y[0][:,:,1], [-1]), pose_BC_p)
651 | v_sparse = tf.gather(tf.reshape(Y[0][:,:,1], [-1]), pose_sparse_p)
652 |
653 | boundary_u_truth = tf.gather(fire_u, pose_BC)
654 | boundary_u_truth = tf.cast(boundary_u_truth, dtype='float32')
655 |
656 | sparse_u_truth = tf.gather(fire_u, pose_sparse)
657 | sparse_u_truth = tf.cast(sparse_u_truth, dtype='float32')
658 |
659 | boundary_v_truth = tf.gather(fire_v, pose_BC)
660 | boundary_v_truth = tf.cast(boundary_v_truth, dtype='float32')
661 |
662 | sparse_v_truth = tf.gather(fire_v, pose_sparse)
663 | sparse_v_truth = tf.cast(sparse_v_truth, dtype='float32')
664 |
665 | PDE_cost = tf.reduce_mean(tf.square(r1)+tf.square(r2))
666 | Sparse_cost = tf.reduce_mean(tf.square(u_sparse - sparse_u_truth) + tf.square(v_sparse - sparse_v_truth))
667 |
668 | w_sparse = tf.cast(pose_weight, dtype='float32')
669 |
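#Note: u_boundary and v_boundary are assembled above but do not enter the returned loss;
#only the PDE residual plus the weighted sparse-data misfit is minimized.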
670 | return (PDE_cost + w_sparse*Sparse_cost)
671 |
672 | def build_model_Elasticity():
673 |
674 | LOSS_Total = []
675 | LOSS_PDE = []
676 | min_loss = 1000
677 | converge_iteration = 0
678 | criteria = J_Loss
679 |
680 | cost = ComputeCost_SE(model.inputs,model.outputs)
681 | PDEcost = ComputeCost_PDE(model.inputs,model.outputs)
682 | vel_u = compute_u(model.outputs)
683 | vel_v = compute_v(model.outputs)
684 |
685 | u_final = np.zeros((data,num_points),dtype=float)
686 | v_final = np.zeros((data,num_points),dtype=float)
687 |
688 | optimizer = tf.train.AdamOptimizer(learning_rate = LR , beta1=0.9, beta2=0.999, epsilon=0.000001).minimize(loss = cost)
689 | init = tf.global_variables_initializer()
690 |
691 | with tf.Session() as sess:
692 | sess.run(init)
693 |
694 | start_ite = timer()
695 |
696 | # training loop
697 | for epoch in range(Np):
698 |
699 | temp_cost = 0
700 | PDE_cost = 0
701 | arr = np.arange(data)
702 | np.random.shuffle(arr)
703 | for sb in range(int(data/Nb)):
704 | pointer = arr[int(sb*Nb):int((sb+1)*Nb)]
705 |
706 | group_BC = np.zeros(int(len(pointer)*len(BC_list)), dtype=int)
707 | group_sparse = np.zeros(int(len(pointer)*sparse_n), dtype=int)
708 | group_interior = np.zeros(int(len(pointer)*len(interior_list)), dtype=int)
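#Point clouds are flattened for tf.gather: global index = geometry_index*num_points + point_index.
#The pose_* arrays use the dataset-wide geometry index (to address the ground-truth fire_* arrays),
#while the pose_*_p arrays use the position within the mini-batch (to address the network output).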
709 |
710 | catch = 0
711 | for ii in range(len(pointer)):
712 | for jj in range(len(BC_list)):
713 | group_BC[catch] = int(pointer[ii]*num_points + jj)
714 | catch += 1
715 |
716 | catch = 0
717 | for ii in range(len(pointer)):
718 | for jj in range(sparse_n):
719 | group_sparse[catch] = sparse_list[pointer[ii]][jj] + pointer[ii]*num_points
720 | catch += 1
721 |
722 | catch = 0
723 | for ii in range(len(pointer)):
724 | for jj in range(len(interior_list)):
725 | group_interior[catch] = int(pointer[ii]*num_points + len(BC_list) + jj)
726 | catch += 1
727 |
728 | group_BC_p = np.zeros(int(len(pointer)*len(BC_list)), dtype=int)
729 | group_sparse_p = np.zeros(int(len(pointer)*sparse_n), dtype=int)
730 | group_interior_p = np.zeros(int(len(pointer)*len(interior_list)), dtype=int)
731 |
732 | catch = 0
733 | for ii in range(Nb):
734 | for jj in range(len(BC_list)):
735 | group_BC_p[catch] = int(ii*num_points + jj)
736 | catch += 1
737 |
738 | catch = 0
739 | for ii in range(Nb):
740 | for jj in range(sparse_n):
741 | group_sparse_p[catch] = sparse_list[pointer[ii]][jj] + ii*num_points
742 | catch += 1
743 |
744 | catch = 0
745 | for ii in range(Nb):
746 | for jj in range(len(interior_list)):
747 | group_interior_p[catch] = int(ii*num_points + len(BC_list) + jj)
748 | catch += 1
749 |
750 | X_train_mini = np.take(X_train, pointer[:], axis=0)
751 |
752 | w0 = weight[epoch]
753 |
754 | gr, temp_cost_m, PDE_cost_m, gr1, gr2, gr3, gr4, gr5, gr6, gr7 = sess.run([optimizer, cost, PDEcost, pose_BC, pose_sparse, pose_interior, pose_BC_p, pose_sparse_p, pose_interior_p, pose_weight], feed_dict={input_points:X_train_mini, pose_BC:group_BC, pose_sparse:group_sparse, pose_interior:group_interior, pose_BC_p:group_BC_p, pose_sparse_p:group_sparse_p, pose_interior_p:group_interior_p, pose_weight:w0})
755 |
756 | temp_cost += (temp_cost_m/int(data/Nb))
757 | PDE_cost += (PDE_cost_m/int(data/Nb))
758 |
759 | if math.isnan(temp_cost_m):
760 | print('Nan Value\n')
761 | return
762 |
763 | temp_cost = temp_cost/data
764 | LOSS_Total.append(temp_cost)
765 | LOSS_PDE.append(PDE_cost)
766 |
767 | print(epoch)
768 | print(PDE_cost)
769 | #print(temp_cost)
770 |
771 | if temp_cost < min_loss:
772 | u_out = sess.run([vel_u],feed_dict={input_points:X_train})
773 | v_out = sess.run([vel_v],feed_dict={input_points:X_train})
774 |
775 | u_final = u_out[0]
776 | v_final = v_out[0]
777 |
778 | min_loss = temp_cost
779 | converge_iteration = epoch
780 |
781 | end_ite = timer()
782 |
783 | plotCost(LOSS_Total,'bTotal','Total loss')
784 |
785 | for index in range(data):
786 | plotSolutions2DPointCloud(CFDsolution_u(index),index,'u truth '+str(index),True, 'Ground truth $\it{u}$ (m)', 100)
787 | plotSolutions2DPointCloud(u_final,index,'u prediction '+str(index),False,'Prediction $\it{u}$ (m)', 100)
788 | plotSolutions2DPointCloud(CFDsolution_v(index),index,'v truth '+str(index),True, 'Ground truth $\it{v}$ (m)', 100)
789 | plotSolutions2DPointCloud(v_final,index,'v prediction '+str(index),False,'Prediction $\it{v}$ (m)', 100)
790 |
791 | plotErrors2DPointCloud(CFDsolution_u(index),u_final,index,'error u'+str(index),'Absolute error $\it{u}$ (m)',100)
792 | plotErrors2DPointCloud(CFDsolution_v(index),v_final,index,'error v'+str(index),'Absolute error $\it{v}$ (m)',100)
793 |
794 | #Error Analysis Based on RMSE
795 | error_u = []
796 | error_v = []
797 |
798 | error_u_rel = []
799 | error_v_rel = []
800 |
801 | for index in range(data):
802 | error_u.append(computeRMSE(CFDsolution_u(index),u_final,index))
803 | error_v.append(computeRMSE(CFDsolution_v(index),v_final,index))
804 |
805 | error_u_rel.append(computeRelativeL2(CFDsolution_u(index),u_final,index))
806 | error_v_rel.append(computeRelativeL2(CFDsolution_v(index),v_final,index))
807 |
808 | for index in range(data):
809 | print('\n')
810 | print(index)
811 | print('error_u:')
812 | print(error_u[index])
813 | print('error_v:')
814 | print(error_v[index])
815 | print('error_u_rel:')
816 | print(error_u_rel[index])
817 | print('error_v_rel:')
818 | print(error_v_rel[index])
819 |
820 | print('max RMSE u:')
821 | print(max(error_u))
822 | print(error_u.index(max(error_u)))
823 | print('min RMSE u:')
824 | print(min(error_u))
825 | print(error_u.index(min(error_u)))
826 |
827 | print('\n')
828 |
829 | print('max RMSE v:')
830 | print(max(error_v))
831 | print(error_v.index(max(error_v)))
832 | print('min RMSE v:')
833 | print(min(error_v))
834 | print(error_v.index(min(error_v)))
835 |
836 | print('\n')
837 |
838 | print('max relative u:')
839 | print(max(error_u_rel))
840 | print(error_u_rel.index(max(error_u_rel)))
841 | print('min relative u:')
842 | print(min(error_u_rel))
843 | print(error_u_rel.index(min(error_u_rel)))
844 |
845 | print('\n')
846 |
847 | print('max relative v:')
848 | print(max(error_v_rel))
849 | print(error_v_rel.index(max(error_v_rel)))
850 | print('min relative v:')
851 | print(min(error_v_rel))
852 | print(error_v_rel.index(min(error_v_rel)))
853 |
854 | print('\n')
855 |
856 | print('average relative u:')
857 | print(sum(error_u_rel)/data)
858 |
859 | print('\n')
860 |
861 | print('average relative v:')
862 | print(sum(error_v_rel)/data)
863 |
864 | print('\n')
865 |
866 | print('converge iteration:')
867 | print(converge_iteration)
868 |
869 | print('\n')
870 |
871 | print('best (minimum) average total loss:')
872 | print(min_loss)
873 |
874 | print('\n')
875 |
876 | print('training time (second):')
877 | print(end_ite - start_ite)
878 |
879 | print('min total loss:')
880 | print(min(LOSS_Total))
881 |
882 | print('min total loss iteration:')
883 | print(LOSS_Total.index(min(LOSS_Total)))
884 |
885 | print('\n')
886 |
887 | with open('bTotalLoss.txt', 'w') as fp:
888 | for i in range(len(LOSS_Total)):
889 | fp.write(str(i) + ' ' + str(LOSS_Total[i]) + '\n')
890 |
891 | with open('bPDELoss.txt', 'w') as fpp:
892 | for i in range(len(LOSS_PDE)):
893 | fpp.write(str(i) + ' ' + str(LOSS_PDE[i]) + '\n')
894 |
895 |
896 | build_model_Elasticity()
897 |
--------------------------------------------------------------------------------
/ElasticityDataGeneration.m:
--------------------------------------------------------------------------------
1 |
2 | %Geometry
3 | %%Square
4 | ng = 90; %number of geometries
5 | name = "square";
6 | L = 0.3; %other tested values: 0.35, 0.5
7 | %L = 0.75*L;
8 | alpha = 90*pi/180.0;
9 | x1 = L; y1 = 0.0;
10 | x2 = x1*cos(alpha) - y1*sin(alpha); y2 = x1*sin(alpha) + y1*cos(alpha);
11 | x3 = x2*cos(alpha) - y2*sin(alpha); y3 = x2*sin(alpha) + y2*cos(alpha);
12 | x4 = x3*cos(alpha) - y3*sin(alpha); y4 = x3*sin(alpha) + y3*cos(alpha);
13 |
14 | min_points = 10000; %Gross Points
15 | L_out = 1.0; %1;
16 |
17 | bc = @(location,~) (sqrt(location.x.*location.x + location.y.*location.y) > 0.75*L_out); %elementwise ops: location.x/y are vectors
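%The handle above sets T = 1 on the outer boundary (points farther than 0.75*L_out from the
%origin) and T = 0 on the inner boundary; PIPN_Elasticity.py later identifies boundary nodes
%by checking for T == 0 or T == 1.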
18 |
19 | for i=1:ng
20 | theta = i*pi/180.0;
21 | x1r = x1*cos(theta) - y1*sin(theta);
22 | y1r = x1*sin(theta) + y1*cos(theta);
23 | x2r = x2*cos(theta) - y2*sin(theta);
24 | y2r = x2*sin(theta) + y2*cos(theta);
25 | x3r = x3*cos(theta) - y3*sin(theta);
26 | y3r = x3*sin(theta) + y3*cos(theta);
27 | x4r = x4*cos(theta) - y4*sin(theta);
28 | y4r = x4*sin(theta) + y4*cos(theta);
29 | r1 = [3 4 -L_out L_out L_out -L_out -L_out -L_out L_out L_out];
30 | r2 = [3 4 x1r x2r x3r x4r y1r y2r y3r y4r];
31 | gdm = [r1; r2]';
32 | g = decsg(gdm,'R1-R2',['R1'; 'R2']');
33 |
34 | %thermal model
35 | thermalmodel = createpde('thermal','steadystate');
36 | gm = geometryFromEdges(thermalmodel,g);
37 | %pdegplot(thermalmodel,'EdgeLabels','on');
38 | thermalBC(thermalmodel,'Edge',1:8,'Temperature',bc); %apply the Dirichlet BC to all eight edges
46 | thermalProperties(thermalmodel,"ThermalConductivity",1.0);
47 | mesh = generateMesh(thermalmodel);
48 | %mesh = generateMesh(thermalmodel,'Hmax',0.105);
49 | thermalresults = solve(thermalmodel);
50 | T = thermalresults.Temperature;
51 | gradTx = thermalresults.XGradients;
52 | gradTy = thermalresults.YGradients;
53 | %Linear elasticity model (Plane Stress)
54 | structuralmodel = createpde('structural','static-planestress');
55 | structuralmodel.Geometry = gm;
56 | structuralmodel.Mesh = mesh;
57 | structuralProperties(structuralmodel,'YoungsModulus',1, ...
58 |     'PoissonsRatio',0.3, ...
59 |     'CTE',1.0);
60 |
61 | %%%Outer boundary (all four outer edges fixed)
62 | structuralBC(structuralmodel,'Edge',[1 2 7 8],'Constraint','fixed');
63 | %%%Inner boundary (all four inner edges fixed)
64 | structuralBC(structuralmodel,'Edge',[3 4 5 6],'Constraint','fixed');
71 |
72 | structuralBodyLoad(structuralmodel,'Temperature',thermalresults);
73 | structuralmodel.ReferenceTemperature = 0; %no effect on the solution here
74 | thermalstressresults = solve(structuralmodel);
75 | u = thermalstressresults.Displacement.x ;
76 | v = thermalstressresults.Displacement.y ;
77 | stress = thermalstressresults.VonMisesStress ;
78 | coord = thermalresults.Mesh.Nodes';
79 | x_coord = coord(:,1);
80 | y_coord = coord(:,2);
81 | %disp(size(x_coord));
82 | if (size(x_coord)