├── notes
│   ├── notes.out
│   ├── s0.png
│   ├── error.png
│   ├── notes.pdf
│   ├── s1000.png
│   ├── s500.png
│   ├── s5000.png
│   ├── MscaleNet2.png
│   ├── notes.synctex.gz
│   ├── notes.aux
│   ├── notes.tex
│   └── notes.log
├── exp
│   ├── u0.png
│   ├── u500.png
│   ├── error.png
│   ├── u1000.png
│   ├── u5000.png
│   ├── sin_M
│   │   └── 62178
│   │       ├── data.pkl
│   │       ├── error.png
│   │       ├── loss.png
│   │       ├── loss_bd.png
│   │       ├── u_epoch_0.png
│   │       ├── u_epoch_1000.png
│   │       ├── u_epoch_1500.png
│   │       ├── u_epoch_2000.png
│   │       ├── u_epoch_2500.png
│   │       ├── u_epoch_3000.png
│   │       ├── u_epoch_3500.png
│   │       ├── u_epoch_4000.png
│   │       ├── u_epoch_4500.png
│   │       ├── u_epoch_500.png
│   │       ├── u_epoch_5000.png
│   │       └── code.py
│   └── sin_N
│       └── 80509
│           ├── data.pkl
│           ├── error.png
│           ├── loss.png
│           ├── loss_bd.png
│           ├── u_epoch_0.png
│           ├── u_epoch_1000.png
│           ├── u_epoch_1500.png
│           ├── u_epoch_2000.png
│           ├── u_epoch_2500.png
│           ├── u_epoch_3000.png
│           ├── u_epoch_3500.png
│           ├── u_epoch_4000.png
│           ├── u_epoch_4500.png
│           ├── u_epoch_500.png
│           ├── u_epoch_5000.png
│           └── code.py
├── code
│   ├── __pycache__
│   │   └── my_act.cpython-37.pyc
│   ├── plot_result.py
│   ├── my_act.py
│   ├── ritzN.py
│   └── ritzM.py
└── README.md

--------------------------------------------------------------------------------
/notes/notes.out:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/exp/u0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/u0.png
--------------------------------------------------------------------------------
/exp/u500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/u500.png
--------------------------------------------------------------------------------
/notes/s0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/s0.png
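All of the runs archived under exp/ target the same one-dimensional model problem set up in code/ritzN.py below: -(a(x) u'(x))' + c(x) u(x) = f(x) on [-1, 1] with a ≡ 1, c ≡ 0, μ = 6π, exact solution u(x) = sin(μx), and forcing f(x) = μ² sin(μx). As a quick standalone sanity check (a sketch, not part of the repository), a central finite difference confirms that -u'' matches the f(x) used in the code:

```python
import numpy as np

# Manufactured problem from code/ritzN.py: with a = 1 and c = 0 the PDE
# -(a u')' + c u = f reduces to -u'' = f.
mu = 6 * np.pi

def u(x):
    return np.sin(mu * x)

def f(x):
    return mu * mu * np.sin(mu * x)

# Central second-difference of u on a grid inside (-1, 1).
h = 1e-4
x = np.linspace(-0.99, 0.99, 397)
d2u = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2

# -u'' and f agree up to the O(h^2) truncation error of the stencil.
print(np.max(np.abs(-d2u - f(x))))
```

This only verifies the manufactured solution; the networks in ritzN.py/ritzM.py approximate u by minimizing the Deep Ritz energy, not by finite differences.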
--------------------------------------------------------------------------------
/exp/error.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/error.png
--------------------------------------------------------------------------------
/exp/u1000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/u1000.png
--------------------------------------------------------------------------------
/exp/u5000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/u5000.png
--------------------------------------------------------------------------------
/notes/error.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/error.png
--------------------------------------------------------------------------------
/notes/notes.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/notes.pdf
--------------------------------------------------------------------------------
/notes/s1000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/s1000.png
--------------------------------------------------------------------------------
/notes/s500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/s500.png
--------------------------------------------------------------------------------
/notes/s5000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/s5000.png
--------------------------------------------------------------------------------
/notes/MscaleNet2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/MscaleNet2.png
--------------------------------------------------------------------------------
/notes/notes.synctex.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/notes/notes.synctex.gz
--------------------------------------------------------------------------------
/exp/sin_M/62178/data.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/data.pkl
--------------------------------------------------------------------------------
/exp/sin_M/62178/error.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/error.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/loss.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/data.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/data.pkl
--------------------------------------------------------------------------------
/exp/sin_N/80509/error.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/error.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/loss.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/loss.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/loss_bd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/loss_bd.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/loss_bd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/loss_bd.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_0.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_0.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_0.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_1000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_1000.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_1500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_1500.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_2000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_2000.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_2500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_2500.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_3000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_3000.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_3500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_3500.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_4000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_4000.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_4500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_4500.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_500.png
--------------------------------------------------------------------------------
/exp/sin_M/62178/u_epoch_5000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_M/62178/u_epoch_5000.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_1000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_1000.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_1500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_1500.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_2000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_2000.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_2500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_2500.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_3000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_3000.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_3500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_3500.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_4000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_4000.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_4500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_4500.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_500.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_500.png
--------------------------------------------------------------------------------
/exp/sin_N/80509/u_epoch_5000.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/exp/sin_N/80509/u_epoch_5000.png
--------------------------------------------------------------------------------
/code/__pycache__/my_act.cpython-37.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xuzhiqin1990/mscalednn/HEAD/code/__pycache__/my_act.cpython-37.pyc
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# mscalednn

This is an example code for solving the Poisson equation with MscaleDNN. Details of MscaleDNN can be found in the paper
https://arxiv.org/abs/2007.11207 (CiCP).
The underlying idea comes from the frequency principle of deep learning; see https://arxiv.org/abs/1901.06523.
--------------------------------------------------------------------------------
/notes/notes.aux:
--------------------------------------------------------------------------------
\relax
\providecommand\hyper@newdestlabel[2]{}
\providecommand\HyperFirstAtBeginDocument{\AtBeginDocument}
\HyperFirstAtBeginDocument{\ifx\hyper@anchor\@undefined
\global\let\oldcontentsline\contentsline
\gdef\contentsline#1#2#3#4{\oldcontentsline{#1}{#2}{#3}}
\global\let\oldnewlabel\newlabel
\gdef\newlabel#1#2{\newlabelxx{#1}#2}
\gdef\newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
\AtEndDocument{\ifx\hyper@anchor\@undefined
\let\contentsline\oldcontentsline
\let\newlabel\oldnewlabel
\fi}
\fi}
\global\let\hyper@last\relax
\gdef\HyperFirstAtBeginDocument#1{#1}
\providecommand*\HyPL@Entry[1]{}
\citation{deep-ritz}
\HyPL@Entry{0<>}
\citation{mscale}
\@writefile{lof}{\contentsline {figure}{\numberline {1}{\ignorespaces Mscale network structure\relax }}{3}{figure.caption.4}\protected@file@percent }
\providecommand*\caption@xref[2]{\@setref\relax\@undefined{#1}}
\newlabel{net}{{1}{3}{Mscale network structure\relax }{figure.caption.4}{}}
\bibcite{deep-ritz}{1}
\bibcite{mscale}{2}
\@writefile{lof}{\contentsline {figure}{\numberline {2}{\ignorespaces Solving process\relax }}{12}{figure.caption.15}\protected@file@percent }
\newlabel{proc}{{2}{12}{Solving process\relax }{figure.caption.15}{}}
\@writefile{lof}{\contentsline {subfigure}{\numberline{(a)}{\ignorespaces {step 0}}}{12}{subfigure.2.1}\protected@file@percent }
\@writefile{lof}{\contentsline {subfigure}{\numberline{(b)}{\ignorespaces {step 500}}}{12}{subfigure.2.2}\protected@file@percent }
\@writefile{lof}{\contentsline
{subfigure}{\numberline{(c)}{\ignorespaces {step 1000}}}{12}{subfigure.2.3}\protected@file@percent }
\@writefile{lof}{\contentsline {subfigure}{\numberline{(d)}{\ignorespaces {step 5000}}}{12}{subfigure.2.4}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces Error decay curve\relax }}{12}{figure.caption.16}\protected@file@percent }
\newlabel{error}{{3}{12}{Error decay curve\relax }{figure.caption.16}{}}
--------------------------------------------------------------------------------
/code/plot_result.py:
--------------------------------------------------------------------------------
# !python3
# coding: utf-8
# author: Ziqi Liu and Zhi-Qin John Xu
# Reference: Ziqi Liu, Wei Cai, Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN)
# for Solving Poisson-Boltzmann Equation in Complex Domains[J]. 2020. arXiv:2007.11207 (CiCP)

import os
import pickle
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

import matplotlib
matplotlib.use('Agg')
Leftp = 0.18
Bottomp = 0.18
Widthp = 0.88 - Leftp
Heightp = 0.9 - Bottomp
pos = [Leftp, Bottomp, Widthp, Heightp]

def save_fig(pltm, fntmp, fp=0, ax=0, isax=0, iseps=0, isShowPic=0):  # Save the figure
    if isax == 1:
        pltm.rc('xtick', labelsize=18)
        pltm.rc('ytick', labelsize=18)
        ax.set_position(pos, which='both')
    fnm = '%s.png' % (fntmp)
    pltm.savefig(fnm)
    if iseps:
        fnm = '%s.eps' % (fntmp)
        pltm.savefig(fnm, format='eps', dpi=600)
    if fp != 0:
        fp.savefig("%s.pdf" % (fntmp), bbox_inches='tight')
    if isShowPic == 1:
        pltm.show()
    elif isShowPic == -1:
        return
    else:
        pltm.close()


with open(os.path.join("..", "exp", "sin_N", "80509", "data.pkl"), "rb") as f:
    R1 = pickle.load(f)
with open(os.path.join("..", "exp", "sin_M", "62178", "data.pkl"), "rb") as f:
    R2 = pickle.load(f)

record_path = os.path.join("..", "exp")
plt.figure()
ax = plt.gca()
plt.semilogy(R1["error"], label="Normal")
plt.semilogy(R2["error"], label="Mscale")
plt.legend(fontsize=14)
fntmp = os.path.join(record_path, 'error')
save_fig(plt, fntmp, ax=ax, isax=1, iseps=0)

for k in [0, 500, 1000, 5000]:
    plt.figure()
    ax = plt.gca()  # use the axes of the new figure, not those of the error plot
    plt.plot(R2["x_samp"], R2["u_samp_" + str(k)], 'r--', label='dnn')
    plt.plot(R2["x_samp"], R2["u_samp_true"], 'b-', label='true')
    plt.legend(fontsize=14)
    plt.title('epoch ' + str(k), fontsize=14)
    fntmp = os.path.join(record_path, 'u' + str(k))
    save_fig(plt, fntmp, ax=ax, isax=1, iseps=0)
--------------------------------------------------------------------------------
/code/my_act.py:
--------------------------------------------------------------------------------
# !python3
# coding: utf-8
# author: Ziqi Liu and Zhi-Qin John Xu
# Reference: Ziqi Liu, Wei Cai, Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN)
# for Solving Poisson-Boltzmann Equation in Complex Domains[J]. 2020. arXiv:2007.11207 (CiCP)

import tensorflow as tf
import numpy as np


def relu2(x):
    return tf.nn.relu(x) ** 2


def relu3(x):
    return tf.nn.relu(x) ** 3


def srelu(x):
    return tf.nn.relu(1 - x) * tf.nn.relu(x)


def wave(x):
    return (
        tf.nn.relu(x)
        - 2 * tf.nn.relu(x - 1 / 4)
        + 2 * tf.nn.relu(x - 3 / 4)
        - tf.nn.relu(x - 1)
    )


def ex2(x):
    return tf.exp(-tf.square(x))


def xex2(x):
    return x * tf.exp(-tf.square(x))


def phi2(x):
    return tf.nn.relu(x) - 2 * tf.nn.relu(x - 1) + tf.nn.relu(x - 2)


def psi2(x):
    return (
        1 / 12 * tf.nn.relu(2 * x)
        - 2 / 3 * tf.nn.relu(2 * x - 1)
        + 23 / 12 * tf.nn.relu(2 * x - 2)
        - 8 / 3 * tf.nn.relu(2 * x - 3)
        + 23 / 12 * tf.nn.relu(2 * x - 4)
        - 2 / 3 * tf.nn.relu(2 * x - 5)
        + 1 / 12 * tf.nn.relu(2 * x - 6)
    )


def phi3(x):
    return (
        1 / 2 * tf.nn.relu(x - 0) ** 2
        - 3 / 2 * tf.nn.relu(x - 1) ** 2
        + 3 / 2 * tf.nn.relu(x - 2) ** 2
        - 1 / 2 * tf.nn.relu(x - 3) ** 2
    )


def psi3(x):
    return (
        1 / 960 * tf.nn.relu(2 * x - 0) ** 2
        - 32 / 960 * tf.nn.relu(2 * x - 1) ** 2
        + 237 / 960 * tf.nn.relu(2 * x - 2) ** 2
        - 832 / 960 * tf.nn.relu(2 * x - 3) ** 2
        + 1682 / 960 * tf.nn.relu(2 * x - 4) ** 2
        - 2112 / 960 * tf.nn.relu(2 * x - 5) ** 2
        + 1682 / 960 * tf.nn.relu(2 * x - 6) ** 2
        - 832 / 960 * tf.nn.relu(2 * x - 7) ** 2
        + 237 / 960 * tf.nn.relu(2 * x - 8) ** 2
        - 32 / 960 * tf.nn.relu(2 * x - 9) ** 2
        + 1 / 960 * tf.nn.relu(2 * x - 10) ** 2
    )


def s2relu(x):
    return tf.sin(2 * np.pi * x) * srelu(x)


act_dict = {
    "relu": tf.nn.relu,
    "relu2": relu2,
    "relu3": relu3,
    "sigmoid": tf.nn.sigmoid,
    "tanh": tf.nn.tanh,
    "softplus": tf.nn.softplus,
    "sin": tf.sin,
    "cos": tf.cos,
    "ex2": ex2,
    "xex2": xex2,
    "srelu": srelu,
    "wave": wave,
    "phi2": phi2,
    "psi2": psi2,
    "phi3": phi3,
    "psi3": psi3,
    "s2relu": s2relu,
}
--------------------------------------------------------------------------------
/code/ritzN.py:
--------------------------------------------------------------------------------
# !python3
# coding: utf-8
# author: Ziqi Liu and Zhi-Qin John Xu
# Reference: Ziqi Liu, Wei Cai, Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN)
# for Solving Poisson-Boltzmann Equation in Complex Domains[J]. 2020. arXiv:2007.11207 (CiCP)

import os
import time
import shutil
import pickle

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

from my_act import act_dict

import matplotlib
matplotlib.use('Agg')
Leftp = 0.18
Bottomp = 0.18
Widthp = 0.88 - Leftp
Heightp = 0.9 - Bottomp
pos = [Leftp, Bottomp, Widthp, Heightp]

def save_fig(pltm, fntmp, fp=0, ax=0, isax=0, iseps=0, isShowPic=0):  # Save the figure
    if isax == 1:
        pltm.rc('xtick', labelsize=18)
        pltm.rc('ytick', labelsize=18)
        ax.set_position(pos, which='both')
    fnm = '%s.png' % (fntmp)
    pltm.savefig(fnm)
    if iseps:
        fnm = '%s.eps' % (fntmp)
        pltm.savefig(fnm, format='eps', dpi=600)
    if fp != 0:
        fp.savefig("%s.pdf" % (fntmp), bbox_inches='tight')
    if isShowPic == 1:
        pltm.show()
    elif isShowPic == -1:
        return
    else:
        pltm.close()


# %% parameters
R = {}

R["seed"] = 0

R["size_it"] = 1000
R["size_bd"] = 100

R["penalty"] = 1e3

R["act_name"] = ("sin",) * 3
R["hidden_units"] = (180,) * 3
R["learning_rate"] = 1e-4
R["lr_decay"] = 5e-7

R["varcoe"] = 0.5

R["total_step"] = 5000
R["resamp_epoch"] = 1
R["plot_epoch"] = 500

R["record_path"] = os.path.join("..", "exp", "sin_N", str(np.random.randint(10000, 99999)))

# %% generate data in domain

R["dimension"] = dim = 1
R["area_it"] = 2
R["area_bd"] = 1

# interior data
def rand_it(size):
    x_it = np.random.rand(size, dim) * 2 - 1
    return x_it.astype(np.float32)


# boundary data
def rand_bd(size):
    x_bd = np.random.rand(size, dim) * 2 - 1
    ind = np.random.randint(dim, size=size)
    x01 = np.random.randint(2, size=size) * 2 - 1
    for ii in range(size):
        x_bd[ii, ind[ii]] = x01[ii]
    return x_bd.astype(np.float32)


# %% PDE problem
# - (a(x) u'(x))' + c(x) u(x) = f(x)
R["mu"] = mu = 6 * np.pi


def u(x):
    u = np.sin(mu * x)
    return u.astype(np.float32).reshape((-1, 1))


def a(x):
    a = np.ones((x.shape[0], 1))
    return a.astype(np.float32).reshape((-1, 1))


def c(x):
    c = np.zeros((x.shape[0], 1))
    return c.astype(np.float32).reshape((-1, 1))


def f(x):
    f = mu * mu * np.sin(mu * x)
    return f.astype(np.float32).reshape((-1, 1))


# %% save parameters

# prepare folder
if not os.path.isdir(R["record_path"]):
    os.mkdir(R["record_path"])

# save current code
save_file = os.path.join(R["record_path"], "code.py")
shutil.copyfile(__file__, save_file)

# %% set seed
tf.set_random_seed(R["seed"])
tf.random.set_random_seed(R["seed"])
np.random.seed(R["seed"])

# %% get test points
x_test = np.linspace(-1, 1, 200 + 1).reshape((-1, 1))

R["x_test"] = x_test.astype(np.float32)
R["u_test_true"] = u(R["x_test"])

# %% get sample points
x_samp = np.linspace(-1, 1, 200 + 1).reshape((-1, 1))

R["x_samp"] = x_samp.astype(np.float32)
R["u_samp_true"] = u(R["x_samp"])

# %% normal neural network
units = (dim,) + R["hidden_units"] + (1,)


def neural_net(x):
    with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

        y = x

        for i in range(len(units) - 2):
            init_W = np.random.randn(units[i], units[i + 1]).astype(np.float32)
            init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"])
            init_b = np.random.randn(units[i + 1]).astype(np.float32)
            init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"])

            W = tf.get_variable(name="W" + str(i), initializer=init_W)
            b = tf.get_variable(name="b" + str(i), initializer=init_b)

            y = act_dict[R["act_name"][i]](tf.matmul(y, W) + b)

        # output layer: scale by its own fan-in/fan-out (units[-2], units[-1]),
        # not the leftover loop indices
        init_W = np.random.randn(units[-2], units[-1]).astype(np.float32)
        init_W = init_W * (2 / (units[-2] + units[-1]) ** R["varcoe"])
        init_b = np.random.randn(units[-1]).astype(np.float32)
        init_b = init_b * (2 / (units[-2] + units[-1]) ** R["varcoe"])

        W = tf.get_variable(name="W" + str(len(units) - 1), initializer=init_W)
        b = tf.get_variable(name="b" + str(len(units) - 1), initializer=init_b)

        y = tf.matmul(y, W) + b

    return y


# %% loss and optimizer ("V" for variable)
with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

    Vx_it = tf.placeholder(tf.float32, shape=(None, dim))
    Vx_bd = tf.placeholder(tf.float32, shape=(None, dim))

    Va_it = tf.placeholder(tf.float32, shape=(None, 1))
    Vc_it = tf.placeholder(tf.float32, shape=(None, 1))
    Vf_it = tf.placeholder(tf.float32, shape=(None, 1))

    Vu_true_bd = tf.placeholder(tf.float32, shape=(None, 1))

    Vu_it = neural_net(Vx_it)
    Vu_bd = neural_net(Vx_bd)

    Vdu_it = tf.gradients(Vu_it, Vx_it)[0]

    Vloss_it = R["area_it"] * tf.reduce_mean(
        1 / 2 * Va_it * tf.reduce_sum(tf.square(Vdu_it), axis=1, keepdims=True)
        + 1 / 2 * Vc_it * tf.square(Vu_it)
        - Vf_it * Vu_it
    )
    Vloss_bd = R["area_bd"] * tf.reduce_mean(tf.square(Vu_bd - Vu_true_bd))

    Vloss = Vloss_it + R["penalty"] * Vloss_bd

    learning_rate = tf.placeholder_with_default(input=1e-3, shape=[], name="lr")
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.minimize(Vloss)

# %% train model
t0 = time.time()
loss, loss_bd = [], []
error = []
lr = 1 * R["learning_rate"]

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:

    sess.run(tf.global_variables_initializer())

    for epoch in range(R["total_step"] + 1):

        # generate new data ("N" for number)
        if epoch % R["resamp_epoch"] == 0:
            Nx_it = rand_it(R["size_it"])
            Nx_bd = rand_bd(R["size_bd"])

            Na_it = a(Nx_it)
            Nc_it = c(Nx_it)
            Nf_it = f(Nx_it)

            Nu_bd = u(Nx_bd)

        # train neural network
        lr = (1 - R["lr_decay"]) * lr

        _, Nloss, Nloss_bd = sess.run(
            [train_op, Vloss, Vloss_bd],
            feed_dict={
                Vx_it: Nx_it,
                Vx_bd: Nx_bd,
                Va_it: Na_it,
                Vc_it: Nc_it,
                Vf_it: Nf_it,
                Vu_true_bd: Nu_bd,
                learning_rate: lr,
            },
        )

        # get test error
        R["u_test"] = sess.run(Vu_it, feed_dict={Vx_it: R["x_test"]})
        Nerror = R["area_it"] * np.mean((R["u_test"] - R["u_test_true"]) ** 2)

        loss.append(Nloss)
        loss_bd.append(Nloss_bd)
        error.append(Nerror)

        # show current state
        if epoch % R["plot_epoch"] == 0:

            print("epoch %d, time %.3f" % (epoch, time.time() - t0))
            print("total loss %f, boundary loss %f" % (Nloss, Nloss_bd))
            print("interior error %f" % (Nerror))

            R["time"] = time.time() - t0
            R["loss"] = np.array(loss)
            R["loss_bd"] = np.array(loss_bd)
            R["error"] = np.array(error)

            u_samp = sess.run(Vu_it, feed_dict={Vx_it: R["x_samp"]})
            R["u_samp_" + str(epoch)] = u_samp

            plt.figure()
            ax = plt.gca()
            plt.semilogy(R["error"])
            plt.title('error', fontsize=15)
            fntmp = os.path.join(R["record_path"], 'error')
            save_fig(plt, fntmp, ax=ax, isax=1, iseps=0)

            plt.figure()
            ax = plt.gca()
            plt.plot(R["loss"])
            plt.title('loss', fontsize=15)
            fntmp = os.path.join(R["record_path"], 'loss')
            save_fig(plt, fntmp, ax=ax, isax=1, iseps=0)

            plt.figure()
            ax = plt.gca()
            plt.semilogy(R["loss_bd"])
            plt.title('loss_bd', fontsize=15)
            fntmp = os.path.join(R["record_path"], 'loss_bd')
            save_fig(plt, fntmp, ax=ax, isax=1, iseps=0)

            if R['dimension'] == 1:
                plt.figure()
                ax = plt.gca()
                plt.plot(R["x_samp"], R["u_samp_" + str(epoch)], 'r--', label='dnn')
                plt.plot(R["x_samp"], R["u_samp_true"], 'b-', label='true')
                plt.title('u', fontsize=15)
                plt.legend(fontsize=18)
                fntmp = os.path.join(R["record_path"], 'u_epoch_%s' % (epoch))
                save_fig(plt, fntmp, ax=ax, isax=1, iseps=0)


# %% save data
data_dir = os.path.join(R["record_path"], "data.pkl")
with open(data_dir, "wb") as file:
    pickle.dump(R, file)
--------------------------------------------------------------------------------
/exp/sin_N/80509/code.py:
--------------------------------------------------------------------------------
# !python3
# coding: utf-8
# author: Ziqi Liu and Zhi-Qin John Xu
# Reference: Ziqi Liu, Wei Cai, Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN)
# for Solving Poisson-Boltzmann Equation in Complex Domains[J]. 2020. arXiv:2007.11207 (CiCP)

import os
import time
import shutil
import pickle

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

from my_act import act_dict

import matplotlib
matplotlib.use('Agg')
Leftp = 0.18
Bottomp = 0.18
Widthp = 0.88 - Leftp
Heightp = 0.9 - Bottomp
pos = [Leftp, Bottomp, Widthp, Heightp]

def save_fig(pltm, fntmp, fp=0, ax=0, isax=0, iseps=0, isShowPic=0):  # Save the figure
    if isax == 1:
        pltm.rc('xtick', labelsize=18)
        pltm.rc('ytick', labelsize=18)
        ax.set_position(pos, which='both')
    fnm = '%s.png' % (fntmp)
    pltm.savefig(fnm)
    if iseps:
        fnm = '%s.eps' % (fntmp)
        pltm.savefig(fnm, format='eps', dpi=600)
    if fp != 0:
        fp.savefig("%s.pdf" % (fntmp), bbox_inches='tight')
    if isShowPic == 1:
        pltm.show()
    elif isShowPic == -1:
        return
    else:
        pltm.close()


# %% parameters
R = {}

R["seed"] = 0

R["size_it"] = 1000
R["size_bd"] = 100

R["penalty"] = 1e3

R["act_name"] = ("sin",) * 3
R["hidden_units"] = (180,) * 3
R["learning_rate"] = 1e-4
R["lr_decay"] = 5e-7

R["varcoe"] = 0.5

R["total_step"] = 5000
R["resamp_epoch"] = 1
R["plot_epoch"] = 500

R["record_path"] = os.path.join("..", "exp", "sin_N", str(np.random.randint(10000, 99999)))

# %% generate data in domain

R["dimension"] = dim = 1
R["area_it"] = 2
R["area_bd"] = 1

# interior data
def rand_it(size):
    x_it = np.random.rand(size, dim) * 2 - 1
    return x_it.astype(np.float32)


# boundary data
def rand_bd(size):
    x_bd = np.random.rand(size, dim) * 2 - 1
    ind = np.random.randint(dim, size=size)
    x01 = np.random.randint(2, size=size) * 2 - 1
    for ii in range(size):
        x_bd[ii, ind[ii]] = x01[ii]
    return x_bd.astype(np.float32)


# %% PDE problem
# - (a(x) u'(x))' + c(x) u(x) = f(x)
R["mu"] = mu = 6 * np.pi


def u(x):
    u = np.sin(mu * x)
    return u.astype(np.float32).reshape((-1, 1))


def a(x):
    a = np.ones((x.shape[0], 1))
    return a.astype(np.float32).reshape((-1, 1))


def c(x):
    c = np.zeros((x.shape[0], 1))
    return c.astype(np.float32).reshape((-1, 1))


def f(x):
    f = mu * mu * np.sin(mu * x)
    return f.astype(np.float32).reshape((-1, 1))


# %% save parameters

# prepare folder
if not os.path.isdir(R["record_path"]):
    os.mkdir(R["record_path"])

# save current code
save_file = os.path.join(R["record_path"], "code.py")
shutil.copyfile(__file__, save_file)

# %% set seed
tf.set_random_seed(R["seed"])
tf.random.set_random_seed(R["seed"])
np.random.seed(R["seed"])

# %% get test points
x_test = np.linspace(-1, 1, 200 + 1).reshape((-1, 1))

R["x_test"] = x_test.astype(np.float32)
R["u_test_true"] = u(R["x_test"])

# %% get sample points
x_samp = np.linspace(-1, 1, 200 + 1).reshape((-1, 1))

R["x_samp"] = x_samp.astype(np.float32)
R["u_samp_true"] = u(R["x_samp"])

# %% normal neural network
units = (dim,) + R["hidden_units"] + (1,)


def neural_net(x):
    with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

        y = x

        for i in range(len(units) - 2):
            init_W = np.random.randn(units[i], units[i + 1]).astype(np.float32)
            init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"])
            init_b = np.random.randn(units[i + 1]).astype(np.float32)
            init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"])

            W = tf.get_variable(name="W" + str(i), initializer=init_W)
            b = tf.get_variable(name="b" + str(i), initializer=init_b)

            y = act_dict[R["act_name"][i]](tf.matmul(y, W) + b)

        init_W = np.random.randn(units[-2], units[-1]).astype(np.float32)
        init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"])
        init_b = np.random.randn(units[-1]).astype(np.float32)
        init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"])

        W = tf.get_variable(name="W" + str(len(units) - 1), initializer=init_W)
        b = tf.get_variable(name="b" + str(len(units) - 1), initializer=init_b)

        y = tf.matmul(y, W) + b

    return y


# %% loss and optimizer ("V" for variable)
with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

    Vx_it = tf.placeholder(tf.float32, shape=(None, dim))
    Vx_bd = tf.placeholder(tf.float32, shape=(None, dim))

    Va_it = tf.placeholder(tf.float32, shape=(None, 1))
    Vc_it = tf.placeholder(tf.float32, shape=(None, 1))
    Vf_it = tf.placeholder(tf.float32, shape=(None, 1))

    Vu_true_bd = tf.placeholder(tf.float32, shape=(None, 1))

    Vu_it = neural_net(Vx_it)
    Vu_bd = neural_net(Vx_bd)

    Vdu_it = tf.gradients(Vu_it, Vx_it)[0]

    Vloss_it = R["area_it"] * tf.reduce_mean(
        1 / 2 * Va_it * tf.reduce_sum(tf.square(Vdu_it), axis=1, keepdims=True)
        + 1 / 2 * Vc_it * tf.square(Vu_it)
        - Vf_it * Vu_it
    )
    Vloss_bd = R["area_bd"] * tf.reduce_mean(tf.square(Vu_bd - Vu_true_bd))

    Vloss = Vloss_it + R["penalty"] * Vloss_bd

    learning_rate = tf.placeholder_with_default(input=1e-3, shape=[], name="lr")
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.minimize(Vloss)

# %% train model
t0 = time.time()
loss, loss_bd = [], []
error = []
lr = 1 * R["learning_rate"]

config = tf.ConfigProto() 215 | config.gpu_options.allow_growth = True 216 | with tf.Session(config=config) as sess: 217 | 218 | sess.run(tf.global_variables_initializer()) 219 | 220 | for epoch in range(R["total_step"] + 1): 221 | 222 | # generate new data ("N" for number) 223 | if epoch % R["resamp_epoch"] == 0: 224 | 225 | Nx_it = rand_it(R["size_it"]) 226 | Nx_bd = rand_bd(R["size_bd"]) 227 | 228 | Na_it = a(Nx_it) 229 | Nc_it = c(Nx_it) 230 | Nf_it = f(Nx_it) 231 | 232 | Nu_bd = u(Nx_bd) 233 | 234 | # train neural network 235 | lr = (1 - R["lr_decay"]) * lr 236 | 237 | _, Nloss, Nloss_bd = sess.run( 238 | [train_op, Vloss, Vloss_bd], 239 | feed_dict={ 240 | Vx_it: Nx_it, 241 | Vx_bd: Nx_bd, 242 | Va_it: Na_it, 243 | Vc_it: Nc_it, 244 | Vf_it: Nf_it, 245 | Vu_true_bd: Nu_bd, 246 | learning_rate: lr, 247 | }, 248 | ) 249 | 250 | # get test error 251 | R["u_test"] = sess.run(Vu_it, feed_dict={Vx_it: R["x_test"]}) 252 | Nerror = R["area_it"] * np.mean((R["u_test"] - R["u_test_true"]) ** 2) 253 | 254 | loss.append(Nloss) 255 | loss_bd.append(Nloss_bd) 256 | error.append(Nerror) 257 | 258 | # show current state 259 | if epoch % R["plot_epoch"] == 0: 260 | 261 | print("epoch %d, time %.3f" % (epoch, time.time() - t0)) 262 | print("total loss %f, boundary loss %f" % (Nloss, Nloss_bd)) 263 | print("interior error %f" % (Nerror)) 264 | 265 | R["time"] = time.time() - t0 266 | R["loss"] = np.array(loss) 267 | R["loss_bd"] = np.array(loss_bd) 268 | R["error"] = np.array(error) 269 | 270 | u_samp = sess.run(Vu_it, feed_dict={Vx_it: R["x_samp"]}) 271 | R["u_samp_" + str(epoch)] = u_samp 272 | 273 | plt.figure() 274 | ax = plt.gca() 275 | plt.semilogy(R["error"]) 276 | plt.title('error',fontsize=15) 277 | fntmp = os.path.join(R["record_path"], 'error') 278 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 279 | 280 | plt.figure() 281 | ax = plt.gca() 282 | plt.plot(R["loss"]) 283 | plt.title('loss',fontsize=15) 284 | fntmp = os.path.join(R["record_path"], 'loss') 285 | 
save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 286 | 287 | plt.figure() 288 | ax = plt.gca() 289 | plt.semilogy(R["loss_bd"]) 290 | plt.title('loss_bd',fontsize=15) 291 | fntmp = os.path.join(R["record_path"], 'loss_bd') 292 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 293 | 294 | u_samp = sess.run(Vu_it, feed_dict={Vx_it: R["x_samp"]}) 295 | R["u_samp_" + str(epoch)] = u_samp 296 | 297 | if R['dimension']==1: 298 | plt.figure() 299 | ax = plt.gca() 300 | plt.plot(R["x_samp"], R["u_samp_"+str(epoch)], 'r--',label='dnn') 301 | plt.plot(R["x_samp"], R["u_samp_true"], 'b-',label='true') 302 | plt.title('u',fontsize=15) 303 | plt.legend(fontsize=18) 304 | fntmp = os.path.join(R["record_path"], 'u_epoch_%s'%(epoch)) 305 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 306 | 307 | 308 | # %% save data 309 | data_dir = os.path.join(R["record_path"], "data.pkl") 310 | with open(data_dir, "wb") as file: 311 | pickle.dump(R, file) 312 | 313 | # %% save data 314 | data_dir = os.path.join(R["record_path"], "data.pkl") 315 | with open(data_dir, "wb") as file: 316 | pickle.dump(R, file) 317 | -------------------------------------------------------------------------------- /code/ritzM.py: -------------------------------------------------------------------------------- 1 | # !python3 2 | # coding: utf-8 3 | # author: Ziqi Liu and Zhi-Qin John Xu 4 | # Reference: Ziqi Liu,Wei Cai,Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN) 5 | # for Solving Poisson-Boltzmann Equation in Complex Domains[J]. 2020. 
arXiv:2007.11207 (CiCP) 6 | 7 | import os 8 | import time 9 | import shutil 10 | import pickle 11 | 12 | import tensorflow as tf 13 | import numpy as np 14 | import matplotlib.pyplot as plt 15 | 16 | from my_act import act_dict 17 | 18 | import matplotlib 19 | matplotlib.use('Agg') 20 | Leftp = 0.18 21 | Bottomp = 0.18 22 | Widthp = 0.88 - Leftp 23 | Heightp = 0.9 - Bottomp 24 | pos = [Leftp, Bottomp, Widthp, Heightp] 25 | 26 | def save_fig(pltm, fntmp,fp=0,ax=0,isax=0,iseps=0,isShowPic=0):# Save the figure 27 | if isax==1: 28 | pltm.rc('xtick',labelsize=18) 29 | pltm.rc('ytick',labelsize=18) 30 | ax.set_position(pos, which='both') 31 | fnm = '%s.png'%(fntmp) 32 | pltm.savefig(fnm) 33 | if iseps: 34 | fnm = '%s.eps'%(fntmp) 35 | pltm.savefig(fnm, format='eps', dpi=600) 36 | if fp!=0: 37 | fp.savefig("%s.pdf"%(fntmp), bbox_inches='tight') 38 | if isShowPic==1: 39 | pltm.show() 40 | elif isShowPic==-1: 41 | return 42 | else: 43 | pltm.close() 44 | 45 | 46 | 47 | 48 | # %% parameters 49 | R = {} 50 | 51 | R["seed"] = 0 52 | 53 | R["size_it"] = 1000 54 | R["size_bd"] = 100 55 | 56 | R["penalty"] = 1e3 57 | 58 | R["scale"] = np.array([32, 32, 32, 32, 32, 32]) 59 | 60 | R["act_name"] = ("sin",) * 3 61 | R["hidden_units"] = (30,) * 3 62 | R["learning_rate"] = 1e-4 63 | R["lr_decay"] = 5e-7 64 | 65 | R["varcoe"] = 0.5 66 | 67 | R["total_step"] = 5000 68 | R["resamp_epoch"] = 1 69 | R["plot_epoch"] = 500 70 | 71 | R["record_path"] = os.path.join("..", "exp", "sin_M",str(np.random.randint(10000,99999))) 72 | 73 | # %% generate data in domain 74 | 75 | R["dimension"] = dim = 1 76 | R["area_it"] = 2 77 | R["area_bd"] = 1 78 | 79 | # interior data 80 | def rand_it(size): 81 | x_it = np.random.rand(size, dim) * 2 - 1 82 | return x_it.astype(np.float32) 83 | 84 | 85 | # boundary data 86 | def rand_bd(size): 87 | 88 | x_bd = np.random.rand(size, dim) * 2 - 1 89 | ind = np.random.randint(dim, size=size) 90 | x01 = np.random.randint(2, size=size) * 2 - 1 91 | for ii in range(size): 
92 | x_bd[ii, ind[ii]] = x01[ii] 93 | 94 | return x_bd.astype(np.float32) 95 | 96 | 97 | # %% PDE problem 98 | # - (a(x) u'(x))' + c(x) u(x) = f(x) 99 | R["mu"] = mu = 6 * np.pi 100 | 101 | 102 | def u(x): 103 | u = np.sin(mu * x) 104 | return u.astype(np.float32).reshape((-1, 1)) 105 | 106 | 107 | def a(x): 108 | a = np.ones((x.shape[0], 1)) 109 | return a.astype(np.float32).reshape((-1, 1)) 110 | 111 | 112 | def c(x): 113 | c = np.zeros((x.shape[0], 1)) 114 | return c.astype(np.float32).reshape((-1, 1)) 115 | 116 | 117 | def f(x): 118 | f = mu * mu * np.sin(mu * x) 119 | return f.astype(np.float32).reshape((-1, 1)) 120 | 121 | 122 | # %% save parameters 123 | 124 | # prepare folder 125 | if not os.path.isdir(R["record_path"]): 126 | os.mkdir(R["record_path"]) 127 | 128 | # save current code 129 | save_file = os.path.join(R["record_path"], "code.py") 130 | shutil.copyfile(__file__, save_file) 131 | 132 | # %% set seed 133 | tf.set_random_seed(R["seed"]) 134 | tf.random.set_random_seed(R["seed"]) 135 | np.random.seed(R["seed"]) 136 | 137 | # %% get test points 138 | x_test = np.linspace(-1, 1, 200 +1).reshape((-1, 1)) 139 | 140 | R["x_test"] = x_test.astype(np.float32) 141 | R["u_test_true"] = u(R["x_test"]) 142 | 143 | # %% get sample points 144 | x_samp = np.linspace(-1, 1, 200 +1).reshape((-1, 1)) 145 | 146 | R["x_samp"] = x_samp.astype(np.float32) 147 | R["u_samp_true"] = u(R["x_samp"]) 148 | 149 | # %% normal neural network 150 | units = (dim,) + R["hidden_units"] + (1,) 151 | 152 | 153 | def neural_net(x): 154 | with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE): 155 | 156 | all_y = [] 157 | for k in range(len(R["scale"])): 158 | scale_y = R["scale"][k] * x 159 | 160 | for i in range(len(units) - 2): 161 | init_W = np.random.randn(units[i], units[i + 1]).astype(np.float32) 162 | init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"]) 163 | init_b = np.random.randn(units[i + 1]).astype(np.float32) 164 | init_b = init_b * (2 / (units[i] + units[i + 
1]) ** R["varcoe"]) 165 | 166 | W = tf.get_variable(name="W" + str(i) + str(k), initializer=init_W) 167 | b = tf.get_variable(name="b" + str(i) + str(k), initializer=init_b) 168 | 169 | scale_y = act_dict[R["act_name"][i]](tf.matmul(scale_y, W) + b) 170 | 171 | init_W = np.random.randn(units[-2], units[-1]).astype(np.float32) 172 | init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"]) 173 | init_b = np.random.randn(units[-1]).astype(np.float32) 174 | init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"]) 175 | 176 | W = tf.get_variable( 177 | name="W" + str(len(units) - 1) + str(k), initializer=init_W 178 | ) 179 | b = tf.get_variable( 180 | name="b" + str(len(units) - 1) + str(k), initializer=init_b 181 | ) 182 | 183 | scale_y = tf.matmul(scale_y, W) + b 184 | 185 | all_y.append(scale_y) 186 | 187 | y = sum(all_y) 188 | 189 | return y 190 | 191 | 192 | # %% loss and optimizer ("V" for variable) 193 | with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE): 194 | 195 | Vx_it = tf.placeholder(tf.float32, shape=(None, dim)) 196 | Vx_bd = tf.placeholder(tf.float32, shape=(None, dim)) 197 | 198 | Va_it = tf.placeholder(tf.float32, shape=(None, 1)) 199 | Vc_it = tf.placeholder(tf.float32, shape=(None, 1)) 200 | Vf_it = tf.placeholder(tf.float32, shape=(None, 1)) 201 | 202 | Vu_true_bd = tf.placeholder(tf.float32, shape=(None, 1)) 203 | 204 | Vu_it = neural_net(Vx_it) 205 | Vu_bd = neural_net(Vx_bd) 206 | 207 | Vdu_it = tf.gradients(Vu_it, Vx_it)[0] 208 | 209 | Vloss_it = R["area_it"] * tf.reduce_mean( 210 | 1 / 2 * Va_it * tf.reduce_sum(tf.square(Vdu_it), axis=1, keepdims=True) 211 | + 1 / 2 * Vc_it * tf.square(Vu_it) 212 | - Vf_it * Vu_it 213 | ) 214 | Vloss_bd = R["area_bd"] * tf.reduce_mean(tf.square(Vu_bd - Vu_true_bd)) 215 | 216 | Vloss = Vloss_it + R["penalty"] * Vloss_bd 217 | 218 | learning_rate = tf.placeholder_with_default(input=1e-3, shape=[], name="lr") 219 | optimizer = tf.train.AdamOptimizer(learning_rate) 220 | train_op = 
optimizer.minimize(Vloss) 221 | 222 | # %% train model 223 | t0 = time.time() 224 | loss, loss_bd = [], [] 225 | error = [] 226 | lr = 1 * R["learning_rate"] 227 | 228 | config = tf.ConfigProto() 229 | config.gpu_options.allow_growth = True 230 | with tf.Session(config=config) as sess: 231 | 232 | sess.run(tf.global_variables_initializer()) 233 | 234 | for epoch in range(R["total_step"] + 1): 235 | 236 | # generate new data ("N" for number) 237 | if epoch % R["resamp_epoch"] == 0: 238 | 239 | Nx_it = rand_it(R["size_it"]) 240 | Nx_bd = rand_bd(R["size_bd"]) 241 | 242 | Na_it = a(Nx_it) 243 | Nc_it = c(Nx_it) 244 | Nf_it = f(Nx_it) 245 | 246 | Nu_bd = u(Nx_bd) 247 | 248 | # train neural network 249 | lr = (1 - R["lr_decay"]) * lr 250 | 251 | _, Nloss, Nloss_bd = sess.run( 252 | [train_op, Vloss, Vloss_bd], 253 | feed_dict={ 254 | Vx_it: Nx_it, 255 | Vx_bd: Nx_bd, 256 | Va_it: Na_it, 257 | Vc_it: Nc_it, 258 | Vf_it: Nf_it, 259 | Vu_true_bd: Nu_bd, 260 | learning_rate: lr, 261 | }, 262 | ) 263 | 264 | # get test error 265 | R["u_test"] = sess.run(Vu_it, feed_dict={Vx_it: R["x_test"]}) 266 | Nerror = R["area_it"] * np.mean((R["u_test"] - R["u_test_true"]) ** 2) 267 | 268 | loss.append(Nloss) 269 | loss_bd.append(Nloss_bd) 270 | error.append(Nerror) 271 | 272 | # show current state 273 | if epoch % R["plot_epoch"] == 0: 274 | 275 | print("epoch %d, time %.3f" % (epoch, time.time() - t0)) 276 | print("total loss %f, boundary loss %f" % (Nloss, Nloss_bd)) 277 | print("interior error %f" % (Nerror)) 278 | 279 | R["time"] = time.time() - t0 280 | R["loss"] = np.array(loss) 281 | R["loss_bd"] = np.array(loss_bd) 282 | R["error"] = np.array(error) 283 | 284 | plt.figure() 285 | ax = plt.gca() 286 | plt.semilogy(R["error"]) 287 | plt.title('error',fontsize=15) 288 | fntmp = os.path.join(R["record_path"], 'error') 289 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 290 | 291 | plt.figure() 292 | ax = plt.gca() 293 | plt.plot(R["loss"]) 294 | plt.title('loss',fontsize=15) 295 | fntmp 
= os.path.join(R["record_path"], 'loss') 296 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 297 | 298 | plt.figure() 299 | ax = plt.gca() 300 | plt.semilogy(R["loss_bd"]) 301 | plt.title('loss_bd',fontsize=15) 302 | fntmp = os.path.join(R["record_path"], 'loss_bd') 303 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 304 | 305 | u_samp = sess.run(Vu_it, feed_dict={Vx_it: R["x_samp"]}) 306 | R["u_samp_" + str(epoch)] = u_samp 307 | 308 | if R['dimension']==1: 309 | plt.figure() 310 | ax = plt.gca() 311 | plt.plot(R["x_samp"], R["u_samp_"+str(epoch)], 'r--',label='dnn') 312 | plt.plot(R["x_samp"], R["u_samp_true"], 'b-',label='true') 313 | plt.title('u',fontsize=15) 314 | plt.legend(fontsize=18) 315 | fntmp = os.path.join(R["record_path"], 'u_epoch_%s'%(epoch)) 316 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 317 | 318 | 319 | 320 | 321 | # %% save data 322 | data_dir = os.path.join(R["record_path"], "data.pkl") 323 | with open(data_dir, "wb") as file: 324 | pickle.dump(R, file) 325 | 326 | # %% save data 327 | data_dir = os.path.join(R["record_path"], "data.pkl") 328 | with open(data_dir, "wb") as file: 329 | pickle.dump(R, file) 330 | -------------------------------------------------------------------------------- /exp/sin_M/62178/code.py: -------------------------------------------------------------------------------- 1 | # !python3 2 | # coding: utf-8 3 | # author: Ziqi Liu and Zhi-Qin John Xu 4 | # Reference: Ziqi Liu,Wei Cai,Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN) 5 | # for Solving Poisson-Boltzmann Equation in Complex Domains[J]. 2020. 
arXiv:2007.11207 (CiCP) 6 | 7 | import os 8 | import time 9 | import shutil 10 | import pickle 11 | 12 | import tensorflow as tf 13 | import numpy as np 14 | import matplotlib.pyplot as plt 15 | 16 | from my_act import act_dict 17 | 18 | import matplotlib 19 | matplotlib.use('Agg') 20 | Leftp = 0.18 21 | Bottomp = 0.18 22 | Widthp = 0.88 - Leftp 23 | Heightp = 0.9 - Bottomp 24 | pos = [Leftp, Bottomp, Widthp, Heightp] 25 | 26 | def save_fig(pltm, fntmp,fp=0,ax=0,isax=0,iseps=0,isShowPic=0):# Save the figure 27 | if isax==1: 28 | pltm.rc('xtick',labelsize=18) 29 | pltm.rc('ytick',labelsize=18) 30 | ax.set_position(pos, which='both') 31 | fnm = '%s.png'%(fntmp) 32 | pltm.savefig(fnm) 33 | if iseps: 34 | fnm = '%s.eps'%(fntmp) 35 | pltm.savefig(fnm, format='eps', dpi=600) 36 | if fp!=0: 37 | fp.savefig("%s.pdf"%(fntmp), bbox_inches='tight') 38 | if isShowPic==1: 39 | pltm.show() 40 | elif isShowPic==-1: 41 | return 42 | else: 43 | pltm.close() 44 | 45 | 46 | 47 | 48 | # %% parameters 49 | R = {} 50 | 51 | R["seed"] = 0 52 | 53 | R["size_it"] = 1000 54 | R["size_bd"] = 100 55 | 56 | R["penalty"] = 1e3 57 | 58 | R["scale"] = np.array([32, 32, 32, 32, 32, 32]) 59 | 60 | R["act_name"] = ("sin",) * 3 61 | R["hidden_units"] = (30,) * 3 62 | R["learning_rate"] = 1e-4 63 | R["lr_decay"] = 5e-7 64 | 65 | R["varcoe"] = 0.5 66 | 67 | R["total_step"] = 5000 68 | R["resamp_epoch"] = 1 69 | R["plot_epoch"] = 500 70 | 71 | R["record_path"] = os.path.join("..", "exp", "sin_M",str(np.random.randint(10000,99999))) 72 | 73 | # %% generate data in domain 74 | 75 | R["dimension"] = dim = 1 76 | R["area_it"] = 2 77 | R["area_bd"] = 1 78 | 79 | # interior data 80 | def rand_it(size): 81 | x_it = np.random.rand(size, dim) * 2 - 1 82 | return x_it.astype(np.float32) 83 | 84 | 85 | # boundary data 86 | def rand_bd(size): 87 | 88 | x_bd = np.random.rand(size, dim) * 2 - 1 89 | ind = np.random.randint(dim, size=size) 90 | x01 = np.random.randint(2, size=size) * 2 - 1 91 | for ii in range(size): 
92 | x_bd[ii, ind[ii]] = x01[ii] 93 | 94 | return x_bd.astype(np.float32) 95 | 96 | 97 | # %% PDE problem 98 | # - (a(x) u'(x))' + c(x) u(x) = f(x) 99 | R["mu"] = mu = 6 * np.pi 100 | 101 | 102 | def u(x): 103 | u = np.sin(mu * x) 104 | return u.astype(np.float32).reshape((-1, 1)) 105 | 106 | 107 | def a(x): 108 | a = np.ones((x.shape[0], 1)) 109 | return a.astype(np.float32).reshape((-1, 1)) 110 | 111 | 112 | def c(x): 113 | c = np.zeros((x.shape[0], 1)) 114 | return c.astype(np.float32).reshape((-1, 1)) 115 | 116 | 117 | def f(x): 118 | f = mu * mu * np.sin(mu * x) 119 | return f.astype(np.float32).reshape((-1, 1)) 120 | 121 | 122 | # %% save parameters 123 | 124 | # prepare folder 125 | if not os.path.isdir(R["record_path"]): 126 | os.mkdir(R["record_path"]) 127 | 128 | # save current code 129 | save_file = os.path.join(R["record_path"], "code.py") 130 | shutil.copyfile(__file__, save_file) 131 | 132 | # %% set seed 133 | tf.set_random_seed(R["seed"]) 134 | tf.random.set_random_seed(R["seed"]) 135 | np.random.seed(R["seed"]) 136 | 137 | # %% get test points 138 | x_test = np.linspace(-1, 1, 200 +1).reshape((-1, 1)) 139 | 140 | R["x_test"] = x_test.astype(np.float32) 141 | R["u_test_true"] = u(R["x_test"]) 142 | 143 | # %% get sample points 144 | x_samp = np.linspace(-1, 1, 200 +1).reshape((-1, 1)) 145 | 146 | R["x_samp"] = x_samp.astype(np.float32) 147 | R["u_samp_true"] = u(R["x_samp"]) 148 | 149 | # %% normal neural network 150 | units = (dim,) + R["hidden_units"] + (1,) 151 | 152 | 153 | def neural_net(x): 154 | with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE): 155 | 156 | all_y = [] 157 | for k in range(len(R["scale"])): 158 | scale_y = R["scale"][k] * x 159 | 160 | for i in range(len(units) - 2): 161 | init_W = np.random.randn(units[i], units[i + 1]).astype(np.float32) 162 | init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"]) 163 | init_b = np.random.randn(units[i + 1]).astype(np.float32) 164 | init_b = init_b * (2 / (units[i] + units[i + 
1]) ** R["varcoe"]) 165 | 166 | W = tf.get_variable(name="W" + str(i) + str(k), initializer=init_W) 167 | b = tf.get_variable(name="b" + str(i) + str(k), initializer=init_b) 168 | 169 | scale_y = act_dict[R["act_name"][i]](tf.matmul(scale_y, W) + b) 170 | 171 | init_W = np.random.randn(units[-2], units[-1]).astype(np.float32) 172 | init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"]) 173 | init_b = np.random.randn(units[-1]).astype(np.float32) 174 | init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"]) 175 | 176 | W = tf.get_variable( 177 | name="W" + str(len(units) - 1) + str(k), initializer=init_W 178 | ) 179 | b = tf.get_variable( 180 | name="b" + str(len(units) - 1) + str(k), initializer=init_b 181 | ) 182 | 183 | scale_y = tf.matmul(scale_y, W) + b 184 | 185 | all_y.append(scale_y) 186 | 187 | y = sum(all_y) 188 | 189 | return y 190 | 191 | 192 | # %% loss and optimizer ("V" for variable) 193 | with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE): 194 | 195 | Vx_it = tf.placeholder(tf.float32, shape=(None, dim)) 196 | Vx_bd = tf.placeholder(tf.float32, shape=(None, dim)) 197 | 198 | Va_it = tf.placeholder(tf.float32, shape=(None, 1)) 199 | Vc_it = tf.placeholder(tf.float32, shape=(None, 1)) 200 | Vf_it = tf.placeholder(tf.float32, shape=(None, 1)) 201 | 202 | Vu_true_bd = tf.placeholder(tf.float32, shape=(None, 1)) 203 | 204 | Vu_it = neural_net(Vx_it) 205 | Vu_bd = neural_net(Vx_bd) 206 | 207 | Vdu_it = tf.gradients(Vu_it, Vx_it)[0] 208 | 209 | Vloss_it = R["area_it"] * tf.reduce_mean( 210 | 1 / 2 * Va_it * tf.reduce_sum(tf.square(Vdu_it), axis=1, keepdims=True) 211 | + 1 / 2 * Vc_it * tf.square(Vu_it) 212 | - Vf_it * Vu_it 213 | ) 214 | Vloss_bd = R["area_bd"] * tf.reduce_mean(tf.square(Vu_bd - Vu_true_bd)) 215 | 216 | Vloss = Vloss_it + R["penalty"] * Vloss_bd 217 | 218 | learning_rate = tf.placeholder_with_default(input=1e-3, shape=[], name="lr") 219 | optimizer = tf.train.AdamOptimizer(learning_rate) 220 | train_op = 
optimizer.minimize(Vloss) 221 | 222 | # %% train model 223 | t0 = time.time() 224 | loss, loss_bd = [], [] 225 | error = [] 226 | lr = 1 * R["learning_rate"] 227 | 228 | config = tf.ConfigProto() 229 | config.gpu_options.allow_growth = True 230 | with tf.Session(config=config) as sess: 231 | 232 | sess.run(tf.global_variables_initializer()) 233 | 234 | for epoch in range(R["total_step"] + 1): 235 | 236 | # generate new data ("N" for number) 237 | if epoch % R["resamp_epoch"] == 0: 238 | 239 | Nx_it = rand_it(R["size_it"]) 240 | Nx_bd = rand_bd(R["size_bd"]) 241 | 242 | Na_it = a(Nx_it) 243 | Nc_it = c(Nx_it) 244 | Nf_it = f(Nx_it) 245 | 246 | Nu_bd = u(Nx_bd) 247 | 248 | # train neural network 249 | lr = (1 - R["lr_decay"]) * lr 250 | 251 | _, Nloss, Nloss_bd = sess.run( 252 | [train_op, Vloss, Vloss_bd], 253 | feed_dict={ 254 | Vx_it: Nx_it, 255 | Vx_bd: Nx_bd, 256 | Va_it: Na_it, 257 | Vc_it: Nc_it, 258 | Vf_it: Nf_it, 259 | Vu_true_bd: Nu_bd, 260 | learning_rate: lr, 261 | }, 262 | ) 263 | 264 | # get test error 265 | R["u_test"] = sess.run(Vu_it, feed_dict={Vx_it: R["x_test"]}) 266 | Nerror = R["area_it"] * np.mean((R["u_test"] - R["u_test_true"]) ** 2) 267 | 268 | loss.append(Nloss) 269 | loss_bd.append(Nloss_bd) 270 | error.append(Nerror) 271 | 272 | # show current state 273 | if epoch % R["plot_epoch"] == 0: 274 | 275 | print("epoch %d, time %.3f" % (epoch, time.time() - t0)) 276 | print("total loss %f, boundary loss %f" % (Nloss, Nloss_bd)) 277 | print("interior error %f" % (Nerror)) 278 | 279 | R["time"] = time.time() - t0 280 | R["loss"] = np.array(loss) 281 | R["loss_bd"] = np.array(loss_bd) 282 | R["error"] = np.array(error) 283 | 284 | plt.figure() 285 | ax = plt.gca() 286 | plt.semilogy(R["error"]) 287 | plt.title('error',fontsize=15) 288 | fntmp = os.path.join(R["record_path"], 'error') 289 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 290 | 291 | plt.figure() 292 | ax = plt.gca() 293 | plt.plot(R["loss"]) 294 | plt.title('loss',fontsize=15) 295 | fntmp 
= os.path.join(R["record_path"], 'loss') 296 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 297 | 298 | plt.figure() 299 | ax = plt.gca() 300 | plt.semilogy(R["loss_bd"]) 301 | plt.title('loss_bd',fontsize=15) 302 | fntmp = os.path.join(R["record_path"], 'loss_bd') 303 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 304 | 305 | u_samp = sess.run(Vu_it, feed_dict={Vx_it: R["x_samp"]}) 306 | R["u_samp_" + str(epoch)] = u_samp 307 | 308 | if R['dimension']==1: 309 | plt.figure() 310 | ax = plt.gca() 311 | plt.plot(R["x_samp"], R["u_samp_"+str(epoch)], 'r--',label='dnn') 312 | plt.plot(R["x_samp"], R["u_samp_true"], 'b-',label='true') 313 | plt.title('u',fontsize=15) 314 | plt.legend(fontsize=18) 315 | fntmp = os.path.join(R["record_path"], 'u_epoch_%s'%(epoch)) 316 | save_fig(plt,fntmp,ax=ax,isax=1,iseps=0) 317 | 318 | 319 | 320 | 321 | # %% save data 322 | data_dir = os.path.join(R["record_path"], "data.pkl") 323 | with open(data_dir, "wb") as file: 324 | pickle.dump(R, file) 325 | 326 | # %% save data 327 | data_dir = os.path.join(R["record_path"], "data.pkl") 328 | with open(data_dir, "wb") as file: 329 | pickle.dump(R, file) 330 | -------------------------------------------------------------------------------- /notes/notes.tex: -------------------------------------------------------------------------------- 1 | \documentclass[12pt,a4paper]{article} 2 | 3 | \usepackage[utf8]{inputenc} 4 | \usepackage{ctex} 5 | \usepackage{amsmath,amsfonts,amssymb} 6 | \usepackage{abstract,appendix} 7 | \usepackage{makeidx,hyperref} 8 | \usepackage{graphicx,epsfig,subfig} 9 | \usepackage{geometry} 10 | \usepackage{color,xcolor} 11 | \usepackage{listings} 12 | 13 | \lstset{ 14 | basicstyle=\footnotesize, % code font 15 | backgroundcolor=\color[RGB]{239,239,239}, % background color 16 | keywordstyle=\color[RGB]{40,40,255}, % keyword color 17 | commentstyle=\color[RGB]{0,96,96}, % comment style 18 | stringstyle=\color[RGB]{128,0,0}, % string style 19 | language=python, % language of the listings 20 | } 21 | 22 | \geometry{scale=0.8} 23 | 24 
| %\setlength{\lineskip}{\baselineskip} 25 | \setlength{\parskip}{0.5\baselineskip} 26 | 27 | \title{A code template for solving elliptic equations by deep learning (Mscale networks) with the Deep-Ritz method} 28 | \author{Ziqi Liu, Zhi-Qin John Xu\thanks{xuzhiqin@sjtu.edu.cn}} 29 | \date{\today} 30 | 31 | \begin{document} 32 | 33 | \maketitle 34 | 35 | According to the F-Principle of neural networks \cite{fprinciple}, a network tends to fit the low-frequency components of the training data first. In some special settings, however (e.g., when using neural networks to solve differential equations), the high-frequency components of the target function must also be fitted. These experiments aim to find a special network architecture that fits the high-frequency components of the target function faster. 36 | 37 | Here a new network architecture, Mscale, is presented for solving elliptic equations. 38 | 39 | \section*{The Ritz method for elliptic equations} 40 | 41 | The general form of an elliptic equation is 42 | \begin{align*} 43 | -\nabla \cdot (a(x) \nabla u(x)) + c(x) u(x) = f(x) \quad x \in \Omega \\ 44 | u(x) = u_0(x) \quad x \in \partial \Omega 45 | \end{align*} 46 | 47 | By a standard derivation \cite{deep-ritz}, the equation can be converted into the minimization of a functional 48 | \begin{align*} 49 | \min_{u} J(u) & = \int_{\Omega} \frac{1}{2} a(x) |\nabla u(x)|^2 + \frac{1}{2} c(x) |u(x)|^2 - f(x) u(x) \ dx \\ 50 | \text{s.t.} \quad u(x) & = u_0(x) \quad x \in \partial \Omega 51 | \end{align*} 52 | 53 | This is a constrained optimization problem; by adding a penalty term we turn it, approximately, into an unconstrained one: 54 | \begin{align*} 55 | \min_{u} \int_{\Omega} \frac{1}{2} a(x) |\nabla u(x)|^2 + \frac{1}{2} c(x) |u(x)|^2 - f(x) u(x) \ dx + \beta \int_{\partial \Omega} |u(x) - u_0(x)|^2 \ dx 56 | \end{align*} 57 | 58 | Representing the unknown function by a neural network $N(x; \theta)$ and evaluating the integrals by the Monte Carlo method yields the final optimization problem 59 | \begin{align*} 60 | \min_{\theta} L(\theta) & = \frac{|\Omega|}{|S_1|} \sum_{x_i \in S_1} \frac{1}{2} a(x_i) |\nabla N(x_i; \theta)|^2 + \frac{1}{2} c(x_i) |N(x_i; \theta)|^2 - f(x_i) N(x_i; \theta) \\ 61 | & + \beta \frac{|\partial \Omega|}{|S_2|} \sum_{x_i \in S_2} |N(x_i; \theta) - u_0(x_i)|^2 62 | \end{align*} 63 | where $S_1$ and $S_2$ are sets of points sampled at random in $\Omega$ and on $\partial \Omega$, respectively. 64 | 65 | \section*{Neural network architecture} 66 | 67 | An ordinary neural network is a function $N(x): \mathbb{R}^d \rightarrow \mathbb{R}$ built by composing linear and nonlinear maps: 68 | \begin{align*} 69 | N(x; \theta) = W^{[L]} \sigma(\cdots W^{[2]} \sigma(W^{[1]} x + b^{[1]}) + b^{[2]} \cdots) + b^{[L]} 70 | \end{align*} 71 | where $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ is the ``activation function'', which acts on a vector component-wise; $L$ is the ``depth'' (number of layers); and $W^{[l]}, b^{[l]}$ are the trainable parameters. 72 | 73 | 
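As a concrete illustration of this composition, such a network can be evaluated directly in NumPy. The sketch below is a minimal, hypothetical example (the function \textbf{eval\_net} and its randomly drawn parameters are for illustration only and are not part of the code files described later); it uses $\sigma = \sin$ in the hidden layers and a purely linear output layer:

\begin{lstlisting}
import numpy as np

def eval_net(x, Ws, bs, act=np.sin):
    # x: (n, d) batch of points; Ws[l]: (n_l, n_{l+1}); bs[l]: (n_{l+1},)
    y = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        y = act(y @ W + b)          # hidden layer: linear map + activation
    return y @ Ws[-1] + bs[-1]      # output layer: linear map only

units = [1, 30, 30, 1]              # d = 1, two hidden layers, scalar output
Ws = [np.random.randn(m, n) for m, n in zip(units[:-1], units[1:])]
bs = [np.random.randn(n) for n in units[1:]]
y = eval_net(np.linspace(-1, 1, 5).reshape(-1, 1), Ws, bs)  # shape (5, 1)
\end{lstlisting}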
Here $n^{[l]}$ denotes the ``width'' of each layer, with $n^{[1]} = d, n^{[L+1]} = 1$, and 74 | \begin{align*} 75 | W^{[l]} \in \mathbb{R}^{n^{[l]} \times n^{[l+1]}}, \quad b^{[l]} \in \mathbb{R}^{n^{[l+1]}} 76 | \end{align*} 77 | 78 | \section*{The Mscale architecture} 79 | 80 | The network architecture \cite{mscale} is shown in Figure \ref{net}. The concrete implementation is given below. 81 | 82 | \begin{figure}[h] 83 | \centering 84 | \includegraphics[width=0.5\linewidth]{MscaleNet2} 85 | \caption{The Mscale network architecture} 86 | \label{net} 87 | \end{figure} 88 | 89 | \section*{Implementation details} 90 | 91 | Code files: 92 | 93 | \begin{itemize} 94 | \item \textbf{ritzN.py}: solves the elliptic equation with an ordinary (Normal) network. 95 | \item \textbf{ritzM.py}: solves the elliptic equation with an Mscale network. 96 | \item \textbf{plot\_result.py}: displays the network results and produces the figures. 97 | \item \textbf{my\_act.py}: defines the activation functions that may be used; to use a new activation function, add it here. The keys of the \textbf{act\_dict} variable are the ``names'' of the activation functions, and its values are the corresponding function handles. 98 | \end{itemize} 99 | 100 | \textbf{Runtime environment: python 3.6 + tensorflow 1.16} 101 | 102 | \subsection*{Parameter settings} 103 | 104 | In the code, we first set all the parameters in one place, so that they are easy to modify. All parameters and results are recorded in the variable \textbf{R}. 105 | 106 | \begin{lstlisting} 107 | R = {} 108 | 109 | R["seed"] = 0 110 | 111 | R["size_it"] = 1000 112 | R["size_bd"] = 100 113 | 114 | R["penalty"] = 1e3 115 | 116 | R["act_name"] = ("sin",) * 3 117 | R["hidden_units"] = (180,) * 3 118 | R["learning_rate"] = 1e-4 119 | R["lr_decay"] = 5e-7 120 | 121 | R["varcoe"] = 0.5 122 | 123 | R["total_step"] = 5000 124 | R["resamp_epoch"] = 1 125 | R["plot_epoch"] = 500 126 | 127 | R["record_path"] = os.path.join("..", "exp", "sin_N") 128 | \end{lstlisting} 129 | 130 | The parameters have the following meanings: 131 | 132 | \begin{itemize} 133 | \item \textbf{seed}: random seed, which guarantees that two runs of the code give exactly the same result. (When necessary, this too can be treated as a tunable parameter.) 134 | \item \textbf{size\_it}: number of random points generated inside the domain $\Omega$, i.e., $|S_1|$. 135 | \item \textbf{size\_bd}: number of random points generated on the boundary $\partial \Omega$, i.e., $|S_2|$. 136 | \item \textbf{penalty}: penalty coefficient, the $\beta$ in the formulas. 137 | \item \textbf{scale}: scale factors of the Mscale network. (The Normal network has no such parameter.) 138 | \item \textbf{act\_name}: list of length $N$, the activation function of each layer. 139 | \item \textbf{hidden\_units}: list of length $N$, the width of each layer. 140 | \item \textbf{learning\_rate}: learning rate. (Generally it should not be too large.) 141 | \item \textbf{lr\_decay}: decay rate of the learning rate. 142 | 
\item \textbf{varcoe}: initialization parameter. 143 | \item \textbf{total\_step}: total number of training steps. 144 | \item \textbf{resamp\_epoch}: resampling interval. Every X steps the points in $S_1,S_2$ are regenerated at random. X=1 means the points are resampled at every step; X larger than the total number of steps means the training samples stay fixed. 145 | \item \textbf{plot\_epoch}: starting from step 0, an image of the numerical solution is recorded every X steps, and the current state is printed at the same time. 146 | \item \textbf{record\_path}: folder path for the records. When the script runs, the source file is copied into \textbf{code.py} in this folder, so that the experiment can be reproduced. The results and parameters are saved into \textbf{data.pkl} in the same folder for plotting and presentation. 147 | \end{itemize} 148 | 149 | \subsection*{Defining the computational domain} 150 | 151 | This part defines the computational domain $\Omega$. \textbf{To solve an equation on a more complicated domain, modify this part of the code.} 152 | 153 | \begin{lstlisting} 154 | R["dimension"] = dim = 2 155 | R["area_it"] = 4 156 | R["area_bd"] = 8 157 | 158 | # interior data 159 | def rand_it(size): 160 | x_it = np.random.rand(size, dim) * 2 - 1 161 | return x_it.astype(np.float32) 162 | 163 | # boundary data 164 | def rand_bd(size): 165 | 166 | x_bd = np.random.rand(size, dim) * 2 - 1 167 | ind = np.random.randint(dim, size=size) 168 | x01 = np.random.randint(2, size=size) * 2 - 1 169 | for ii in range(size): 170 | x_bd[ii, ind[ii]] = x01[ii] 171 | return x_bd.astype(np.float32) 172 | \end{lstlisting} 173 | 174 | Here \textbf{dim} is the spatial dimension of the equation, \textbf{area\_it} is the area (volume) of $\Omega$, and \textbf{area\_bd} is the area of $\partial \Omega$. 175 | 176 | The function \textbf{rand\_it} generates \textbf{uniformly distributed} random points in $\Omega$; it takes the required number of random points \textbf{size} as input and returns an array of shape $size \times dim$. 177 | 178 | The function \textbf{rand\_bd} generates \textbf{uniformly distributed} random points on the boundary $\partial \Omega$, with the same input and output conventions. 179 | 180 | This defines the computational domain; here the domain is $[-1,1]^2$. To solve on another domain, simply re-implement these functions. 181 | 182 | \subsection*{Defining the equation} 183 | 184 | This part defines the known functions $a(x),c(x),f(x)$ of the equation, and the exact solution (boundary condition) $u(x)$. If the exact solution is unknown, $u(x)$ only provides the boundary condition. \textbf{To solve a different equation, modify this part of the code.} 185 | 186 | \begin{lstlisting} 187 | # PDE problem 188 | # - (a(x) u'(x))' + c(x) u(x) = f(x) 189 | R["mu"] = mu = 6 * np.pi 190 | 191 | def u(x): 192 | u = np.sin(mu * x) 193 | return u.astype(np.float32).reshape((-1, 1)) 194 | 195 | def a(x): 196 | a = np.ones((x.shape[0], 1)) 197 | return a.astype(np.float32).reshape((-1, 1)) 198 | 199 | def c(x): 200 | c = np.zeros((x.shape[0], 1)) 201 | return 
c.astype(np.float32).reshape((-1, 1)) 202 | 203 | def f(x): 204 | f = mu * mu * np.sin(mu * x) 205 | return f.astype(np.float32).reshape((-1, 1)) 206 | \end{lstlisting} 207 | 208 | 需要注意的是:这些函数,输入都是一个大小为$size \times dim$的数组,每一行代表一个采样点。返回的是大小为$size \times 1$的数组。(这里的维度不一致会报错) 209 | 210 | \subsection*{保存参数} 211 | 212 | 把源代码文件复制到文件夹下的\textbf{code.py}文件中,以便重复实验。设定随机数种子。 213 | 214 | \begin{lstlisting} 215 | # save parameters 216 | 217 | # prepare folder 218 | if not os.path.isdir(R["record_path"]): 219 | os.mkdir(R["record_path"]) 220 | 221 | # save current code 222 | save_file = os.path.join(R["record_path"], "code.py") 223 | shutil.copyfile(__file__, save_file) 224 | 225 | # set seed 226 | tf.set_random_seed(R["seed"]) 227 | tf.random.set_random_seed(R["seed"]) 228 | np.random.seed(R["seed"]) 229 | \end{lstlisting} 230 | 231 | \subsection*{用于画图和计算误差的采样点} 232 | 233 | 这里的\textbf{x\_test}是用于计算误差的采样点,\textbf{u\_test\_true}是这些点上的精确解的值。计算数值解和精确解误差的时候就在这些点上计算。 234 | 235 | \textbf{x\_samp}是用于画图的采样点。\textbf{u\_samp\_true}是这些点上的精确解的值。 236 | 237 | \textbf{如果我们不知道精确解,而是通过差分方法或者其它方法生成的参考解。这里的\textbf{u\_test\_true}就要从文件里面读取出来。} 238 | 239 | \begin{lstlisting} 240 | # get test points 241 | x_test = np.linspace(-1, 1, 200 +1).reshape((-1, 1)) 242 | 243 | R["x_test"] = x_test.astype(np.float32) 244 | R["u_test_true"] = u(R["x_test"]) 245 | 246 | # get sample points 247 | x_samp = np.linspace(-1, 1, 200 +1).reshape((-1, 1)) 248 | 249 | R["x_samp"] = x_samp.astype(np.float32) 250 | R["u_samp_true"] = u(R["x_samp"]) 251 | \end{lstlisting} 252 | 253 | 需要注意的是:这里的\textbf{x\_test}和\textbf{x\_samp}都是$size \times dim$的数组,每一行代表一个采样点。这里的\textbf{u\_test\_true}和\textbf{u\_samp\_true}都是$size \times 1$的数组。在画图时要另外调整。 254 | 255 | 之后在训练的过程中,还会有\textbf{u\_test\_X}和\textbf{u\_samp\_X}变量,代表训练到第X步的数值解。 256 | 257 | \subsection*{神经网络} 258 | 259 | 用tensorflow实现深度学习的网络结构。 260 | 261 | \textbf{如果要换网络结构或者参数初始化,就要改这段代码。} 262 | 263 | 下面这段代码实现了一个普通的网络(Normal),\textbf{init\_W}和\textbf{init\_b}是变量的初始值。 264 | 265 | 
\begin{lstlisting}
units = (dim,) + R["hidden_units"] + (1,)

def neural_net(x):
    with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

        y = x

        for i in range(len(units) - 2):
            init_W = np.random.randn(units[i], units[i + 1]).astype(np.float32)
            init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"])
            init_b = np.random.randn(units[i + 1]).astype(np.float32)
            init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"])

            W = tf.get_variable(name="W" + str(i), initializer=init_W)
            b = tf.get_variable(name="b" + str(i), initializer=init_b)

            y = act_dict[R["act_name"][i]](tf.matmul(y, W) + b)

        # output layer, scaled by the last two widths
        init_W = np.random.randn(units[-2], units[-1]).astype(np.float32)
        init_W = init_W * (2 / (units[-2] + units[-1]) ** R["varcoe"])
        init_b = np.random.randn(units[-1]).astype(np.float32)
        init_b = init_b * (2 / (units[-2] + units[-1]) ** R["varcoe"])

        W = tf.get_variable(name="W" + str(len(units) - 1), initializer=init_W)
        b = tf.get_variable(name="b" + str(len(units) - 1), initializer=init_b)

        y = tf.matmul(y, W) + b

    return y
\end{lstlisting}

The following code implements the Mscale network; the idea is the same.

\begin{lstlisting}
units = (dim,) + R["hidden_units"] + (1,)

def neural_net(x):
    with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

        all_y = []
        for k in range(len(R["scale"])):
            scale_y = R["scale"][k] * x

            for i in range(len(units) - 2):
                init_W = np.random.randn(units[i], units[i + 1]).astype(np.float32)
                init_W = init_W * (2 / (units[i] + units[i + 1]) ** R["varcoe"])
                init_b = np.random.randn(units[i + 1]).astype(np.float32)
                init_b = init_b * (2 / (units[i] + units[i + 1]) ** R["varcoe"])

                W = tf.get_variable(name="W" + str(i) + str(k),
                                    initializer=init_W)
                b = tf.get_variable(name="b" + str(i) + str(k),
                                    initializer=init_b)

                scale_y = act_dict[R["act_name"][i]](tf.matmul(scale_y, W) + b)

            init_W = np.random.randn(units[-2], units[-1]).astype(np.float32)
            init_W = init_W * (2 / (units[-2] + units[-1]) ** R["varcoe"])
            init_b = np.random.randn(units[-1]).astype(np.float32)
            init_b = init_b * (2 / (units[-2] + units[-1]) ** R["varcoe"])

            W = tf.get_variable(name="W" + str(len(units) - 1) + str(k),
                                initializer=init_W)
            b = tf.get_variable(name="b" + str(len(units) - 1) + str(k),
                                initializer=init_b)

            scale_y = tf.matmul(scale_y, W) + b

            all_y.append(scale_y)

        y = sum(all_y)

    return y
\end{lstlisting}

\subsection*{The loss function}

Define the loss function. A leading ``V'' on a variable name means it is a TensorFlow graph variable; a leading ``N'' means it holds that variable's current numerical value.

\textbf{If the equation is not elliptic, or the form of the loss function changes, modify this code.}

\begin{lstlisting}
# loss and optimizer ("V" for variable)
with tf.variable_scope("vscope", reuse=tf.AUTO_REUSE):

    Vx_it = tf.placeholder(tf.float32, shape=(None, dim))
    Vx_bd = tf.placeholder(tf.float32, shape=(None, dim))

    Va_it = tf.placeholder(tf.float32, shape=(None, 1))
    Vc_it = tf.placeholder(tf.float32, shape=(None, 1))
    Vf_it = tf.placeholder(tf.float32, shape=(None, 1))

    Vu_true_bd = tf.placeholder(tf.float32, shape=(None, 1))

    Vu_it = neural_net(Vx_it)
    Vu_bd = neural_net(Vx_bd)

    Vdu_it = tf.gradients(Vu_it, Vx_it)[0]

    Vloss_it = R["area_it"] * tf.reduce_mean(
        1 / 2 * Va_it * tf.reduce_sum(tf.square(Vdu_it), axis=1, keepdims=True)
        + 1 / 2 * Vc_it * tf.square(Vu_it)
        - Vf_it * Vu_it
    )
    Vloss_bd = R["area_bd"] * tf.reduce_mean(tf.square(Vu_bd - Vu_true_bd))

    Vloss = Vloss_it + R["penalty"] * Vloss_bd

    learning_rate = tf.placeholder_with_default(input=1e-3, shape=[], name="lr")
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.minimize(Vloss)
\end{lstlisting}

\subsection*{Training}

\textbf{This code usually needs no major changes, but it must stay consistent with any modifications made above.}

Randomly generate points in the domain and evaluate the known functions at them.
\begin{lstlisting}
# generate new data ("N" for number)
if epoch % R["resamp_epoch"] == 0:
    Nx_it = rand_it(R["size_it"])
    Nx_bd = rand_bd(R["size_bd"])
    Na_it = a(Nx_it)
    Nc_it = c(Nx_it)
    Nf_it = f(Nx_it)
    Nu_bd = u(Nx_bd)
\end{lstlisting}

Feed the generated data to the network and train for one step.
\begin{lstlisting}
_, Nloss, Nloss_bd = sess.run(
    [train_op, Vloss, Vloss_bd],
    feed_dict={
        Vx_it: Nx_it,
        Vx_bd: Nx_bd,
        Va_it: Na_it,
        Vc_it: Nc_it,
        Vf_it: Nf_it,
        Vu_true_bd: Nu_bd,
        learning_rate: lr,
    },
)
\end{lstlisting}

Evaluate the network at the test points and compute the error between the numerical and exact solutions.
\begin{lstlisting}
R["u_test"] = sess.run(Vu_it, feed_dict={Vx_it: R["x_test"]})
Nerror = R["area_it"] * np.mean((R["u_test"] - R["u_test_true"]) ** 2)
\end{lstlisting}

Every \textbf{plot\_epoch} steps, record the current numerical solution and print the current error.
\begin{lstlisting}
if epoch % R["plot_epoch"] == 0:
    print("epoch %d, time %.3f" % (epoch, time.time() - t0))
    print("total loss %f, boundary loss %f" % (Nloss, Nloss_bd))
    print("interior error %f" % (Nerror))

    u_samp = sess.run(Vu_it, feed_dict={Vx_it: R["x_samp"]})
    R["u_samp_" + str(epoch)] = u_samp
\end{lstlisting}

Finally, save all parameters and results.
\begin{lstlisting}
# save data
data_dir = os.path.join(R["record_path"], "data.pkl")
with open(data_dir, "wb") as file:
    pickle.dump(R, file)
\end{lstlisting}

\section*{Test case}

With the parameters given above, the equation is solved by an ordinary fully connected network of size (180,180,180) (Normal), and by an Mscale network consisting of six subnetworks of size (30,30,30) with scale coefficients (1,2,4,8,16,32). The activation function is $\sin(x)$.
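The Mscale configuration just described can be cross-checked with a small NumPy sketch (an illustration under our own naming, e.g.\ \textbf{mscale\_net}, not the repository's TensorFlow code): each of the six (30,30,30) subnetworks first multiplies the input by its scale coefficient, applies sin-activated hidden layers, and the six scalar outputs are summed.

```python
import numpy as np

# Hedged sketch of the Mscale forward pass described above:
# six subnetworks of widths (30, 30, 30), input scales (1, 2, 4, 8, 16, 32),
# sin activation on hidden layers, outputs summed.
rng = np.random.default_rng(0)

dim = 1
hidden = (30, 30, 30)
scales = (1, 2, 4, 8, 16, 32)
units = (dim,) + hidden + (1,)
varcoe = 0.5

def init_layer(n_in, n_out):
    # same scaling rule as the notes: N(0,1) * 2 / (n_in + n_out)**varcoe
    W = rng.standard_normal((n_in, n_out)) * (2 / (n_in + n_out) ** varcoe)
    b = rng.standard_normal(n_out) * (2 / (n_in + n_out) ** varcoe)
    return W, b

# one independent set of weights per scale
params = [[init_layer(units[i], units[i + 1]) for i in range(len(units) - 1)]
          for _ in scales]

def mscale_net(x):
    # x: (size, dim) array; returns a (size, 1) array
    total = 0.0
    for k, s in enumerate(scales):
        y = s * x                      # scale the input
        for W, b in params[k][:-1]:
            y = np.sin(y @ W + b)      # sin activation on hidden layers
        W, b = params[k][-1]
        total = total + (y @ W + b)    # linear output layer, summed over scales
    return total

x = np.linspace(-1, 1, 5).reshape(-1, 1)
out = mscale_net(x)                    # out.shape == (5, 1)
```

The scaled copies $s \cdot x$ let the low-frequency subnetworks and the high-frequency ones see the same data at different effective frequencies, which is the mechanism the Normal network lacks.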
The exact solution of the equation is $u = \sin 6\pi x$.

For the Mscale run (scales 1--32), Figure~\ref{proc} shows how the numerical solution evolves during training. The solid blue line is the exact solution; the dashed red line is the numerical solution.
\begin{figure}[h]
    \centering
    \subfloat[step 0]{\includegraphics[width=0.24\linewidth]{s0}}
    \subfloat[step 500]{\includegraphics[width=0.24\linewidth]{s500}}
    \subfloat[step 1000]{\includegraphics[width=0.24\linewidth]{s1000}}
    \subfloat[step 5000]{\includegraphics[width=0.24\linewidth]{s5000}}
    \caption{Training progress}
    \label{proc}
\end{figure}

Figure~\ref{error} compares how the error of the two methods decreases. The horizontal axis is the training step; the vertical axis is the test error.
\begin{figure}[h]
    \centering
    \includegraphics[width=0.3\linewidth]{error}
    \caption{Error decay curves}
    \label{error}
\end{figure}

\begin{thebibliography}{99}
\bibitem{fprinciple} Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, and Zheng Ma. Frequency Principle: Fourier analysis sheds light on deep neural networks. Communications in Computational Physics, 2020. arXiv:1901.06523.
\bibitem{deep-ritz} Weinan E and Bing Yu. The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 2018, 6(1):1--12.
\bibitem{mscale} Ziqi Liu, Wei Cai, and Zhi-Qin John Xu. Multi-scale Deep Neural Network (MscaleDNN) for solving Poisson-Boltzmann equation in complex domains. Communications in Computational Physics, 2020. arXiv:2007.11207.
\end{thebibliography}

\end{document}
-------------------------------------------------------------------------------- /notes/notes.log: --------------------------------------------------------------------------------
640 | ) 641 | \Hy@SectionHShift=\skip69 642 | ) 643 | (c:/texlive/2019/texmf-dist/tex/latex/graphics/graphicx.sty 644 | Package: graphicx 2017/06/01 v1.1a Enhanced LaTeX Graphics (DPC,SPQR) 645 | 646 | (c:/texlive/2019/texmf-dist/tex/latex/graphics/graphics.sty 647 | Package: graphics 2017/06/25 v1.2c Standard LaTeX Graphics (DPC,SPQR) 648 | 649 | (c:/texlive/2019/texmf-dist/tex/latex/graphics/trig.sty 650 | Package: trig 2016/01/03 v1.10 sin cos tan (DPC) 651 | ) 652 | (c:/texlive/2019/texmf-dist/tex/latex/graphics-cfg/graphics.cfg 653 | File: graphics.cfg 2016/06/04 v1.11 sample graphics configuration 654 | ) 655 | Package graphics Info: Driver file: xetex.def on input line 99. 656 | 657 | (c:/texlive/2019/texmf-dist/tex/latex/graphics-def/xetex.def 658 | File: xetex.def 2017/06/24 v5.0h Graphics/color driver for xetex 659 | )) 660 | \Gin@req@height=\dimen189 661 | \Gin@req@width=\dimen190 662 | ) 663 | (c:/texlive/2019/texmf-dist/tex/latex/graphics/epsfig.sty 664 | Package: epsfig 2017/06/25 v1.7b (e)psfig emulation (SPQR) 665 | \epsfxsize=\dimen191 666 | \epsfysize=\dimen192 667 | ) 668 | (c:/texlive/2019/texmf-dist/tex/latex/subfig/subfig.sty 669 | Package: subfig 2005/06/28 ver: 1.3 subfig package 670 | 671 | (c:/texlive/2019/texmf-dist/tex/latex/caption/caption.sty 672 | Package: caption 2018/10/06 v3.3-154 Customizing captions (AR) 673 | 674 | (c:/texlive/2019/texmf-dist/tex/latex/caption/caption3.sty 675 | Package: caption3 2018/09/12 v1.8c caption3 kernel (AR) 676 | Package caption3 Info: TeX engine: e-TeX on input line 64. 677 | \captionmargin=\dimen193 678 | \captionmargin@=\dimen194 679 | \captionwidth=\dimen195 680 | \caption@tempdima=\dimen196 681 | \caption@indent=\dimen197 682 | \caption@parindent=\dimen198 683 | \caption@hangindent=\dimen199 684 | ) 685 | \c@caption@flags=\count299 686 | \c@ContinuedFloat=\count300 687 | Package caption Info: hyperref package is loaded. 
688 | ) 689 | \c@KVtest=\count301 690 | \sf@farskip=\skip70 691 | \sf@captopadj=\dimen256 692 | \sf@capskip=\skip71 693 | \sf@nearskip=\skip72 694 | \c@subfigure=\count302 695 | \c@subfigure@save=\count303 696 | \c@lofdepth=\count304 697 | \c@subtable=\count305 698 | \c@subtable@save=\count306 699 | \c@lotdepth=\count307 700 | \sf@top=\skip73 701 | \sf@bottom=\skip74 702 | ) 703 | (c:/texlive/2019/texmf-dist/tex/latex/geometry/geometry.sty 704 | Package: geometry 2018/04/16 v5.8 Page Geometry 705 | \Gm@cnth=\count308 706 | \Gm@cntv=\count309 707 | \c@Gm@tempcnt=\count310 708 | \Gm@bindingoffset=\dimen257 709 | \Gm@wd@mp=\dimen258 710 | \Gm@odd@mp=\dimen259 711 | \Gm@even@mp=\dimen260 712 | \Gm@layoutwidth=\dimen261 713 | \Gm@layoutheight=\dimen262 714 | \Gm@layouthoffset=\dimen263 715 | \Gm@layoutvoffset=\dimen264 716 | \Gm@dimlist=\toks23 717 | ) 718 | (c:/texlive/2019/texmf-dist/tex/latex/graphics/color.sty 719 | Package: color 2016/07/10 v1.1e Standard LaTeX Color (DPC) 720 | 721 | (c:/texlive/2019/texmf-dist/tex/latex/graphics-cfg/color.cfg 722 | File: color.cfg 2016/01/02 v1.6 sample color configuration 723 | ) 724 | Package color Info: Driver file: xetex.def on input line 147. 725 | ) 726 | (c:/texlive/2019/texmf-dist/tex/latex/xcolor/xcolor.sty 727 | Package: xcolor 2016/05/11 v2.12 LaTeX color extensions (UK) 728 | 729 | (c:/texlive/2019/texmf-dist/tex/latex/graphics-cfg/color.cfg 730 | File: color.cfg 2016/01/02 v1.6 sample color configuration 731 | ) 732 | Package xcolor Info: Driver file: xetex.def on input line 225. 733 | LaTeX Info: Redefining \color on input line 709. 734 | Package xcolor Info: Model `cmy' substituted by `cmy0' on input line 1348. 735 | Package xcolor Info: Model `RGB' extended on input line 1364. 736 | Package xcolor Info: Model `HTML' substituted by `rgb' on input line 1366. 737 | Package xcolor Info: Model `Hsb' substituted by `hsb' on input line 1367. 738 | Package xcolor Info: Model `tHsb' substituted by `hsb' on input line 1368. 
739 | Package xcolor Info: Model `HSB' substituted by `hsb' on input line 1369. 740 | Package xcolor Info: Model `Gray' substituted by `gray' on input line 1370. 741 | Package xcolor Info: Model `wave' substituted by `hsb' on input line 1371. 742 | ) 743 | (c:/texlive/2019/texmf-dist/tex/latex/listings/listings.sty 744 | \lst@mode=\count311 745 | \lst@gtempboxa=\box58 746 | \lst@token=\toks24 747 | \lst@length=\count312 748 | \lst@currlwidth=\dimen265 749 | \lst@column=\count313 750 | \lst@pos=\count314 751 | \lst@lostspace=\dimen266 752 | \lst@width=\dimen267 753 | \lst@newlines=\count315 754 | \lst@lineno=\count316 755 | \lst@maxwidth=\dimen268 756 | 757 | (c:/texlive/2019/texmf-dist/tex/latex/listings/lstmisc.sty 758 | File: lstmisc.sty 2019/02/27 1.8b (Carsten Heinz) 759 | \c@lstnumber=\count317 760 | \lst@skipnumbers=\count318 761 | \lst@framebox=\box59 762 | ) 763 | (c:/texlive/2019/texmf-dist/tex/latex/listings/listings.cfg 764 | File: listings.cfg 2019/02/27 1.8b listings configuration 765 | )) 766 | Package: listings 2019/02/27 1.8b (Carsten Heinz) 767 | 768 | (c:/texlive/2019/texmf-dist/tex/latex/listings/lstlang1.sty 769 | File: lstlang1.sty 2019/02/27 1.8b listings language file 770 | ) 771 | (c:/texlive/2019/texmf-dist/tex/xelatex/xecjk/xeCJK-listings.sty 772 | Package: xeCJK-listings 2019/04/07 v3.7.2 xeCJK patch file for listings 773 | \l__xeCJK_listings_max_char_int=\count319 774 | \l__xeCJK_listings_flag_int=\count320 775 | ) (./notes.aux) 776 | \openout1 = `notes.aux'. 777 | 778 | LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 31. 779 | LaTeX Font Info: ... okay on input line 31. 780 | LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 31. 781 | LaTeX Font Info: ... okay on input line 31. 782 | LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 31. 783 | LaTeX Font Info: ... okay on input line 31. 784 | LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 31. 785 | LaTeX Font Info: ... 
okay on input line 31. 786 | LaTeX Font Info: Checking defaults for TU/lmr/m/n on input line 31. 787 | LaTeX Font Info: ... okay on input line 31. 788 | LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 31. 789 | LaTeX Font Info: ... okay on input line 31. 790 | LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 31. 791 | LaTeX Font Info: ... okay on input line 31. 792 | LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 31. 793 | LaTeX Font Info: ... okay on input line 31. 794 | LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 31. 795 | LaTeX Font Info: ... okay on input line 31. 796 | LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 31. 797 | LaTeX Font Info: ... okay on input line 31. 798 | ABD: EverySelectfont initializing macros 799 | LaTeX Info: Redefining \selectfont on input line 31. 800 | 801 | Package fontspec Info: Adjusting the maths setup (use [no-math] to avoid 802 | (fontspec) this). 803 | 804 | \symlegacymaths=\mathgroup6 805 | LaTeX Font Info: Overwriting symbol font `legacymaths' in version `bold' 806 | (Font) OT1/cmr/m/n --> OT1/cmr/bx/n on input line 31. 807 | LaTeX Font Info: Redeclaring math accent \acute on input line 31. 808 | LaTeX Font Info: Redeclaring math accent \grave on input line 31. 809 | LaTeX Font Info: Redeclaring math accent \ddot on input line 31. 810 | LaTeX Font Info: Redeclaring math accent \tilde on input line 31. 811 | LaTeX Font Info: Redeclaring math accent \bar on input line 31. 812 | LaTeX Font Info: Redeclaring math accent \breve on input line 31. 813 | LaTeX Font Info: Redeclaring math accent \check on input line 31. 814 | LaTeX Font Info: Redeclaring math accent \hat on input line 31. 815 | LaTeX Font Info: Redeclaring math accent \dot on input line 31. 816 | LaTeX Font Info: Redeclaring math accent \mathring on input line 31. 817 | LaTeX Font Info: Redeclaring math symbol \Gamma on input line 31. 
818 | LaTeX Font Info: Redeclaring math symbol \Delta on input line 31. 819 | LaTeX Font Info: Redeclaring math symbol \Theta on input line 31. 820 | LaTeX Font Info: Redeclaring math symbol \Lambda on input line 31. 821 | LaTeX Font Info: Redeclaring math symbol \Xi on input line 31. 822 | LaTeX Font Info: Redeclaring math symbol \Pi on input line 31. 823 | LaTeX Font Info: Redeclaring math symbol \Sigma on input line 31. 824 | LaTeX Font Info: Redeclaring math symbol \Upsilon on input line 31. 825 | LaTeX Font Info: Redeclaring math symbol \Phi on input line 31. 826 | LaTeX Font Info: Redeclaring math symbol \Psi on input line 31. 827 | LaTeX Font Info: Redeclaring math symbol \Omega on input line 31. 828 | LaTeX Font Info: Redeclaring math symbol \mathdollar on input line 31. 829 | LaTeX Font Info: Redeclaring symbol font `operators' on input line 31. 830 | LaTeX Font Info: Encoding `OT1' has changed to `TU' for symbol font 831 | (Font) `operators' in the math version `normal' on input line 31. 832 | LaTeX Font Info: Overwriting symbol font `operators' in version `normal' 833 | (Font) OT1/cmr/m/n --> TU/lmr/m/n on input line 31. 834 | LaTeX Font Info: Encoding `OT1' has changed to `TU' for symbol font 835 | (Font) `operators' in the math version `bold' on input line 31. 836 | LaTeX Font Info: Overwriting symbol font `operators' in version `bold' 837 | (Font) OT1/cmr/bx/n --> TU/lmr/m/n on input line 31. 838 | LaTeX Font Info: Overwriting symbol font `operators' in version `normal' 839 | (Font) TU/lmr/m/n --> TU/lmr/m/n on input line 31. 840 | LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal' 841 | (Font) OT1/cmr/m/it --> TU/lmr/m/it on input line 31. 842 | LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal' 843 | (Font) OT1/cmr/bx/n --> TU/lmr/bx/n on input line 31. 844 | LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal' 845 | (Font) OT1/cmss/m/n --> TU/lmss/m/n on input line 31. 
846 | LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal' 847 | (Font) OT1/cmtt/m/n --> TU/lmtt/m/n on input line 31. 848 | LaTeX Font Info: Overwriting symbol font `operators' in version `bold' 849 | (Font) TU/lmr/m/n --> TU/lmr/bx/n on input line 31. 850 | LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold' 851 | (Font) OT1/cmr/bx/it --> TU/lmr/bx/it on input line 31. 852 | LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' 853 | (Font) OT1/cmss/bx/n --> TU/lmss/bx/n on input line 31. 854 | LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' 855 | (Font) OT1/cmtt/m/n --> TU/lmtt/bx/n on input line 31. 856 | \AtBeginShipoutBox=\box60 857 | Package hyperref Info: Link coloring OFF on input line 31. 858 | 859 | (c:/texlive/2019/texmf-dist/tex/latex/hyperref/nameref.sty 860 | Package: nameref 2016/05/21 v2.44 Cross-referencing by name of section 861 | 862 | (c:/texlive/2019/texmf-dist/tex/generic/oberdiek/gettitlestring.sty 863 | Package: gettitlestring 2016/05/16 v1.5 Cleanup title references (HO) 864 | ) 865 | \c@section@level=\count321 866 | ) 867 | LaTeX Info: Redefining \ref on input line 31. 868 | LaTeX Info: Redefining \pageref on input line 31. 869 | LaTeX Info: Redefining \nameref on input line 31. 870 | 871 | (./notes.out) (./notes.out) 872 | \@outlinefile=\write3 873 | \openout3 = `notes.out'. 874 | 875 | Package caption Info: Begin \AtBeginDocument code. 876 | Package caption Info: subfig package v1.3 is loaded. 877 | Package caption Info: listings package is loaded. 878 | Package caption Info: End \AtBeginDocument code. 
879 | 880 | *geometry* driver: auto-detecting 881 | *geometry* detected driver: xetex 882 | *geometry* verbose mode - [ preamble ] result: 883 | * driver: xetex 884 | * paper: a4paper 885 | * layout: 886 | * layoutoffset:(h,v)=(0.0pt,0.0pt) 887 | * modes: 888 | * h-part:(L,W,R)=(59.74988pt, 478.00812pt, 59.74988pt) 889 | * v-part:(T,H,B)=(67.60269pt, 676.04005pt, 101.4041pt) 890 | * \paperwidth=597.50787pt 891 | * \paperheight=845.04684pt 892 | * \textwidth=478.00812pt 893 | * \textheight=676.04005pt 894 | * \oddsidemargin=-12.52011pt 895 | * \evensidemargin=-12.52011pt 896 | * \topmargin=-41.6673pt 897 | * \headheight=12.0pt 898 | * \headsep=25.0pt 899 | * \topskip=12.0pt 900 | * \footskip=30.0pt 901 | * \marginparwidth=35.0pt 902 | * \marginparsep=10.0pt 903 | * \columnsep=10.0pt 904 | * \skip\footins=10.8pt plus 4.0pt minus 2.0pt 905 | * \hoffset=0.0pt 906 | * \voffset=0.0pt 907 | * \mag=1000 908 | * \@twocolumnfalse 909 | * \@twosidefalse 910 | * \@mparswitchfalse 911 | * \@reversemarginfalse 912 | * (1in=72.27pt=25.4mm, 1cm=28.453pt) 913 | 914 | \c@lstlisting=\count322 915 | LaTeX Font Info: Try loading font information for U+msa on input line 34. 916 | (c:/texlive/2019/texmf-dist/tex/latex/amsfonts/umsa.fd 917 | File: umsa.fd 2013/01/14 v3.01 AMS symbols A 918 | ) 919 | LaTeX Font Info: Try loading font information for U+msb on input line 34. 920 | 921 | (c:/texlive/2019/texmf-dist/tex/latex/amsfonts/umsb.fd 922 | File: umsb.fd 2013/01/14 v3.01 AMS symbols B 923 | ) [1 924 | 925 | ] 926 | File: MscaleNet2.png Graphic file (type bmp) 927 | 928 | 929 | 930 | LaTeX Warning: `h' float specifier changed to `ht'. 

[2] [3] [4] [5] [6]
Overfull \hbox (30.28864pt too wide) in paragraph at lines 237--238
[]\TU/SimSun(0)/bx/n/12 If we do not know the exact solution, but instead have a reference solution generated by a finite-difference or other method. Here \TU/lmr/bx/n/12 u_test_true
[]

[7]
Overfull \hbox (19.99695pt too wide) in paragraph at lines 310--311
[][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][]
[]


Overfull \hbox (1.99677pt too wide) in paragraph at lines 311--312
[][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][]
[]


Overfull \hbox (1.99677pt too wide) in paragraph at lines 313--315
[][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][]
[]

[8] [9]
Overfull \hbox (1.99677pt too wide) in paragraph at lines 374--375
[][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][][]
[]

[10]
Overfull \hbox (28.1424pt too wide) in paragraph at lines 438--439
[]\TU/SimSun(0)/m/n/12 With the parameters as above, we use an ordinary fully connected network \TU/lmr/m/n/12 (180,180,180) \TU/SimSun(0)/m/n/12 (normal) and scale coefficients \TU/lmr/m/n/12 (1,2,4,8,16,32)
[]

File: s0.png Graphic file (type bmp)

File: s500.png Graphic file (type bmp)

File: s1000.png Graphic file (type bmp)

File: s5000.png Graphic file (type bmp)


LaTeX Warning: `h' float specifier changed to `ht'.

File: error.png Graphic file (type bmp)


LaTeX Warning: `h' float specifier changed to `ht'.

[11]
Package atveryend Info: Empty hook `BeforeClearDocument' on input line 466.
[12]
Package atveryend Info: Empty hook `AfterLastShipout' on input line 466.
(./notes.aux)
Package atveryend Info: Empty hook `AtVeryEndDocument' on input line 466.
Package atveryend Info: Executing hook `AtEndAfterFileList' on input line 466.
Package rerunfilecheck Info: File `notes.out' has not changed.
(rerunfilecheck) Checksum: D41D8CD98F00B204E9800998ECF8427E.
Package atveryend Info: Empty hook `AtVeryVeryEnd' on input line 466.
)
Here is how much of TeX's memory you used:
 27627 strings out of 492922
 541182 string characters out of 6136750
 852613 words of memory out of 5000000
 31404 multiletter control sequences out of 15000+600000
 540909 words of font info for 85 fonts, out of 8000000 for 9000
 1348 hyphenation exceptions out of 8191
 54i,11n,68p,382b,1858s stack positions out of 5000i,500n,10000p,200000b,80000s

Output written on notes.pdf (12 pages).