├── LICENSE
├── README.md
├── dat
│   └── thermal_fine.mat
├── model
│   ├── model_testing.ipynb
│   └── pinn_for_Solidification.py
├── prediction_results
│   ├── predict_ftem.txt
│   ├── predict_tem.txt
│   └── predict_xT.txt
├── requirements.txt
└── visualisation
    ├── final_results
    │   ├── Residuals_Tem_Pred.png
    │   ├── Scatter_Plot.png
    │   ├── Temp_Pred_VS_Exact.png
    │   ├── TrainingLossCurve_0.001_HL=8_HN=200.png
    │   ├── TrainingLossCurve_LR=cosine.png
    │   ├── TrainingLossCurve_LR=expo.png
    │   └── TrainingLossCurve_LR=polynomial.png
    ├── plots.py
    ├── plots2.py
    └── plots3.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Aman Khilani
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Heat-Transfer-in-Advanced-Manufacturing-using-PINN
2 | A PINN-based approach to solving high-temperature heat-transfer equations in the manufacturing industry, with a focus on reducing energy consumption and optimizing sensor positioning.
3 |
4 | ## Overview
5 |
6 | Physics-Informed Neural Networks (PINNs) are a novel class of neural networks that leverage physical laws described by partial differential equations (PDEs) to inform the learning process. This project implements a PINN to model the temperature distribution in a solidification process.
7 |
8 | ## Process Description
9 |
10 | This project is based on the research paper [Machine learning for metal additive manufacturing: Predicting temperature and melt pool fluid dynamics using physics-informed neural networks](https://arxiv.org/abs/2008.13547) by Qiming Zhu, Zeliang Liu, Jinhui Yan.
11 |
12 | The paper introduces the concept of Physics-Informed Neural Networks (PINNs), which embed physical laws into the learning process of neural networks. The methodology uses automatic differentiation to incorporate the PDEs into the loss function, guiding the training process with both data and physical laws.
13 |
14 | ### Adaptation to TensorFlow 2.11.0
15 |
16 | The original implementation provided in the paper was designed for TensorFlow 1.x, which is not compatible with TensorFlow 2.x. This repository contains a refactored version of the code, making it compatible with TensorFlow 2.11.0. The key changes include:
17 |
18 | 1. **Session Management**: TensorFlow 2.x uses eager execution by default, removing the need for explicit session management. However, to preserve the structure of the original code, `tf.compat.v1.Session` is still used for session-based execution.
19 | 2. **Optimizers**: TensorFlow 2.x has a new API for optimizers. The code now uses `tf.keras.optimizers` and the `minimize` method to handle optimization.
20 | 3. **Gradient Computation**: The `tf.GradientTape` context is used for computing gradients in TensorFlow 2.x, replacing the `tf.gradients` function from TensorFlow 1.x.
21 | 4. **Eager Execution Compatibility**: Ensured all tensor operations are compatible with eager execution to facilitate debugging and model development.
22 |
23 | These changes ensure that the code is up-to-date with the latest version of TensorFlow, benefiting from improved performance, ease of use, and ongoing support.
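
As a minimal illustration of the gradient-computation change (item 3), here is how a second spatial derivative can be taken with nested `tf.GradientTape`s in TensorFlow 2.x; the tensors below are stand-ins for illustration, not names from the repository:

```python
import tensorflow as tf

x = tf.Variable([[0.1], [0.2]], dtype=tf.float64)
t = tf.Variable([[5.0], [6.0]], dtype=tf.float64)

# TF 2.x: nested GradientTape replaces tf.gradients for higher-order derivatives.
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        T = tf.sin(x) * tf.exp(-t)   # stand-in for the network output
    T_x = inner.gradient(T, x)       # dT/dx
T_xx = outer.gradient(T_x, x)        # d2T/dx2

# TF 1.x equivalent (graph mode): T_x = tf.gradients(T, x)[0], and so on.
```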
24 |
25 | ## Model Explanation
26 |
27 | ### Process Details
28 | The model predicts the temperature profile during the solidification of Aluminium in contact with a Graphite rod. This involves both heat conduction and phase change, modeled through a set of PDEs. The Aluminium starts in a liquid state and solidifies as it cools down. The model takes into account the latent heat of fusion and the difference in thermal properties between the solid and liquid phases of Aluminium.
29 |
30 | The PINN model is trained to solve the following PDE:
31 |
32 | $$
33 | \frac{\partial T}{\partial t} = \alpha \nabla^2 T
34 | $$
35 |
36 | where T is the temperature, t is time, and α is the thermal diffusivity.
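
For reference, the residual actually assembled in the code is the enthalpy form of this equation, which makes the latent heat released during solidification explicit through the liquid fraction $f_L$ (the constants below are the ones hard-coded in `net_f_uv`):

$$
f_L = \min\left(\max\left(\frac{T - T_s}{T_l - T_s},\, 0\right),\, 1\right), \qquad
f_T = \frac{\rho c_p \frac{\partial T}{\partial t} + \rho c_L \frac{\partial f_L}{\partial t} - \frac{\partial}{\partial x}\left(\kappa \frac{\partial T}{\partial x}\right)}{\rho_s \kappa_s}
$$

where $T_s = 913.15\,\text{K}$ and $T_l = 933.15\,\text{K}$ are the solidus and liquidus temperatures, $\rho$, $c_p$, $c_L$, and $\kappa$ take the Aluminium or Graphite values depending on the sign of $x$, and $\rho_s \kappa_s$ is the solid-Aluminium normalisation.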
37 |
38 | ### Boundary Conditions
39 |
40 | - **Dirichlet Boundary Condition**: Fixed temperature at the boundaries, which represents a constant temperature at the ends of the rod.
41 | - **Neumann Boundary Condition**: Heat flux at the boundaries, which ensures the conservation of energy across the boundaries.
42 |
43 | These boundary conditions are hard-coded into the loss function to ensure the solution adheres to the physical constraints of the problem.
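
Concretely, rather than penalising boundary errors, the code blends the raw network output $T_{NN}$ with a linear boundary profile $T_{bc}(x)$ through a smoothed Heaviside mask $H(x)$ (defined in the next section), so the prediction matches the prescribed end temperatures by construction:

$$
T(x, t) = T_{bc}(x)\left(1 - H(x)\right) + T_{NN}(x, t)\, H(x)
$$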
44 |
45 | ### Heaviside Function
46 | The Heaviside function is used to switch between regimes in the model: it masks the network output near the rod ends (as in the blend above) and, in clamped form, handles the phase change in the Aluminium. Smoothing it ensures numerical stability and convergence. The ideal (sharp) step is defined as:
47 | $$
48 | H(x) = \begin{cases}
49 | 0 & \text{if } x < 0 \\
50 | 1 & \text{if } x \geq 0
51 | \end{cases}
52 | $$
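
Since the sharp step is not differentiable, the code uses a smoothed variant near the rod ends. A condensed sketch of the repository's `cal_H`, which ramps from 0 to 1 over a half-width `eps` around the boundaries at $x = \pm 0.4$:

```python
import numpy as np
import tensorflow as tf

def smoothed_heaviside(x, eps=1e-3):
    """0 outside the rod, 1 in the interior, with a sine-smoothed
    ramp of half-width eps around the boundaries at x = +/-0.4."""
    one = tf.ones_like(x)
    dist = tf.minimum(x - (-0.4 + eps) * one, (0.4 - eps) * one - x)
    H = 0.5 * (one + dist / eps + tf.sin(np.pi * dist / eps) / np.pi)
    return tf.where(dist < -eps, tf.zeros_like(x),
                    tf.where(dist > eps, one, H))
```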
53 |
54 | ## Training
55 |
56 | The model is trained for 50,000 epochs using the Adam optimizer with a learning rate of 0.001. The training process is as follows:
57 |
58 | - **Pre-training**: Initial training with a reduced learning rate to stabilize the model.
59 | - **Full training**: Further training with the full learning rate to minimize the loss function.
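
Alongside the constant rate, the training script also defines decayed schedules (exponential, cosine, polynomial) used for the loss-curve comparison further down, set up roughly as follows (the script runs in v1 graph mode):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # as in the training script
global_step = tf.Variable(0, trainable=False, dtype=tf.int64)

# Exponential: multiply the rate by 0.1 every 2000 steps.
lr_expo = tf.compat.v1.train.exponential_decay(
    learning_rate=0.1, global_step=global_step,
    decay_steps=2000, decay_rate=0.1, staircase=True)

# Cosine: anneal from 0.001 towards 0 over 5000 steps.
lr_cos = tf.compat.v1.train.cosine_decay(
    learning_rate=0.001, global_step=global_step, decay_steps=5000)

# Polynomial (degree 2): decay from 0.001 to 0.0001 over 5000 steps.
lr_poly = tf.compat.v1.train.polynomial_decay(
    learning_rate=0.001, global_step=global_step, decay_steps=5000,
    end_learning_rate=0.0001, power=2.0, cycle=False)
```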
60 |
61 | ### Loss Function
62 |
63 | The loss function is a combination of the data loss and the physics-informed loss:
64 |
65 | $$
66 | \text{Loss} = \text{MSE}(\hat{T}, T) + \lambda \cdot \text{PDE\_Loss}
67 | $$
68 |
69 | where:
70 |
71 | - The first term is the mean squared error between the predicted and exact temperatures.
72 | - The second term ensures that the predicted temperature profile satisfies the underlying PDE.
73 | - $\lambda$ is a weighting factor that balances the data loss and the PDE loss (set to $10^{-3}$ in the training script).
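
In TensorFlow terms this amounts to the following (a condensed sketch of the loss built in the training script; the tensors are stand-ins for the network's initial-condition predictions and the PDE residual):

```python
import tensorflow as tf

# Stand-ins; in the script these come from the network and the data.
tem0_pred  = tf.constant([[300.0], [305.0]], dtype=tf.float64)
tem0       = tf.constant([[298.15], [306.0]], dtype=tf.float64)
f_tem_pred = tf.constant([[0.01], [-0.02]], dtype=tf.float64)

# Data loss + lambda * PDE loss, with lambda = 1.0e-3 as in the script.
loss = tf.reduce_mean(tf.square(tem0_pred - tem0)) \
       + 1.0e-3 * tf.reduce_mean(tf.square(f_tem_pred))
```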
74 |
75 | ### Results
76 |
77 | The PINN model achieved a mean squared error (MSE) loss of 0.19. Below are some visualizations of the predictions compared to the exact solutions.
78 |
79 | - **Temperature Distribution**: A graph showing the exact and predicted temperature profiles.
80 | - **Residuals**: The difference between the predicted and exact temperatures.
81 | - **Scatter Plot**: A scatter plot comparing the exact and predicted temperatures.
82 |
83 | ### Visualisation
84 |
85 | The model works in SI units: temperature in kelvin, time in seconds, and position x in metres. Below are some visualizations of the results.
86 |
87 | 
88 |
89 | *Scatter plot of exact vs predicted temperatures.*
90 |
91 | 
92 |
93 | *Residual plot (Exact - Predicted temperatures).*
94 |
95 | 
96 |
97 | *Temperature distribution over time.*
98 |
99 | These are the training loss curves for the different learning-rate settings:
100 |
101 | 
102 |
103 | *Training Loss Curve for constant LR = 0.001*
104 |
105 | 
106 |
107 | *Training Loss Curve for exponentially decayed LR*
108 |
109 | 
110 |
111 | *Training Loss Curve for cosine decayed LR*
112 |
113 | 
114 |
115 | *Training Loss Curve for polynomial(deg=2) decayed LR*
116 |
117 | ## References
118 |
119 | This model and methodology are based on the research presented in the paper [Machine learning for metal additive manufacturing: Predicting temperature and melt pool fluid dynamics using physics-informed neural networks](https://arxiv.org/abs/2008.13547) by Qiming Zhu, Zeliang Liu, Jinhui Yan.
120 |
121 | ## Usage
122 |
123 | Install the required dependencies using:
124 | ```sh
125 | pip install -r requirements.txt
126 | ```
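
Then, from the repository root, train the model and generate the plots with commands along these lines (illustrative; the scripts expect the data at `dat/thermal_fine.mat` and write/read the `predict_*.txt` files in the working directory):

```sh
python model/pinn_for_Solidification.py
python visualisation/plots.py
```
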
127 | ## License
128 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
--------------------------------------------------------------------------------
/dat/thermal_fine.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/dat/thermal_fine.mat
--------------------------------------------------------------------------------
/model/model_testing.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "provenance": [],
7 | "gpuType": "T4",
8 | "authorship_tag": "ABX9TyNjdblU+O6JGUaR5iOfgoDZ",
9 | "include_colab_link": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | },
15 | "language_info": {
16 | "name": "python"
17 | },
18 | "accelerator": "GPU"
19 | },
20 | "cells": [
21 | {
22 | "cell_type": "markdown",
23 | "metadata": {
24 | "id": "view-in-github",
25 | "colab_type": "text"
26 | },
27 | "source": [
28 | ""
29 | ]
30 | },
31 | {
32 | "cell_type": "code",
33 | "execution_count": 8,
34 | "metadata": {
35 | "colab": {
36 | "base_uri": "https://localhost:8080/",
37 | "height": 1000
38 | },
39 | "id": "cajAmpzj4sx-",
40 | "outputId": "79e1fe61-bf5d-4e5e-f857-bdd5b718961d"
41 | },
42 | "outputs": [
43 | {
44 | "output_type": "stream",
45 | "name": "stderr",
46 | "text": [
47 | ":144: DeprecationWarning: `interp2d` is deprecated!\n",
48 | "`interp2d` is deprecated in SciPy 1.10 and will be removed in SciPy 1.13.0.\n",
49 | "\n",
50 | "For legacy code, nearly bug-for-bug compatible replacements are\n",
51 | "`RectBivariateSpline` on regular grids, and `bisplrep`/`bisplev` for\n",
52 | "scattered 2D data.\n",
53 | "\n",
54 | "In new code, for regular grids use `RegularGridInterpolator` instead.\n",
55 | "For scattered data, prefer `LinearNDInterpolator` or\n",
56 | "`CloughTocher2DInterpolator`.\n",
57 | "\n",
58 | "For more details see\n",
59 | "`https://scipy.github.io/devdocs/notebooks/interp_transition_guide.html`\n",
60 | "\n",
61 | " ftem = interp2d(x, t, Exact.T)\n",
62 | ":158: DeprecationWarning: `interp2d` is deprecated!\n",
63 | " `interp2d` is deprecated in SciPy 1.10 and will be removed in SciPy 1.13.0.\n",
64 | "\n",
65 | " For legacy code, nearly bug-for-bug compatible replacements are\n",
66 | " `RectBivariateSpline` on regular grids, and `bisplrep`/`bisplev` for\n",
67 | " scattered 2D data.\n",
68 | "\n",
69 | " In new code, for regular grids use `RegularGridInterpolator` instead.\n",
70 | " For scattered data, prefer `LinearNDInterpolator` or\n",
71 | " `CloughTocher2DInterpolator`.\n",
72 | "\n",
73 | " For more details see\n",
74 | " `https://scipy.github.io/devdocs/notebooks/interp_transition_guide.html`\n",
75 | "\n",
76 | " X_tem = ftem(X_f[:, 0], 0).flatten()[:, None]\n",
77 | "WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.\n"
78 | ]
79 | },
80 | {
81 | "output_type": "stream",
82 | "name": "stdout",
83 | "text": [
84 | "Epoch: 0, Loss: 513610.9637938071\n",
85 | "Epoch: 100, Loss: 2557.692966414559\n",
86 | "Epoch: 200, Loss: 47.284276287188106\n",
87 | "Epoch: 300, Loss: 82.50186519952989\n",
88 | "Epoch: 400, Loss: 394.02291045777\n"
89 | ]
90 | },
91 | {
92 | "output_type": "error",
93 | "ename": "KeyboardInterrupt",
94 | "evalue": "",
95 | "traceback": [
96 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
97 | "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
98 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 161\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 162\u001b[0m \u001b[0mstart_time\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtime\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 163\u001b[0;31m \u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mepochs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m1000\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlearning_rate\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0.01\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 164\u001b[0m \u001b[0melapsed\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtime\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m-\u001b[0m \u001b[0mstart_time\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 165\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'Training time: %.4f'\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0melapsed\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
99 | "\u001b[0;32m\u001b[0m in \u001b[0;36mtrain\u001b[0;34m(self, epochs, learning_rate)\u001b[0m\n\u001b[1;32m 101\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mtrain\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mepochs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlearning_rate\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 102\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mepoch\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mrange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mepochs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 103\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msess\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain_op\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfeed_dict\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m{\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlearning_rate\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mlearning_rate\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 104\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mepoch\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0;36m100\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 105\u001b[0m \u001b[0mloss_value\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msess\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mloss\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfeed_dict\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m{\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlearning_rate\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mlearning_rate\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
100 | "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36mrun\u001b[0;34m(self, fetches, feed_dict, options, run_metadata)\u001b[0m\n\u001b[1;32m 970\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 971\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 972\u001b[0;31m result = self._run(None, fetches, feed_dict, options_ptr,\n\u001b[0m\u001b[1;32m 973\u001b[0m run_metadata_ptr)\n\u001b[1;32m 974\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mrun_metadata\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
101 | "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36m_run\u001b[0;34m(self, handle, fetches, feed_dict, options, run_metadata)\u001b[0m\n\u001b[1;32m 1213\u001b[0m \u001b[0;31m# or if the call is a partial run that specifies feeds.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1214\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mfinal_fetches\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0mfinal_targets\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mhandle\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0mfeed_dict_tensor\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1215\u001b[0;31m results = self._do_run(handle, final_targets, final_fetches,\n\u001b[0m\u001b[1;32m 1216\u001b[0m feed_dict_tensor, options, run_metadata)\n\u001b[1;32m 1217\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
102 | "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36m_do_run\u001b[0;34m(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)\u001b[0m\n\u001b[1;32m 1393\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1394\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mhandle\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1395\u001b[0;31m return self._do_call(_run_fn, feeds, fetches, targets, options,\n\u001b[0m\u001b[1;32m 1396\u001b[0m run_metadata)\n\u001b[1;32m 1397\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
103 | "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36m_do_call\u001b[0;34m(self, fn, *args)\u001b[0m\n\u001b[1;32m 1400\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_do_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfn\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1401\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1402\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mfn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1403\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0merrors\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mOpError\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1404\u001b[0m \u001b[0mmessage\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcompat\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mas_text\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmessage\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
104 | "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36m_run_fn\u001b[0;34m(feed_dict, fetch_list, target_list, options, run_metadata)\u001b[0m\n\u001b[1;32m 1383\u001b[0m \u001b[0;31m# Ensure any changes to the graph are reflected in the runtime.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1384\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_extend_graph\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1385\u001b[0;31m return self._call_tf_sessionrun(options, feed_dict, fetch_list,\n\u001b[0m\u001b[1;32m 1386\u001b[0m target_list, run_metadata)\n\u001b[1;32m 1387\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
105 | "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/tensorflow/python/client/session.py\u001b[0m in \u001b[0;36m_call_tf_sessionrun\u001b[0;34m(self, options, feed_dict, fetch_list, target_list, run_metadata)\u001b[0m\n\u001b[1;32m 1476\u001b[0m def _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list,\n\u001b[1;32m 1477\u001b[0m run_metadata):\n\u001b[0;32m-> 1478\u001b[0;31m return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,\n\u001b[0m\u001b[1;32m 1479\u001b[0m \u001b[0mfetch_list\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtarget_list\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1480\u001b[0m run_metadata)\n",
106 | "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
107 | ]
108 | }
109 | ],
110 | "source": [
111 | "import sys\n",
112 | "\n",
113 | "import tensorflow as tf\n",
114 | "tf.compat.v1.disable_eager_execution()\n",
115 | "# import tensorflow_probability as tfp\n",
116 | "import numpy as np\n",
117 | "import matplotlib.pyplot as plt\n",
118 | "import scipy.io\n",
119 | "from scipy.interpolate import griddata\n",
120 | "from pyDOE import lhs\n",
121 | "from mpl_toolkits.mplot3d import Axes3D\n",
122 | "import time\n",
123 | "import matplotlib.gridspec as gridspec\n",
124 | "from mpl_toolkits.axes_grid1 import make_axes_locatable\n",
125 | "from scipy import interpolate\n",
126 | "from scipy.interpolate import interp2d\n",
127 | "\n",
128 | "import tensorflow as tf\n",
129 | "import numpy as np\n",
130 | "\n",
131 | "class SolidificationPINN:\n",
132 | " def __init__(self, x0, tem0, tb, X_f, X_tem, layers, lb, ub):\n",
133 | " self.lb = lb\n",
134 | " self.ub = ub\n",
135 | "\n",
136 | " self.x0 = x0\n",
137 | " self.tem0 = tem0\n",
138 | " self.tb = tb\n",
139 | " self.x_f = X_f[:, 0:1]\n",
140 | " self.t_f = X_f[:, 1:2]\n",
141 | " self.tem_f = X_tem\n",
142 | "\n",
143 | " self.layers = layers\n",
144 | " self.weights, self.biases = self.initialize_NN(layers)\n",
145 | "\n",
146 | " self.x0_tf = tf.convert_to_tensor(x0, dtype=tf.float64)\n",
147 | " self.tem0_tf = tf.convert_to_tensor(tem0, dtype=tf.float64)\n",
148 | " self.tb_tf = tf.convert_to_tensor(tb, dtype=tf.float64)\n",
149 | " self.x_f_tf = tf.convert_to_tensor(self.x_f, dtype=tf.float64)\n",
150 | " self.t_f_tf = tf.convert_to_tensor(self.t_f, dtype=tf.float64)\n",
151 | " self.tem_f_tf = tf.convert_to_tensor(self.tem_f, dtype=tf.float64)\n",
152 | "\n",
153 | " self.sess = tf.compat.v1.Session()\n",
154 | "\n",
155 | " self.learning_rate = tf.compat.v1.placeholder(tf.float64, shape=[])\n",
156 | " self.tem0_pred = self.net_uv(self.x0_tf, tf.zeros_like(self.x0_tf))\n",
157 | " self.f_u_pred = self.net_f_uv(self.x_f_tf, self.t_f_tf)\n",
158 | "\n",
159 | " self.loss = tf.reduce_mean(tf.square(self.tem0_tf - self.tem0_pred)) + \\\n",
160 | " tf.reduce_mean(tf.square(self.f_u_pred))\n",
161 | "\n",
162 | " self.optimizer = tf.compat.v1.train.AdamOptimizer(self.learning_rate)\n",
163 | " self.train_op = self.optimizer.minimize(self.loss)\n",
164 | "\n",
165 | " init = tf.compat.v1.global_variables_initializer()\n",
166 | " self.sess.run(init)\n",
167 | "\n",
168 | " def initialize_NN(self, layers):\n",
169 | " weights = []\n",
170 | " biases = []\n",
171 | " for l in range(len(layers) - 1):\n",
172 | " W = self.xavier_init(size=[layers[l], layers[l + 1]])\n",
173 | " b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float64), dtype=tf.float64)\n",
174 | " weights.append(W)\n",
175 | " biases.append(b)\n",
176 | " return weights, biases\n",
177 | "\n",
178 | " def xavier_init(self, size):\n",
179 | " in_dim = size[0]\n",
180 | " out_dim = size[1]\n",
181 | " xavier_stddev = np.sqrt(2 / (in_dim + out_dim))\n",
182 | " return tf.Variable(tf.random.truncated_normal([in_dim, out_dim], stddev=xavier_stddev, dtype=tf.float64), dtype=tf.float64)\n",
183 | "\n",
184 | " def forward_pass(self, H):\n",
185 | " for l in range(len(self.layers) - 2):\n",
186 | " W = self.weights[l]\n",
187 | " b = self.biases[l]\n",
188 | " H = tf.nn.swish(tf.add(tf.matmul(H, W), b))\n",
189 | " W = self.weights[-1]\n",
190 | " b = self.biases[-1]\n",
191 | " H = tf.add(tf.matmul(H, W), b)\n",
192 | " return H\n",
193 | "\n",
194 | " def net_uv(self, x, t):\n",
195 | " X = tf.concat([x, t], axis=1)\n",
196 | " uv = self.forward_pass(X)\n",
197 | " return uv\n",
198 | "\n",
199 | " def net_f_uv(self, x, t):\n",
200 | " with tf.GradientTape(persistent=True) as tape:\n",
201 | " tape.watch(x)\n",
202 | " tape.watch(t)\n",
203 | " uv = self.net_uv(x, t)\n",
204 | " uv_x = tape.gradient(uv, x)\n",
205 | " uv_t = tape.gradient(uv, t)\n",
206 | " uv_xx = tape.gradient(uv_x, x)\n",
207 | " del tape\n",
208 | " f_uv = uv_t - uv_xx\n",
209 | " return f_uv\n",
210 | "\n",
211 | " def train(self, epochs, learning_rate):\n",
212 | " for epoch in range(epochs):\n",
213 | " self.sess.run(self.train_op, feed_dict={self.learning_rate: learning_rate})\n",
214 | " if epoch % 100 == 0:\n",
215 | " loss_value = self.sess.run(self.loss, feed_dict={self.learning_rate: learning_rate})\n",
216 | " print(f'Epoch: {epoch}, Loss: {loss_value}')\n",
217 | "\n",
218 | " def predict(self, X_star):\n",
219 | " X_star = tf.convert_to_tensor(X_star, dtype=tf.float64)\n",
220 | " tem_star = self.sess.run(self.net_uv(X_star[:, 0:1], X_star[:, 1:2]))\n",
221 | " ftem_star = self.sess.run(self.net_f_uv(X_star[:, 0:1], X_star[:, 1:2]))\n",
222 | " return tem_star, ftem_star\n",
223 | "\n",
224 | "import numpy as np\n",
225 | "import scipy.io\n",
226 | "from scipy.interpolate import interp2d\n",
227 | "import time\n",
228 | "from pyDOE import lhs\n",
229 | "import tensorflow as tf\n",
230 | "\n",
231 | "if __name__ == \"__main__\":\n",
232 | " noise = 0.0\n",
233 | "\n",
234 | " ltem = 298.15\n",
235 | " utem = 973.15\n",
236 | " eps = 0.02\n",
237 | "\n",
238 | " # Domain bounds\n",
239 | " lb = np.array([-0.4, 5.0])\n",
240 | " ub = np.array([0.4, 10.0])\n",
241 | "\n",
242 | " N0 = 300\n",
243 | " N_b = 100\n",
244 | " N_f = 10000\n",
245 | " num_hidden = 8\n",
246 | " layers = [2] + num_hidden * [200] + [1]\n",
247 | "\n",
248 | " data = scipy.io.loadmat('thermal_fine.mat')\n",
249 | " x = data['x'].flatten()[:, None]\n",
250 | " t = data['tt'].flatten()[:, None]\n",
251 | " Exact = data['Tem']\n",
252 | " Exact_tem = np.real(Exact)\n",
253 | "\n",
254 | " ftem = interp2d(x, t, Exact.T)\n",
255 | "\n",
256 | " X, T = np.meshgrid(x, t)\n",
257 | " X_star = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))\n",
258 | "\n",
259 | " idx_x = np.random.choice(x.shape[0], N0, replace=False)\n",
260 | " x0 = x[idx_x, :]\n",
261 | " tem0 = Exact_tem[idx_x, 0:1]\n",
262 | "\n",
263 | " idx_t = np.random.choice(t.shape[0], N_b, replace=False)\n",
264 | " tb = t[idx_t, :]\n",
265 | "\n",
266 | " X_f = lb + (ub - lb) * lhs(2, N_f)\n",
267 | " X_f = X_f[np.argsort(X_f[:, 0])]\n",
268 | " X_tem = ftem(X_f[:, 0], 0).flatten()[:, None]\n",
269 | "\n",
270 | " model = SolidificationPINN(x0, tem0, tb, X_f, X_tem, layers, lb, ub)\n",
271 | "\n",
272 | " start_time = time.time()\n",
273 | " model.train(epochs=1000, learning_rate=0.01)\n",
274 | " elapsed = time.time() - start_time\n",
275 | " print('Training time: %.4f' % (elapsed))\n",
276 | "\n",
277 | " tem_pred, ftem_pred = model.predict(X_star)\n",
278 | " np.savetxt('predict_xT.txt', X_star)\n",
279 | " np.savetxt('predict_tem.txt', tem_pred)\n",
280 | " np.savetxt('predict_ftem.txt', ftem_pred)"
281 | ]
282 | }
283 | ]
284 | }
--------------------------------------------------------------------------------
/model/pinn_for_Solidification.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import tensorflow as tf
3 | tf.compat.v1.disable_eager_execution()
4 | import tensorflow_probability as tfp
5 | import numpy as np
6 | import matplotlib.pyplot as plt
7 | import scipy.io
8 | from scipy.interpolate import griddata
9 | from pyDOE import lhs
10 | from mpl_toolkits.mplot3d import Axes3D
11 | import time
12 | import matplotlib.gridspec as gridspec
13 | from mpl_toolkits.axes_grid1 import make_axes_locatable
14 | from scipy import interpolate
15 | from scipy.interpolate import interp2d
16 |
17 | class SolidificationPINN:
18 | def __init__(self, x0, tem0, tb, X_f, X_tem, layers, lb, ub):
19 |
20 | ltem = 298.15
21 | utem = 973.15
22 |
23 | X0 = np.concatenate((x0, 0*x0+5.0), 1) # (x0, 5)
24 | X_lb = np.concatenate((0*tb + lb[0], tb), 1) # (lb[0], tb)
25 | X_ub = np.concatenate((0*tb + ub[0], tb), 1) # (ub[0], tb)
26 |
27 | tem_lb = np.concatenate((0*tb + ltem, tb), 1)
28 | tem_ub = np.concatenate((0*tb + utem, tb), 1)
29 |
30 | self.lb = lb
31 | self.ub = ub
32 |
33 | self.x0 = X0[:,0:1]
34 | self.t0 = X0[:,1:2]
35 |
36 | self.x_lb = X_lb[:,0:1]
37 | self.t_lb = X_lb[:,1:2]
38 |
39 | self.x_ub = X_ub[:,0:1]
40 | self.t_ub = X_ub[:,1:2]
41 |
42 | self.x_f = X_f[:,0:1]
43 | self.t_f = X_f[:,1:2]
44 |
45 | self.X_tem = X_tem[:,0:1]
46 |
47 | self.tem0 = tem0
48 | self.tem_lb = tem_lb[:,0:1]
49 | self.tem_ub = tem_ub[:,0:1]
50 |
51 | self.layers = layers
52 | self.weights, self.biases = self.initialize_NN(layers)
53 |
54 | self.x0_tf = tf.convert_to_tensor(self.x0, dtype=tf.float64)
55 | self.t0_tf = tf.convert_to_tensor(self.t0, dtype=tf.float64)
56 |
57 | self.tem0_tf = tf.convert_to_tensor(self.tem0, dtype=tf.float64)
58 |
59 | self.x_lb_tf = tf.convert_to_tensor(self.x_lb, dtype=tf.float64)
60 | self.t_lb_tf = tf.convert_to_tensor(self.t_lb, dtype=tf.float64)
61 | self.tem_lb_tf = tf.convert_to_tensor(self.tem_lb, dtype=tf.float64)
62 |
63 | self.x_ub_tf = tf.convert_to_tensor(self.x_ub, dtype=tf.float64)
64 | self.t_ub_tf = tf.convert_to_tensor(self.t_ub, dtype=tf.float64)
65 | self.tem_ub_tf = tf.convert_to_tensor(self.tem_ub, dtype=tf.float64)
66 |
67 | self.x_f_tf = tf.convert_to_tensor(self.x_f, dtype=tf.float64)
68 | self.t_f_tf = tf.convert_to_tensor(self.t_f, dtype=tf.float64)
69 |
70 | self.X_tem_tf = tf.convert_to_tensor(self.X_tem, dtype=tf.float64)
71 |
72 | # tf Graphs
73 | self.tem0_pred = self.net_uv(self.x0_tf, self.t0_tf)
74 | self.tem_lb_pred = self.net_uv(self.x_lb_tf, self.t_lb_tf)
75 | self.tem_ub_pred = self.net_uv(self.x_ub_tf, self.t_ub_tf)
76 | self.f_tem_pred = self.net_f_uv(self.x_f_tf, self.t_f_tf)
77 | self.X_tem_pred = self.net_uv(self.x_f_tf, self.t_f_tf)
78 |
79 | self.loss_pre = tf.reduce_mean(tf.square(self.X_tem_pred - self.X_tem_tf))
80 | self.loss = tf.reduce_mean(tf.square(self.tem0_pred - self.tem0_tf)) + 1.0e-3 * tf.reduce_mean(tf.square(self.f_tem_pred))
81 |
82 | self.global_step = tf.Variable(0, trainable=False, dtype=tf.int64)
83 | self.decayed_lr = tf.compat.v1.train.exponential_decay(
84 | learning_rate = 0.1,
85 | global_step=self.global_step,
86 | decay_steps=2000,
87 | decay_rate=0.1,
88 | staircase=True
89 | )
90 |
91 | self.decayed_lr2 = tf.compat.v1.train.cosine_decay(
92 | learning_rate=0.001,
93 | global_step=self.global_step,
94 | decay_steps=5000, # adjust as needed
95 | )
96 |
97 | self.decayed_lr3 = tf.compat.v1.train.polynomial_decay(
98 | learning_rate=0.001,
99 | global_step=self.global_step,
100 | decay_steps=5000, # adjust as needed
101 | end_learning_rate=0.0001,
102 | power=2.0,
103 | cycle=False
104 | )
105 |
106 | self.optimizer_Adam = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)
107 | self.train_op_Adam = self.optimizer_Adam.minimize(self.loss)
    | # Pre-training op referenced by train() when nIter_pre > 0 (learning rate
    | # assumed equal to the main stage), and a Saver for save_model()/load_model().
    | self.optimizer_Adam_pre = tf.compat.v1.train.AdamOptimizer(learning_rate=0.001)
    | self.train_op_Adam_pre = self.optimizer_Adam_pre.minimize(self.loss_pre)
    | self.saver = tf.compat.v1.train.Saver()
108 |
109 | self.sess = tf.compat.v1.Session()
110 | init = tf.compat.v1.global_variables_initializer()
111 | self.sess.run(init)
112 |
113 | def initialize_NN(self, layers):
114 | weights = []
115 | biases = []
116 | num_layers = len(layers)
117 | for l in range(0, num_layers - 1):
118 | W = self.xavier_init(size=[layers[l], layers[l + 1]])
119 | b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float64), dtype=tf.float64)
120 | weights.append(W)
121 | biases.append(b)
122 | return weights, biases
123 |
124 | def xavier_init(self, size):
125 | in_dim = size[0]
126 | out_dim = size[1]
127 | xavier_stddev = np.sqrt(2 / (in_dim + out_dim))
128 | return tf.Variable(tf.random.truncated_normal([in_dim, out_dim], stddev=xavier_stddev, dtype=tf.float64), dtype=tf.float64)
129 |
130 | def neural_net(self, X, weights, biases):
131 | num_layers = len(weights) + 1
132 |
133 | H = 2.0*(X - self.lb)/(self.ub - self.lb) - 1.0
134 | for l in range(0, num_layers - 2):
135 | W = weights[l]
136 | b = biases[l]
137 | H = tf.tanh(tf.add(tf.matmul(H, W), b))
138 | W = weights[-1]
139 | b = biases[-1]
140 | Y = tf.add(tf.matmul(H, W), b)
141 | return Y
142 |
143 | def cal_H(self, x):
144 | eps= 1e-3
145 | x1 = (-0.4 + eps)*tf.ones_like(x)
146 | x2 = ( 0.4 - eps)*tf.ones_like(x)
147 | one = tf.ones_like(x)
148 |
149 | d1 = x - x1
150 | d2 = x2 - x
151 | dist = tf.minimum(d1, d2)
152 |
153 | Hcal = 0.5*( one + dist/eps + 1.0/np.pi*tf.sin(dist*np.pi/eps) )
154 |
155 | #xtmp = tf.where(tf.greater(dist, eps), tf.ones_like(x), Hcal)
156 | xout = tf.where(tf.less(dist, -eps), tf.zeros_like(x), tf.where(tf.greater(dist, eps), tf.ones_like(x), Hcal))
157 |
158 | return xout
159 |
160 | def net_uv(self, x, t):
161 | X = tf.concat([x, t], axis=1)
162 |
163 | Hcal = self.cal_H(x)
164 | one = tf.ones_like(x)
165 |
166 | T1 = 298.15*one
167 | T2 = 973.15*one
168 | xlen = 0.8
169 | dx = x + 0.4*one
170 | Tbc = T1 + (T2-T1)/xlen*dx
171 |
172 | tem = self.neural_net(X, self.weights, self.biases)
173 | tem = Tbc*(one-Hcal) + tem*Hcal
174 |
175 | return tem
176 |
177 | def net_f_uv(self, x, t):
178 | tem = self.net_uv(x, t)
179 |
180 | one = tf.ones_like(tem)
181 | zero = tf.zeros_like(tem)
182 |
183 | rho_Al_liquid = 2555.0 * one
184 | rho_Al_solid = 2555.0 * one
185 | rho_grap = 2200.0 * one
186 |
187 | kappa_Al_liquid = 91.0 * one
188 | kappa_Al_solid = 211.0 * one
189 | kappa_grap = 100.0 * one
190 |
191 | cp_Al_liquid = 1190.0 * one
192 | cp_Al_solid = 1190.0 * one
193 | cp_grap = 1700.0 * one
194 |
195 | cl_Al_liquid = 3.98e5 * one
196 | cl_Al_solid = 3.98e5 * one
197 | cl_grap = 3.98e5 * one
198 |
199 | # Value of Ts
200 | Ts = 913.15 * one
201 | Tl = 933.15 * one
202 |
203 | tem_t = tf.gradients(tem, t)[0]
204 | tem_x = tf.gradients(tem, x)[0]
205 | fL = (tem - Ts) / (Tl - Ts)
206 | fL = tf.maximum(tf.minimum((tem - Ts) / (Tl - Ts), one), zero)
207 | fL_t = tf.gradients(fL, t)[0]
208 |
209 | rho = tf.where(tf.greater(x, zero), rho_Al_liquid * fL + rho_Al_solid * (one - fL), rho_grap)
210 | kappa = tf.where(tf.greater(x, zero), kappa_Al_liquid * fL + kappa_Al_solid * (one - fL), kappa_grap)
211 | cp = tf.where(tf.greater(x, zero), cp_Al_liquid * fL + cp_Al_solid * (one - fL), cp_grap)
212 | cl = tf.where(tf.greater(x, zero), cl_Al_liquid * fL + cl_Al_solid * (one - fL), cl_grap)
213 |
214 | lap = tf.gradients(kappa * tem_x, x)[0]
215 |
216 | f_tem = (rho * cp * tem_t + rho * cl * fL_t - lap) / (rho_Al_solid * kappa_Al_solid)
217 |
218 | return f_tem
219 |
220 | def callback(self, loss):
221 | print('Loss:', loss)
222 |
223 | def train(self, nIter_pre, nIter):
224 | losses = []
225 |
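    | # NOTE: the eager-style helpers below (train_step_Adam_pre, train_step_Adam,
    | # scipy_lbfgs_optimizer) are not invoked by the session-based loop further
    | # down; they would need self.trainable_variables defined before use.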
226 | @tf.function
227 | def train_step_Adam_pre():
228 | with tf.GradientTape() as tape:
229 | loss_value = self.loss_pre
230 | gradients = tape.gradient(loss_value, self.trainable_variables)
231 | self.optimizer_Adam_pre.apply_gradients(zip(gradients, self.trainable_variables))
232 | return loss_value
233 |
234 | @tf.function
235 | def train_step_Adam():
236 | with tf.GradientTape() as tape:
237 | loss_value = self.loss
238 | gradients = tape.gradient(loss_value, self.trainable_variables)
239 | self.optimizer_Adam.apply_gradients(zip(gradients, self.trainable_variables))
240 | return loss_value
241 |
242 | def scipy_lbfgs_optimizer(loss, variables):
243 | def get_loss_and_grads():
244 | with tf.GradientTape() as tape:
245 | loss_value = loss()
246 | gradients = tape.gradient(loss_value, variables)
247 | return loss_value, gradients
248 |
249 | tfp.optimizer.lbfgs_minimize(get_loss_and_grads, initial_position=variables)
250 |
251 | it = 0
252 | tf_dict = {self.x0_tf: self.x0, self.t0_tf: self.t0,
253 | self.tem0_tf: self.tem0,
254 | self.tem_lb_tf: self.tem_lb, self.tem_ub_tf: self.tem_ub,
255 | self.x_lb_tf: self.x_lb, self.t_lb_tf: self.t_lb,
256 | self.x_ub_tf: self.x_ub, self.t_ub_tf: self.t_ub,
257 | self.x_f_tf: self.x_f, self.t_f_tf: self.t_f,
258 | self.X_tem_tf: self.X_tem}
259 |
260 | start_time = time.time()
261 | for it in range(nIter_pre):
262 | self.sess.run(self.train_op_Adam_pre, tf_dict)
263 |
264 | # Print
265 | if it % 1000 == 0:
266 | elapsed = time.time() - start_time
267 | loss_value = self.sess.run(self.loss_pre, tf_dict)
268 | print('It: %d, Loss: %.3e, Time: %.2f' %
269 | (it, loss_value, elapsed))
270 | sys.stdout.flush()
271 | start_time = time.time()
272 |
273 | # if nIter_pre > 0:
274 | # scipy_lbfgs_optimizer(lambda: self.loss_pre, tf_dict)
275 | threshold = 1
276 | start_time = time.time()
277 | for it in range(nIter):
278 | self.sess.run(self.train_op_Adam, tf_dict)
279 |
280 | loss_value = self.sess.run(self.loss, tf_dict)
281 | losses.append(loss_value)
282 | if (loss_value <= threshold) :
283 | print("Training Completed with threshold = " + str(threshold))
284 | break
285 | # Print
286 | if it % 1000 == 0:
287 | elapsed = time.time() - start_time
288 | loss_value = self.sess.run(self.loss, tf_dict)
289 | print('It: %d, Loss: %.3e, Time: %.2f' %
290 | (it, loss_value, elapsed))
291 | sys.stdout.flush()
292 | start_time = time.time()
293 | np.savetxt('training_losses_layers=8_lr=0.001_hn=200_epochs=25000.txt', losses)
294 | # if nIter > 0:
295 | # scipy_lbfgs_optimizer(lambda: self.loss, tf_dict)
296 | return losses
297 |
298 |
299 | def predict(self, X_star):
300 | X_star = tf.convert_to_tensor(X_star, dtype=tf.float64)
301 | tem_star = self.sess.run(self.net_uv(X_star[:, 0:1], X_star[:, 1:2]))
302 | ftem_star = self.sess.run(self.net_f_uv(X_star[:, 0:1], X_star[:, 1:2]))
303 | return tem_star, ftem_star
304 |
306 |
307 | def save_model(self, path):
308 | save_path = self.saver.save(self.sess, path)
309 | print(f"Model saved in path: {save_path}")
310 |
311 | def load_model(self, path):
312 | self.saver.restore(self.sess, path)
313 | print(f"Model restored from path: {path}")
314 |
315 | if __name__ == "__main__":
316 | noise = 0.0
317 |
318 | ltem = 298.15
319 | utem = 973.15
320 | eps = 0.02
321 |
322 | # Domain bounds
323 | lb = np.array([-0.4, 5.0])
324 | ub = np.array([0.4, 10.0])
325 |
326 | lbr = np.array([-0.05, 5.0])
327 | ubr = np.array([ 0.05, 10.0])
328 |
329 | N0 = 300
330 | N_b = 100
331 | N_f = 10000
332 | num_hidden = 8
333 | layers = [2] + num_hidden * [200] + [1]
334 |
335 | data = scipy.io.loadmat('dat/thermal_fine.mat')
336 |
337 | x = data['x'].flatten()[:, None]
338 | t = data['tt'].flatten()[:, None]
339 | Exact = data['Tem']
340 | Exact_tem = np.real(Exact)
341 |
342 | ftem = interp2d(x, t, Exact.T)
343 |
344 | X, T = np.meshgrid(x, t)
345 | X_star = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
346 |
347 | idx_x = np.random.choice(x.shape[0], N0, replace=False)
348 | x0 = x[idx_x, :]
349 | tem0 = Exact_tem[idx_x, 0:1]
350 |
351 | idx_t = np.random.choice(t.shape[0], N_b, replace=False)
352 | tb = t[idx_t, :]
353 |
354 | X_f = lb + (ub - lb) * lhs(2, N_f)
355 |
356 | X_f = X_f[np.argsort(X_f[:, 0])]
357 | X_tem = ftem(X_f[:, 0], 0).flatten()[:, None]
358 |
359 | model = SolidificationPINN(x0, tem0, tb, X_f, X_tem, layers, lb, ub)
360 |
361 | start_time = time.time()
362 | losses = model.train(-1, 50000)
363 | elapsed = time.time() - start_time
364 |
365 | print('Training time: %.4f' % (elapsed))
366 |
367 | tem_pred, ftem_pred = model.predict(X_star)
368 | np.savetxt('predict_xT.txt', X_star)
369 | np.savetxt('predict_tem.txt', tem_pred)
370 | np.savetxt('predict_ftem.txt', ftem_pred)
371 |
372 | # Plot training loss curve
373 | plt.figure()
374 | plt.plot(losses)
375 | plt.xlabel('Epoch')
376 | plt.ylabel('Loss')
377 | plt.title('Training Loss Curve')
378 | plt.grid(True)
379 | plt.savefig('training_loss_curve.png')
380 |
381 | # model.save_model('model_checkpoint.ckpt')
382 | # model.load_model('model_checkpoint.ckpt')
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorflow==2.11.0
2 | tensorflow-probability==0.16.0
3 | numpy==1.23.5
4 | matplotlib==3.7.1
5 | scipy==1.9.3
6 | pyDOE==0.3.8
7 |
--------------------------------------------------------------------------------
/visualisation/final_results/Residuals_Tem_Pred.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/Residuals_Tem_Pred.png
--------------------------------------------------------------------------------
/visualisation/final_results/Scatter_Plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/Scatter_Plot.png
--------------------------------------------------------------------------------
/visualisation/final_results/Temp_Pred_VS_Exact.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/Temp_Pred_VS_Exact.png
--------------------------------------------------------------------------------
/visualisation/final_results/TrainingLossCurve_0.001_HL=8_HN=200.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/TrainingLossCurve_0.001_HL=8_HN=200.png
--------------------------------------------------------------------------------
/visualisation/final_results/TrainingLossCurve_LR=cosine.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/TrainingLossCurve_LR=cosine.png
--------------------------------------------------------------------------------
/visualisation/final_results/TrainingLossCurve_LR=expo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/TrainingLossCurve_LR=expo.png
--------------------------------------------------------------------------------
/visualisation/final_results/TrainingLossCurve_LR=polynomial.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doomsday4/Heat-Transfer-in-Advanced-Manufacturing-using-PINN/07344fa42c072c35119a102568e320ff6e044f5a/visualisation/final_results/TrainingLossCurve_LR=polynomial.png
--------------------------------------------------------------------------------
/visualisation/plots.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 |
4 | # Load the data
5 | X_star = np.loadtxt('predict_xT.txt')
6 | tem_pred = np.loadtxt('predict_tem.txt')
7 | ftem_pred = np.loadtxt('predict_ftem.txt')
8 |
9 | # Extract the spatial and temporal coordinates
10 | x = X_star[:, 0]
11 | t = X_star[:, 1]
12 |
13 | # Determine the number of unique points in x and t
14 | num_x_points = len(np.unique(x))
15 | num_t_points = len(np.unique(t))
16 |
17 | # Reshape for plotting; X_star came from meshgrid(x, t), so rows vary in t
18 | X = x.reshape((num_t_points, num_x_points))
19 | T = t.reshape((num_t_points, num_x_points))
20 | Tem = tem_pred.reshape((num_t_points, num_x_points))
21 |
22 | # Plot temperature distribution
23 | plt.figure()
24 | plt.contourf(X, T, Tem, levels=50, cmap='hot')
25 | plt.colorbar(label='Temperature')
26 | plt.xlabel('x')
27 | plt.ylabel('t')
28 | plt.title('Predicted Temperature Distribution')
29 | plt.show()
30 |
31 | # Reshape the residuals
32 | Res = ftem_pred.reshape((num_t_points, num_x_points))
33 |
34 | # Plot residuals
35 | plt.figure()
36 | plt.contourf(X, T, Res, levels=50, cmap='coolwarm')
37 | plt.colorbar(label='Residuals')
38 | plt.xlabel('x')
39 | plt.ylabel('t')
40 | plt.title('Residuals of the Heat Equation')
41 | plt.show()
--------------------------------------------------------------------------------
/visualisation/plots2.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | import scipy.io
4 | from scipy.interpolate import RegularGridInterpolator
5 |
6 | # Load the data
7 | X_star = np.loadtxt('predict_xT.txt')
8 | tem_pred = np.loadtxt('predict_tem.txt')
9 | ftem_pred = np.loadtxt('predict_ftem.txt')
10 |
11 | # Load the exact temperature data from the original dataset
12 | data = scipy.io.loadmat('dat/thermal_fine.mat')
13 | x = data['x'].flatten()
14 | t = data['tt'].flatten()
15 | Exact = data['Tem']
16 | Exact_tem = np.real(Exact)
17 |
18 | # Reshape the exact temperature data to match the grid
19 | Exact_tem_reshaped = Exact_tem.T # Transpose for proper alignment
20 |
21 | # Create an interpolator for the exact temperature data
22 | interpolator = RegularGridInterpolator((t, x), Exact_tem_reshaped)
23 |
24 | # Extract the spatial and temporal coordinates from the prediction data
25 | x_star = X_star[:, 0]
26 | t_star = X_star[:, 1]
27 |
28 | # Determine the number of unique points in x and t
29 | num_x_points = len(np.unique(x_star))
30 | num_t_points = len(np.unique(t_star))
31 |
32 | # Reshape the prediction data for plotting
33 | X = x_star.reshape((num_t_points, num_x_points)) # Notice the order change
34 | T = t_star.reshape((num_t_points, num_x_points)) # Notice the order change
35 | Tem_pred = tem_pred.reshape((num_t_points, num_x_points))
36 |
37 | # Interpolate the exact temperature data at the prediction points
38 | Exact_tem_resampled = interpolator((t_star, x_star)).reshape((num_t_points, num_x_points))
39 |
40 | # Plot predicted temperature distribution
41 | plt.figure()
42 | plt.contourf(X, T, Tem_pred, levels=50, cmap='hot')
43 | plt.colorbar(label='Temperature')
44 | plt.xlabel('x')
45 | plt.ylabel('t')
46 | plt.title('Predicted Temperature Distribution')
47 | plt.show()
48 |
49 | # Plot exact temperature distribution
50 | plt.figure()
51 | plt.contourf(X, T, Exact_tem_resampled, levels=50, cmap='hot')
52 | plt.colorbar(label='Temperature')
53 | plt.xlabel('x')
54 | plt.ylabel('t')
55 | plt.title('Exact Temperature Distribution')
56 | plt.show()
57 |
58 | # Plot residuals (errors)
59 | plt.figure()
60 | plt.contourf(X, T, Tem_pred - Exact_tem_resampled, levels=50, cmap='coolwarm')
61 | plt.colorbar(label='Residuals')
62 | plt.xlabel('x')
63 | plt.ylabel('t')
64 | plt.title('Residuals of the Temperature Prediction')
65 | plt.show()
66 |
67 | # Scatter plot of predictions vs. actual values
68 | plt.figure()
69 | plt.scatter(Exact_tem_resampled.flatten(), Tem_pred.flatten(), alpha=0.5)
70 | plt.plot([Exact_tem_resampled.min(), Exact_tem_resampled.max()],
71 | [Exact_tem_resampled.min(), Exact_tem_resampled.max()], 'k--', lw=2)
72 | plt.xlabel('Actual Temperature')
73 | plt.ylabel('Predicted Temperature')
74 | plt.title('Scatter Plot of Predicted vs. Actual Temperature')
75 | plt.show()
--------------------------------------------------------------------------------
/visualisation/plots3.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | import scipy.io
4 | from scipy.interpolate import RegularGridInterpolator
5 |
6 | # Load the data
7 | X_star = np.loadtxt('predict_xT.txt')
8 | tem_pred = np.loadtxt('predict_tem.txt')
9 |
10 | # Load the exact temperature data from the original dataset
11 | data = scipy.io.loadmat('dat/thermal_fine.mat')
12 | x = data['x'].flatten()
13 | t = data['tt'].flatten()
14 | Exact = data['Tem']
15 | Exact_tem = np.real(Exact)
16 |
17 | # Reshape the exact temperature data to match the grid
18 | Exact_tem_reshaped = Exact_tem.T # Transpose for proper alignment
19 |
20 | # Create an interpolator for the exact temperature data
21 | interpolator = RegularGridInterpolator((t, x), Exact_tem_reshaped)
22 |
23 | # Extract the spatial and temporal coordinates from the prediction data
24 | x_star = X_star[:, 0]
25 | t_star = X_star[:, 1]
26 |
27 | # Determine the number of unique points in x and t
28 | num_x_points = len(np.unique(x_star))
29 | num_t_points = len(np.unique(t_star))
30 |
31 | # Reshape the prediction data for plotting
32 | X = x_star.reshape((num_t_points, num_x_points))
33 | T = t_star.reshape((num_t_points, num_x_points))
34 | Tem_pred = tem_pred.reshape((num_t_points, num_x_points))
35 |
36 | # Interpolate the exact temperature data at the prediction points
37 | Exact_tem_resampled = interpolator((t_star, x_star)).reshape((num_t_points, num_x_points))
38 |
39 | # Plot predicted and exact temperature distributions side by side
40 | plt.figure(figsize=(14, 6))
41 |
42 | plt.subplot(1, 2, 1)
43 | plt.contourf(X, T, Tem_pred, levels=50, cmap='hot')
44 | plt.colorbar(label='Temperature')
45 | plt.xlabel('x')
46 | plt.ylabel('t')
47 | plt.title('Predicted Temperature Distribution')
48 |
49 | plt.subplot(1, 2, 2)
50 | plt.contourf(X, T, Exact_tem_resampled, levels=50, cmap='hot')
51 | plt.colorbar(label='Temperature')
52 | plt.xlabel('x')
53 | plt.ylabel('t')
54 | plt.title('Exact Temperature Distribution')
55 |
56 | plt.tight_layout()
57 | plt.show()
--------------------------------------------------------------------------------