├── figs
    ├── cylinder-2d-re40
    │   ├── loss-hist.png
    │   ├── drag-lift-coeffs.png
    │   ├── surface-pressure.png
    │   └── contour-comparison.png
    ├── cylinder-2d-re200
    │   ├── loss-hist.png
    │   ├── qcriterion.png
    │   ├── vorticity_z.png
    │   ├── drag-lift-coeffs.png
    │   ├── contour-comparison-p.png
    │   ├── contour-comparison-u.png
    │   ├── contour-comparison-v.png
    │   ├── koopman_mode_strength.png
    │   ├── contour-comparison-steady.png
    │   ├── koopman_pinn_000_st0.000.png
    │   ├── koopman_pinn_001_st0.201.png
    │   ├── koopman_pinn_002_st1.142.png
    │   ├── koopman_pinn_003_st1.253.png
    │   ├── koopman_pinn_004_st0.633.png
    │   ├── koopman_pinn_005_st0.761.png
    │   ├── koopman_pinn_006_st0.403.png
    │   ├── koopman_pinn_007_st0.604.png
    │   ├── contour-comparison-omega_z.png
    │   ├── koopman_eigenvalues_complex.png
    │   ├── koopman_petibm_000_st0.000.png
    │   ├── koopman_petibm_001_st0.201.png
    │   ├── koopman_petibm_002_st0.403.png
    │   └── koopman_petibm_003_st0.604.png
    ├── tgv-2d-re100
    │   ├── learning-rate-hist.png
    │   ├── petibm-tgv-2d-re100-convergence.png
    │   ├── pinn-nl3-nn256-npts4096-contours.png
    │   └── pinn-nl3-nn128-npts8192-convergence.png
    └── pinn.tikz
├── .gitmodules
├── .gitignore
├── latexmkrc
├── tikzit.tikzstyles
├── README.md
├── reproducibility.tex
├── conclusion.tex
├── supp.tex
├── introduction.tex
├── main.tex
├── results_2.tex
├── methods.tex
├── discussion.tex
├── results_1.tex
├── main.bbl
├── references.bib
└── results_3.tex

/figs/cylinder-2d-re40/loss-hist.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re40/loss-hist.png
/figs/cylinder-2d-re200/loss-hist.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/loss-hist.png
/figs/cylinder-2d-re200/qcriterion.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/qcriterion.png
/figs/cylinder-2d-re200/vorticity_z.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/vorticity_z.png
/figs/tgv-2d-re100/learning-rate-hist.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/tgv-2d-re100/learning-rate-hist.png
/figs/cylinder-2d-re40/drag-lift-coeffs.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re40/drag-lift-coeffs.png
/figs/cylinder-2d-re40/surface-pressure.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re40/surface-pressure.png
/figs/cylinder-2d-re200/drag-lift-coeffs.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/drag-lift-coeffs.png
/figs/cylinder-2d-re40/contour-comparison.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re40/contour-comparison.png
/figs/cylinder-2d-re200/contour-comparison-p.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/contour-comparison-p.png
/figs/cylinder-2d-re200/contour-comparison-u.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/contour-comparison-u.png
/figs/cylinder-2d-re200/contour-comparison-v.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/contour-comparison-v.png
/.gitmodules:
--------------------------------------------------------------------------------
[submodule "repro-pack"]
    path = repro-pack
    url = https://github.com/barbagroup/chuang-dissertation-repro-pack.git
    branch = jcs-paper
--------------------------------------------------------------------------------
/figs/cylinder-2d-re200/koopman_mode_strength.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_mode_strength.png
/figs/cylinder-2d-re200/contour-comparison-steady.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/contour-comparison-steady.png
/figs/cylinder-2d-re200/koopman_pinn_000_st0.000.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_000_st0.000.png
/figs/cylinder-2d-re200/koopman_pinn_001_st0.201.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_001_st0.201.png
/figs/cylinder-2d-re200/koopman_pinn_002_st1.142.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_002_st1.142.png
/figs/cylinder-2d-re200/koopman_pinn_003_st1.253.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_003_st1.253.png
/figs/cylinder-2d-re200/koopman_pinn_004_st0.633.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_004_st0.633.png
/figs/cylinder-2d-re200/koopman_pinn_005_st0.761.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_005_st0.761.png
/figs/cylinder-2d-re200/koopman_pinn_006_st0.403.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_006_st0.403.png
/figs/cylinder-2d-re200/koopman_pinn_007_st0.604.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_pinn_007_st0.604.png
/figs/cylinder-2d-re200/contour-comparison-omega_z.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/contour-comparison-omega_z.png
/figs/cylinder-2d-re200/koopman_eigenvalues_complex.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_eigenvalues_complex.png
/figs/cylinder-2d-re200/koopman_petibm_000_st0.000.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_petibm_000_st0.000.png
/figs/cylinder-2d-re200/koopman_petibm_001_st0.201.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_petibm_001_st0.201.png
/figs/cylinder-2d-re200/koopman_petibm_002_st0.403.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_petibm_002_st0.403.png
/figs/cylinder-2d-re200/koopman_petibm_003_st0.604.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/cylinder-2d-re200/koopman_petibm_003_st0.604.png
/figs/tgv-2d-re100/petibm-tgv-2d-re100-convergence.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/tgv-2d-re100/petibm-tgv-2d-re100-convergence.png
/figs/tgv-2d-re100/pinn-nl3-nn256-npts4096-contours.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/tgv-2d-re100/pinn-nl3-nn256-npts4096-contours.png
/figs/tgv-2d-re100/pinn-nl3-nn128-npts8192-convergence.png: https://raw.githubusercontent.com/barbagroup/jcs_paper_pinn/master/figs/tgv-2d-re100/pinn-nl3-nn128-npts8192-convergence.png
/.gitignore:
--------------------------------------------------------------------------------
elsarticle_manual.pdf
outputs
*.aux
*.log
*.out
*.spl
*.blg
*.synctex.gz
*.fdb_latexmk
*.fls
.DS_Store
figs/.DS_Store
elsarticle.cls
--------------------------------------------------------------------------------
/latexmkrc:
--------------------------------------------------------------------------------
# the directory to store generated files
$out_dir = "outputs";

# only have to provide main.tex; other tex files are included in main.tex
@default_files = ('main.tex');

# which compiler to use for generating PDF (1 means pdflatex)
$pdf_mode = 1;

# don't generate the DVI version of the document
$dvi_mode = 0;

# don't generate the PostScript version of the document
$postscript_mode = 0;

# the command used for the pdflatex compiler
$pdflatex = "pdflatex -synctex=1 %O %S";
--------------------------------------------------------------------------------
/tikzit.tikzstyles:
--------------------------------------------------------------------------------
\usetikzlibrary{decorations.text, arrows.meta, bending, positioning, fit, calc}

% node styles
\tikzstyle{none}=[inner sep=0mm]

\tikzstyle{title}=[font=\fontsize{6}{6}\color{black!50}\ttfamily]

\tikzstyle{input}=[
    fill=white, draw=black, shape=rectangle, minimum width=0.2in, minimum height=0.2in,
    inner sep=0.02in, align=center, line width=0.01in
]

\tikzstyle{param}=[
    fill={rgb,255: red,191; green,191; blue,191}, draw=black, shape=circle, minimum width=0.25in,
    minimum height=0.25in, inner sep=0in, align=center, line width=0.01in
]

% link styles
\tikzstyle{one arrow}=[->, >=stealth, line width=0.005in]

% vim:ft=tex:
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding

**Pi-Yueh Chuang, Lorena A. Barba**

Manuscript repository, including all source files for the text and figures.

arXiv preprint: Coming soon

## Reproducibility package archives

- Reproducibility Package for Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding, [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7988066.svg)](https://doi.org/10.5281/zenodo.7988066) (2023)
- Reproducibility Package (Raw Data) for Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding, [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7988105.svg)](https://doi.org/10.5281/zenodo.7988105) (2023)

## Software

- "PetIBM: toolbox and applications of the immersed-boundary method on distributed-memory architectures", Pi-Yueh Chuang, Olivier Mesnard, Anush Krishnan, Lorena A. Barba. _The Journal of Open Source Software_, 3(25):558 (May 2018). doi:[10.21105/joss.00558](http://doi.org/10.21105/joss.00558). Code repository: https://github.com/barbagroup/PetIBM (BSD-3 license)
  - [Documentation](https://barbagroup.github.io/PetIBM/)
- "NVIDIA Modulus: a Neural Network Framework," https://developer.nvidia.com/modulus (Apache 2.0 license)
  - [Documentation](https://docs.nvidia.com/deeplearning/modulus/modulus-core/index.html)
- Our PINN solvers using the Modulus toolkit can be found in the `repro-pack` submodule (BSD-3 license)

## LICENSE

Not all content in this repository is open source. The Python code for creating the figures is shared under a BSD 3-Clause License.
Please note that the manuscript text is not openly licensed; we reserve rights to the article content, which will be submitted for publication in a journal. Only fair use applies in this case.
--------------------------------------------------------------------------------
/reproducibility.tex:
--------------------------------------------------------------------------------
%! TEX root = main.tex

We strive to achieve reproducibility of our results: all the code we developed for this research is publicly available on GitHub under open-source licenses, and all the data are available in open archival repositories.
PetIBM is an open-source CFD library based on the immersed boundary method, available at \url{https://github.com/barbagroup/PetIBM} under the permissive BSD-3 license.
The software was peer reviewed and published in the Journal of Open Source Software \cite{chuang_petibm_2018}.
Our PINN solvers, based on the NVIDIA \emph{Modulus} toolkit, can be found following the links in the GitHub repository for this paper, located at \url{https://github.com/barbagroup/jcs_paper_pinn/}.
There, the folder prefixed by \texttt{repro-pack} corresponds to a git submodule pointing to the relevant commit on a branch of the repository for the full reproducibility package of the first author's PhD dissertation \cite{chuang_thesis_2023}.
The branch named \texttt{jcs-paper} contains the modified plotting scripts that produce the publication-quality figures in this paper.
A snapshot of the repro-pack is archived on Zenodo, under DOI 10.5281/zenodo.7988067.
As described in the README of the repro-pack, readers can use pre-generated data to plot the figures in this paper, or they can re-run the solutions using the code and data available in the repro-pack.
The latter option is of course limited by the computational resources available to the reader.
For the first option, the reader can find the raw data in a Zenodo archive, with DOI 10.5281/zenodo.7988106.
To facilitate reproducibility of the computational environment, we executed all cases using Singularity/Apptainer images, for both the PetIBM and PINN cases.
All the container recipes are included in the repro-pack under the \texttt{resources} folder.
The \emph{Modulus} toolkit was open-sourced by NVIDIA in March 2023,\footnote{\url{https://developer.nvidia.com/blog/physics-ml-platform-modulus-is-now-open-source/}} under the Apache License 2.0.
This is a permissive license that requires preservation of copyright and license notices and provides an express grant of patent rights.
When we started this research, \emph{Modulus} was not yet open source, but it was publicly available under the conditions of an End User Agreement.
Documentation of those conditions can be found via the May 21, 2022, snapshot of the \emph{Modulus} developer website on the Internet Archive Wayback Machine.\footnote{\url{https://web.archive.org/web/20220521223413/https://catalog.ngc.nvidia.com/orgs/nvidia/teams/modulus/containers/modulus}}
We are confident that following the best practices of open science described in this statement provides good conditions for reproducibility of our results.
Readers can inspect the code if any detail is unclear in the paper narrative, and they can re-analyze our data or re-run the computational experiments.
We spared no effort to document, organize, and preserve all the digital artifacts for this work.
--------------------------------------------------------------------------------
/conclusion.tex:
--------------------------------------------------------------------------------
%! TEX root = main.tex

In this study, we aimed to expand upon our previous work \cite{chuang_experience_2022} by exploring the effectiveness of physics-informed neural networks (PINNs) in predicting vortex shedding in a 2D cylinder flow at $Re = 200$.
It should be noted that our focus is limited to forward problems involving the non-parameterized, incompressible Navier-Stokes equations.

To ensure the correctness of our results, we verified and validated all the solvers involved.
In addition to the baseline results obtained with PetIBM, we used three PINN solvers in the case study: a steady data-free PINN, an unsteady data-free PINN, and a data-driven PINN.
Our results indicate that while both data-free PINNs produced steady-state solutions similar to those of traditional CFD solvers, they failed to predict vortex shedding in unsteady flow situations.
On the other hand, the data-driven PINN predicted vortex shedding only within the timeframe where PetIBM training data were available; beyond this timeframe, the prediction quickly reverted to the steady-state solution.
Additionally, the data-driven PINN showed limited extrapolation capabilities and produced meaningless predictions at unseen coordinates.
Our Koopman analysis suggests that PINN methods may be dissipative and dispersive, which inhibits oscillation and causes the computed flow to return to a steady state.
This analysis is also consistent with the observation of a spectral bias inherent to neural networks \cite{rahaman_spectral_2019}.

For forward problems, data-free PINNs will need more theoretical and mathematical refinement to compete with traditional numerical methods in terms of accuracy and computational cost; this makes data-driven PINNs more promising, thanks to their ability to handle applications that are challenging for traditional methods, such as flow reconstruction or surrogate modeling with sparse data.
Thus, an interesting research question arises: how do data-driven PINNs compare with classical deep learning approaches, in which neural networks are not constrained by partial differential equations?
Both data-driven PINNs and classical deep learning approaches work well in interpolation settings but struggle to extrapolate beyond the training data.
In situations where only sparse data are available, data-driven PINNs still work, while classical deep learning may not.
However, data-driven PINNs are also more expensive to train and require significantly more computational resources, due to the presence of high-order derivatives in the loss function.
It is therefore important to assess the cost-performance ratio of data-driven PINNs when deployed in real-world applications.
One obvious factor affecting this ratio for data-driven PINNs and classical deep learning is the ease of obtaining additional training data.
For example, if training data are obtained from traditional numerical simulations, meaning that high-quality data can be acquired abundantly, then classical deep learning may be a cheaper option than data-driven PINNs.
Thus, it would be valuable to quantitatively assess the benefits of data-driven PINNs compared with classical deep learning and understand the associated cost-performance trade-offs.

% vim:ft=tex
--------------------------------------------------------------------------------
/supp.tex:
--------------------------------------------------------------------------------
%! TEX root = main.tex

\begin{figure*} [h]
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_petibm_000_st0.000.png}%
\caption{%
The \num{1}st mode in PetIBM.
}
\label{fig:cylinder-re200-koopman-petibm-1st}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_petibm_001_st0.201.png}%
\caption{%
The \num{2}nd mode in PetIBM.
}
\label{fig:cylinder-re200-koopman-petibm-2nd}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_petibm_002_st0.403.png}%
\caption{%
The \num{3}rd mode in PetIBM.
}
\label{fig:cylinder-re200-koopman-petibm-3rd}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_petibm_003_st0.604.png}%
\caption{%
The \num{4}th mode in PetIBM.
}
\label{fig:cylinder-re200-koopman-petibm-4th}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_000_st0.000.png}%
\caption{%
The \num{1}st primary mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-primary-1st}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_001_st0.201.png}%
\caption{%
The \num{2}nd primary mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-primary-2nd}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_006_st0.403.png}%
\caption{%
The \num{3}rd primary mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-primary-3rd}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_007_st0.604.png}%
\caption{%
The \num{4}th primary mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-primary-4th}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_002_st1.142.png}%
\caption{%
The \num{1}st damped mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-damped-1st}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_003_st1.253.png}%
\caption{%
The \num{2}nd damped mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-damped-2nd}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_004_st0.633.png}%
\caption{%
The \num{3}rd damped mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-damped-3rd}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/koopman_pinn_005_st0.761.png}%
\caption{%
The \num{4}th damped mode in data-driven PINN.
}
\label{fig:cylinder-re200-koopman-pinn-damped-4th}%
\end{figure*}

% vim:ft=tex:
--------------------------------------------------------------------------------
/introduction.tex:
--------------------------------------------------------------------------------
%! TEX root = main.tex

In recent years, research interest in using Physics-Informed Neural Networks (PINNs) has surged.
The idea of using neural networks to represent solutions of ordinary and partial differential equations goes back to the 1990s \cite{dissanayake_neural-network-based_1994,lagaris_artificial_1998}, but after the term PINN was coined about five years ago, the field exploded.
Partly, this reflects the immense popularity of all things machine learning and artificial intelligence (ML/AI).
It also seems very attractive to be able to solve differential equations without meshing the domain, and without having to discretize the equations in space and time.
PINN methods incorporate the differential equations as constraints in the loss function, and obtain the solution by minimizing the loss function using standard ML techniques.
They are easily implemented in a few lines of code, taking advantage of the ML frameworks that have become available in recent years, such as PyTorch.
In contrast, traditional numerical solvers for PDEs such as the Navier-Stokes equations can require years of expertise and thousands of lines of code to develop, test, and maintain.
The general optimism in this field has perhaps held back critical examinations of the limitations of PINNs, and of the challenges of using them in practical applications.
This is compounded by the well-known fact that the academic literature is biased toward positive results, and negative results are rarely published.
We agree with a recent perspective article that calls for a view of ``cautious optimism'' in these emerging methods \cite{vinuesa_emerging_2022}, for which discussion in the published literature of both successes and failures is needed.

In this paper, we examine the solution of the Navier-Stokes equations using PINNs in flows with instabilities, particularly vortex shedding.
Fluid dynamic instabilities are ubiquitous in nature and engineering applications, and any method competing with traditional CFD should be able to handle them.
In a previous conference paper, we already reported on our observations of the limitations of PINNs in this context \cite{chuang_experience_2022}.
Although the solution of a laminar flow with vorticity, the classical Taylor-Green vortex, was well represented by a PINN solver, the same network architecture failed to give the expected solution in a flow with vortex shedding.
The PINN solver accurately represented the steady solution at a lower Reynolds number of $Re=40$, but reverted to the steady-state solution in two-dimensional flow past a circular cylinder at $Re=200$, which is known to exhibit vortex shedding.
Here, we investigate this failure in more detail, comparing with a traditional CFD solver and with a data-driven PINN that receives as training data the solution of the CFD solver.
We look at various fluid diagnostics, and also use dynamic mode decomposition (DMD) to analyze the flow and help explain the difficulty of the PINN solver in capturing oscillatory solutions.

Other works have called attention to possible failure modes for PINN methods.
Krishnapriyan et al. \cite{krishnapriyan_failure_2021} studied PINN models of simple problems of convection, reaction, and reaction-diffusion, and found that the PINN method only works for the simplest, slowly varying problems.
They suggested that the neural network architecture is expressive enough to represent a good solution, but the landscape of the loss function is too complex for the optimization to find it.
Fuks and Tchelepi \cite{fuks_limitations_2020} studied the limitations of PINNs in solving the Buckley-Leverett equation, a nonlinear hyperbolic equation that models two-phase flow in porous media.
They found that the neural network model was unable to represent the solution of the 1D hyperbolic PDE when shocks were present, and also concluded that the problem lies in the optimization process, or the loss function.
The failure to capture the vortex shedding of cylinder flow is also highlighted in a recent work by Rohrhofer et al. \cite{rohrhofer_fixedpoints_2023}, who cite our previous conference paper.
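The composite loss function at the center of these observed difficulties can be sketched, in its generic data-free form, as
% NB: illustrative generic form only; term weights and point sampling vary across implementations, including ours.
\begin{equation*}
\begin{split}
\mathcal{L}(\vec{\theta}) ={}& \frac{1}{N_r}\sum_{i=1}^{N_r}\left\|\mathcal{R}\left(G(\vec{x}_i, t_i; \vec{\theta})\right)\right\|^2
+ \frac{1}{N_b}\sum_{j=1}^{N_b}\left\|G(\vec{x}_j, t_j; \vec{\theta}) - \vec{g}(\vec{x}_j, t_j)\right\|^2 \\
&+ \frac{1}{N_0}\sum_{k=1}^{N_0}\left\|G(\vec{x}_k, 0; \vec{\theta}) - \vec{u}_0(\vec{x}_k)\right\|^2,
\end{split}
\end{equation*}
where $G(\vec{x}, t; \vec{\theta})$ is the network, $\mathcal{R}$ is the residual operator of the governing equations, $\vec{g}$ and $\vec{u}_0$ denote the boundary and initial data, and the three sums run over batches of residual, boundary, and initial-condition points.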
27 | 28 | Our PINN solvers were built using the NVIDIA \emph{Modulus} toolkit,\footnote{\url{https://developer.nvidia.com/modulus}} a high-level package built on PyTorch for building, training, and fine-tuning physics-informed machine learning models. 29 | For the traditional CFD solver, we used our own code, \emph{PetIBM}, which is open-source and available on GitHub, and has also been peer reviewed \cite{chuang_petibm_2018}. 30 | A Reproducibility Statement gives more details regarding all the open research objects to accompany the paper, and how the interested reader can reuse them. 31 | 32 | -------------------------------------------------------------------------------- /main.tex: -------------------------------------------------------------------------------- 1 | %! TEX program = pdflatex 2 | \documentclass[5p, twocolumn, times, sort&compress]{elsarticle} 3 | \usepackage{url} % to fix underscores in URLs in BibTex 4 | \usepackage{amsmath} 5 | \usepackage{amsthm} 6 | \usepackage{amssymb} 7 | \usepackage{amsfonts} 8 | \usepackage{mathtools} 9 | \usepackage[overload]{empheq} 10 | \usepackage[section]{placeins} % limit floats (figs, tables, etc.) 
to the same sections 11 | \usepackage{tikz} % visualizing computational graphs 12 | \usepackage{relsize} % setting font sizes relative to the current font size 13 | \usepackage{makecell} % create a multi-line cell 14 | \usepackage{stfloats} % for positioning of figure* on the same page 15 | \usepackage{booktabs} % table separation lines 16 | \usepackage{threeparttable} % three part table 17 | \usepackage[group-separator={,}]{siunitx} % unifying the styles of numbers, units, and quantities 18 | \usepackage[multidot]{grffile} 19 | 20 | % temporary; can be deleted 21 | \usepackage{lipsum} % generate random text as place holders 22 | 23 | % helpful macros 24 | \DeclareMathOperator*{\argmin}{arg\,min} % thin space, limits underneath in displays 25 | \RenewDocumentCommand{\vec}{m}{\mathbf{#1}} % ISO-style vector notation 26 | \NewDocumentCommand{\mat}{m}{\boldsymbol{\mathit{#1}}} % ISO-style matrix notation 27 | \NewDocumentCommand{\diff}{}{\mathop{}\!\mathrm{d}} % ISO-style upright ordinary differentiation 28 | \NewDocumentCommand{\pdiff}{mm}{\frac{\partial #1}{\partial #2}} % partial differential terms 29 | 30 | % tikz setup 31 | \input{tikzit.tikzstyles} 32 | 33 | % figure-related 34 | \graphicspath{{figs/}} % default figure search path 35 | \DeclareGraphicsExtensions{.pdf,.png} % priority of the formats of figures 36 | 37 | % bibliography style 38 | \bibliographystyle{elsarticle-num} 39 | 40 | % journal name 41 | \journal{an Elsevier journal} 42 | 43 | % main body 44 | \begin{document} 45 | 46 | % front matter 47 | \begin{frontmatter} 48 | % title 49 | \title{% 50 | Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding% 51 | } 52 | 53 | % author list 54 | \author[1]{Pi-Yueh Chuang} 55 | \ead{pychuang@gwu.edu} 56 | \author[1]{Lorena A. 
Barba\corref{cor1}} 57 | \ead{labarba@gwu.edu} 58 | \cortext[cor1]{Corresponding author} 59 | \affiliation[1]{% 60 | organization={% 61 | Department of Mechanical and Aerospace Engineering, % 62 | The George Washington University% 63 | }, % 64 | city={Washington}, % 65 | state={DC 20052}, % 66 | country={USA}% 67 | } 68 | 69 | % abstract 70 | \begin{abstract} 71 | The recent surge of interest in physics-informed neural network (PINN) methods has led to a wave of studies that attest to their potential for solving partial differential equations (PDEs) and predicting the dynamics of physical systems. However, the predictive limitations of PINNs have not been thoroughly investigated. We examine the flow around a 2D cylinder and find that data-free PINNs fail to predict vortex shedding in our settings. A data-driven PINN exhibits vortex shedding only while the training data (from a traditional CFD solver) is available, but reverts to the steady-state solution when the data flow stops. We conducted dynamic mode decomposition and analyzed the Koopman modes in the solutions obtained with PINNs versus a traditional fluid solver (PetIBM). The distribution of the Koopman eigenvalues on the complex plane suggests that the results of the PINN exhibit numerical dispersion and diffusion. The PINN method reverts to the steady solution possibly as a consequence of spectral bias. This case study raises concerns about the ability of PINNs to predict flows with instabilities, specifically vortex shedding. Our computational study supports the need for more theoretical work to analyze the numerical properties of PINN methods. The results in this paper are transparent and reproducible, with all data and code available in public repositories and persistent archives; links are provided in the paper repository at \url{https://github.com/barbagroup/jcs_paper_pinn}, and a Reproducibility Statement within the paper.
72 | \end{abstract} 73 | 74 | % keywords 75 | \begin{keyword} 76 | computational fluid dynamics \sep 77 | physics-informed neural networks \sep 78 | dynamic mode analysis \sep 79 | Koopman analysis \sep 80 | vortex shedding 81 | \end{keyword} 82 | \end{frontmatter} 83 | 84 | \section{Introduction} 85 | \input{introduction.tex} 86 | 87 | \section{Method} 88 | \input{methods.tex} 89 | 90 | \section{Verification and Validation} 91 | \input{results_1.tex} 92 | \input{results_2.tex} 93 | 94 | \section{Case Study: 2D Cylinder Flow at $Re=\num{200}$}\label{sec:case-study} 95 | \input{results_3.tex} 96 | 97 | \section{Discussion} 98 | \input{discussion.tex} 99 | 100 | \section{Conclusion} 101 | \input{conclusion.tex} 102 | 103 | \section{Reproducibility statement} 104 | \input{reproducibility.tex} 105 | 106 | \section*{Acknowledgement} 107 | We appreciate the support of NVIDIA, which sponsored access to its high-performance computing cluster. 108 | 109 | % bibliography 110 | \bibliography{references} 111 | 112 | 113 | \appendix 114 | \section{Supplement} 115 | \input{supp.tex} 116 | 117 | \end{document} 118 | % vim:ft=tex: 119 | -------------------------------------------------------------------------------- /figs/pinn.tikz: -------------------------------------------------------------------------------- 1 | \begin{tikzpicture} 2 | % network's frame 3 | \node [none] (network) at (0, 0) { 4 | Network 5 | $\left[\begin{smallmatrix} \vec{u} \\ p \end{smallmatrix}\right] = 6 | G(\vec{x}, t; \vec{\theta})$ 7 | }; 8 | 9 | % network's nodes: the input layer 10 | \node [below=1.5 of network.south west, input, anchor=north west] (nin1) {$x$}; 11 | \node [below=0.5 of nin1, input] (nin2) {$y$}; 12 | \node [below=0.5 of nin2, input] (nin3) {$t$}; 13 | 14 | % network's nodes: the 1st hidden layer 15 | \node [right=0.3 of nin1, param] (nh12) {$h_2^1$}; 16 | \node [above=0.5 of nh12, param] (nh11) {$h_1^1$}; 17 | \node [below=0.5 of nh12, none] (nh13) {$\vdots$}; 18 | \node
[below=0.5 of nh13, none] (nh14) {$\vdots$}; 19 | \node [below=0.5 of nh14, param] (nh15) {$h_{N_1}^1$}; 20 | 21 | % network's nodes: the skipped layer 22 | \node [above right=0.3 of nh12, none] (nskip1) {$\cdots$}; 23 | \node [right=0.3 of nh13, none] (nskip2) {$\cdots$}; 24 | \node [above right=0.3 of nh15, none] (nskip3) {$\cdots$}; 25 | 26 | % network's nodes: the last hidden layer 27 | \node [right=0.6 of nh12, param] (nh22) {$h_2^\ell$}; 28 | \node [above=0.5 of nh22, param] (nh21) {$h_1^\ell$}; 29 | \node [below=0.5 of nh22, none] (nh23) {$\vdots$}; 30 | \node [below=0.5 of nh23, none] (nh24) {$\vdots$}; 31 | \node [below=0.5 of nh24, param] (nh25) {$h_{N_\ell}^\ell$}; 32 | 33 | % network's nodes: the output layer 34 | \node [right=0.5 of nh22, input] (nout1) {$u$}; 35 | \node [below=0.5 of nout1, input] (nout2) {$v$}; 36 | \node [below=0.5 of nout2, input] (nout3) {$p$}; 37 | 38 | % network's outer frame 39 | \node [draw=black, line width=0.01in, fit={(network) (nin2) (nh15) (nh25) (nout2)}] (nnframe){}; 40 | 41 | % denoting automatic derivation 42 | \node [right=0.5 of nnframe.north east, anchor=north west, none] (adtxt) { 43 | Automatic Differentiation 44 | }; 45 | 46 | % u derivative nodes 47 | \node [below=0.4 of adtxt.south, anchor=north, input] (dudy) { 48 | $\frac{\partial u}{\partial y}$}; 49 | \node [right=0.1 of dudy.east, anchor=west, input] (d2udx2) { 50 | $\frac{\partial^2 u}{\partial x^2}$}; 51 | \node [right=0.1 of d2udx2.east, anchor=west, input] (d2udy2) { 52 | $\frac{\partial^2 u}{\partial y^2}$}; 53 | \node [left=0.1 of dudy.west, anchor=east, input] (dudx) { 54 | $\frac{\partial u}{\partial x}$}; 55 | \node [left=0.1 of dudx.west, anchor=east, input] (dudt) { 56 | $\frac{\partial u}{\partial t}$}; 57 | 58 | % u derivatives' outer frame 59 | \node [draw=black!50, fit={(dudt) (d2udy2)}] (dubox) {}; 60 | 61 | % v derivative nodes 62 | \node [below=0.4 of dudy.south, anchor=north, input] (dvdy) { 63 | $\frac{\partial v}{\partial y}$}; 64 | 
\node [right=0.1 of dvdy.east, anchor=west, input] (d2vdx2) { 65 | $\frac{\partial^2 v}{\partial x^2}$}; 66 | \node [right=0.1 of d2vdx2.east, anchor=west, input] (d2vdy2) { 67 | $\frac{\partial^2 v}{\partial y^2}$}; 68 | \node [left=0.1 of dvdy.west, anchor=east, input] (dvdx) { 69 | $\frac{\partial v}{\partial x}$}; 70 | \node [left=0.1 of dvdx.west, anchor=east, input] (dvdt) { 71 | $\frac{\partial v}{\partial t}$}; 72 | 73 | % v derivatives' outer frame 74 | \node [draw=black!50, fit={(dvdt) (d2vdy2)}] (dvbox) {}; 75 | 76 | % p derivative nodes 77 | \node [below=0.4 of dvdx.south east, anchor=north, input] (dpdx) { 78 | $\frac{\partial p}{\partial x}$}; 79 | \node [below=0.4 of d2vdx2.south west, anchor=north, input] (dpdy) { 80 | $\frac{\partial p}{\partial y}$}; 81 | 82 | % p derivatives' outer frame 83 | \node [draw=black!50, fit={(dpdx) (dpdy)}] (dpbox) {}; 84 | 85 | % all derivatives' outer frame 86 | \node [draw=black, line width=0.01in, fit={(dubox) (dpbox)}] (dervbox) {}; 87 | 88 | % loss 5: IC of v (rendered first bc it's at the center) 89 | \node [right=6.5 of nnframe.east, anchor=south west, none] (loss5) {$ 90 | L_5 = v - v_0 91 | \text{\enspace if } t = 0 92 | $}; 93 | 94 | % loss 4: IC of u 95 | \node [above=0.2 of loss5.north west, anchor=south west, none] (loss4) {$ 96 | L_4 = u - u_0 97 | \text{\enspace if } t = 0 98 | $}; 99 | 100 | % loss 3: momentum y 101 | \node [above=0.4 of loss4.north, anchor=south, none] (loss3) {$ 102 | L_3 = 103 | \frac{\partial v}{\partial t} + 104 | \vec{u} \cdot \nabla v + 105 | \frac{1}{\rho}\frac{\partial p}{\partial y} - 106 | \nu \nabla^2 v 107 | \text{\enspace if } \vec{x} \in {\Omega} 108 | $}; 109 | 110 | % loss 2: momentum x 111 | \node [above=0.2 of loss3.north west, anchor=south west, none] (loss2) {$ 112 | L_2 = 113 | \frac{\partial u}{\partial t} + 114 | \vec{u} \cdot \nabla u + 115 | \frac{1}{\rho}\frac{\partial p}{\partial x} - 116 | \nu \nabla^2 u 117 | \text{\enspace if } \vec{x} \in {\Omega} 118 | 
$}; 119 | 120 | % loss 1: continuity 121 | \node [above=0.2 of loss2.north west, anchor=south west, none] (loss1) {$ 122 | L_1 = \nabla \cdot \vec{u} \text{\enspace if } \vec{x} \in {\Omega} 123 | $}; 124 | 125 | % loss 6: IC of p 126 | \node [below=0.2 of loss5.south west, anchor=north west, none] (loss6) {$ 127 | L_6 = p - p_0 128 | \text{\enspace if } t = 0 129 | $}; 130 | 131 | % loss 7: dirichlet bc of u 132 | \node [below=0.4 of loss6.south, anchor=north, none] (loss7) {$ 133 | L_7 = u - u_D \text{\enspace if } \vec{x}\in\Gamma_{\displaystyle u_D} 134 | $}; 135 | 136 | % loss 8: dirichlet bc of v 137 | \node [below=0.2 of loss7.south west, anchor=north west, none] (loss8) {$ 138 | L_8 = v - v_D \text{\enspace if } \vec{x}\in\Gamma_{\displaystyle v_D} 139 | $}; 140 | 141 | % loss 9: neumann bc of u 142 | \node [below=0.4 of loss8.south, anchor=north, none] (loss9) {$ 143 | L_9 = \frac{\partial u}{\partial \vec{n}} - u_N 144 | \text{\enspace if } \vec{x}\in\Gamma_{\displaystyle u_N} 145 | $}; 146 | 147 | % loss 10: neumann bc of v 148 | \node [below=0.2 of loss9.south west, anchor=north west, none] (loss10) {$ 149 | L_{10} = \frac{\partial v}{\partial \vec{n}} - v_N 150 | \text{\enspace if } \vec{x}\in\Gamma_{\displaystyle v_N} 151 | $}; 152 | 153 | % PDE residuals' outer frame 154 | \node [draw=black!50, fit={(loss1) (loss2) (loss3)}] (pdelossbox){}; 155 | 156 | % IC's outer frame 157 | \node [draw=black!50, fit={(loss4) (loss5) (loss6)}] (iclossbox){}; 158 | 159 | % Dirichlet BCs' outer frame 160 | \node [draw=black!50, fit={(loss7) (loss8)}] (dbclossbox){}; 161 | 162 | % Neumann BCs' outer frame 163 | \node [draw=black!50, fit={(loss9) (loss10)}] (nbclossbox){}; 164 | 165 | % losses' outer frame 166 | \node [draw=black, line width=0.01in, fit={(pdelossbox) (iclossbox) (dbclossbox) (nbclossbox)}] (lossframe){}; 167 | 168 | % arg min 169 | \node [right=0.5 of lossframe.east, anchor=west, input] (argmin) {$ 170 | \argmin\limits_{\theta \in \Theta} 171 | 
\sum\limits_{\substack{\vec{x} \in \Omega \cup \Gamma \\ t \in T}} 172 | \sum\limits_{j=1}^{10} c_j L_j^2 173 | $}; 174 | \node [above=0.1 of argmin.north, anchor=south, none] (argmintxt) {Optimizing/training}; 175 | 176 | % link network's inputs to the 1st hidden layer 177 | \draw [style=one arrow] (nin1) to (nh11); 178 | \draw [style=one arrow] (nin1) to (nh12); 179 | \draw [style=one arrow] (nin1) to (nh15); 180 | \draw [style=one arrow] (nin2) to (nh11); 181 | \draw [style=one arrow] (nin2) to (nh12); 182 | \draw [style=one arrow] (nin2) to (nh15); 183 | \draw [style=one arrow] (nin3) to (nh11); 184 | \draw [style=one arrow] (nin3) to (nh12); 185 | \draw [style=one arrow] (nin3) to (nh15); 186 | 187 | % link network's last hidden layer to the outputs 188 | \draw [style=one arrow] (nh21) to (nout1); 189 | \draw [style=one arrow] (nh21) to (nout2); 190 | \draw [style=one arrow] (nh21) to (nout3); 191 | \draw [style=one arrow] (nh22) to (nout1); 192 | \draw [style=one arrow] (nh22) to (nout2); 193 | \draw [style=one arrow] (nh22) to (nout3); 194 | \draw [style=one arrow] (nh25) to (nout1); 195 | \draw [style=one arrow] (nh25) to (nout2); 196 | \draw [style=one arrow] (nh25) to (nout3); 197 | 198 | % link network to derivatives 199 | \draw [style=one arrow] (nnframe.east) to (dervbox.west); 200 | 201 | % link network to loss 202 | \draw [style=one arrow] (nnframe.east) to [out=270,in=250] (lossframe.west); 203 | 204 | % link derivatives to loss 205 | \draw [style=one arrow] (dervbox.east) to (lossframe.west); 206 | 207 | % links from losses to argmin 208 | \draw [style=one arrow] (lossframe.east) to (argmin.west); 209 | 210 | \end{tikzpicture} 211 | % vim:ft=tex: 212 | -------------------------------------------------------------------------------- /results_2.tex: -------------------------------------------------------------------------------- 1 | %! 
TEX root = main.tex 2 | 3 | \subsection{Validation: 2D Cylinder, $Re=\num{40}$}\label{sec:val_2d_cylinder_re40} 4 | 5 | We used 2D cylinder flow at $Re=40$ to validate the solvers because it has a configuration similar to that of the $Re=200$ case that we will study later. 6 | The $Re=40$ flow, however, does not exhibit vortex shedding and reaches a steady-state solution, making it suitable for validating the core functionality of the code. 7 | Experimental data for this flow configuration are also widely available. 8 | 9 | The spatial and temporal computational domains are $[-10$, $30]$ $\times$ $[-10$, $10]$ and $t \in [0, 20]$. 10 | A cylinder with a nondimensional radius of $0.5$ sits at $x=y=0$. 11 | Density is $\rho=1$, and kinematic viscosity is $\nu=0.025$. 12 | The initial conditions are $u=1$ and $v=0$ everywhere in the spatial domain. 13 | The boundary conditions are $u=1$ and $v=0$ on $x=-10$ and $y=\pm 10$. 14 | At the outlet, i.e., $x=30$, the boundary conditions are set to 1D convective conditions: 15 | \begin{equation}\label{eq:convec-bc} 16 | \pdiff{}{t}\begin{bmatrix} u \\ v \end{bmatrix} 17 | + 18 | c\pdiff{}{\vec{n}}\begin{bmatrix} u \\ v \end{bmatrix} = 0, 19 | \end{equation} 20 | where $\vec{n}$ is the outward-pointing normal vector of the boundary, and $c=1$ is the convection speed. 21 | 22 | We ran the PetIBM validation case on a workstation with one (very old) NVIDIA K40 GPU and 6 CPU cores of the Intel i7-5930K processor. 23 | The grid resolution is $562 \times 447$ with $\Delta t=\num{e-2}$. 24 | The tolerance for all linear solvers in PetIBM was $\num{e-14}$. 25 | We used the same linear solver configurations as those in the TGV verification case. 26 | 27 | We validated two implementations of the PINN method with this cylinder flow because both codes were used in the $Re=200$ case (section \ref{sec:case-study}).
28 | The first implementation is an unsteady PINN solver, which is the same piece of code used in the verification case (section \ref{sec:verification}). 29 | It solves the unsteady Navier-Stokes equations as shown in figure \ref{fig:pinn-workflow}. 30 | The second one is a steady PINN solver, which solves the steady Navier-Stokes equations. 31 | The workflow of the steady PINN solver is similar to that in figure \ref{fig:pinn-workflow}, except that all time-related terms and losses are dropped. 32 | 33 | Both PINN solvers used MLP networks with \num{6} hidden layers and \num{512} neurons each. 34 | The Adam optimizer configuration is the same as that in section \ref{sec:verification}. 35 | The learning rate scheduler is a cyclical learning rate with $\eta_{low}=\num{e-6}$, $\eta_{high}=\num{e-2}$, $N_c=\num{5000}$, and $\gamma=\num{0.99998}$. 36 | We ran all PINN-related validations with one NVIDIA A100 GPU, 37 | all using single-precision floats. 38 | 39 | To evaluate PDE losses, \num{256e6} spatial-temporal points were randomly sampled from the computational domain and the desired simulation time range. 40 | The PDE losses were evaluated on \num{256e2} points in each iteration, so the Adam optimizer would see each point \num{40} times on average during the optimization with \num{4e5} iterations. 41 | On the boundaries, \num{256e5} points were sampled at $y=\pm 10$, and \num{128e5} at $x=-10$ and $x=30$. 42 | On the cylinder surface, the number of spatial-temporal points was \num{512e4}. 43 | In each iteration, \num{2560}, \num{1280}, and \num{512} points were used, respectively. 44 | 45 | Figure \ref{fig:cylinder-re40-pinn-loss} shows the training history of the PINN solvers. 46 | The total loss of the steady PINN solver converged to around \num{e-4}, while that of the unsteady PINN solver converged to around \num{e-2} after about 26 hours of training.
47 | Readers should be aware that the configuration of the PINN solvers might not be optimal, so the accuracy and the computational cost shown in this figure should not be treated as an indication of PINNs' general performance. 48 | In our experience, it is possible to cut the run time in half while obtaining the same level of accuracy by adjusting the number of spatial-temporal points used per iteration. 49 | 50 | \begin{figure} 51 | \centering% 52 | \includegraphics[width=\columnwidth]{cylinder-2d-re40/loss-hist.png}% 53 | \caption{% 54 | Training convergence history of 2D cylinder flow at $Re=\num{40}$ for both steady and unsteady PINN solvers. 55 | } 56 | \label{fig:cylinder-re40-pinn-loss}% 57 | \end{figure} 58 | 59 | 60 | \begin{figure} 61 | \centering% 62 | \includegraphics[width=\columnwidth]{cylinder-2d-re40/drag-lift-coeffs}% 63 | \caption{% 64 | Drag and lift coefficients of 2D cylinder flow at $Re=\num{40}$ with PINNs. 65 | } 66 | \label{fig:cylinder-re40-drag-lift}% 67 | \end{figure} 68 | 69 | Figure \ref{fig:cylinder-re40-drag-lift} gives the drag and lift coefficients ($C_D$ and $C_L$) with respect to simulation time, where the PINN and PetIBM results visually agree. 70 | Table \ref{table:cylinder-re40-cd-comparison} compares the values of $C_D$ against experimental data and simulation data from the literature. 71 | Values from different works in the literature do not closely agree with each other. 72 | Though there is not a single value to compare against, at least the $C_D$ values from the PINN solvers and PetIBM fall within the range of published results. 73 | We consider the results of $C_D$ validated for the PINN solvers and PetIBM.
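As an aside on the training configuration, the cyclical learning-rate schedule used in this subsection (bounds $\eta_{low}$ and $\eta_{high}$, half-cycle $N_c$, and decay factor $\gamma$) can be sketched as below. This is a minimal stand-alone version of one common variant (a triangular cycle with exponentially decaying amplitude, often called ``exp\_range''); the exact form implemented in Modulus may differ in details:

```python
import numpy as np

def cyclical_lr(i, lr_low=1e-6, lr_high=1e-2, half_cycle=5000, gamma=0.99998):
    """Triangular cyclical learning rate with exponentially decaying
    amplitude; `i` is the optimization iteration counter."""
    cycle = np.floor(1 + i / (2 * half_cycle))        # which cycle we are in
    x = np.abs(i / half_cycle - 2 * cycle + 1)        # position within cycle
    return lr_low + (lr_high - lr_low) * max(0.0, 1.0 - x) * gamma**i

# the rate climbs from lr_low to a peak at mid-cycle and back down,
# with the peak height shrinking over the course of training
peaks = [cyclical_lr(i) for i in (5000, 15000, 25000)]
```

With the values quoted above, the peak learning rate decays from $\eta_{high}$ toward $\eta_{low}$ over the \num{4e5} iterations of a run.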
74 | 75 | 76 | \begin{figure} 77 | \centering% 78 | \includegraphics[width=0.95\columnwidth]{cylinder-2d-re40/surface-pressure}% 79 | \caption{% 80 | Surface pressure distribution of 2D cylinder flow at $Re=\num{40}$. 81 | } 82 | \label{fig:cylinder-re40-pinn-surfp}% 83 | \end{figure} 84 | 85 | 86 | \begin{figure*} 87 | \centering% 88 | \includegraphics{cylinder-2d-re40/contour-comparison}% 89 | \caption{% 90 | Contour plots for 2D cylinder flow at $Re=\num{40}$. 91 | } 92 | \label{fig:cylinder-re40-contours}% 93 | \end{figure*} 94 | 95 | Figure \ref{fig:cylinder-re40-pinn-surfp} shows the pressure distribution on the cylinder surface. 96 | Again, though there is not a single solution that all works agree upon, the results from PetIBM and the PINN solvers visually agree with the published literature. 97 | We consider PetIBM and both PINN solvers validated. 98 | Finally, figure \ref{fig:cylinder-re40-contours} compares the steady-state flow fields (i.e., the snapshots at $t=20$ for PetIBM and the unsteady PINN solver). 99 | The PINN solvers' results visually agree with PetIBM's. 100 | The discrepancy in the PINNs' vorticity appears only along the contour line at \num{0}, so it is likely caused by trivial rounding errors. 101 | Note that vorticity is obtained by post-processing for all solvers. 102 | PetIBM used central differences to calculate the vorticity, while the PINN solvers used automatic differentiation to obtain it.
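To make the post-processing step concrete, a central-difference vorticity computation on a uniform collocated grid can be sketched as follows. This is a simplified stand-in for illustration only (PetIBM operates on a staggered grid), checked here against a solid-body-rotation field whose exact vorticity is constant:

```python
import numpy as np

def vorticity_z(u, v, dx, dy):
    """omega_z = dv/dx - du/dy with 2nd-order central differences
    on interior points; u and v are indexed as [j (y), i (x)]."""
    dvdx = (v[1:-1, 2:] - v[1:-1, :-2]) / (2 * dx)
    dudy = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dy)
    return dvdx - dudy

# sanity check: solid-body rotation u = -y, v = x has omega_z = 2 exactly
x = np.linspace(-1, 1, 41)
y = np.linspace(-1, 1, 41)
X, Y = np.meshgrid(x, y)
omega = vorticity_z(-Y, X, x[1] - x[0], y[1] - y[0])
```

Since the test field is linear, the central differences are exact here; for a general flow field the stencil is second-order accurate in the grid spacing.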
103 | 104 | 105 | \begin{table}[h] 106 | \centering% 107 | \begin{threeparttable} 108 | \begin{tabular}{lccc} 109 | \toprule 110 | & $C_D$ & $C_{D_p}$ & $C_{D_f}$ \\ 111 | \midrule 112 | Steady PINN & 1.62 & 1.06 & 0.55 \\ 113 | Unsteady PINN & 1.60 & 1.06 & 0.55 \\ 114 | PetIBM & 1.63 & 1.02 & 0.61 \\ 115 | Rosetti et al., 2012\cite{rosetti_urans_2012}\tnote{1} & \num{1.74+-0.09} & n/a & n/a \\ 116 | Rosetti et al., 2012\cite{rosetti_urans_2012}\tnote{2} & 1.61 & n/a & n/a \\ 117 | Sen et al., 2009\cite{sen_steady_2009}\tnote{2} & 1.51 & n/a & n/a \\ 118 | Park et al., 1998\cite{park_numerical_1998}\tnote{2} & 1.51 & 0.99 & 0.53 \\ 119 | Tritton, 1959\cite{tritton_experiments_1959}\tnote{1} & 1.48--1.65 & n/a & n/a \\ 120 | Grove et al., 1964\cite{grove_experimental_1964}\tnote{1} & n/a & 0.94 & n/a \\ 121 | \bottomrule 122 | \end{tabular}% 123 | \begin{tablenotes} 124 | \footnotesize 125 | \item [1] Experimental result 126 | \item [2] Simulation result 127 | \end{tablenotes} 128 | \caption{% 129 | Validation of drag coefficients. % 130 | $C_D$, $C_{D_p}$, and $C_{D_f}$ denote the coefficients of total drag, pressure drag, % 131 | and friction drag, respectively.% 132 | }% 133 | \label{table:cylinder-re40-cd-comparison} 134 | \end{threeparttable} 135 | \end{table}% 136 | 137 | 138 | 139 | % vim:ft=tex: 140 | -------------------------------------------------------------------------------- /methods.tex: -------------------------------------------------------------------------------- 1 | %!
TEX root = main.tex 2 | 3 | We will be solving the 2D incompressible Navier-Stokes equations in primitive-variable form: 4 | \begin{empheq}[left=\left\{\,, right=\right.]{equation}\label{eq:orig-ns} 5 | \begin{aligned} 6 | &\nabla \cdot \vec{u} = 0 \\ 7 | &\pdiff{\vec{u}}{t} + \left(\vec{u} \cdot \nabla\right) \vec{u} 8 | = 9 | -\frac{1}{\rho}\nabla p + \nu \nabla^2 \vec{u} 10 | \end{aligned} 11 | \end{empheq} 12 | \noindent Here, $\vec{u} \equiv \left[ u \enspace v \right]^\mathsf{T}$, $p$, $\nu$, and $\rho$ denote the velocity vector, pressure, kinematic viscosity, and the density, respectively. 13 | Let $\vec{x} \equiv \left[ x \enspace y \right]^\mathsf{T} \in \Omega$ and $t \in \left[0,\enspace T\right]$ denote the spatial and temporal domains. 14 | The velocity $\vec{u}$ and pressure $p$ are functions of $\vec{x}$ and $t$ for given fluid properties $\rho$ and $\nu$. 15 | The solution to the Navier-Stokes equations is subject to initial conditions $\vec{u}(\vec{x}, t) = \left[ u_0(\vec{x}) \enspace v_0(\vec{x}) \right]^\mathsf{T}$ and $p(\vec{x}, t) = p_0(\vec{x})$ for $\vec{x} \in \Omega$ and $t=0$. 16 | The Dirichlet boundary conditions are $u(\vec{x}, t) = u_D(\vec{x}, t)$ and $v(\vec{x}, t) = v_D(\vec{x}, t)$, on domain boundaries $\vec{x} \in \Gamma_{\displaystyle u_D}$ and $\Gamma_{\displaystyle v_D}$, respectively. 17 | The Neumann boundary conditions are $\pdiff{u}{\vec{n}}(\vec{x}, t)=u_N(\vec{x}, t)$ and $\pdiff{v}{\vec{n}}(\vec{x}, t)=v_N(\vec{x}, t)$, defined on boundaries $\vec{x} \in \Gamma_{\displaystyle u_N}$ and $\Gamma_{\displaystyle v_N}$ correspondingly. 18 | Note that in incompressible flow the pressure acts as a Lagrange multiplier enforcing the divergence-free condition and, in theory, requires no boundary conditions.
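As a concrete check of equation \eqref{eq:orig-ns}, the sketch below numerically verifies that the 2D Taylor-Green vortex (the closed-form solution used in our verification case, here with $\nu=0.01$) satisfies the continuity and $x$-momentum equations at an arbitrary point. For simplicity the derivatives are approximated by central differences rather than automatic differentiation:

```python
import numpy as np

# 2D Taylor-Green vortex: a closed-form solution of the incompressible
# Navier-Stokes equations on a periodic domain
nu, rho = 0.01, 1.0
F = lambda t: np.exp(-2.0 * nu * t)
u = lambda x, y, t: np.cos(x) * np.sin(y) * F(t)
v = lambda x, y, t: -np.sin(x) * np.cos(y) * F(t)
p = lambda x, y, t: -0.25 * rho * (np.cos(2 * x) + np.cos(2 * y)) * F(t)**2

def d1(f, k, x, y, t, h=1e-5):
    """First derivative of f w.r.t. argument k (0=x, 1=y, 2=t),
    by 2nd-order central differences."""
    a = [x, y, t]
    a[k] += h
    hi = f(*a)
    a[k] -= 2.0 * h
    lo = f(*a)
    return (hi - lo) / (2.0 * h)

def d2(f, k, x, y, t, h=1e-4):
    """Second derivative of f w.r.t. argument k by central differences."""
    a = [x, y, t]
    mid = f(*a)
    a[k] += h
    hi = f(*a)
    a[k] -= 2.0 * h
    lo = f(*a)
    return (hi - 2.0 * mid + lo) / h**2

# residuals of continuity and x-momentum at one arbitrary point;
# both should vanish up to finite-difference truncation error
x0, y0, t0 = 0.3, -0.7, 1.5
continuity = d1(u, 0, x0, y0, t0) + d1(v, 1, x0, y0, t0)
momentum_x = (d1(u, 2, x0, y0, t0)
              + u(x0, y0, t0) * d1(u, 0, x0, y0, t0)
              + v(x0, y0, t0) * d1(u, 1, x0, y0, t0)
              + d1(p, 0, x0, y0, t0) / rho
              - nu * (d2(u, 0, x0, y0, t0) + d2(u, 1, x0, y0, t0)))
```

The same pointwise-residual idea, with automatic differentiation in place of the finite differences, is what the PINN loss terms evaluate.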
19 | 20 | \begin{figure*} 21 | \centering 22 | \normalsize 23 | \resizebox{\textwidth}{!}{\input{figs/pinn.tikz}\unskip} 24 | \caption{ 25 | A graphical illustration of the workflow in PINNs: 26 | $\vec{x} \equiv \left[ x \enspace y \right]^\mathsf{T} \in \Omega$ and $t \in \left[0,\enspace T\right]$ denote the spatial and temporal domains. 27 | $\vec{u} \equiv \left[ u \enspace v \right]^\mathsf{T}$, $p$, $\nu$, and $\rho$ represent the velocity vector, pressure, kinematic viscosity, and the density, respectively. 28 | $G(\vec{x}, t; \theta)$ is a neural network model that approximates the solution to the Navier-Stokes equations with a set of free model parameters denoted by $\theta$. 29 | The $\left\{h_1^1, \cdots, h_{N_1}^1, \cdots, h_1^\ell, \cdots, h_{N_\ell}^\ell\right\}$, called hidden layers in neural networks, can be regarded as intermediate values in the calculation of the approximate solutions. 30 | Given spatial-temporal coordinates $(x, y, t)$, the neural network returns an approximate solution $(u, v, p)$ at these coordinates. 31 | We then apply automatic differentiation to obtain the required derivatives. 32 | With the approximate solutions and the derivatives, we are able to calculate the residuals (also called losses, denoted by symbol $L$) between the approximate solutions and the PDEs, as well as the initial and boundary conditions. 33 | Finally, with the weighted sum of squared losses, we can determine the free model parameters $\theta$ by a least-squares method. 34 | Note that the weights of the squared losses in this work are all $1$.
35 | } 36 | \label{fig:pinn-workflow} 37 | \end{figure*} 38 | 39 | When using physics-informed neural networks, PINNs, we approximate the solutions to equation \eqref{eq:orig-ns} with a neural network model $G(\vec{x}, t; \theta)$: 40 | \begin{equation}\label{eq:G-network} 41 | \begin{bmatrix} 42 | u(\vec{x}, t) \\ v(\vec{x}, t) \\ p(\vec{x}, t) 43 | \end{bmatrix} 44 | \approx 45 | G(\vec{x}, t; \theta), 46 | \end{equation} 47 | where $\theta$ represents a set of free model parameters we need to determine later. 48 | A common choice of $G$ is an MLP (multilayer perceptron) network, which can be represented as follows: 49 | 50 | \begin{gather}\label{eq:mlp-formula} 51 | \vec{h}^0 \equiv \begin{bmatrix} x & y & t \end{bmatrix}^\mathsf{T} \\ 52 | \vec{h}^k = 53 | \sigma_{k-1}\left(\mat{A}^{k-1}\vec{h}^{k-1}+\vec{b}^{k-1}\right) 54 | \text{, for } 1 \le k \le \ell \\ 55 | \begin{bmatrix} u & v & p \end{bmatrix}^\mathsf{T} 56 | \approx 57 | \vec{h}^{\ell+1} = \sigma_\ell\left(\mat{A}^\ell\vec{h}^\ell+\vec{b}^\ell\right) 58 | \end{gather} 59 | The vectors $\vec{h}^k$ for $1 \le k \le \ell$ are called hidden layers, and carry intermediate evaluations of the transformations that take the input (spatial and temporal variables) to the output (velocity and pressure values). 60 | $\ell$ denotes the number of hidden layers. 61 | The elements in these vectors are called neurons, and $N_k$ for $1 \le k \le \ell$ represents the number of neurons in each hidden layer. 62 | To have a consistent notation, we use $\vec{h}^0$ to denote the vector of the inputs to the model $G$, which contains spatial-temporal coordinates. 63 | Similarly, $\vec{h}^{\ell+1}$ denotes the outputs of $G$, corresponding to the approximate solutions $u$, $v$, and $p$ at every spatial point and time instant. 
64 | $\mat{A}^k$ and $\vec{b}^k$ for $0 \le k \le \ell$ are parameter matrices and vectors holding the free model parameters that will be found via optimization, 65 | $\theta = \left\{ \mat{A}^0, \vec{b}^0, \cdots, \mat{A}^\ell, \vec{b}^\ell\right\}$. 66 | Finally, $\sigma_k$ for $0 \le k \le \ell$ are vector-valued functions, called activation functions, that are applied element-wise to the vectors $\vec{h}^k$. 67 | In neural networks, the activation functions are responsible for providing the non-linearity in an MLP model. 68 | Throughout this work, we use $\sigma_0 = \cdots = \sigma_\ell = \sigma(\vec{z}) = \frac{\vec{z}}{1 + \exp(-\vec{z})}$, the sigmoid-weighted linear unit (SiLU). 69 | The parameters $\ell$, $N_k$, and the choices of $\sigma_k$ together control the model complexity of the PINNs that use MLP networks. 70 | 71 | As with all other numerical methods for PDEs, the calculations of spatial and temporal derivatives of velocity and pressure play a crucial role. 72 | While a numerical approximation (e.g., finite difference) may be a more robust choice---as seen in the early literature on neural networks for differential equations \cite{dissanayake_neural-network-based_1994,lagaris_artificial_1998}---it is common nowadays to use automatic differentiation. 73 | Automatic differentiation is a general technique to find derivatives of a function by decomposing it into elementary functions with a known derivative, and then applying the chain rule of calculus to get the exact derivative of the more complex function. 74 | Note that the word {\it exact} here refers to being exact in terms of the model $G$, rather than to the true solution of the Navier-Stokes equations. 75 | A detailed review of automatic differentiation can be found in reference \cite{griewank_automatic_1988}. 76 | Major deep learning programming libraries, such as TensorFlow and PyTorch, offer the user automatic differentiation features.
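To make the decompose-and-chain-rule idea concrete, the toy sketch below implements forward-mode automatic differentiation with dual numbers: each elementary operation propagates a (value, derivative) pair exactly. It is only illustrative; deep learning libraries instead use reverse-mode differentiation, which is more efficient when one scalar loss depends on many parameters:

```python
import math

class Dual:
    """Minimal forward-mode AD: carry (value, derivative) through each
    elementary operation, applying the chain rule at every step."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(z):
    # elementary function with a known derivative: (sin z)' = cos z
    return Dual(math.sin(z.val), math.cos(z.val) * z.dot)

# differentiate f(x) = x * sin(x) at x = 2 by seeding dx/dx = 1;
# the exact derivative is sin(2) + 2 cos(2)
x = Dual(2.0, 1.0)
fx = x * sin(x)
```

The derivative carried by `fx.dot` is exact with respect to the composite function, mirroring the sense in which PINN derivatives are exact with respect to the model $G$.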
77 | 78 | Once we have obtained derivatives, we are able to calculate residuals, also called losses in the terminology of machine learning. 79 | As shown in figure \ref{fig:pinn-workflow}, given a spatial-temporal coordinate $(x, y, t)$, we can calculate up to \num{10} loss terms, depending on where in the domain this spatial-temporal point is located. 80 | Figure \ref{fig:pinn-workflow} illustrates the PINN methodology with the solution workflow specific to the Navier-Stokes equations \eqref{eq:orig-ns}. 81 | The number and definitions of loss terms may change, for example, when we have some boundary segments with Robin conditions or when we are solving 3D problems. 82 | 83 | Finally, we determine the free model parameters using a least-squares optimization, as shown in the last block in figure \ref{fig:pinn-workflow}. 84 | To be specific, in this work we used the Adam stochastic optimization method for this process. 85 | We first randomly sampled some spatial-temporal collocation points from the computational domain, including all boundaries. 86 | These points are called {\it training points} in the terminology of machine learning. 87 | Depending on where a training point is located in the domain, it may result in multiple loss terms, as described in the previous paragraph. 88 | An aggregated squared loss is obtained over all loss terms of all training points. 89 | In this work, all loss terms were taken to have the same weights. 90 | The Adam optimization then finds the optimal model parameters, i.e., $\theta=\left\{\mat{A}^0, \vec{b}^0, \cdots, \mat{A}^\ell, \vec{b}^\ell\right\}$, based on the gradients of the aggregated loss with respect to model parameters. 91 | In other words, the desired model parameters are those giving the minimal aggregated squared loss. 92 | 93 | Note that figure \ref{fig:pinn-workflow} uses if-conditions to determine which loss terms are evaluated at a training point.
94 | In practice, however, we sample points in subgroups separately from within the domain, on the boundaries, and at $t=0$. 95 | Each subgroup of training points is only responsible for specific loss terms. 96 | We also use a batched approach for the optimization, 97 | meaning that not all training points are used during each individual optimization iteration. 98 | The batched approach only uses a sample of the training points to calculate the losses and the gradients of the aggregated loss in each optimization iteration. 99 | Hereafter, the term {\it training} will be used interchangeably with the optimization process. 100 | 101 | In this section, we only introduce the specific details of PINNs required for our work. 102 | References \cite{dissanayake_neural-network-based_1994,lagaris_artificial_1998,cai_physics-informed_2021} provide more details of these methods in general. 103 | 104 | % vim:ft=tex: -------------------------------------------------------------------------------- /discussion.tex: -------------------------------------------------------------------------------- 1 | %! TEX root = main.tex 2 | 3 | This case study raises significant concerns about the ability of the PINN method to predict flows with instabilities, specifically vortex shedding. 4 | In the real world, vortex shedding is triggered by natural perturbations. 5 | In traditional numerical simulations, however, the shedding is triggered by various numerical noises, including rounding and truncation errors. 6 | These numerical noises mimic natural perturbations. 7 | Therefore, a steady solution could be physically valid for cylinder flow at $Re = 200$ in a perfect world with no numerical noise. 8 | As PINNs are also subject to numerical noise, we expected to observe vortex shedding in the simulations, but the results show that instead the data-free unsteady PINN converged to a steady-state solution. 
9 | Even the data-driven PINN reverted to a steady-state solution beyond the timeframe that was fed with PetIBM's data. 10 | It is unlikely that the steady-state behavior results from a lack of perturbations. 11 | In traditional numerical simulations, it is sometimes challenging to induce vortex shedding, particularly in symmetrical computational domains. 12 | However, we can still trigger shedding by incorporating non-uniform initial conditions, which serve as perturbations to the steady-state solution. 13 | In the data-driven PINN, the training data from PetIBM can be considered such non-uniform initial conditions. 14 | The vortex shedding already exists in the training data, yet it did not continue beyond the period of data input, indicating that the perturbation is not the primary factor responsible for the steady-state behavior. 15 | This suggests that the reason PINNs fail to generate vortex shedding differs from the difficulty encountered in traditional CFD solvers. 16 | Other results in the literature that show the two-dimensional cylinder wake \cite{jin_nsfnets_2021} in fact use high-fidelity DNS data to provide boundary and initial data for the PINN model. 17 | The failure to capture vortex shedding in the data-free mode of PINN was confirmed in recent work by Rohrhofer et al. \cite{rohrhofer_fixedpoints_2023}. 18 | 19 | The steady-state behavior of the PINN solutions may be attributed to spectral bias. 20 | Rahaman et al. \cite{rahaman_spectral_2019} showed that neural networks exhibit spectral bias, meaning they tend to prioritize learning low-frequency patterns in the training data. 21 | For cylinder flow, the lowest frequency corresponds to Strouhal number $St=0$. 22 | The data-free unsteady PINN may be prioritizing learning the mode at $St=0$ (i.e., the steady mode) from the Navier-Stokes equations.
23 | The same may apply to the data-driven PINN beyond the timeframe with training data from PetIBM, resulting in a rapid return to the non-oscillating solution. 24 | Even within the timeframe with the PetIBM training data, the data-driven PINN may prioritize learning the $St=0$ mode in PetIBM's data. 25 | Although the vortex shedding in PetIBM's data forces the PINN to learn higher-frequency modes to some extent, the shedding modes are generally more difficult to learn due to the spectral bias. 26 | This claim is supported by the history of the drag and lift coefficients of the data-driven PINN (the red dashed line in figure \ref{fig:cylinder-re200-drag-lift}), which was still unable to predict the peak values in $t \in \left[125, 140\right]$, despite extensive training. 27 | 28 | The suspicion of spectral bias prompted us to conduct spectral analysis by obtaining Koopman modes, presented in section \ref{sec:cylinder-re200-koopman}. 29 | The Koopman analysis results are consistent with the existence of spectral bias: the data-driven PINN is not able to learn discrete frequencies well, even when trained with PetIBM's data that contain modes with discrete frequencies. 30 | 31 | The Koopman analysis of the data-driven PINN's prediction reveals many additional frequencies that do not exist in the training data from PetIBM, as well as many damped modes that reduce or suppress oscillation. 32 | These damped modes may be the cause of the solution restoring to a steady-state flow beyond the timeframe with PetIBM's data. 33 | 34 | From a numerical-method perspective, the Koopman analysis shows that the PINN methods in our work are dissipative and dispersive. 35 | The Q-criterion result (figure \ref{fig:cylinder-re200-pinn-qcriterion}) also demonstrates dissipative behavior, which inhibits oscillation and instabilities. 36 | Dispersion can also contribute to the reduction of oscillation strength.
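The effect of such damped modes can be illustrated with a minimal numerical sketch: in a Koopman decomposition, a mode contributes a term $a\,\lambda^k$ to the $k$-th snapshot, so any eigenvalue with $|\lambda| < 1$ decays geometrically and only neutral ($|\lambda| = 1$) modes, including the steady mode at $\lambda = 1$, survive in the long-time solution. The eigenvalues below are illustrative values of our choosing, not modes fitted to any result in this paper.

```python
import cmath

# A Koopman decomposition represents an observable sampled at interval dt as
# x_k = sum_j a_j * lam_j**k, with complex eigenvalues lam_j. A mode with
# |lam| < 1 is damped: its envelope |lam|**k decays geometrically, which is
# why an abundance of damped modes drives the solution back to steady state.
dt = 0.5
st = 0.2                                                 # illustrative frequency
lam_neutral = cmath.exp(2j * cmath.pi * st * dt)         # |lam| = 1, sustained
lam_damped = 0.98 * cmath.exp(2j * cmath.pi * st * dt)   # |lam| < 1, decaying

def amplitude(lam, k):
    """Envelope of the mode's contribution after k snapshots."""
    return abs(lam) ** k

print(amplitude(lam_neutral, 400))   # stays at 1: oscillation persists
print(amplitude(lam_damped, 400))    # ~3e-4: oscillation has died out
```

Even a mild damping of $2\%$ per snapshot wipes out the oscillation within a few hundred snapshots, consistent with the rapid return to a steady solution observed beyond the data-covered timeframe.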
37 | However, it is unclear whether dispersion and dissipation are intrinsic numerical properties or whether we did not train the PINNs sufficiently, even though the aggregated loss had converged (figure \ref{fig:cylinder-re200-pinn-loss}). 38 | Unfortunately, limited computing resources prevented us from continuing the training---already taking orders of magnitude longer than the traditional CFD solver. 39 | More theoretical work may be necessary to study the intrinsic numerical properties of PINNs beyond computational experiments. 40 | 41 | Another point worth discussing is the generalizability of data-driven PINNs. 42 | Our case study demonstrates that data-driven PINNs may not perform well when predicting data they have not seen during training, as illustrated by the unphysical predictions generated for $t = 10$ and $t = 50$ in figures \ref{fig:cylinder-re200-pinn-contours-u}, \ref{fig:cylinder-re200-pinn-contours-v}, \ref{fig:cylinder-re200-pinn-contours-p}, and \ref{fig:cylinder-re200-pinn-contours-omega_z}. 43 | While data-driven PINNs are believed to have the advantage of performing extrapolation in a meaningful way by leveraging existing data and physical laws, our results suggest that this ``extrapolation'' capability may be limited. 44 | In data-driven approaches, the training data typically consists of observation data (e.g., experimental or simulation data) and pure spatial-temporal points. 45 | The ``extrapolation'' capability is therefore constrained to the coordinates seen during training, rather than arbitrary coordinates beyond the observation data. 46 | 47 | For example, in our case study, $t \in [0, 125]$ corresponds to spatial-temporal points that were never seen during training, $t \in [125, 140]$ contains observation data, and $t \in [140, 200]$ corresponds to spatial-temporal points seen during training but without observation data. 48 | The PINN method's prediction for $t \in [125, 140]$ is considered interpolation. 
49 | Even if we accept the steady-state solution as physically valid, the data-driven PINN can only extrapolate for $t \in [140, 200]$, and fails to extrapolate for $t \in [0, 125]$. 50 | This limitation means that the PINN method can only extrapolate on coordinates it has seen during training. 51 | If the steady-state solution is deemed unacceptable, then the data-driven PINN lacks extrapolation capability altogether and is limited to interpolation. 52 | This raises the interesting research question of how data-driven PINNs compare to traditional deep learning approaches (i.e., those not using PDEs for losses), particularly in terms of performance and accuracy benefits. 53 | 54 | It is worth noting that Cai et al. \cite{cai_physics-informed_2021} argue that data-driven PINNs are useful in scenarios where only sparse observation data are available, such as when an experiment only measures flow properties at a few locations, or when a simulation only saves transient data at a coarse-grid level in space and time. 55 | In such cases, data-driven PINNs may outperform traditional deep learning approaches, which typically require more data for training. 56 | However, as we discussed in our previous work \cite{chuang_experience_2022}, using PDEs as loss functions is computationally expensive, increasing the overall computational graph exponentially. 57 | Thus, even in the context of interpolation problems under sparse observation data, the question of how much additional accuracy can be gained, and at what computational cost, remains open and interesting. 58 | 59 | Other works have brought up concerns about the limitations of PINN methods in certain scenarios, such as flows with shocks \cite{fuks_limitations_2020} and flows with fast variations \cite{krishnapriyan_failure_2021}. 60 | These researchers suggested that the difficulty of optimizing over the complex landscape of the loss function may be the cause of the failure.
61 | Other works have also highlighted the performance penalty of PINNs compared to traditional numerical methods \cite{grossmann_pinnvsfem_2023}. 62 | In comparison with finite element methods, PINNs were found to be orders of magnitude slower in terms of wall-clock time. 63 | We also observed a similar performance penalty in our case study, where the PINN method took orders of magnitude longer to train than the traditional CFD solver. 64 | We purposely used a very old GPU (NVIDIA Tesla K40) with PetIBM, running on our lab-assembled workstation, while the PINN method was run on a modern GPU (NVIDIA Tesla A100) on a high-performance computing cluster. 65 | However, we did not conduct a thorough performance comparison. 66 | It is unclear what a ``fair'' performance comparison would look like, as the factors affecting runtime are so different between the two methods. 67 | 68 | An interesting third option was proposed recently, where the discretized form of the differential equations is used in the loss function, rather than the differential equations themselves \cite{karnakov_odil_2022}. 69 | This approach foregoes the neural-network representation altogether, as the unknowns are the solution values themselves on a discretization grid. 70 | It shares with PINNs the features of solving a gradient-based optimization problem, taking advantage of automatic differentiation, and being easily implemented in a few lines of code thanks to modern ML frameworks. 71 | But it does not suffer from the performance penalty of PINNs, showing an advantage of several orders of magnitude in terms of wall-clock time. 72 | Given that this approach uses a completely different loss function, it supports the claims of other researchers that the loss-function landscape is the source of problems for PINNs.
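The discrete-loss idea can be sketched with a toy problem: take the grid values of a 1D Poisson problem as the unknowns and minimize the sum of squared finite-difference residuals by gradient descent. The sketch below is our own minimal illustration of the concept, not code from \cite{karnakov_odil_2022}; to keep it dependency-free, the gradient is hand-coded rather than obtained through the automatic differentiation of an ML framework.

```python
import math

# ODIL-style sketch: instead of a neural network, the unknowns are the grid
# values themselves, and the loss is the sum of squared residuals of the
# *discretized* equation. Here: u'' = f on (0, 1) with u(0) = u(1) = 0 and
# f = -pi^2 sin(pi x), whose exact solution is u = sin(pi x).
n = 10                              # number of cells; grid spacing h = 1/n
h = 1.0 / n
x = [i * h for i in range(n + 1)]
f = [-math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
u = [0.0] * (n + 1)                 # initial guess; boundary values stay 0

lr = 0.05
for _ in range(20_000):
    # residual of the central-difference discretization at interior points,
    # scaled by h^2 to keep the loss well conditioned
    r = [u[i + 1] - 2.0 * u[i] + u[i - 1] - h * h * f[i] for i in range(1, n)]
    grad = [0.0] * (n + 1)          # gradient of sum(r_i**2) w.r.t. u
    for k, i in enumerate(range(1, n)):
        grad[i - 1] += 2.0 * r[k]
        grad[i] -= 4.0 * r[k]
        grad[i + 1] += 2.0 * r[k]
    for i in range(1, n):           # plain gradient-descent update
        u[i] -= lr * grad[i]

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n + 1))
print(f"max deviation from exact solution: {err:.1e}")
```

Despite using only plain gradient descent, the optimized grid values converge to the discrete solution of the boundary-value problem (the remaining error is the usual second-order discretization error), illustrating how a discretized loss sidesteps the neural-network representation entirely.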
73 | 74 | In this work, we analyzed the failure of data-free PINNs to predict vortex shedding using dynamic mode analysis, a technique grounded in the physical and mathematical nature of the problem. 75 | Another direction for analyzing this failure is an ablation study on data-driven PINNs, a machine-learning approach. 76 | An ablation study is a technique to investigate the functionality and importance of different components in a machine-learning model by removing components one by one from a working system. 77 | That said, an ablation study on the interpolation part of the data-driven PINNs could hint at how vortex shedding is generated in PINNs. 78 | Such an ablation study, nevertheless, would take non-trivial time and resources to conduct, and is left for future work. 79 | Note that an ablation study requires a system that is working to some degree. 80 | It is therefore not appropriate to apply an ablation study to the data-free PINNs or to the extrapolation region of the data-driven PINNs, as these have not been shown to work yet. 81 | 82 | % vim:ft=tex 83 | -------------------------------------------------------------------------------- /results_1.tex: -------------------------------------------------------------------------------- 1 | %! TEX root = main.tex 2 | 3 | This section presents the verification and validation (V\&V) of our PINN solvers and PetIBM, an in-house CFD solver \cite{chuang_petibm_2018}. 4 | V\&V results are necessary to build confidence in our case study described later in section \ref{sec:case-study}. 5 | For verification, we solved a 2D Taylor-Green vortex (TGV) at Reynolds number $Re=\num{100}$, which has a known analytical solution. 6 | For validation, on the other hand, we used 2D cylinder flow at $Re=40$, which exhibits a well-known steady-state solution, with plenty of experimental data available in the published literature.
7 | 8 | For the hyperparameters used in PINNs (both in neural network architectures and training processes), we only present the combinations that yield affordable computational costs and reasonable loss levels. 9 | In Chuang's thesis~\cite{chuang_thesis_2023}, we explored several different combinations to evaluate their impact on the results. 10 | None of these combinations altered the findings and conclusions presented in this paper. 11 | 12 | \subsection{Verification: 2D Taylor-Green Vortex (TGV), $Re=\num{100}$}\label{sec:verification} 13 | 14 | Two-dimensional Taylor-Green vortices with periodic boundary conditions have closed-form analytical solutions, 15 | and serve as standard verification cases for CFD solvers. 16 | We used the following 2D TGV configuration, with $Re=\num{100}$, to verify both the PINN solvers and PetIBM: 17 | \begin{equation}\label{eq:tgv} 18 | \left\{ 19 | \begin{aligned} 20 | u(x, y, t) &= \cos(x)\sin(y)\exp(-2 \nu t) \\ 21 | v(x, y, t) &= - \sin(x)\cos(y)\exp(-2 \nu t) \\ 22 | p(x, y, t) &= -\frac{\rho}{4}\left(\cos(2x) + \cos(2y)\right)\exp(-4 \nu t) 23 | \end{aligned} 24 | \right. 25 | \end{equation} 26 | where $\nu=\num{0.01}$ and $\rho=\num{1}$ are the kinematic viscosity and the density, respectively. 27 | The spatial and temporal domains are $x, y \in \left[-\pi, \pi\right]$ and $t \in [0, 100]$. 28 | 29 | Periodic conditions are applied at all boundaries by modifying the inputs of the neural network $G$: $x$ and $y$ are periodic at $-\pi$ and $\pi$, so instead of using $G=G(x, y, t)$, we modified the neural network $G$ to 30 | \begin{equation}\label{eq:periodic-bc} 31 | G = G(\sin(x), \cos(x), \sin(y), \cos(y), t) 32 | \end{equation} 33 | This approach builds the periodicity into the model directly, and hence no boundary losses are needed.
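As an illustration, the input transform of equation \eqref{eq:periodic-bc} can be sketched as follows; the function below is a stand-in for the network's input layer, not code from our solver.

```python
import math

def periodic_features(x, y, t):
    """Map raw coordinates to the periodic inputs of equation (eq:periodic-bc).

    Because sin and cos are 2*pi-periodic, the network receives identical
    inputs at x = -pi and x = pi (and likewise in y), so any prediction
    satisfies the periodic boundary conditions by construction, with no
    boundary loss terms needed.
    """
    return (math.sin(x), math.cos(x), math.sin(y), math.cos(y), t)

# Identical features on opposite boundaries, up to floating-point round-off:
lhs = periodic_features(-math.pi, 0.3, 1.0)
rhs = periodic_features(math.pi, 0.3, 1.0)
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # effectively zero
```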
34 | In general, if $x$ is periodic between $x_1$ and $x_2$, one can build the periodicity into the neural network by replacing the input $x$ with the pair $\sin({2\pi\frac{x-x_1}{x_2-x_1}})$ and $\cos({2\pi\frac{x-x_1}{x_2-x_1}})$. 35 | 36 | In PetIBM, we used the Adams-Bashforth and the Crank-Nicolson schemes for the temporal discretization of convection and diffusion terms, respectively. 37 | The spatial discretization is central difference for all terms. 38 | Theoretically, we expect to see second-order convergence in both time and space for this 2D TGV problem in PetIBM. 39 | We used the following $L_2$ spatial-temporal error to examine the convergence: 40 | \begin{equation}\label{eq:spt-err-def} 41 | \begin{aligned} 42 | L_{2,sp-t} \equiv &\sqrt{ 43 | \frac{1}{L_x L_y T} 44 | \iiint\limits_{x, y, t} \lVert f - f_{ref} \rVert^2 \diff x \diff y \diff t 45 | } \\ 46 | = & 47 | \sqrt{\frac{1}{N_x N_y N_t}\sum\limits_{i=1}^{N_x}\sum\limits_{j=1}^{N_y}\sum\limits_{k=1}^{N_t}\left(f^{(i, j, k)} - f_{ref}^{(i, j, k)}\right)^2} 48 | \end{aligned} 49 | \end{equation} 50 | Here, $N_x$, $N_y$, and $N_t$ represent the number of solution points in $x$, $y$, and $t$; 51 | $L_x$ and $L_y$ are the domain lengths in $x$ and $y$; 52 | $T$ is the total simulation time; 53 | $f$ is the flow quantity of interest, while $f_{ref}$ is the corresponding analytical solution. 54 | The superscript $(i, j, k)$ denotes the value at the $(i, j, k)$ solution point in the discretized spatial-temporal space. 55 | We used Cartesian grids with $2^{n} \times 2^{n}$ cells for $n=4$, $5$, $\dots$, $10$. 56 | The time step size $\Delta t$ does not follow a fixed refinement ratio, and takes the values $\Delta t = \num{1.25e-1}$, $\num{8e-2}$, $\num{4e-2}$, $\num{2e-2}$, $\num{1e-2}$, $\num{5e-3}$, and $\num{1.25e-3}$, respectively.
57 | Each $\Delta t$ was determined based on the maximum allowed CFL number and on the requirement that it evenly divide $\num{2}$, so that transient results could be output every $\num{2}$ simulation seconds. 58 | The velocity and pressure linear systems were both solved with BiCGSTAB (bi-conjugate gradient stabilized method). 59 | The preconditioners of the two systems are the block Jacobi preconditioner and the algebraic multigrid preconditioner from NVIDIA's AmgX library. 60 | At each time step, both linear solvers stop when the preconditioned residual reaches \num{e-14}. 61 | The hardware used for the PetIBM simulations comprised 5 physical cores of an Intel E5-2698 v4 CPU and 1 NVIDIA V100 GPU. 62 | 63 | Figure \ref{fig:tgv-petibm-convergence} shows the spatial-temporal convergence results of PetIBM. 64 | \begin{figure} 65 | \centering% 66 | \includegraphics[width=\columnwidth]{tgv-2d-re100/petibm-tgv-2d-re100-convergence}% 67 | \caption{% 68 | Grid-convergence test and verification of PetIBM using 2D TGV at $Re=\num{100}$. 69 | The spatial-temporal $L_2$ error is defined in equation \eqref{eq:spt-err-def}. 70 | Taking the cubic root of the total number of spatial-temporal solution points gives the characteristic cell size. 71 | Both $u$ and $v$ velocities follow second-order convergence, while the pressure $p$ follows the trend with some fluctuation. 72 | } 73 | \label{fig:tgv-petibm-convergence}% 74 | \end{figure} 75 | Both $u$ and $v$ follow an expected second-order convergence before the machine round-off errors start to dominate on the $1024 \times 1024$ grid. 76 | The pressure follows the expected convergence rate with some fluctuations. 77 | Further scrutiny revealed that the AmgX library was not solving the pressure system to the desired tolerance. 78 | The AmgX library has a hard-coded stop mechanism when the relative residual (relative to the initial residual) reaches machine precision.
79 | So while we configured the absolute tolerance to be \num{e-14}, the final preconditioned residuals of the pressure systems did not match this value. 80 | On the other hand, the velocity systems were solved to the desired tolerance. 81 | With this minor caveat, we consider the verification of PetIBM to be successful, as the minor issue in the convergence of pressure is unrelated to the code implementation in PetIBM. 82 | 83 | Next, we solved this same TGV problem using an unsteady PINN solver---a PINN having $t$ as one of its inputs and trained against extra loss terms for initial conditions ($L_4$ to $L_6$ in figure \ref{fig:pinn-workflow}). 84 | For the optimization, we used PyTorch's Adam optimizer 85 | with the following parameters: $\beta_1=\num{0.9}$, $\beta_2=\num{0.999}$, and $\epsilon=\num{e-8}$. 86 | The total iteration number in the optimization is \num{4e5}. 87 | Two learning-rate schedulers were tested: an exponential scheduler and a cyclical scheduler. 88 | Both schedulers come from PyTorch and were compared merely to satisfy our curiosity. 89 | The exponential scheduler has only one parameter in PyTorch: $\gamma=0.95^{\frac{1}{5000}}$. 90 | The cyclical scheduler has the following parameters: $\eta_{low}=\num{1.5e-5}$, $\eta_{high}=\num{1.5e-3}$, $N_c=\num{5000}$, and $\gamma=0.999989$. 91 | These values were chosen so that the peak learning rate at each cycle is slightly higher than the exponential rates. 92 | Figure \ref{fig:tgv-learning-rate-hist} shows a comparison of the two schedulers. 93 | 94 | \begin{figure} 95 | \centering% 96 | \includegraphics[width=\columnwidth]{tgv-2d-re100/learning-rate-hist}% 97 | \caption{% 98 | Learning-rate history for the 2D TGV at $Re=\num{100}$ with the unsteady PINN solver. 99 | The exponential learning rate scheduler is denoted as {\it Exponential}, and the cyclical scheduler is denoted as {\it Cyclical}.
100 | } 101 | \label{fig:tgv-learning-rate-hist}% 102 | \end{figure} 103 | 104 | The MLP network used consisted of \num{3} hidden layers and \num{128} neurons per layer. 105 | \num{8192e4} spatial-temporal points were used to evaluate the PDE losses (i.e., the $L_1$, $L_2$, and $L_3$ in figure \ref{fig:pinn-workflow}). 106 | We randomly sampled these spatial-temporal points from the spatial-temporal domain $\left[-\pi, \pi\right] \times \left[-\pi, \pi\right] \times \left(0, 100\right]$. 107 | During each optimization iteration, however, we only used \num{8192} points to evaluate the PDE losses. 108 | This means the optimizer sees each point \num{40} times on average over the total of \num{4e5} iterations. 109 | Similarly, \num{8192e4} spatial-temporal points were sampled from $x,y \in \left[-\pi, \pi\right] \times \left[-\pi, \pi\right]$ and $t=0$ for the initial condition loss (i.e., $L_4$ to $L_6$). 110 | The same number of points was sampled from each domain boundary ($x=\pm\pi$ and $y=\pm\pi$) and $t\in\left(0, 100\right]$ for boundary-condition losses ($L_7$ to $L_{10}$). 111 | A total of \num{8192} points were used in each iteration for these losses as well. 112 | 113 | We used one NVIDIA V100 GPU to run the unsteady PINN solver for the TGV problem. 114 | Note that the PINN solver used single-precision floats, which is the default in PyTorch. 115 | PyTorch also supports double-precision floats. 116 | However, our main interest is whether PINNs can exhibit vortex shedding, rather than the numeric accuracy. 117 | Hence, to save computational resources, we deemed single precision sufficient for this purpose. 118 | 119 | After training, we evaluated the PINN solver's errors at cell centers in a $512 \times 512$ Cartesian grid and at $t=0$, $2$, $4$, $\cdots$, $100$.
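On a uniform grid, the error metric of equation \eqref{eq:spt-err-def} reduces to a root-mean-square over the solution points. A minimal sketch, evaluating it for the analytical $u$ of equation \eqref{eq:tgv} with a stand-in predictor (the exact field plus a uniform offset) in place of the trained network, and a coarser grid for brevity:

```python
import math

NU = 0.01  # kinematic viscosity, as in the TGV setup

def u_exact(x, y, t):
    """Analytical u velocity of the 2D Taylor-Green vortex (equation eq:tgv)."""
    return math.cos(x) * math.sin(y) * math.exp(-2.0 * NU * t)

def l2_spt_error(predict, exact, nx=32, ny=32, times=range(0, 101, 2)):
    """Discrete spatial-temporal L2 error of equation (eq:spt-err-def)."""
    total, count = 0.0, 0
    for t in times:
        for i in range(nx):
            for j in range(ny):
                # cell centers of an nx-by-ny grid over [-pi, pi] x [-pi, pi]
                x = -math.pi + (i + 0.5) * 2.0 * math.pi / nx
                y = -math.pi + (j + 0.5) * 2.0 * math.pi / ny
                total += (predict(x, y, t) - exact(x, y, t)) ** 2
                count += 1
    return math.sqrt(total / count)

# A uniform 1e-3 offset should yield an L2 error of exactly that offset.
err = l2_spt_error(lambda x, y, t: u_exact(x, y, t) + 1e-3, u_exact)
print(err)  # 1e-3 (up to round-off)
```

In the actual evaluation, `predict` is the trained network queried at the cell centers of the $512 \times 512$ grid and at the output times listed above.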
120 | Figure \ref{fig:tgv-pinn-loss} shows the histories of the optimization loss and the $L_2$ errors at $t=0$ and $t=40$ of the $u$ velocity on the left vertical axis. 121 | The reader should not confuse these two different metrics: whereas the prediction accuracy (measured by the $L_2$ errors) is the primary concern, the optimization loss is only a proxy for the accuracy. 122 | The optimization loss is a combination of the PDE losses and the initial and boundary condition losses, and it depends on how one optimizes the neural network parameters. 123 | \begin{figure} 124 | \centering% 125 | \includegraphics[width=\columnwidth]{tgv-2d-re100/pinn-nl3-nn128-npts8192-convergence}% 126 | \caption{% 127 | Histories with respect to optimization iterations for the total loss and the $L_2$ errors of $u$ at $t=0$ and $40$ in the TGV verification of the unsteady PINN solver. 128 | On the left vertical axis is the magnitude of the total loss and the errors. 129 | (Note that loss is merely a proxy for the accuracy, whereas the error measures it directly.) 130 | On the right vertical axis is the run time. 131 | The cyclical scheduler achieves slightly better accuracy at $t=40$ at a slightly higher time cost, though its total loss is higher. 132 | } 133 | \label{fig:tgv-pinn-loss}% 134 | \end{figure} 135 | The same figure also shows the run time (wall time) on the right vertical axis. 136 | The total loss converges to an order of magnitude of \num{e-6}, which may reflect the fact that PyTorch uses single-precision floats. 137 | The errors at $t=0$ and $t=40$ converge to the orders of \num{e-4} and \num{e-2}, respectively. 138 | This observation is reasonable because the net error over the whole temporal domain is, by definition, comparable to the square root of the total loss, which is \num{e-3}. 139 | The PINN solver was given exact initial conditions for training (i.e., $L_4$ to $L_6$), so it is reasonable to see better prediction accuracy at $t=0$ than at later times.
140 | Finally, though the computational performance is not the focus of this paper, for the interested reader's benefit we would like to point out that the PINN solver took about 6 hours to converge with a V100 GPU, while PetIBM needed less than 10 seconds to reach an error level of \num{e-2} using a very dated K40 GPU (and most of that time was overhead to initialize the solver). 141 | 142 | In sum, we determined the PINN solution to be verified, although the accuracy and the computational cost were not satisfactory. 143 | The relatively low accuracy is likely a consequence of the use of single-precision floats and the intrinsic properties of PINNs, rather than implementation errors. 144 | Figure \ref{fig:tgv-pinn-contours} shows the contours of the PINN solver's predictions at $t=40$ for reference. 145 | For more details about this verification exercise, we refer readers to our conference proceedings paper \cite{chuang_experience_2022}. 146 | 147 | \begin{figure}[t] 148 | \centering% 149 | \includegraphics[width=0.95\columnwidth]{tgv-2d-re100/pinn-nl3-nn256-npts4096-contours.png} 150 | \caption{% 151 | Contours of the primitive variables and their errors at $t=40$ for the 2D TGV at $Re=\num{100}$, using the unsteady PINN solver. 152 | Roughly speaking, the absolute errors are at the level of \num{e-3} for primitive variables ($u$, $v$, and $p$), which corresponds to the square root of the total loss. 153 | The vorticity was obtained through post-processing, which amplified its errors.
154 | } 155 | \label{fig:tgv-pinn-contours}% 156 | \end{figure} 157 | 158 | % vim:ft=tex: -------------------------------------------------------------------------------- /main.bbl: -------------------------------------------------------------------------------- 1 | \begin{thebibliography}{10} 2 | \expandafter\ifx\csname url\endcsname\relax 3 | \def\url#1{\texttt{#1}}\fi 4 | \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi 5 | \expandafter\ifx\csname href\endcsname\relax 6 | \def\href#1#2{#2} \def\path#1{#1}\fi 7 | 8 | \bibitem{dissanayake_neural-network-based_1994} 9 | M.~W. M.~G. Dissanayake, N.~Phan-Thien, 10 | \href{https://onlinelibrary.wiley.com/doi/10.1002/cnm.1640100303}{Neural-network-based 11 | approximations for solving partial differential equations}, Communications in 12 | Numerical Methods in Engineering 10~(3) (1994) 195--201. 13 | \newblock \href {http://dx.doi.org/10.1002/cnm.1640100303} 14 | {\path{doi:10.1002/cnm.1640100303}}. 15 | \newline\urlprefix\url{https://onlinelibrary.wiley.com/doi/10.1002/cnm.1640100303} 16 | 17 | \bibitem{lagaris_artificial_1998} 18 | I.~E. Lagaris, A.~Likas, D.~I. Fotiadis, 19 | \href{http://ieeexplore.ieee.org/document/712178/}{Artificial neural networks 20 | for solving ordinary and partial differential equations}, {IEEE} Transactions 21 | on Neural Networks 9~(5) (1998) 987--1000. 22 | \newblock \href {http://arxiv.org/abs/physics/9705023} 23 | {\path{arXiv:physics/9705023}}, \href {http://dx.doi.org/10.1109/72.712178} 24 | {\path{doi:10.1109/72.712178}}. 25 | \newline\urlprefix\url{http://ieeexplore.ieee.org/document/712178/} 26 | 27 | \bibitem{vinuesa_emerging_2022} 28 | R.~Vinuesa, S.~L. Brunton, 29 | \href{https://doi.org/10.1109/MCSE.2023.3264340}{Emerging trends in machine 30 | learning for computational fluid dynamics}, Computing in Science \& 31 | Engineering 24~(5) (2022) 33--41. 
32 | \newblock \href {http://dx.doi.org/10.1109/MCSE.2023.3264340} 33 | {\path{doi:10.1109/MCSE.2023.3264340}}. 34 | \newline\urlprefix\url{https://doi.org/10.1109/MCSE.2023.3264340} 35 | 36 | \bibitem{chuang_experience_2022} 37 | P.-Y. Chuang, L.~Barba, 38 | \href{https://conference.scipy.org/proceedings/scipy2022/PiYueh_Chuang.html}{Experience 39 | report of physics-informed neural networks in fluid simulations: pitfalls and 40 | frustration}, in: Proceedings of the 21st {Python} in {Science} {Conference}, 41 | Austin, Texas, 2022, pp. 28--36. 42 | \newblock \href {http://dx.doi.org/10.25080/majora-212e5952-005} 43 | {\path{doi:10.25080/majora-212e5952-005}}. 44 | \newline\urlprefix\url{https://conference.scipy.org/proceedings/scipy2022/PiYueh_Chuang.html} 45 | 46 | \bibitem{krishnapriyan_failure_2021} 47 | A.~Krishnapriyan, A.~Gholami, S.~Zhe, R.~Kirby, M.~W. Mahoney, 48 | \href{https://openreview.net/forum?id=a2Gr9gNFD-J}{Characterizing possible 49 | failure modes in physics-informed neural networks}, in: A.~Beygelzimer, 50 | Y.~Dauphin, P.~Liang, J.~W. Vaughan (Eds.), Advances in Neural Information 51 | Processing Systems, 2021, pp. 1--13. 52 | \newline\urlprefix\url{https://openreview.net/forum?id=a2Gr9gNFD-J} 53 | 54 | \bibitem{fuks_limitations_2020} 55 | O.~Fuks, H.~A. Tchelepi, Limitations of physics informed machine learning for 56 | nonlinear two-phase transport in porous media, Journal of Machine Learning 57 | for Modeling and Computing 1~(1) (2020) 19--37. 58 | \newblock \href {http://dx.doi.org/10.1615/JMachLearnModelComput.2020033905} 59 | {\path{doi:10.1615/JMachLearnModelComput.2020033905}}. 60 | 61 | \bibitem{rohrhofer_fixedpoints_2023} 62 | F.~M. Rohrhofer, S.~Posch, C.~G{\"o}{\ss}nitzer, B.~C. Geiger, 63 | \href{https://openreview.net/forum?id=56cTmVrg5w}{On the role of fixed points 64 | of dynamical systems in training physics-informed neural networks}, 65 | Transactions on Machine Learning Research. 
66 | \newline\urlprefix\url{https://openreview.net/forum?id=56cTmVrg5w} 67 | 68 | \bibitem{chuang_petibm_2018} 69 | P.-Y. Chuang, O.~Mesnard, A.~Krishnan, L.~A. Barba, 70 | \href{https://doi.org/10.21105/joss.00558}{{PetIBM}: toolbox and applications 71 | of the immersed-boundary method on distributed-memory architectures}, J. Open 72 | Source Software 3~(25) (2018) 558. 73 | \newblock \href {http://dx.doi.org/10.21105/joss.00558} 74 | {\path{doi:10.21105/joss.00558}}. 75 | \newline\urlprefix\url{https://doi.org/10.21105/joss.00558} 76 | 77 | \bibitem{griewank_automatic_1988} 78 | A.~Griewank, On automatic differentiation, in: Mathematical Programming: Recent 79 | Developments and Applications, Vol.~6 of Mathematics and its Applications, 80 | Springer Netherlands, 1988, pp. 83--107. 81 | 82 | \bibitem{cai_physics-informed_2021} 83 | S.~Cai, Z.~Mao, Z.~Wang, M.~Yin, G.~E. Karniadakis, 84 | \href{https://link.springer.com/10.1007/s10409-021-01148-1}{Physics-informed 85 | neural networks ({PINNs}) for fluid mechanics: a review}, Acta Mechanica 86 | Sinica 37~(12) (2021) 1727--1738. 87 | \newblock \href {http://dx.doi.org/10.1007/s10409-021-01148-1} 88 | {\path{doi:10.1007/s10409-021-01148-1}}. 89 | \newline\urlprefix\url{https://link.springer.com/10.1007/s10409-021-01148-1} 90 | 91 | \bibitem{rosetti_urans_2012} 92 | G.~F. Rosetti, G.~Vaz, A.~L.~C. Fujarra, 93 | \href{https://asmedigitalcollection.asme.org/fluidsengineering/article/doi/10.1115/1.4007571/372953/URANS-Calculations-for-Smooth-Circular-Cylinder}{{URANS} 94 | {Calculations} for {Smooth} {Circular} {Cylinder} {Flow} in a {Wide} {Range} 95 | of {Reynolds} {Numbers}: {Solution} {Verification} and {Validation}}, Journal 96 | of Fluids Engineering 134~(12) (2012) 121103. 97 | \newblock \href {http://dx.doi.org/10.1115/1.4007571} 98 | {\path{doi:10.1115/1.4007571}}. 
99 | \newline\urlprefix\url{https://asmedigitalcollection.asme.org/fluidsengineering/article/doi/10.1115/1.4007571/372953/URANS-Calculations-for-Smooth-Circular-Cylinder} 100 | 101 | \bibitem{sen_steady_2009} 102 | S.~Sen, S.~Mittal, G.~Biswas, 103 | \href{https://www.cambridge.org/core/product/identifier/S0022112008004904/type/journal_article}{Steady 104 | separated flow past a circular cylinder at low {Reynolds} numbers}, Journal 105 | of Fluid Mechanics 620 (2009) 89--119. 106 | \newblock \href {http://dx.doi.org/10.1017/S0022112008004904} 107 | {\path{doi:10.1017/S0022112008004904}}. 108 | \newline\urlprefix\url{https://www.cambridge.org/core/product/identifier/S0022112008004904/type/journal_article} 109 | 110 | \bibitem{park_numerical_1998} 111 | J.~Park, K.~Kwon, H.~Choi, 112 | \href{http://link.springer.com/10.1007/BF02942594}{Numerical solutions of 113 | flow past a circular cylinder at {Reynolds} numbers up to 160}, KSME 114 | International Journal 12~(6) (1998) 1200--1205. 115 | \newblock \href {http://dx.doi.org/10.1007/BF02942594} 116 | {\path{doi:10.1007/BF02942594}}. 117 | \newline\urlprefix\url{http://link.springer.com/10.1007/BF02942594} 118 | 119 | \bibitem{tritton_experiments_1959} 120 | D.~J. Tritton, 121 | \href{https://www.cambridge.org/core/product/identifier/S0022112059000829/type/journal_article}{Experiments 122 | on the flow past a circular cylinder at low {Reynolds} numbers}, Journal of 123 | Fluid Mechanics 6~(4) (1959) 547--567. 124 | \newblock \href {http://dx.doi.org/10.1017/S0022112059000829} 125 | {\path{doi:10.1017/S0022112059000829}}. 126 | \newline\urlprefix\url{https://www.cambridge.org/core/product/identifier/S0022112059000829/type/journal_article} 127 | 128 | \bibitem{grove_experimental_1964} 129 | A.~S. Grove, F.~H. Shair, E.~E. 
Petersen, 130 | \href{https://www.cambridge.org/core/product/identifier/S0022112064000544/type/journal_article}{An 131 | experimental investigation of the steady separated flow past a circular 132 | cylinder}, Journal of Fluid Mechanics 19~(1) (1964) 60--80. 133 | \newblock \href {http://dx.doi.org/10.1017/S0022112064000544} 134 | {\path{doi:10.1017/S0022112064000544}}. 135 | \newline\urlprefix\url{https://www.cambridge.org/core/product/identifier/S0022112064000544/type/journal_article} 136 | 137 | \bibitem{deng_hydrodynamic_2007} 138 | J.~Deng, X.-M. Shao, Z.-S. Yu, 139 | \href{http://aip.scitation.org/doi/10.1063/1.2814259}{Hydrodynamic studies on 140 | two traveling wavy foils in tandem arrangement}, Physics of Fluids 19~(11) 141 | (2007) 113104. 142 | \newblock \href {http://dx.doi.org/10.1063/1.2814259} 143 | {\path{doi:10.1063/1.2814259}}. 144 | \newline\urlprefix\url{http://aip.scitation.org/doi/10.1063/1.2814259} 145 | 146 | \bibitem{Rajani2009} 147 | B.~Rajani, A.~Kandasamy, S.~Majumdar, 148 | \href{http://dx.doi.org/10.1016/j.apm.2008.01.017}{Numerical simulation of 149 | laminar flow past a circular cylinder}, Applied Mathematical Modelling 33~(3) 150 | (2009) 1228--1247, arXiv: DOI: 10.1002/fld.1 Publisher: Elsevier Inc. ISBN: 151 | 02712091 10970363. 152 | \newblock \href {http://dx.doi.org/10.1016/j.apm.2008.01.017} 153 | {\path{doi:10.1016/j.apm.2008.01.017}}. 154 | \newline\urlprefix\url{http://dx.doi.org/10.1016/j.apm.2008.01.017} 155 | 156 | \bibitem{gushchin_numerical_1974} 157 | V.~Gushchin, V.~Shchennikov, 158 | \href{https://linkinghub.elsevier.com/retrieve/pii/0041555374900615}{A 159 | numerical method of solving the navier-stokes equations}, USSR Computational 160 | Mathematics and Mathematical Physics 14~(2) (1974) 242--250. 161 | \newblock \href {http://dx.doi.org/10.1016/0041-5553(74)90061-5} 162 | {\path{doi:10.1016/0041-5553(74)90061-5}}. 
163 | \newline\urlprefix\url{https://linkinghub.elsevier.com/retrieve/pii/0041555374900615} 164 | 165 | \bibitem{fornberg_numerical_1980} 166 | B.~Fornberg, 167 | \href{http://www.journals.cambridge.org/abstract_S0022112080000419}{A 168 | numerical study of steady viscous flow past a circular cylinder}, Journal of 169 | Fluid Mechanics 98~(04) (1980) 819. 170 | \newblock \href {http://dx.doi.org/10.1017/S0022112080000419} 171 | {\path{doi:10.1017/S0022112080000419}}. 172 | \newline\urlprefix\url{http://www.journals.cambridge.org/abstract_S0022112080000419} 173 | 174 | \bibitem{chuang_barba_2023_fig12} 175 | P.-Y. Chuang, L.~A. Barba, Predictive limitations of physics-informed neural 176 | networks in vortex shedding: {Figure 12}, Figshare (2023). 177 | \newblock \href {http://dx.doi.org/10.6084/m9.figshare.23680983} 178 | {\path{doi:10.6084/m9.figshare.23680983}}. 179 | 180 | \bibitem{jeong_identification_1995} 181 | J.~Jeong, F.~Hussain, On the identification of a vortex, Journal of Fluid 182 | Mechanics 285 (1995) 69--94. 183 | \newblock \href {http://dx.doi.org/10.1017/S0022112095000462} 184 | {\path{doi:10.1017/S0022112095000462}}. 185 | 186 | \bibitem{chen_variants_2012} 187 | K.~K. Chen, J.~H. Tu, C.~W. Rowley, 188 | \href{http://link.springer.com/10.1007/s00332-012-9130-9}{Variants of 189 | {Dynamic} {Mode} {Decomposition}: {Boundary} {Condition}, {Koopman}, and 190 | {Fourier} {Analyses}}, Journal of Nonlinear Science 22~(6) (2012) 887--915. 191 | \newblock \href {http://dx.doi.org/10.1007/s00332-012-9130-9} 192 | {\path{doi:10.1007/s00332-012-9130-9}}. 193 | \newline\urlprefix\url{http://link.springer.com/10.1007/s00332-012-9130-9} 194 | 195 | \bibitem{rowley_spectral_2009} 196 | C.~W. Rowley, I.~Mezi{\'c}, S.~Bagheri, P.~Schlatter, D.~S. Henningson, 197 | \href{https://www.cambridge.org/core/product/identifier/S0022112009992059/type/journal_article}{Spectral 198 | analysis of nonlinear flows}, Journal of Fluid Mechanics 641 (2009) 115--127. 
199 | \newblock \href {http://dx.doi.org/10.1017/S0022112009992059} 200 | {\path{doi:10.1017/S0022112009992059}}. 201 | \newline\urlprefix\url{https://www.cambridge.org/core/product/identifier/S0022112009992059/type/journal_article} 202 | 203 | \bibitem{chuang_barba_2023_fig19} 204 | P.-Y. Chuang, L.~A. Barba, Predictive limitations of physics-informed neural 205 | networks in vortex shedding: {Figure 19}, Figshare (2023). 206 | \newblock \href {http://dx.doi.org/10.6084/m9.figshare.23681085} 207 | {\path{doi:10.6084/m9.figshare.23681085}}. 208 | 209 | \bibitem{jin_nsfnets_2021} 210 | X.~Jin, S.~Cai, H.~Li, G.~E. Karniadakis, 211 | \href{https://doi.org/10.1016/j.jcp.2020.109951}{{NSFnets} ({Navier-Stokes} 212 | flow nets): Physics-informed neural networks for the incompressible 213 | Navier-Stokes equations}, Journal of Computational Physics 426 (2021) 109951. 214 | \newblock \href {http://dx.doi.org/10.1016/j.jcp.2020.109951} 215 | {\path{doi:10.1016/j.jcp.2020.109951}}. 216 | \newline\urlprefix\url{https://doi.org/10.1016/j.jcp.2020.109951} 217 | 218 | \bibitem{rahaman_spectral_2019} 219 | N.~Rahaman, A.~Baratin, D.~Arpit, F.~Draxler, M.~Lin, F.~Hamprecht, Y.~Bengio, 220 | A.~Courville, \href{https://proceedings.mlr.press/v97/rahaman19a.html}{On the 221 | {Spectral} {Bias} of {Neural} {Networks}}, in: K.~Chaudhuri, R.~Salakhutdinov 222 | (Eds.), Proceedings of the 36th {International} {Conference} on {Machine} 223 | {Learning}, Vol.~97 of Proceedings of {Machine} {Learning} {Research}, PMLR, 224 | 2019, pp. 5301--5310. 225 | \newline\urlprefix\url{https://proceedings.mlr.press/v97/rahaman19a.html} 226 | 227 | \bibitem{grossmann_pinnvsfem_2023} 228 | T.~G. Grossmann, U.~J. Komorowska, J.~Latz, C.-B. Sch{\"o}nlieb, 229 | \href{https://arxiv.org/abs/2302.04107}{Can physics-informed neural networks 230 | beat the finite element method?}, arXiv preprint arXiv:2302.04107 (2023).
231 | \newline\urlprefix\url{https://arxiv.org/abs/2302.04107} 232 | 233 | \bibitem{karnakov_odil_2022} 234 | P.~Karnakov, S.~Litvinov, P.~Koumoutsakos, 235 | \href{https://arxiv.org/abs/2205.04611}{{Optimizing a DIscrete Loss (ODIL}) 236 | to solve forward and inverse problems for partial differential equations 237 | using machine learning tools}, arXiv preprint arXiv:2205.04611 (2022). 238 | \newline\urlprefix\url{https://arxiv.org/abs/2205.04611} 239 | 240 | \bibitem{chuang_thesis_2023} 241 | P.-Y. Chuang, Feasibility study of physics-informed neural network modeling in 242 | computational fluid dynamics, Ph.D. thesis, The George Washington University 243 | (2023). 244 | 245 | \end{thebibliography} 246 | -------------------------------------------------------------------------------- /references.bib: -------------------------------------------------------------------------------- 1 | %% This BibTeX bibliography file was created using BibDesk. 2 | %% https://bibdesk.sourceforge.io/ 3 | 4 | %% Created for Lorena A Barba at 2023-07-13 18:29:28 -0400 5 | 6 | 7 | %% Saved with string encoding Unicode (UTF-8) 8 | 9 | 10 | 11 | @misc{chuang_barba_2023_fig12, 12 | author = {Chuang, Pi-Yueh and Barba, Lorena A.}, 13 | date-added = {2023-07-13 18:28:09 -0400}, 14 | date-modified = {2023-07-13 18:29:22 -0400}, 15 | doi = {10.6084/m9.figshare.23680983}, 16 | howpublished = {Figshare}, 17 | title = {Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding: {Figure 12}}, 18 | year = {2023}, 19 | bdsk-url-1 = {https://doi.org/10.6084/m9.figshare.23680983}} 20 | 21 | @misc{chuang_barba_2023_fig19, 22 | author = {Chuang, Pi-Yueh and Barba, Lorena A.}, 23 | date-added = {2023-07-13 15:48:56 -0400}, 24 | date-modified = {2023-07-13 18:29:04 -0400}, 25 | doi = {10.6084/m9.figshare.23681085}, 26 | howpublished = {Figshare}, 27 | title = {Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding: {Figure 19}}, 28 | year = {2023}} 29 | 30 | 
@article{karnakov_odil_2022, 31 | author = {Karnakov, Petr and Litvinov, Sergey and Koumoutsakos, Petros}, 32 | title = "{Solving inverse problems in physics by optimizing a discrete loss: Fast and accurate learning without neural networks}", 33 | journal = {PNAS Nexus}, 34 | volume = {3}, 35 | number = {1}, 36 | pages = {pgae005}, 37 | year = {2024}, 38 | month = {01}, 39 | issn = {2752-6542}, 40 | doi = {10.1093/pnasnexus/pgae005}, 41 | url = {https://doi.org/10.1093/pnasnexus/pgae005}} 42 | 43 | @misc{grossmann_pinnvsfem_2023, 44 | author = {Grossmann, Tamara G and Komorowska, Urszula Julia and Latz, Jonas and Sch{\"o}nlieb, Carola-Bibiane}, 45 | date-added = {2023-05-31 14:05:32 -0400}, 46 | date-modified = {2023-05-31 14:07:12 -0400}, 47 | howpublished = {arXiv preprint arXiv:2302.04107}, 48 | title = {Can physics-informed neural networks beat the finite element method?}, 49 | url = {https://arxiv.org/abs/2302.04107}, 50 | year = {2023}, 51 | bdsk-url-1 = {https://arxiv.org/abs/2302.04107}} 52 | 53 | @article{jin_nsfnets_2021, 54 | author = {Jin, Xiaowei and Cai, Shengze and Li, Hui and Karniadakis, George Em}, 55 | date-added = {2023-05-31 12:31:30 -0400}, 56 | date-modified = {2023-05-31 12:33:01 -0400}, 57 | doi = {10.1016/j.jcp.2020.109951}, 58 | journal = {Journal of Computational Physics}, 59 | pages = {109951}, 60 | publisher = {Elsevier}, 61 | title = {{NSFnets} ({Navier-Stokes} flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations}, 62 | url = {https://doi.org/10.1016/j.jcp.2020.109951}, 63 | volume = {426}, 64 | year = {2021}, 65 | bdsk-url-1 = {https://doi.org/10.1016/j.jcp.2020.109951}} 66 | 67 | @article{fuks_limitations_2020, 68 | author = {Fuks, Olga and Tchelepi, Hamdi A}, 69 | date-added = {2023-05-30 16:27:11 -0400}, 70 | date-modified = {2023-05-30 16:36:22 -0400}, 71 | doi = {10.1615/JMachLearnModelComput.2020033905}, 72 | journal = {Journal of Machine Learning for Modeling and Computing}, 73 | number = 
{1}, 74 | pages = {19--37}, 75 | publisher = {Begel House Inc.}, 76 | title = {Limitations of physics informed machine learning for nonlinear two-phase transport in porous media}, 77 | volume = {1}, 78 | year = {2020}, 79 | bdsk-url-1 = {https://doi.org/10.1615/JMachLearnModelComput.2020033905}} 80 | 81 | @article{rohrhofer_fixedpoints_2023, 82 | author = {Franz M. Rohrhofer and Stefan Posch and Clemens G{\"o}{\ss}nitzer and Bernhard C Geiger}, 83 | date-added = {2023-05-30 15:44:30 -0400}, 84 | date-modified = {2023-05-30 15:44:50 -0400}, 85 | issn = {2835-8856}, 86 | journal = {Transactions on Machine Learning Research}, 87 | title = {On the Role of Fixed Points of Dynamical Systems in Training Physics-Informed Neural Networks}, 88 | url = {https://openreview.net/forum?id=56cTmVrg5w}, 89 | year = {2023}, 90 | bdsk-url-1 = {https://openreview.net/forum?id=56cTmVrg5w}} 91 | 92 | @inproceedings{krishnapriyan_failure_2021, 93 | author = {Aditi Krishnapriyan and Amir Gholami and Shandian Zhe and Robert Kirby and Michael W. Mahoney}, 94 | booktitle = {Advances in Neural Information Processing Systems}, 95 | date-added = {2023-05-30 15:27:08 -0400}, 96 | date-modified = {2023-05-30 15:58:08 -0400}, 97 | editor = {A. Beygelzimer and Y. Dauphin and P. Liang and J. 
Wortman Vaughan}, 98 | title = {Characterizing possible failure modes in physics-informed neural networks}, 99 | url = {https://openreview.net/forum?id=a2Gr9gNFD-J}, 100 | year = {2021}, 101 | pages = {1--13}, 102 | bdsk-url-1 = {https://openreview.net/forum?id=a2Gr9gNFD-J}} 103 | 104 | @phdthesis{chuang_thesis_2023, 105 | author = {Chuang, Pi-Yueh}, 106 | date-added = {2023-05-26 11:41:01 -0400}, 107 | date-modified = {2023-05-26 12:22:03 -0400}, 108 | school = {The George Washington University}, 109 | title = {Feasibility Study of Physics-Informed Neural Network Modeling in Computational Fluid Dynamics}, 110 | year = {2023}} 111 | 112 | @article{vinuesa_emerging_2022, 113 | author = {Vinuesa, Ricardo and Brunton, Steven L}, 114 | date-added = {2023-05-23 17:21:16 -0400}, 115 | date-modified = {2023-05-23 17:22:18 -0400}, 116 | doi = {10.1109/MCSE.2023.3264340}, 117 | journal = {Computing in Science \& Engineering}, 118 | number = {5}, 119 | pages = {33--41}, 120 | publisher = {IEEE}, 121 | title = {Emerging trends in machine learning for computational fluid dynamics}, 122 | url = {https://doi.org/10.1109/MCSE.2023.3264340}, 123 | volume = {24}, 124 | year = {2022}, 125 | bdsk-url-1 = {https://doi.org/10.1109/MCSE.2023.3264340}} 126 | 127 | @article{chuang_petibm_2018, 128 | author = {Chuang, Pi-Yueh and Mesnard, Olivier and Krishnan, Anush and Barba, Lorena A.}, 129 | date-added = {2023-05-22 14:05:41 -0400}, 130 | date-modified = {2023-05-22 14:07:40 -0400}, 131 | doi = {10.21105/joss.00558}, 132 | journal = {J. Open Source Software}, 133 | number = {25}, 134 | pages = {558}, 135 | title = {{PetIBM}: toolbox and applications of the immersed-boundary method on distributed-memory architectures}, 136 | url = {https://doi.org/10.21105/joss.00558}, 137 | volume = {3}, 138 | year = {2018}, 139 | bdsk-url-1 = {https://doi.org/10.21105/joss.00558}} 140 | 141 | @article{chen_variants_2012, 142 | author = {Chen, Kevin K. and Tu, Jonathan H. 
and Rowley, Clarence W.}, 143 | doi = {10.1007/s00332-012-9130-9}, 144 | issn = {0938-8974, 1432-1467}, 145 | journal = {Journal of Nonlinear Science}, 146 | language = {en}, 147 | month = dec, 148 | number = {6}, 149 | pages = {887--915}, 150 | shorttitle = {Variants of {Dynamic} {Mode} {Decomposition}}, 151 | title = {Variants of {Dynamic} {Mode} {Decomposition}: {Boundary} {Condition}, {Koopman}, and {Fourier} {Analyses}}, 152 | url = {http://link.springer.com/10.1007/s00332-012-9130-9}, 153 | urldate = {2022-11-29}, 154 | volume = {22}, 155 | year = {2012}, 156 | bdsk-url-1 = {http://link.springer.com/10.1007/s00332-012-9130-9}, 157 | bdsk-url-2 = {https://doi.org/10.1007/s00332-012-9130-9}} 158 | 159 | @article{rowley_spectral_2009, 160 | author = {Rowley, Clarence W. and Mezi{\'c}, Igor and Bagheri, Shervin and Schlatter, Philipp and Henningson, Dan S.}, 161 | doi = {10.1017/S0022112009992059}, 162 | issn = {0022-1120, 1469-7645}, 163 | journal = {Journal of Fluid Mechanics}, 164 | language = {en}, 165 | month = dec, 166 | pages = {115--127}, 167 | title = {Spectral analysis of nonlinear flows}, 168 | url = {https://www.cambridge.org/core/product/identifier/S0022112009992059/type/journal_article}, 169 | urldate = {2022-11-10}, 170 | volume = {641}, 171 | year = {2009}, 172 | bdsk-url-1 = {https://www.cambridge.org/core/product/identifier/S0022112009992059/type/journal_article}, 173 | bdsk-url-2 = {https://doi.org/10.1017/S0022112009992059}} 174 | 175 | @inproceedings{rahaman_spectral_2019, 176 | author = {Rahaman, Nasim and Baratin, Aristide and Arpit, Devansh and Draxler, Felix and Lin, Min and Hamprecht, Fred and Bengio, Yoshua and Courville, Aaron}, 177 | booktitle = {Proceedings of the 36th {International} {Conference} on {Machine} {Learning}}, 178 | editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan}, 179 | month = jun, 180 | pages = {5301--5310}, 181 | publisher = {PMLR}, 182 | series = {Proceedings of {Machine} {Learning} {Research}}, 183 | title 
= {On the {Spectral} {Bias} of {Neural} {Networks}}, 184 | url = {https://proceedings.mlr.press/v97/rahaman19a.html}, 185 | volume = {97}, 186 | year = {2019}, 187 | bdsk-url-1 = {https://proceedings.mlr.press/v97/rahaman19a.html}} 188 | 189 | @article{rosetti_urans_2012, 190 | author = {Rosetti, Guilherme F. and Vaz, Guilherme and Fujarra, Andr{\'e} L. C.}, 191 | doi = {10.1115/1.4007571}, 192 | issn = {0098-2202, 1528-901X}, 193 | journal = {Journal of Fluids Engineering}, 194 | language = {en}, 195 | month = dec, 196 | number = {12}, 197 | pages = {121103}, 198 | shorttitle = {{URANS} {Calculations} for {Smooth} {Circular} {Cylinder} {Flow} in a {Wide} {Range} of {Reynolds} {Numbers}}, 199 | title = {{URANS} {Calculations} for {Smooth} {Circular} {Cylinder} {Flow} in a {Wide} {Range} of {Reynolds} {Numbers}: {Solution} {Verification} and {Validation}}, 200 | url = {https://asmedigitalcollection.asme.org/fluidsengineering/article/doi/10.1115/1.4007571/372953/URANS-Calculations-for-Smooth-Circular-Cylinder}, 201 | urldate = {2022-08-23}, 202 | volume = {134}, 203 | year = {2012}, 204 | bdsk-url-1 = {https://asmedigitalcollection.asme.org/fluidsengineering/article/doi/10.1115/1.4007571/372953/URANS-Calculations-for-Smooth-Circular-Cylinder}, 205 | bdsk-url-2 = {https://doi.org/10.1115/1.4007571}} 206 | 207 | @article{sen_steady_2009, 208 | author = {Sen, Subhankar and Mittal, Sanjay and Biswas, Gautam}, 209 | doi = {10.1017/S0022112008004904}, 210 | issn = {0022-1120, 1469-7645}, 211 | journal = {Journal of Fluid Mechanics}, 212 | language = {en}, 213 | month = feb, 214 | pages = {89--119}, 215 | title = {Steady separated flow past a circular cylinder at low {Reynolds} numbers}, 216 | url = {https://www.cambridge.org/core/product/identifier/S0022112008004904/type/journal_article}, 217 | urldate = {2020-07-11}, 218 | volume = {620}, 219 | year = {2009}, 220 | bdsk-url-1 = {https://www.cambridge.org/core/product/identifier/S0022112008004904/type/journal_article}, 221 
| bdsk-url-2 = {https://doi.org/10.1017/S0022112008004904}} 222 | 223 | @article{park_numerical_1998, 224 | author = {Park, Jeongyoung and Kwon, Kiyoung and Choi, Haecheon}, 225 | doi = {10.1007/BF02942594}, 226 | issn = {1226-4865}, 227 | journal = {KSME International Journal}, 228 | keywords = {proposal}, 229 | language = {en}, 230 | month = nov, 231 | number = {6}, 232 | pages = {1200--1205}, 233 | title = {Numerical solutions of flow past a circular cylinder at {Reynolds} numbers up to 160}, 234 | url = {http://link.springer.com/10.1007/BF02942594}, 235 | urldate = {2020-07-11}, 236 | volume = {12}, 237 | year = {1998}, 238 | bdsk-url-1 = {http://link.springer.com/10.1007/BF02942594}, 239 | bdsk-url-2 = {https://doi.org/10.1007/BF02942594}} 240 | 241 | @article{tritton_experiments_1959, 242 | author = {Tritton, D. J.}, 243 | doi = {10.1017/S0022112059000829}, 244 | issn = {0022-1120, 1469-7645}, 245 | journal = {Journal of Fluid Mechanics}, 246 | keywords = {proposal}, 247 | language = {en}, 248 | month = nov, 249 | number = {4}, 250 | pages = {547--567}, 251 | title = {Experiments on the flow past a circular cylinder at low {Reynolds} numbers}, 252 | url = {https://www.cambridge.org/core/product/identifier/S0022112059000829/type/journal_article}, 253 | urldate = {2020-09-18}, 254 | volume = {6}, 255 | year = {1959}, 256 | bdsk-url-1 = {https://www.cambridge.org/core/product/identifier/S0022112059000829/type/journal_article}, 257 | bdsk-url-2 = {https://doi.org/10.1017/S0022112059000829}} 258 | 259 | @article{grove_experimental_1964, 260 | author = {Grove, A. S. and Shair, F. H. and Petersen, E. 
E.}, 261 | doi = {10.1017/S0022112064000544}, 262 | issn = {0022-1120, 1469-7645}, 263 | journal = {Journal of Fluid Mechanics}, 264 | keywords = {proposal}, 265 | language = {en}, 266 | month = may, 267 | number = {1}, 268 | pages = {60--80}, 269 | title = {An experimental investigation of the steady separated flow past a circular cylinder}, 270 | url = {https://www.cambridge.org/core/product/identifier/S0022112064000544/type/journal_article}, 271 | urldate = {2020-09-18}, 272 | volume = {19}, 273 | year = {1964}, 274 | bdsk-url-1 = {https://www.cambridge.org/core/product/identifier/S0022112064000544/type/journal_article}, 275 | bdsk-url-2 = {https://doi.org/10.1017/S0022112064000544}} 276 | 277 | @article{deng_hydrodynamic_2007, 278 | author = {Deng, Jian and Shao, Xue-Ming and Yu, Zhao-Sheng}, 279 | doi = {10.1063/1.2814259}, 280 | issn = {1070-6631, 1089-7666}, 281 | journal = {Physics of Fluids}, 282 | keywords = {validation, cylinder flow}, 283 | language = {en}, 284 | month = nov, 285 | number = {11}, 286 | pages = {113104}, 287 | title = {Hydrodynamic studies on two traveling wavy foils in tandem arrangement}, 288 | url = {http://aip.scitation.org/doi/10.1063/1.2814259}, 289 | urldate = {2022-05-23}, 290 | volume = {19}, 291 | year = {2007}, 292 | bdsk-url-1 = {http://aip.scitation.org/doi/10.1063/1.2814259}, 293 | bdsk-url-2 = {https://doi.org/10.1063/1.2814259}} 294 | 295 | @article{Rajani2009, 296 | annote = {arXiv: DOI: 10.1002/fld.1 Publisher: Elsevier Inc. ISBN: 02712091 10970363}, 297 | author = {Rajani, B.N. and Kandasamy, A. and Majumdar, Sekhar}, 298 | doi = {10.1016/j.apm.2008.01.017}, 299 | issn = {0307904X}, 300 | journal = {Applied Mathematical Modelling}, 301 | month = mar, 302 | note = {arXiv: DOI: 10.1002/fld.1 Publisher: Elsevier Inc. 
ISBN: 02712091 10970363}, 303 | number = {3}, 304 | pages = {1228--1247}, 305 | pmid = {18628456}, 306 | title = {Numerical simulation of laminar flow past a circular cylinder}, 307 | url = {http://dx.doi.org/10.1016/j.apm.2008.01.017}, 308 | volume = {33}, 309 | year = {2009}, 310 | bdsk-url-1 = {http://dx.doi.org/10.1016/j.apm.2008.01.017}} 311 | 312 | @article{gushchin_numerical_1974, 313 | author = {Gushchin, V.A. and Shchennikov, V.V.}, 314 | doi = {10.1016/0041-5553(74)90061-5}, 315 | issn = {00415553}, 316 | journal = {USSR Computational Mathematics and Mathematical Physics}, 317 | language = {en}, 318 | month = jan, 319 | number = {2}, 320 | pages = {242--250}, 321 | title = {A numerical method of solving the navier-stokes equations}, 322 | url = {https://linkinghub.elsevier.com/retrieve/pii/0041555374900615}, 323 | urldate = {2022-05-26}, 324 | volume = {14}, 325 | year = {1974}, 326 | bdsk-url-1 = {https://linkinghub.elsevier.com/retrieve/pii/0041555374900615}, 327 | bdsk-url-2 = {https://doi.org/10.1016/0041-5553(74)90061-5}} 328 | 329 | @article{fornberg_numerical_1980, 330 | author = {Fornberg, Bengt}, 331 | doi = {10.1017/S0022112080000419}, 332 | issn = {0022-1120}, 333 | journal = {Journal of Fluid Mechanics}, 334 | month = jun, 335 | number = {04}, 336 | pages = {819}, 337 | title = {A numerical study of steady viscous flow past a circular cylinder}, 338 | url = {http://www.journals.cambridge.org/abstract_S0022112080000419}, 339 | volume = {98}, 340 | year = {1980}, 341 | bdsk-url-1 = {http://www.journals.cambridge.org/abstract_S0022112080000419}, 342 | bdsk-url-2 = {https://doi.org/10.1017/S0022112080000419}} 343 | 344 | @article{jeong_identification_1995, 345 | author = {Jeong, Jinhee and Hussain, Fazle}, 346 | doi = {10.1017/S0022112095000462}, 347 | issn = {0022-1120, 1469-7645}, 348 | journal = {Journal of Fluid Mechanics}, 349 | langid = {english}, 350 | month = feb, 351 | pages = {69--94}, 352 | shortjournal = {J. Fluid. 
Mech.}, 353 | title = {On the identification of a vortex}, 354 | urldate = {2022-11-29}, 355 | volume = {285}, 356 | year = {1995}, 357 | bdsk-url-1 = {https://doi.org/10.1017/S0022112095000462}} 358 | 359 | @article{dissanayake_neural-network-based_1994, 360 | author = {Dissanayake, M. W. M. G. and Phan-Thien, N.}, 361 | date = {1994-03}, 362 | date-modified = {2023-05-22 13:12:46 -0400}, 363 | doi = {10.1002/cnm.1640100303}, 364 | issn = {10698299}, 365 | journal = {Communications in Numerical Methods in Engineering}, 366 | langid = {english}, 367 | number = {3}, 368 | pages = {195--201}, 369 | shortjournal = {Commun. Numer. Meth. Engng.}, 370 | title = {Neural-network-based approximations for solving partial differential equations}, 371 | url = {https://onlinelibrary.wiley.com/doi/10.1002/cnm.1640100303}, 372 | urldate = {2022-07-04}, 373 | volume = {10}, 374 | year = {1994}, 375 | bdsk-url-1 = {https://onlinelibrary.wiley.com/doi/10.1002/cnm.1640100303}, 376 | bdsk-url-2 = {https://doi.org/10.1002/cnm.1640100303}} 377 | 378 | @article{lagaris_artificial_1998, 379 | author = {Lagaris, I. E. and Likas, A. and Fotiadis, D. I.}, 380 | date = {1998-09}, 381 | date-modified = {2023-05-22 13:13:26 -0400}, 382 | doi = {10.1109/72.712178}, 383 | eprint = {physics/9705023}, 384 | eprinttype = {arxiv}, 385 | issn = {10459227}, 386 | journal = {{IEEE} Transactions on Neural Networks}, 387 | journaltitle = {{IEEE} Transactions on Neural Networks}, 388 | langid = {english}, 389 | number = {5}, 390 | pages = {987--1000}, 391 | shortjournal = {{IEEE} Trans. 
Neural Netw.}, 392 | title = {Artificial neural networks for solving ordinary and partial differential equations}, 393 | url = {http://ieeexplore.ieee.org/document/712178/}, 394 | urldate = {2020-01-20}, 395 | volume = {9}, 396 | year = {1998}, 397 | bdsk-url-1 = {http://ieeexplore.ieee.org/document/712178/}, 398 | bdsk-url-2 = {https://doi.org/10.1109/72.712178}} 399 | 400 | @article{cai_physics-informed_2021, 401 | author = {Cai, Shengze and Mao, Zhiping and Wang, Zhicheng and Yin, Minglang and Karniadakis, George Em}, 402 | date = {2021-12}, 403 | year = {2021}, 404 | doi = {10.1007/s10409-021-01148-1}, 405 | issn = {0567-7718, 1614-3116}, 406 | journal = {Acta Mechanica Sinica}, 407 | langid = {english}, 408 | number = {12}, 409 | pages = {1727--1738}, 410 | shortjournal = {Acta Mech. Sin.}, 411 | shorttitle = {Physics-informed neural networks ({PINNs}) for fluid mechanics}, 412 | title = {Physics-informed neural networks ({PINNs}) for fluid mechanics: a review}, 413 | url = {https://link.springer.com/10.1007/s10409-021-01148-1}, 414 | urldate = {2022-06-28}, 415 | volume = {37}, 416 | bdsk-url-1 = {https://link.springer.com/10.1007/s10409-021-01148-1}, 417 | bdsk-url-2 = {https://doi.org/10.1007/s10409-021-01148-1}} 418 | 419 | @inproceedings{griewank_automatic_1988, 420 | author = {Griewank, Andreas}, 421 | booktitle = {Mathematical Programming: Recent Developments and Applications}, 422 | date = {1988-08-29}, 423 | eventtitle = {The 13th International Symposium on Mathematical Programming}, 424 | isbn = {978-0-7923-0490-6}, 425 | langid = {english}, 426 | location = {Tokyo, Japan}, 427 | pages = {83--107}, 428 | publisher = {Springer Netherlands}, 429 | series = {Mathematics and its Applications}, 430 | title = {On automatic differentiation}, 431 | year = {1988}, 432 | volume = {6}} 433 | 434 | @inproceedings{chuang_experience_2022, 435 | address = {Austin, Texas}, 436 | author = {Chuang, Pi-Yueh and Barba, Lorena}, 437 | doi = 
{10.25080/majora-212e5952-005}, 438 | pages = {28--36}, 439 | shorttitle = {Experience report of physics-informed neural networks in fluid simulations}, 440 | title = {Experience report of physics-informed neural networks in fluid simulations: pitfalls and frustration}, 441 | booktitle = {Proceedings of the 21st {Python} in {Science} {Conference}}, 442 | url = {https://conference.scipy.org/proceedings/scipy2022/PiYueh_Chuang.html}, 443 | urldate = {2022-08-18}, 444 | year = {2022}, 445 | bdsk-url-1 = {https://conference.scipy.org/proceedings/scipy2022/PiYueh_Chuang.html}, 446 | bdsk-url-2 = {https://doi.org/10.25080/majora-212e5952-005}} 447 | -------------------------------------------------------------------------------- /results_3.tex: -------------------------------------------------------------------------------- 1 | %! TEX root = main.tex 2 | 3 | The previous section presented successful verification with a Taylor-Green vortex having an analytical solution, and validation of the solvers with the $Re=40$ cylinder flow, which exhibits a steady-state solution. 4 | Those results give confidence that the solvers are correctly solving the Navier-Stokes equations and are able to model vortical flow. In this section, we study the case of cylinder flow at $Re=200$, exhibiting vortex shedding. 5 | 6 | \subsection{Case configurations} 7 | 8 | The computational domain is $[-8$, $25]$ $\times$ $[-8$, $8]$ for $x$ and $y$, and $t\in[0$, $200]$. 9 | Other boundary conditions, initial conditions, and density were the same as those in section \ref{sec:val_2d_cylinder_re40}. 10 | The non-dimensional kinematic viscosity is set to $0.005$ to make the Reynolds number $200$. 11 | 12 | The PetIBM simulation was done with a grid resolution of $1485$ $\times$ $720$ and $\Delta t = \num{5e-3}$. 13 | The hardware used and the configurations of the linear solvers were the same as described in section \ref{sec:val_2d_cylinder_re40}.
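As a quick sanity check on these parameters, the Reynolds number follows directly from the non-dimensional quantities. A minimal sketch, where the unit inflow speed and unit cylinder diameter are assumptions based on the conventional non-dimensionalization of this benchmark, not values restated here:

```python
# Sanity check of the case set-up: Re = U * D / nu.
U = 1.0      # non-dimensional inflow speed (assumed)
D = 1.0      # non-dimensional cylinder diameter (assumed)
nu = 0.005   # non-dimensional kinematic viscosity (from the text)
Re = U * D / nu
print(Re)  # -> 200.0
```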
14 | 15 | As for the PINN solvers, in addition to the steady and unsteady solvers, a third PINN solver was used: a data-driven PINN. 16 | In this case, the PINN solver receives data from a PetIBM simulation that includes fully developed vortex shedding, with the aim of testing whether a PINN can keep generating vortices by following the dynamics in time ranges that have no observation data. 17 | The data-driven PINN solver is the same as the unsteady PINN solver but replaces the three initial condition losses ($L_4$ to $L_6$) with: 18 | 19 | \begin{equation}\label{eq:data-driven-loss} 20 | \left\{ 21 | \begin{array}{l} 22 | L_4 = u - u_{data}\\ 23 | L_5 = v - v_{data}\\ 24 | L_6 = p - p_{data}\\ 25 | \end{array} 26 | \right. 27 | ,\text{ if } 28 | \begin{array}{l} 29 | \vec{x} \in \Omega \\ 30 | t \in T_{data} 31 | \end{array} 32 | \end{equation} 33 | where the subscript $data$ denotes data from a PetIBM simulation. 34 | $T_{data}$ denotes the time range for which PetIBM simulation data is fed to the data-driven PINN solver. 35 | In the range $T_{data} \equiv \left[125, 140\right]$ the flow has achieved fully developed vortex shedding, and the data-driven PINN solver should keep generating vortices. 36 | The PetIBM simulation outputted transient snapshots every 1 second in simulation time, hence the data fed to the data-driven PINN solver consisted of 16 snapshots. 37 | These snapshots contain around $3$ full periods of vortex shedding. 38 | The total number of spatial-temporal points in these snapshots is around $\num{17000000}$, of which we used only $\num{6400}$ per iteration, meaning each data batch was repeated approximately every $\num{2650}$ iterations. 39 | Except for replacing the IC losses with a data-driven approach, all other loss terms and the code in the data-driven PINN solver remain the same as in the unsteady PINN solver.
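In code, replacing the initial-condition losses with data losses amounts to drawing random mini-batches of spatio-temporal points from the snapshots and penalizing the misfit of the network predictions there. A minimal numpy sketch with hypothetical stand-in arrays, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the flattened PetIBM snapshot fields; the
# names and sizes are illustrative only.
u_data = rng.normal(size=1000)
v_data = rng.normal(size=1000)
p_data = rng.normal(size=1000)

def data_losses(preds, refs):
    """Mean-squared data losses standing in for L4, L5, and L6."""
    return [float(np.mean((p - r) ** 2)) for p, r in zip(preds, refs)]

# Draw one mini-batch of reference values, as done each iteration.
idx = rng.integers(0, u_data.size, size=64)
batch = (u_data[idx], v_data[idx], p_data[idx])
L4, L5, L6 = data_losses(batch, batch)  # a perfect fit gives zero loss

# With ~17 million points and 6400 points per iteration, each point is
# revisited on average once every ~2656 iterations, matching the text.
recurrence = 17_000_000 / 6_400
```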
40 | 41 | Note that for the data-driven PINN solver, the PDE and boundary condition losses were evaluated only in $t\in[125$, $200]$ because we treated the PetIBM snapshots as if they were initial conditions. 42 | Another note concerns the use of the steady PINN solver. 43 | The $Re=200$ cylinder flow is not expected to have a steady-state solution. 44 | However, it is not uncommon to see a steady-state flow solver used for unsteady flows for engineering purposes, especially two or three decades ago when computing power was much more restricted. 45 | 46 | The MLP network used in all PINN solvers has 6 hidden layers and 512 neurons per layer. 47 | The configurations of spatial-temporal points are the same as those in section \ref{sec:val_2d_cylinder_re40}. 48 | The Adam optimizer is also the same, except that now we ran for \num{1000000} optimization iterations. 49 | The parameters of the cyclical learning rate scheduler are now: $\eta_{low}=10^{-6}$, $\eta_{high}=0.01$, $N_c=5000$, and $\gamma=0.9999915$. 50 | The hardware used was one NVIDIA A100 GPU for all PINN runs. 51 | 52 | \subsection{Results}\label{sec:cylinder-re200-results} 53 | 54 | The overall run times for the steady, unsteady, and data-driven PINN solvers were about 28 hours, 31 hours, and 33.5 hours using one A100 GPU. 55 | The PetIBM simulation, on the other hand, took around 1.7 hours with a K40 GPU, which is five generations behind in computing technology. 56 | 57 | Figure \ref{fig:cylinder-re200-pinn-loss} shows the aggregated loss convergence history of all cases. 58 | \begin{figure} 59 | \centering% 60 | \includegraphics[width=0.95\columnwidth]{cylinder-2d-re200/loss-hist}% 61 | \caption{% 62 | Training convergence history of 2D cylinder flow at $Re=\num{200}$ w/ PINNs 63 | } 64 | \label{fig:cylinder-re200-pinn-loss}% 65 | \end{figure} 66 | It shows both the losses and the run times of the three PINN solvers.
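As an aside on the training set-up above, a cyclical learning-rate schedule with these parameters can be sketched as follows. This is one plausible form, the exponentially decaying "exp_range" cyclical policy of Smith (2017); whether it matches the authors' exact implementation is an assumption:

```python
import math

def cyclical_lr(it, eta_low=1e-6, eta_high=1e-2, n_c=5000, gamma=0.9999915):
    """Exponentially decaying triangular cyclical schedule ('exp_range'
    policy): the rate oscillates between eta_low and a peak that shrinks
    by a factor gamma every iteration.  n_c is the half-cycle length."""
    cycle = math.floor(1 + it / (2 * n_c))
    x = abs(it / n_c - 2 * cycle + 1)
    scale = max(0.0, 1.0 - x) * gamma ** it
    return eta_low + (eta_high - eta_low) * scale

# The schedule starts at eta_low, peaks near eta_high after n_c
# iterations, and the peaks decay over the 10^6-iteration run.
early_peak = cyclical_lr(5_000)
late_peak = cyclical_lr(1_005_000)
```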
67 | As seen in section \ref{sec:val_2d_cylinder_re40}, the unsteady PINN solver converges to a higher total loss than the steady PINN solver does. 68 | Also, the data-driven PINN solver converges to an even higher total loss. 69 | However, it is unclear at this point if having a higher loss means a higher prediction error in the data-driven PINN, because we replaced the initial condition losses with 16 snapshots from PetIBM and only ran the data-driven PINN solver for $t\in[125, 200]$. 70 | 71 | Figure \ref{fig:cylinder-re200-drag-lift} shows the drag and lift coefficients versus simulation time. 72 | \begin{figure}[t] 73 | \centering% 74 | \includegraphics[width=0.95\columnwidth]{cylinder-2d-re200/drag-lift-coeffs}% 75 | \caption{% 76 | Drag and lift coefficients of 2D cylinder flow at $Re=\num{200}$ w/ PINNs 77 | } 78 | \label{fig:cylinder-re200-drag-lift}% 79 | \end{figure} 80 | The coefficients from the steady case are shown as just a horizontal line since there is no time variable in this case. 81 | The unsteady case, to our surprise, does not exhibit oscillations, meaning it results in no vortex shedding, even though it fits well with the PetIBM result before vortex shedding starts (at about $t=75$). 82 | Comparing the coefficients of the steady and unsteady PINNs with PetIBM's values before shedding, we believe the unsteady PINN in this particular case behaves just like a steady solver. 83 | This is supported by the values in table \ref{table:cylinder-2d-re200-cd}, in which we compare $C_D$ against values published in the literature for both unsteady and steady CFD simulations.
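For reference, the drag and lift coefficients compared here follow the standard normalization. A minimal sketch, where the unit density, inflow speed, and diameter are assumptions based on the non-dimensional set-up, not values restated here:

```python
def force_coefficients(f_drag, f_lift, rho=1.0, u_inf=1.0, d=1.0):
    """Standard normalization of the per-unit-span forces on the body:
    C_D = F_D / (0.5 * rho * U^2 * D), and likewise for C_L."""
    q = 0.5 * rho * u_inf**2 * d  # dynamic pressure times diameter
    return f_drag / q, f_lift / q

# With all reference quantities equal to one, the coefficients are just
# twice the forces; e.g. a drag force of 0.69 would map to C_D = 1.38.
cd, cl = force_coefficients(0.69, 0.0)
```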
84 | \begin{table} 85 | \centering% 86 | \begin{threeparttable}[b] 87 | \begin{tabular}{lc} 88 | \toprule 89 | & $C_D$ \\ 90 | \midrule 91 | PetIBM & 1.38 \\ 92 | Steady PINN & 0.95 \\ 93 | Unsteady PINN & 0.95 \\ 94 | Deng et al., 2007\cite{deng_hydrodynamic_2007}\tnote{1} & 1.25 \\ 95 | Rajani et al., 2009\cite{Rajani2009}\tnote{1} & 1.34 \\ 96 | Gushchin \& Shchennikov, 1974\cite{gushchin_numerical_1974}\tnote{2} & 0.97 \\ 97 | Fornberg, 1980\cite{fornberg_numerical_1980}\tnote{2} & 0.83 \\ 98 | \bottomrule 99 | \end{tabular}% 100 | \begin{tablenotes} 101 | \footnotesize 102 | \item [1] Unsteady simulations. 103 | \item [2] Steady simulations. 104 | \end{tablenotes} 105 | \caption{% 106 | PINNs, 2D Cylinder, $Re=200$: validation of drag coefficients.% 107 | The data-driven case is excluded because it has neither an obvious periodic state nor a steady-state solution.% 108 | }% 109 | \label{table:cylinder-2d-re200-cd} 110 | \end{threeparttable} 111 | \end{table}% 112 | The $C_D$ obtained from the unsteady PINN is the same as that from the steady PINN and close to those of the steady CFD simulations. 113 | 114 | As for the data-driven case, its temporal domain is $t\in[125$, $200]$, so the coefficients' trajectories start from $t=125$. 115 | The result, again unexpected to us, only exhibits shedding in the timeframe with PetIBM data, i.e., $t\in[125$, $140]$. 116 | This result also implies that data-driven PINNs may be more difficult to train than data-free PINNs and regular data-only model fitting. 117 | Even in the time range with PetIBM data, the data-driven PINN solver is not able to reach the given maximal $C_L$, and the $C_D$ is visibly off from the given data. 118 | After $t=140$, the trajectories quickly fall back to the no-shedding pattern, though they still deviate from the trajectories of the steady and unsteady PINNs.
119 | Given the loss magnitudes shown in figure \ref{fig:cylinder-re200-pinn-loss}, the deviations of $C_D$ and $C_L$ in the data-driven PINN may be caused by insufficient training. 120 | However, since figure \ref{fig:cylinder-re200-pinn-loss} shows that the data-driven PINN had already converged, other optimization techniques or hyperparameter tuning may be required to further reduce the loss. 121 | Insufficient training, though, only explains why the data-driven case deviates from the PetIBM data in $t \in [125, 140]$ and from the other two PINNs for $t > 140$. 122 | Even with better optimization and eventually a lower loss, based on the trajectories, we do not believe the shedding would continue after $t=140$. 123 | 124 | To examine how the transient flow develops, we visually compared several snapshots of the flow fields from PetIBM, the unsteady PINN, and the data-driven PINN, shown in figures \ref{fig:cylinder-re200-pinn-contours-u}, \ref{fig:cylinder-re200-pinn-contours-v}, \ref{fig:cylinder-re200-pinn-contours-p}, and \ref{fig:cylinder-re200-pinn-contours-omega_z}. 125 | We also present the flow contours from the steady PINN in figure \ref{fig:cylinder-re200-steady-pinn-contours} for reference. 126 | 127 | \begin{figure*} 128 | \centering% 129 | \includegraphics[width=0.95\textwidth]{cylinder-2d-re200/contour-comparison-u}% 130 | \caption{% 131 | $u$-velocity comparison of 2D cylinder flow at $Re=\num{200}$ between PetIBM, unsteady PINN, and data-driven PINN. Figure under CC-BY \cite{chuang_barba_2023_fig12}. 132 | } 133 | \label{fig:cylinder-re200-pinn-contours-u}% 134 | \end{figure*} 135 | 136 | \begin{figure*} 137 | \centering% 138 | \includegraphics[width=0.95\textwidth]{cylinder-2d-re200/contour-comparison-v}% 139 | \caption{% 140 | $v$-velocity comparison of 2D cylinder flow at $Re=\num{200}$ between PetIBM, unsteady PINN, and data-driven PINN.
}
\label{fig:cylinder-re200-pinn-contours-v}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/contour-comparison-p}%
\caption{%
Pressure comparison of 2D cylinder flow at $Re=\num{200}$ between PetIBM, the unsteady PINN, and the data-driven PINN.
}
\label{fig:cylinder-re200-pinn-contours-p}%
\end{figure*}

\begin{figure*}
\centering%
\includegraphics[width=0.95\textwidth]{cylinder-2d-re200/contour-comparison-omega_z}%
\caption{%
Vorticity ($\omega_z$) comparison of 2D cylinder flow at $Re=\num{200}$ between PetIBM, the unsteady PINN, and the data-driven PINN.
}
\label{fig:cylinder-re200-pinn-contours-omega_z}%
\end{figure*}

\begin{figure}[t]
\centering%
\includegraphics[width=0.95\columnwidth]{cylinder-2d-re200/contour-comparison-steady}%
\caption{%
Contours of 2D cylinder flow at $Re=\num{200}$ with the steady PINN.
}
\label{fig:cylinder-re200-steady-pinn-contours}%
\end{figure}

At $t=10$, we can see the wake is still developing, and the unsteady PINN visually matches PetIBM.
At $t=50$, the contours again show the unsteady PINN matching the PetIBM simulation before shedding.
These observations verify that the unsteady PINN is indeed solving the unsteady governing equations.
The data-driven PINN does not show meaningful results because $t=10$ and $t=50$ lie outside its temporal domain.
These results also indicate that the data-driven PINN is not capable of extrapolating backward in time in dynamical systems.

At $t=140$, vortex shedding has already occurred.
However, the unsteady PINN solution does not show any shedding.
Moreover, the unsteady PINN's contour plot is similar to the steady case in figure \ref{fig:cylinder-re200-steady-pinn-contours}.
$t=140$ is also the last snapshot we fed into the data-driven PINN for training.
The contours of the data-driven PINN at this time show that it could at least qualitatively capture the shedding, as expected.
At $t=144$, just $4$ time units after the last snapshot fed to the data-driven PINN, it has already stopped generating new vortices.
The existing vortex can be seen moving toward the boundary, and the wake is gradually returning to the steady-state wake.
The flow at $t=190$ further confirms that the data-driven PINN's behavior is tending toward that of the unsteady PINN, which behaves like a steady-state solver.
The solutions from the unsteady PINN at these times, on the other hand, remain steady.

Figure \ref{fig:cylinder-re200-pinn-vort-gen} shows the vorticity from PetIBM and the data-driven PINN in the vicinity of the cylinder for $t \in [140, 142.5]$, which contains a half cycle of vortex shedding.
\begin{figure}
\centering%
\includegraphics[width=\columnwidth]{cylinder-2d-re200/vorticity_z}%
\caption{%
Vorticity generation near the cylinder for 2D cylinder flow at $Re=\num{200}$ at $t=140$, $141$, $142$, and $142.5$ with the data-driven PINN.
}
\label{fig:cylinder-re200-pinn-vort-gen}%
\end{figure}
These contours compare how vorticity was generated right after we stopped feeding PetIBM data into the data-driven PINN.
These comparisons might shed some light on why the data-free PINN cannot generate vortex shedding and why the data-driven PINN stops doing so after $t=140$.

At $t=140$, PetIBM and the data-driven PINN show visually indistinguishable vorticity contours.
This is expected, as the data-driven PINN has training data from PetIBM at this time.
At $t=141$ in PetIBM's results, the main clockwise vortex (the blue circular pattern in the region $[1, 2]\times[-0.5, 0.5]$) moves downstream.
It slows down the downstream $u$ velocity and accelerates the $v$ velocity in $y<0$.
Intuitively, we can treat the main clockwise vortex as a blocker that blocks the flow in $y<0$ and forces it to move upward.
The net effect is the generation of a counterclockwise vortex at around $x\approx 1$ and $y \in [-0.5, 0]$.
This new counterclockwise vortex in turn generates a small but strong secondary clockwise vortex on the cylinder surface in $y\in[-0.5, 0]$.
The result of the data-driven PINN at $t=141$, on the other hand, shows that the main clockwise vortex becomes more diffuse and weaker, compared with that in PetIBM.
It is possible that the main clockwise vortex is not strong enough to slow down the flow in $y<0$ or to bring the flow upward.
The downstream flow in $y<0$ (the red arm-like pattern below the cylinder) thus does not change direction and keeps going straight in the $x$ direction.
In the results at $t=142$ and $t=142.5$ from PetIBM, the flow completes a half cycle.
That is, the flow pattern at $t=142.5$ is an upside-down version of that at $t=140$.
The results from the data-driven PINN, however, show no new vortices, and the wake becomes more like that of a steady flow.
These observations might indicate that the PINN is either diffusive or dissipative (or both).

Next, we examined the Q-criterion in the same vicinity of the cylinder for $t\in[140, 142.5]$, shown in figure \ref{fig:cylinder-re200-pinn-qcriterion}.
\begin{figure}
\centering%
\includegraphics[width=\columnwidth]{cylinder-2d-re200/qcriterion}%
\caption{%
Q-criterion near the cylinder for 2D cylinder flow at $Re=\num{200}$ at $t=140$, $141$, $142$, and $142.5$ with the data-driven PINN.
}
\label{fig:cylinder-re200-pinn-qcriterion}%
\end{figure}
The Q-criterion is defined as follows \cite{jeong_identification_1995}:
\begin{equation}
Q \equiv \frac{1}{2}\left(\lVert \Omega \rVert^2 - \lVert S \rVert^2\right),
\end{equation}
where $\Omega\equiv\frac{1}{2}\left(\nabla\vec{u}-\nabla\vec{u}^\mathsf{T}\right)$ is the vorticity tensor;
$S\equiv\frac{1}{2}\left(\nabla\vec{u}+\nabla\vec{u}^\mathsf{T}\right)$ is the strain-rate tensor;
and $\nabla\vec{u}$ is the velocity-gradient tensor.
A region with $Q > 0$ is identified as a vortex structure in the fluid flow, that is, a region where the rotation rate exceeds the strain rate.

We observe that vortices in the data-driven PINN are diffusive and could be dissipative.
Moreover, judging by the locations of the vortex centers, vortices also move more slowly in the PINN solution than in PetIBM's.
The edges of the vortices move at a different speed from that of the vortex centers in the PINN case.
This might hint at the existence of numerical dispersion in the PINN solver.

\subsection{Dynamical Modes and Koopman Analysis}\label{sec:cylinder-re200-koopman}

We conducted spectral analysis on the cylinder flow to extract the frequencies embedded in the simulation results.
Fluid flow is a dynamical system, and how information (or signals) propagates in time plays an important role.
Information with different frequencies advances at different speeds in the temporal direction, and the superposition of information forms complicated flow patterns over time.
Spectral analysis reveals a set of modes, each associated with a fixed oscillation frequency and a decay or growth rate, called {\it dynamic modes} in fluid dynamics.
By comparing the dynamic modes in the solutions obtained with PINNs and PetIBM, we may examine how well or how badly the data-driven PINN learned information at different frequencies.
Koopman analysis is a method to achieve such a spectral analysis for dynamical systems.
Please refer to {\it the method of snapshots} in reference \cite{chen_variants_2012} and to reference \cite{rowley_spectral_2009} for the theory and the algorithms used in this work.

We analyzed the results from PetIBM and the data-driven PINN in $t\in[125, 140]$, which contains about three full cycles of vortex shedding.
A total of $76$ solution snapshots were used from each solver.
The time spacing is $\Delta t = 0.2$.
The Koopman analysis thus results in $75$ modes.
Since the snapshots cover three full cycles, we would expect only $25$ distinct frequencies and $25$ nontrivial modes---only $25$ out of the $76$ snapshots are distinct.
However, this expectation holds only when the data are free of noise and numerical errors and when the snapshots span exactly three periods.
We would see more than $25$ distinct frequencies and modes if the data were not ideal.
In $t \in [125, 140]$, the data-driven PINN was trained against PetIBM's data, so we expected to see similar spectral results from the two solvers.

To put it simply, each dynamic mode is identified by a complex number.
Taking the logarithm of the complex number's absolute value gives the mode's growth rate, and the angle of the complex number corresponds to the mode's frequency.
Figure \ref{fig:cylinder-re200-koopman-eig-dist} shows the distributions of the dynamic modes on the complex plane.
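Concretely, the mapping from a discrete-time eigenvalue to a growth rate and a frequency can be sketched in a few lines of Python. This is an illustrative sketch, not the analysis code used in this work; the function name is ours, $\Delta t = 0.2$ matches the snapshot spacing above, and we assume unit cylinder diameter and freestream speed, so the frequency equals the Strouhal number.

```python
import numpy as np

def mode_timescales(eigvals, dt=0.2):
    """Convert discrete-time (Koopman) eigenvalues to growth rates and
    frequencies. With unit diameter and unit freestream speed, the
    frequency is directly the Strouhal number."""
    eigvals = np.asarray(eigvals, dtype=complex)
    growth = np.log(np.abs(eigvals)) / dt        # 0 => mode sits on the unit circle
    freq = np.angle(eigvals) / (2 * np.pi * dt)  # signed; a conjugate pair shares |St|
    return growth, freq

# A neutrally stable mode oscillating at St = 0.2 (one cycle per 5 time units):
lam = np.exp(2j * np.pi * 0.2 * 0.2)             # exp(i 2 pi St dt), |lam| = 1
growth, freq = mode_timescales([lam])            # growth ~ 0, freq ~ 0.2
```

An eigenvalue strictly inside the unit circle ($\lvert\lambda\rvert<1$) yields a negative growth rate, corresponding to a damped mode.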
\begin{figure}
\centering%
\includegraphics[width=\columnwidth]{cylinder-2d-re200/koopman_eigenvalues_complex}%
\caption{%
Distribution of the Koopman eigenvalues on the complex plane for 2D cylinder flow at $Re=\num{200}$, obtained with PetIBM and with the data-driven PINN. Figure under CC-BY \cite{chuang_barba_2023_fig19}.
}
\label{fig:cylinder-re200-koopman-eig-dist}%
\end{figure}
The color of each dot represents the normalized strength of the corresponding mode, which is also obtained from the Koopman analysis.
The star marker denotes the mode with a frequency of zero, i.e., a steady or time-independent mode.
This mode usually has much higher strength than the others, so we excluded it from the color map and annotated its strength directly.
Koopman analysis delivers dynamic modes in complex-conjugate pairs, so the modes are symmetric with respect to the real axis.
The two modes of a conjugate pair have frequencies of opposite sign mathematically but the same physical frequency.

We also plotted a circle with a radius of one on each plot.
As the flow has already reached a fully periodic regime, the growth rates should be zero because no mode becomes stronger or weaker.
In other words, all modes were expected to fall on this circle in the complex plane.
If a mode falls inside the circle, it has a negative growth rate, and its contribution to the solution diminishes over time.
Similarly, a mode falling outside the circle has a positive growth rate and becomes stronger over time.

On the complex plane, all the modes captured by PetIBM (the left plot in figure \ref{fig:cylinder-re200-koopman-eig-dist}) fall onto or very close to the circle.
The plot shows $75$---rather than $25$---distinct $\lambda_j$ and modes, but the modes are evenly clustered into $25$ groups.
Each group has three modes, among which one or two fall on the circle, while the remaining one(s) fall inside but very close to the circle.
Modes within each group have a similar frequency, and the one precisely on the circle has significantly higher strength than the others (unless all modes in the group are weak).
Due to the numerical errors in PetIBM's solutions, data in one vortex period are similar to, but not exactly the same as, those in another period.
The strong modes falling precisely on the circle may represent the period-averaged flow patterns and are the $25$ modes we expected earlier.
The effect of numerical errors was filtered out from these modes.
We call these $25$ modes primary modes and all other modes secondary modes.
Secondary modes are mostly weak and may come from the numerical errors in the PetIBM simulation.
The plot shows that these secondary modes are slightly dispersive but non-increasing over time, which is reasonable because the numerical schemes in PetIBM are stable.

As for the PINN result (the right plot in figure \ref{fig:cylinder-re200-koopman-eig-dist}), the mode distribution is not as structured as with PetIBM.
It is hard to tell whether all $25$ expected modes also exist in this plot.
However, we observe that at least the top $7$ primary modes (the steady mode, two purple dots, and four orange dots on the circle) also exist in the PINN case.
Secondary modes spread out more widely on and inside the circle, compared with the clustered modes in PetIBM.
We believe this means that the PINN is more numerically dispersive and noisy.
The frequencies of many of these secondary modes do not exist in PetIBM.
So one possible source of these additional frequencies and modes may be the PINN method itself.
The cause could be insufficient training, or the neural network itself may be inherently dispersive.
However, the secondary modes on the circle are weak.
We suspect that their contribution to the solution may be trivial.

A more concerning observation is the presence of damped modes (modes that fall inside the circle).
These modes have negative growth rates and hence are damped over time.
We believe these modes contribute significantly to the solution because their strengths are substantial.
The existence of the damped modes also means that the PINN's predictions show larger discrepancies from one vortex period to the next, compared with the PetIBM simulation.
In addition, the flow pattern in the PINN would keep changing after $t=140$.
These damped modes may be the culprits causing the PINN solution to quickly fall back to a non-oscillating flow pattern for $t>140$.
We may consider these errors as numerical dissipation.
However, whether these errors came from insufficient training or were inherent in the PINN is unclear.

Note that the spectral analysis was done against data in $t\in[125, 140]$.
It does not mean the solutions in $t>140$ also have the same spectral characteristics: the flow system is nonlinear, but the Koopman analysis uses linear approximations \cite{rowley_spectral_2009}.

Figure \ref{fig:cylinder-re200-koopman-mode-strength} shows mode strengths versus frequencies.
\begin{figure}
\centering%
\includegraphics[width=\columnwidth]{cylinder-2d-re200/koopman_mode_strength}%
\caption{%
Mode strengths versus mode frequencies for 2D cylinder flow at $Re=\num{200}$.
Note that we use a log scale for the vertical axis.
}
\label{fig:cylinder-re200-koopman-mode-strength}%
\end{figure}
The plots use the nondimensional frequency, i.e., the Strouhal number, on the horizontal axes.
For a concise visualization, we plotted only modes with positive numerical frequencies.
The plots in this figure support the same observations as in the previous paragraphs: the data-driven PINN is more dispersive and dissipative.

An observation that is now clearer from figure \ref{fig:cylinder-re200-koopman-mode-strength} is the strength distribution.
In PetIBM's case, strengths decrease exponentially from the steady mode (i.e., $St=0$) to high-frequency modes.
One can deduce a similar conclusion from PetIBM's simulation result.
The vortex shedding is dominated by a single frequency (this frequency is $St\approx 0.2$ because $t\in[125, 140]$ contains three periods).
Therefore, the flow should be dominated by the steady mode and a mode with a frequency close to $St=0.2$.
We can indeed verify this statement for PetIBM's case in figure \ref{fig:cylinder-re200-koopman-mode-strength}: the primary modes at $St=0$ and $St\approx 0.2$ are much stronger than the others.
The strength of the immediately following mode, i.e., $St\approx 0.4$, drops by an order of magnitude.
Note the use of a logarithmic scale.
If we re-plotted the figure on a linear scale, only the modes at $St=0$ and $St\approx 0.2$ would be visible.

The strength distribution in the PINN case also shows that $St=0$ and $St\approx 0.2$ are strong.
However, they are not the only dominating modes.
Some other modes also have strengths of around \num{e-1}.
As discussed in the previous paragraphs, these additional strong modes are damped modes.
We also observed that some damped modes have the same frequencies as primary modes.
For example, the secondary modes at $St=0$ and $St\approx 0.2$ are damped modes.
Note that for $St=0$, if a mode is damped, then it is no longer a steady mode because its magnitude changes with time, though it remains non-oscillating.

Table \ref{table:koopman-petibm} summarizes the top $4$ modes (ranked by their strengths) in PetIBM's spectral result.
\begin{table}
\begin{threeparttable}[b]
\begin{tabular}{cccc}
\toprule
$St$ & Strength & Growth Rate & Contours \\
\midrule
0 & 0.96 & \num{1.3e-7} & Figure \ref{fig:cylinder-re200-koopman-petibm-1st}\\
0.201 & 0.20 & \num{-4.3e-7} & Figure \ref{fig:cylinder-re200-koopman-petibm-2nd}\\
0.403 & 0.04 & \num{1.7e-6} & Figure \ref{fig:cylinder-re200-koopman-petibm-3rd}\\
0.604 & 0.03 & \num{2.7e-6} & Figure \ref{fig:cylinder-re200-koopman-petibm-4th}\\
\bottomrule
\end{tabular}%
\caption{%
2D Cylinder, $Re=200$: top 4 primary dynamic modes (sorted by strength) for PetIBM.%
}%
\label{table:koopman-petibm}
\end{threeparttable}
\end{table}%
For reference, these modes' contours are provided in the appendix, as denoted in the table.
The dynamic modes are complex-valued, and the contours include both the real and the imaginary parts.
Note that the growth rates of these $4$ modes are not exactly zero but of the order of \num{e-7} to \num{e-6}.
We were unsure whether we could treat them as zero at these orders of magnitude.
If not, and if they do cause the primary modes to be slightly damped or amplified over time, then we believe they also offer a reasonable explanation for the existence of the other $50$ non-primary modes in PetIBM---to compensate for the loss or the gain in the primary modes.

Table \ref{table:koopman-pinn-primary} lists the PINN solution's top $4$ primary modes, which are the same as those in table \ref{table:koopman-petibm}.
\begin{table}
\begin{threeparttable}[b]
\begin{tabular}{cccc}
\toprule
$St$ & Strength & Growth Rate & Contours \\
\midrule
0 & 0.97 & \num{-2.2e-6} & Figure \ref{fig:cylinder-re200-koopman-pinn-primary-1st}\\
0.201 & 0.18 & \num{-9.4e-6} & Figure \ref{fig:cylinder-re200-koopman-pinn-primary-2nd}\\
0.403 & 0.03 & \num{2.3e-5} & Figure \ref{fig:cylinder-re200-koopman-pinn-primary-3rd}\\
0.604 & 0.03 & \num{-8.6e-5} & Figure \ref{fig:cylinder-re200-koopman-pinn-primary-4th}\\
\bottomrule
\end{tabular}%
\caption{%
2D Cylinder, $Re=200$: top 4 primary dynamic modes (sorted by strength) for the PINN.%
}%
\label{table:koopman-pinn-primary}
\end{threeparttable}
\end{table}%
Table \ref{table:koopman-pinn-damped} shows the top $4$ secondary modes in the PINN's result.
\begin{table}
\begin{threeparttable}[b]
\begin{tabular}{cccc}
\toprule
$St$ & Strength & Growth Rate & Contours \\
\midrule
1.142 & 0.12 & \num{-0.24} & Figure \ref{fig:cylinder-re200-koopman-pinn-damped-1st}\\
1.253 & 0.08 & \num{-0.22} & Figure \ref{fig:cylinder-re200-koopman-pinn-damped-2nd}\\
0.633 & 0.05 & \num{-0.14} & Figure \ref{fig:cylinder-re200-koopman-pinn-damped-3rd}\\
0.761 & 0.04 & \num{-0.13} & Figure \ref{fig:cylinder-re200-koopman-pinn-damped-4th}\\
\bottomrule
\end{tabular}%
\caption{%
2D Cylinder, $Re=200$: top 4 damped dynamic modes (sorted by strength) for the PINN.%
}%
\label{table:koopman-pinn-damped}
\end{threeparttable}
\end{table}%
The corresponding contours are also included in the appendix and denoted in the tables for readers' reference.
The growth rates of the primary modes in the PINN's result are around \num{e-6} to \num{e-5}, slightly larger in magnitude than those of PetIBM.
If these orders of magnitude cannot be deemed zero, then these primary modes are slightly damped and dissipative, though the major source of the numerical dissipation may still be the secondary modes in table \ref{table:koopman-pinn-damped}.

% vim:ft=tex:
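For readers who wish to experiment with this kind of spectral analysis, the following minimal sketch applies a standard SVD-based dynamic mode decomposition to synthetic snapshots (a steady component plus a traveling wave at $St=0.2$, sampled like our data: $76$ snapshots with $\Delta t=0.2$). It is not the method-of-snapshots implementation of reference \cite{chen_variants_2012} used in this work; the function and variable names are ours.

```python
import numpy as np

def dmd_eigenvalues(snapshots, dt, rank):
    """SVD-based dynamic mode decomposition: eigenvalues of the reduced
    best-fit linear operator A with X2 ~ A X1, where the columns of X1
    and X2 are consecutive snapshots."""
    x1, x2 = snapshots[:, :-1], snapshots[:, 1:]
    u, s, vh = np.linalg.svd(x1, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank]          # rank truncation
    a_tilde = u.conj().T @ x2 @ vh.conj().T @ np.diag(1.0 / s)
    eigvals = np.linalg.eigvals(a_tilde)
    growth = np.log(np.abs(eigvals)) / dt                # ~0 on the unit circle
    freq = np.angle(eigvals) / (2 * np.pi * dt)          # St, with unit D and U
    return eigvals, growth, freq

# Synthetic snapshots: a steady mode plus a traveling wave at St = 0.2.
dt = 0.2
t = np.arange(76) * dt
x = np.linspace(0.0, 1.0, 64)
snaps = (1.0
         + np.outer(np.cos(2 * np.pi * x), np.cos(2 * np.pi * 0.2 * t))
         + np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * 0.2 * t)))
_, growth, freq = dmd_eigenvalues(snaps, dt, rank=3)
# Expect frequencies {0, +0.2, -0.2} (a conjugate pair) with growth rates ~ 0.
```

Because the synthetic data are noise-free and exactly rank three, the recovered eigenvalues sit on the unit circle; noisy or under-trained data would instead scatter eigenvalues inside the circle, as in the PINN case discussed above.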