From ROMs to Discretization Methods
This post tries to clarify the connections between reduced-order models (ROMs) and finite element methods, especially in terms of the weighted residual and Galerkin projection methods. Finite volume and finite difference methods are also encapsulated in this framework.
Intuitive Introduction
Better than linear regression
Linear regression is a widely used method that seeks the 'best' linear relation between an input variable and an output variable, say \(x\) and \(y\).
As shown below, the linear regression result obtained by taking \(x\) as input and \(y\) as output differs from the one obtained the other way around. The differences between the actual value of \(y\) and the predicted value are plotted for a few samples as well.

The reason for the difference is clear: the direction along which the ordinary least squares (OLS) error is minimized differs between the two coordinate choices. If the two coordinates are correlated with each other, the regression outputs typically become sensitive to noise.
If we could develop an algorithm to find a new set of coordinates such that the orthogonal error is made as small as possible along one direction of the new coordinate system, we would have a better regression. Luckily, such a method exists: PCA. As shown below, the orthogonal error to the model line is minimized.

The comparison is shown below; not surprisingly, the PCA "regression" lies between the two traditional results.

Coordinate shifting and weighted inner products are ignored here, i.e. the variables are assumed to be normalized.
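To make the comparison concrete, here is a minimal numpy sketch on synthetic data (the slope, noise level, and sample size are illustrative assumptions, not from the source). It fits OLS in both directions and extracts the PCA direction from the SVD of the centered data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.8 * x + 0.3 * rng.normal(size=200)     # correlated, noisy data

# OLS y-on-x minimizes vertical errors; OLS x-on-y minimizes horizontal ones.
slope_yx = np.polyfit(x, y, 1)[0]
slope_xy = 1.0 / np.polyfit(y, x, 1)[0]      # re-expressed as dy/dx

# PCA: the leading right singular vector of the centered data matrix is the
# direction that minimizes the *orthogonal* errors (total least squares).
data = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(data, full_matrices=False)
slope_pca = Vt[0, 1] / Vt[0, 0]

print(slope_yx, slope_pca, slope_xy)         # PCA slope lies between the two
```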
PCA and model reduction
Slightly different from the last subsection: intuitively, what PCA does is find a rotation of the coordinates such that as much variance as possible is represented in one direction of the new coordinate system.

Actually, the main application of PCA is not regression but model reduction. In a random system, variance means information, and by the intuitive definition above, PCA packs the most variance, and thus the most information, into one direction. So if one wants to represent/approximate the data in a 1D coordinate system instead of the 2D one used before, the best coordinate is the dominant direction found by PCA. The next step is to project the 2D data onto that coordinate.
Of course, nobody reduces data from 2D to 1D; usually people talk about order reduction in the contexts of
- \(M\)-dimensional vector space -> \(N\) (100-1000) dimensional vector space
- \(M\)-dimensional random variables -> \(N\) (100-1000) dimensional random variables
- Infinite-dimensional space (Hilbert space) -> finite-dimensional system
- Discretization of PDEs
In the above contexts, the method is respectively called SVD, PCA, POD, and {spectral, spectral collocation, spectral-element, and finite-element} methods, but the underlying ideas are the same.
Technically, SVD denotes the algorithm used by PCA and POD to find the basis of the decomposition. For the discretization methods, a human-defined basis is selected instead of one obtained by SVD.
Here are more names for SVD in other fields: Hotelling analysis, empirical component analysis, quasiharmonic modes, empirical eigenfunction decomposition, and so on.
Model reduction
The same approach can be generalized as:
- define inner product
- define the representation in a subspace spanned by orthogonal basis/trial vectors, and the resulting residual/error relative to the original space
- define test/weight vectors and require them to be orthogonal (or obliquely orthogonal) to the error (let the inner product of the test vectors and the error equal 0)
- rearrange and define the representations of the basis, parameters, and projection matrix
We are going to apply this process to different kinds of spaces.
Finite dimensional vector space
Consider a vector \(\mathbf{X}\in\mathbb{R}^m\), \(\mathbf{X}=\left[\begin{array}{llll}x_1 & x_2 & \ldots & x_m\end{array}\right]^T\)
The inner product of arbitrary vectors \(\mathbf{u}\in \mathbb{R}^m\) and \(\mathbf{v}\in \mathbb{R}^m\) is defined simply as: \[ (\mathbf{u}, \mathbf{v})=\mathbf{v}^T \mathbf{u} \]
The reduced-order representation of \(\mathbf{X}\in \mathbb{R}^m\) is defined as: \[ \tilde{\mathbf{X}}=\sum_{i=1}^p \alpha_i \boldsymbol{\phi}_i=\boldsymbol{\Phi} \boldsymbol{\alpha},\qquad \tilde{\mathbf{X}}\in\mathbb{R}^{m}, \mathbf{\Phi}\in\mathbb{R}^{m\times p}, \boldsymbol{\alpha}\in\mathbb{R}^{p} \] where the basis \(\mathbf{\Phi}=\left[\begin{array}{llll}\phi_1 & \phi_2 & \ldots & \phi_p\end{array}\right]\)
And the error is: \[ \boldsymbol{\epsilon} = \mathbf{X}-\tilde{\mathbf{X}}, \qquad \boldsymbol{\epsilon}\in\mathbb{R}^{m} \] Note that the \(p\)-dimensional subspace does not span the \(m\)-dimensional space. Unless \(p=m\), we cannot make the error zero everywhere. The exact-equality requirement can be loosened by introducing test vectors, as shown below.
Define a set of \(p\) test/weight vectors \(\boldsymbol{\Psi}\), and require them to be orthogonal to the residual (error), letting the inner product of the test vectors and the error equal 0: \[ (\boldsymbol{\Psi}, \boldsymbol{\epsilon})=\boldsymbol{\Psi}^T\boldsymbol{\epsilon} =0,\qquad \mathbf{\Psi}\in\mathbb{R}^{m\times p} \] where \(\boldsymbol{\Psi}=\left[\begin{array}{llll}\psi_1 & \psi_2 & \ldots & \psi_p\end{array}\right]\) spans the subspace that the system is projected onto. We require the error to be zero in the projected subspace instead of exactly zero everywhere.
Rearranging the last representation: \[ \begin{aligned} \boldsymbol{\Psi}^T(\mathbf{X}-\tilde{\mathbf{X}}) &=0 \\ \boldsymbol{\Psi}^T \mathbf{X} &=\boldsymbol{\Psi}^T \boldsymbol{\Phi} \boldsymbol{\alpha} \end{aligned} \]
Note that \(\boldsymbol{\Psi}^T \boldsymbol{\Phi}\) must be non-singular, i.e. no nonzero vector in the span of \(\boldsymbol{\Phi}\) may be orthogonal to the span of \(\boldsymbol{\Psi}\).
Continuing: \[ \begin{aligned} \left(\boldsymbol{\Psi}^T \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\Psi}^T \mathbf{X}&=\boldsymbol{\alpha} \\ \boldsymbol{\Phi}\left(\boldsymbol{\Psi}^T \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\Psi}^T \mathbf{X}&=\boldsymbol{\Phi} \boldsymbol{\alpha}=\tilde{\mathbf{X}} \\ \mathbf{P X}&=\tilde{\mathbf{X}} \\ \text { where } \mathbf{P}&=\boldsymbol{\Phi}\left(\boldsymbol{\Psi}^T \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\Psi}^T \end{aligned} \] \(\mathbf{P}\) is the projection matrix, projecting the \(m\)-dimensional space onto the \(p\)-dimensional subspace.
Note that the projection matrix is idempotent: \(\mathbf{P}^2=\mathbf{P}\)
Now we can gather what we have so far: \[ \begin{aligned} \tilde{\mathbf{X}}&=\boldsymbol{\Phi} \boldsymbol{\alpha}\\ \mathbf{P X}&=\tilde{\mathbf{X}} \\ \text { where } \mathbf{P}&=\boldsymbol{\Phi}\left(\boldsymbol{\Psi}^T \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\Psi}^T \\ \text { and } \boldsymbol{\alpha} &=\left(\boldsymbol{\Psi}^T \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\Psi}^T \mathbf{X} \end{aligned} \] Treat this as a system of equations: since we know \(\mathbf{X}\), the system is closed once we define the test vectors \(\boldsymbol{\Psi}\) and the basis \(\mathbf{\Phi}\).
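Under these definitions, the whole construction fits in a few lines of numpy. The sketch below (the random \(\boldsymbol{\Phi}\), \(\boldsymbol{\Psi}\), \(\mathbf{X}\), and the sizes are illustrative assumptions) builds the oblique projection and verifies its two defining properties:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 8, 3
Phi = rng.normal(size=(m, p))   # trial/basis vectors
Psi = rng.normal(size=(m, p))   # test/weight vectors (Psi^T Phi nonsingular)
X = rng.normal(size=m)

alpha = np.linalg.solve(Psi.T @ Phi, Psi.T @ X)   # reduced coordinates
X_tilde = Phi @ alpha                             # reduced-order representation

P = Phi @ np.linalg.solve(Psi.T @ Phi, Psi.T)     # projection matrix
assert np.allclose(P @ P, P)                      # P^2 = P (idempotent)
assert np.allclose(Psi.T @ (X - X_tilde), 0)      # error is "zero" in the test subspace
```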
There are several ways of choosing \(\mathbf{\Phi}\) and \(\mathbf{\Psi}\); here we present only one of them.
- Orthogonal/Galerkin projection: let \(\boldsymbol{\Psi}=\boldsymbol{\Phi}\), i.e. let the error \(\boldsymbol{\epsilon}\) be orthogonal to the subspace spanned by \(\boldsymbol{\Phi}\)
- Orthonormal basis: \(\boldsymbol{\Phi}\) is chosen to be orthonormal, i.e. \(\boldsymbol{\Phi}^T\boldsymbol{\Phi}=\mathbf{I}\)
In this case, the projection matrix and the parameters simplify to: \[ \begin{aligned} \mathbf{P}^\bot&=\boldsymbol{\Phi}\boldsymbol{\Phi}^T \\ \boldsymbol{\alpha} &=\boldsymbol{\Phi}^T \mathbf{X} \end{aligned} \] The Galerkin projection has some very good properties:
- it achieves the minimum error norm \(\boldsymbol{\epsilon}^T\boldsymbol{\epsilon}\) attainable with the given basis
- it coincides with the standard least-squares solution of the over-constrained (inconsistent) system
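A quick numerical check of the second property (a sketch, not a proof; the random sizes are arbitrary): with an orthonormal \(\boldsymbol{\Phi}\) and \(\boldsymbol{\Psi}=\boldsymbol{\Phi}\), the Galerkin coefficients coincide with the least-squares solution of \(\boldsymbol{\Phi}\boldsymbol{\alpha}\approx\mathbf{X}\):

```python
import numpy as np

rng = np.random.default_rng(2)
Phi, _ = np.linalg.qr(rng.normal(size=(8, 3)))  # orthonormal basis, Phi^T Phi = I
X = rng.normal(size=8)

alpha_galerkin = Phi.T @ X                      # Galerkin projection coefficients
alpha_lstsq = np.linalg.lstsq(Phi, X, rcond=None)[0]
assert np.allclose(alpha_galerkin, alpha_lstsq)
```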
Still, there are infinitely many orthonormal bases we can choose (Fourier expansion, for example). One optimal basis is given by the (left) singular vectors, obtained by the singular value decomposition (SVD) (as shown in the introduction section), where \(\boldsymbol{\Phi} = \tilde{\mathbf{U}}=\left[\begin{array}{llll}U_1 & U_2 & \ldots & U_p\end{array}\right]\) is the basis (i.e. the first \(p\) columns of \(\mathbf{U}\)) and the corresponding parameters are \(\boldsymbol{\alpha} = \tilde{\boldsymbol{\Sigma}}\tilde{\mathbf{V}}^T = \tilde{\mathbf{U}}^T\mathbf{X}\) when orthogonally projecting \(\mathbf{X}\in\mathbb{R}^{m}\) onto the subspace \(\mathbb{R}^{p}\)


Note that in the SVD, \(\mathbf{U}\) (and \(\mathbf{V}\)) are unitary; as a result, if \(p=m\), then \(\mathbf{P}=\mathbf{I}\) and \(\tilde{\mathbf{X}}=\mathbf{X}\)
SVD finds the subspace \(\boldsymbol{\Phi}\) that best approximates the matrix for a given rank, which makes it well suited for model reduction:
The best approximation to a matrix (in the Frobenius norm) for a given rank \(p\) is given by the projection onto the subspace spanned by the first \(p\) columns of \(\mathbf{U}\)
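This statement (the Eckart-Young theorem) is easy to verify numerically. The sketch below (a random matrix and an arbitrary rank are illustrative choices) projects onto the first \(p\) left singular vectors and confirms that the Frobenius error equals the energy of the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 20))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

p = 5
Phi = U[:, :p]                      # basis: first p left singular vectors
X_tilde = Phi @ (Phi.T @ X)         # orthogonal projection P = Phi Phi^T
err = np.linalg.norm(X - X_tilde)   # Frobenius norm by default
assert np.isclose(err, np.sqrt(np.sum(s[p:] ** 2)))  # tail singular values
```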
POD/PCA examples
Here are POD analyses of two similar fluid-related scenarios. We can see that the flow field can be reconstructed from a small number of modes. More fascinatingly, there is a strong resemblance between the extracted modes of the two cases, indicating a similarity of the underlying coherent structures of nearly laminar and highly turbulent flows.
![Modal decomposition of two-dimensional incompressible flow over a flat-plate wing (Re = 100 and α = 30 deg). This example shows complex nonlinear separated flow being well represented by only two POD(PCA) unsteady modes and the mean flowfield. Visualized are the streamwise velocity profiles. From [1], quoted from [4][5]](/2022/10/21/from-Reduce-Order-Models-to-Discretization-Methods/Modal decomposition of two-dimensional incompressible flow over a flat-plate wing.png)
![POD analysis of turbulent flow over a NACA0012 airfoil at Re = 23,000 and α = 9◦. Shown are the instantaneous and time-averaged streamwise velocity fields and the associated four most dominant POD modes, From [1], quoted from [6][7]](/2022/10/21/from-Reduce-Order-Models-to-Discretization-Methods/Modal decomposition of three-dimensional incompressible flow over a NACA0012 airfoil.png)
Note that the instantaneous flow fields represent a stack of snapshots, not a single picture.
The POD/PCA technique is applied in a wide variety of fields, including fundamental analysis of fluid flows, reduced-order modeling, data compression/reconstruction, flow control, and aerodynamic design optimization. See Section III of Ref. [1].
Generalization
The real space \(\mathbb{R}^m\) can be generalized seamlessly to the complex space \(\mathbb{C}^m\), in which the transpose of a real-valued vector \(\mathbf{v}\), written \(\mathbf{v}^T\), becomes the Hermitian/conjugate transpose of a complex-valued vector, written \(\mathbf{v}^*\)
The inner product can be generalized to a weighted inner product (in a complex-valued space): \[ (\mathbf{u}, \mathbf{v})_{\mathbf{w}}=\mathbf{v}^* \mathbf{W} \mathbf{u} \] where the weighting matrix \(\mathbf{W}\) inside the inner product is Hermitian (symmetric in the real case) and positive definite
As a result, the weighted projection matrix becomes: \[ \mathbf{P}=\boldsymbol{\Phi}\left(\boldsymbol{\Phi}^* \mathbf{W} \boldsymbol{\Phi}\right)^{-1} \boldsymbol{\Phi}^* \mathbf{W} \] Viewed in the unweighted inner product, the orthogonal (Galerkin) choice \(\boldsymbol{\Psi}=\boldsymbol{\Phi}\) thus becomes an oblique projection with \(\boldsymbol{\Psi}=\mathbf{W}\boldsymbol{\Phi}\)
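A short numerical sketch of this weighted projection (real-valued for simplicity; the random \(\mathbf{W}\) and \(\boldsymbol{\Phi}\) are illustrative assumptions): the matrix is still idempotent and is oblique in the Euclidean sense, but it is self-adjoint with respect to the weighted inner product:

```python
import numpy as np

rng = np.random.default_rng(4)
m, p = 6, 2
A = rng.normal(size=(m, m))
W = A.T @ A + m * np.eye(m)            # symmetric positive definite weight
Phi = rng.normal(size=(m, p))

P = Phi @ np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W)
assert np.allclose(P @ P, P)           # still a projection (idempotent)
assert not np.allclose(P, P.T)         # not symmetric: oblique in Euclidean sense
assert np.allclose(W @ P, (W @ P).T)   # self-adjoint w.r.t. (u, v)_W = v^T W u
```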
Random variables
Let's change our context back to that of the introduction section:
Consider a set of random variables \(\mathbf{x}\in\mathbb{R}^m\) as \(\mathbf{x}=\left[\begin{array}{llll}x_1 & x_2 & \ldots & x_m\end{array}\right]^T\)
In order to reduce the order of \(\mathbf{x}\), we first need to build a data matrix from \(n\) observations of \(\mathbf{x}\), such that \[ \mathbf{X}=\left[\begin{array}{llll}\mathbf{x^1} & \mathbf{x^2} & \ldots & \mathbf{x^n}\end{array}\right] \]
Note: \(rank(\mathbf{X})\leq min(m,n)\)
- Typical in statistics: \(n \gg m\)
- Typical in fluid dynamics: \(m \gg n\)
- The inequality is strict for repeated observations (duplicate columns) or linear dependencies among the data (linearly dependent rows)
Center the data matrix by subtracting the sample expected/mean value \[ \mathbf{X_c}=\frac{1}{\sqrt{n-1}}\left[\begin{array}{llll} \mathbf{x}^1-\overline{\mathbf{x}} & \mathbf{x}^2-\overline{\mathbf{x}} & \cdots & \mathbf{x}^n-\overline{\mathbf{x}} \end{array}\right] \] where \(\overline{\mathbf{x}}=\frac{1}{n} \sum_{k=1}^n \mathbf{x}^k\)
In principle, the expected value could be obtained by integration over the probability space, but in practice only the sample mean is available.
Then we can project it onto a subspace, using the basis obtained by SVD (called PCA in this context) of the centered data matrix. The projection process is illustrated above, so here we only discuss another benefit of the basis obtained by PCA.
Recall the economy (thin) SVD and the projection relationship above. We set \(p=rank(\mathbf{X})=rank(\mathbf{U_1})\) so that no information is truncated; we merely express the data matrix in a "full"-order space, i.e. a rotation of the coordinates. \[ \begin{aligned} \mathbf{X_c}&=\mathbf{U_1}\boldsymbol{\Sigma_1}\mathbf{V^*_1}\\ \mathbf{U_1}\mathbf{U_1}^*\mathbf{X_c}&=\tilde{\mathbf{X_c}} \\ \end{aligned} \] Take the sample covariance of the variables: \[ \begin{aligned} \mathbf{C_s} &= \mathbf{X_c}\mathbf{X^*_c}=\mathbf{U_1}\boldsymbol{\Sigma^2_1}\mathbf{U^*_1} \\ \mathbf{U^*_1}\mathbf{C_s}\mathbf{U_1} &= \boldsymbol{\Sigma^2_1}\\ \end{aligned} \] We can see that the columns of \(\mathbf{U_1}\) are eigenvectors of the covariance matrix. More importantly, the reduced coordinates \(\boldsymbol{\alpha}=\mathbf{U^*_1}\mathbf{X_c}\) have the diagonal sample covariance \(\boldsymbol{\alpha}\boldsymbol{\alpha}^*=\boldsymbol{\Sigma^2_1}\), meaning the resulting variables are uncorrelated.
Naturally, we can conclude that PCA
- Maximizes the total variance (trace of the covariance matrix) in the subspace
- Minimizes the mean-square error between vectors in the original space and their projections (approximations) in the subspace
We can still replace the inner product with a weighted one, in order to scale the variables to the same units.
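The decorrelation property is easy to verify numerically. The sketch below (synthetic correlated variables; the sizes are arbitrary) centers the data, takes the thin SVD, and checks that the projected coordinates have a diagonal sample covariance:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 4, 500
L = rng.normal(size=(m, m))
X = L @ rng.normal(size=(m, n))                 # correlated random variables

xbar = X.mean(axis=1, keepdims=True)
Xc = (X - xbar) / np.sqrt(n - 1)                # centered, scaled data matrix
U1, s1, V1t = np.linalg.svd(Xc, full_matrices=False)

C = Xc @ Xc.T                                   # sample covariance
assert np.allclose(U1.T @ C @ U1, np.diag(s1**2))  # diagonal: uncorrelated coords
```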
Infinite dimensional (function) space
Consider the Hilbert space \(L^2([a,b])\). We use the same process as before, but the basis functions are now chosen by hand instead of by SVD/PCA/POD.
The weighted inner product of two complex functions \(f(\xi), g(\xi)\) is: \[ (f(\xi), g(\xi))_w=\int_a^b f(\xi) g^*(\xi) w(\xi) d \xi, \quad w(\xi) \geq 0 \]
For a function \(x(t, \xi)\), we can represent it in a finite-dimensional space spanned by a set of basis/trial functions \[ \phi_j(t, \xi), \qquad j=1,2,\dots, p \] such that the reduced-order function \(\tilde{x}(t, \xi)\) in the subspace can be expressed as: \[ \tilde{x}(t, \xi)=\sum_{j=1}^p \alpha_j \phi_j(t, \xi) \]
Modern methods decompose the function such that space and time are separated: \[\tilde{x}(t, \xi)=\sum_{j=1}^p \alpha_j(t) \phi_j(\xi)\]
Consider test functions \[ \psi_j(t, \xi), \qquad j=1,2,\dots, p \] and require the weighted inner product of the error with each test function to vanish: \[ (\epsilon(t, \xi), \psi_j(t, \xi))_w=\int_a^b \epsilon(t, \xi) \psi^*_j(t, \xi) w(\xi) d \xi = 0 \] As a result: \[ \left(x, \psi_j\right)_w-\sum_i \alpha_i\left(\phi_i, \psi_j\right)_w=0, \quad j=1,2, \ldots, p \] Rearranging: \[ \begin{aligned} \mathbf{r} &= \mathbf{M} \boldsymbol{\alpha} \\ \text{where: }\{\mathbf{r}\}_j &=\left(x, \psi_j\right)_w \\ \{\mathbf{M}\}_{j i} &=\left(\phi_i, \psi_j\right)_w \end{aligned} \]
There remain several choices of test functions and basis functions (a concrete sketch follows the lists below):
Choice of test functions
- Galerkin method/orthogonal projection, finite element
- Collocation method (non-orthogonal), finite difference
- Step function, finite volume
Choice of basis functions
- Fourier / trigonometric series (spectral methods)
- Polynomials
  - global, usually Chebyshev polynomials
  - local, e.g. piecewise polynomials
    - linear "hat/tent" functions (finite element)
    - higher order (spectral element)
Generally, to be computationally efficient, we want the matrix \(\mathbf{M}\) to be sparse:
- \(\mathbf{M}\) is diagonal for Galerkin with orthogonal basis functions
- \(\mathbf{M}\) is sparse (banded) for Galerkin/collocation with piecewise polynomials
- \(\mathbf{M}\) is diagonal for bi-orthogonal functions
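As a concrete sketch of the weighted-residual system \(\mathbf{r}=\mathbf{M}\boldsymbol{\alpha}\) (the interval, weight, target function, and sine basis below are illustrative assumptions), the Galerkin choice with an orthogonal Fourier basis makes \(\mathbf{M}\) diagonal, so the coefficients decouple:

```python
import numpy as np

a, b, q = 0.0, 2.0 * np.pi, 2000
xi = np.linspace(a, b, q)
w = np.ones_like(xi)                             # uniform weight w(xi) = 1

def inner(f, g):                                 # trapezoid quadrature for (f, g)_w
    prod = f * g * w
    h = xi[1] - xi[0]
    return h * (prod.sum() - 0.5 * (prod[0] + prod[-1]))

x = np.exp(np.sin(xi))                           # function to approximate
phi = [np.sin(j * xi) for j in range(1, 6)]      # trial basis (orthogonal sines)
psi = phi                                        # Galerkin: test = trial

M = np.array([[inner(p, t) for p in phi] for t in psi])  # {M}_ji = (phi_i, psi_j)
r = np.array([inner(x, t) for t in psi])                 # {r}_j  = (x, psi_j)
alpha = np.linalg.solve(M, r)

assert np.allclose(M, np.diag(np.diag(M)), atol=1e-8)    # M is diagonal here
x_tilde = sum(a_j * p for a_j, p in zip(alpha, phi))     # best approx. in span(phi)
```

Because \(\mathbf{M}\) is diagonal here, each \(\alpha_j\) could equivalently be computed independently as \((x,\psi_j)_w/(\phi_j,\psi_j)_w\).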
Discretization methods of PDE
Discretizing a PDE is essentially mapping a dynamical system (the PDE) from a Hilbert space into a finite-dimensional space, just as illustrated above. In this section, the same method is applied to an example.
Consider a 1D Poisson equation with Dirichlet and Neumann boundary conditions \[ \begin{aligned} \mathbb{L}(u) \equiv -\nabla^2u = f(\xi) &\qquad\text{in } \Omega = \{\xi | 0 < \xi < 1\} \\ u(0)=\mathcal{g_D}, &\qquad \frac{\partial u}{\partial \xi}(1)=\mathcal{g_N} \end{aligned} \] with a solution \(u\) in a Hilbert space.
\(u\) denotes the field variable (instead of \(x\) as before), and \(\xi\) represents the spatial coordinate. The Poisson equation is a typical elliptic PDE.
The inner product on the domain \(\Omega\) is \[ (f(\xi), g(\xi))=\int_{\Omega} f(\xi) g(\xi) d \xi \]
We seek a reduced-order solution \(\tilde{u}(\xi)\) in the \(N\)-dimensional space spanned by the basis \(\phi_i(\xi)\): \[ \tilde{u}(\xi)=\sum_{i=1}^N \hat{u}_i \phi_i(\xi) \] and therefore \[ \tilde{\mathbb{L}}(\tilde{u}(\xi)) = -\nabla^2\tilde{u} \] with the error/residual: \[ \epsilon = \mathbb{L}(\tilde{u}) - f(\xi) = -\nabla^2\tilde{u} - f(\xi) \]
Consider an \(N\)-dimensional test space \(\Psi\) spanned by \(\psi_i(\xi)\), and take the inner product of the test function and the error function: \[ (\Psi(\xi), \epsilon(\xi))= \int_{\Omega} \Psi(\xi) (\tilde{\mathbb{L}}(\tilde{u}(\xi))-f(\xi)) d \xi = 0 \] This is called the weak formulation. As with the test vectors, the test function relaxes the strict requirement of an exact solution everywhere.
We are going to use the finite element method, so we choose the basis as a set of local linear tent functions:
\[
\phi_i(\xi)= \begin{cases}\frac{\xi-\xi_{i-1}}{\xi_i-\xi_{i-1}} & \text { if } \xi \in\left[\xi_{i-1}, \xi_i\right] \\ \frac{\xi_{i+1}-\xi}{\xi_{i+1}-\xi_i} & \text { if } \xi \in\left[\xi_i, \xi_{i+1}\right] \\ 0 & \text { otherwise }\end{cases}
\] With such a basis, the solution is required to be accurate only locally, in each cell.
For the test functions, we use the Galerkin formulation: \[ \psi_j(\boldsymbol{\xi}) =\phi_j(\boldsymbol{\xi}) \]
Different test/weight functions lead to different methods of discretization
Test functions used in the method of weighted residuals:

| Test function | Type of method |
| --- | --- |
| \(\psi_j(\boldsymbol{\xi})=\delta\left(\boldsymbol{\xi}-\boldsymbol{\xi}_j\right)\) | Collocation / finite difference |
| \(\psi_j(\boldsymbol{\xi})=\begin{cases}1 & \text{inside } \Omega^j \\ 0 & \text{outside } \Omega^j\end{cases}\) | Finite volume (subdomain) |
| \(\psi_j(\boldsymbol{\xi})=\frac{\partial R}{\partial \hat{u}_j}\) | Least squares |
| \(\psi_j(\boldsymbol{\xi})=\boldsymbol{\xi}^j\) | Method of moments |
| \(\psi_j(\boldsymbol{\xi})=\phi_j\) | Galerkin |
| \(\psi_j(\boldsymbol{\xi})\neq\phi_j\) | Petrov-Galerkin |
Weak formulation of the Poisson equation
Now let us add some detail: the weak formulation above reads \[ \int_{0}^1 (-\nabla^2\tilde{u}-f(\xi))\Psi(\xi) d\xi = 0 \] Using integration by parts:
\[ \int_0^1 \frac{\partial \psi}{\partial \xi} \frac{\partial \tilde{u}}{\partial \xi} d \xi=\int_0^1 \psi f d \xi+\left[\psi \frac{\partial \tilde{u}}{\partial \xi}\right]_0^1 \]
Implementation of boundary conditions
As \(\psi(0)=0\), the Neumann boundary condition \(\frac{\partial u}{\partial \xi}(1)=\mathcal{g_N}\) can be applied directly: \[ \int_0^1 \frac{\partial \psi}{\partial \xi} \frac{\partial \tilde{u}}{\partial \xi} d \xi=\int_0^1 \psi f d \xi+\psi(1) {\mathcal{g_N}} \] The Dirichlet boundary condition \(u(0)=\mathcal{g_D}\) cannot be implemented directly. Instead, it can be treated as an inhomogeneous part of the solution: \[ \tilde{u}=\tilde{u}^\mathcal{H}+\tilde{u}^\mathcal{D} \] where \(\tilde{u}^\mathcal{H}\) and \(\tilde{u}^\mathcal{D}\) denote the homogeneous and inhomogeneous parts respectively, with \[ \tilde{u}^\mathcal{H}(0) = 0 \qquad \tilde{u}^\mathcal{D}(0) = \mathcal{g_D} \] Plugging into the formulation: \[ \int_0^1 \frac{\partial \psi}{\partial \xi} \frac{\partial \tilde{u}^\mathcal{H}}{\partial \xi} d \xi + \int_0^1 \frac{\partial \psi}{\partial \xi} \frac{\partial \tilde{u}^\mathcal{D}}{\partial \xi} d \xi=\int_0^1 \psi f d \xi+\psi(1) {\mathcal{g_N}} \] As a result, the finite element form of the PDE is: \[ \int_0^1 \frac{\partial \psi}{\partial \xi} \frac{\partial \tilde{u}^\mathcal{H}}{\partial \xi} d \xi =\int_0^1 \psi f d \xi+\underbrace{\psi(1) {\mathcal{g_N}}}_{Neumann} - \underbrace{\int_0^1 \frac{\partial \psi}{\partial \xi} \frac{\partial \tilde{u}^\mathcal{D}}{\partial \xi} d\xi }_{Dirichlet} \] and \(\tilde{u}\) is obtained by solving the resulting linear system, as sketched below.
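Putting the pieces together, here is a minimal linear finite-element solver for this model problem. The mesh size, the boundary data \(\mathcal{g_D}, \mathcal{g_N}\), and the right-hand side (manufactured so that \(u = \mathcal{g_D} + \xi^2\) is the exact solution) are illustrative assumptions, not from the source:

```python
import numpy as np

n_el = 10                                  # number of linear elements (assumed)
xi = np.linspace(0.0, 1.0, n_el + 1)       # nodes xi_0 .. xi_N
h = xi[1] - xi[0]
g_D, g_N = 1.0, 2.0                        # boundary data (assumed values)
f = lambda x: -2.0                         # manufactured: u = g_D + xi^2

n = n_el + 1
K = np.zeros((n, n))                       # stiffness matrix (phi_i', phi_j')
b = np.zeros(n)                            # load vector (f, phi_j)
for e in range(n_el):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    xm = 0.5 * (xi[e] + xi[e + 1])
    b[idx] += 0.5 * h * f(xm)              # midpoint rule (exact for constant f)

b[-1] += g_N                               # Neumann term: psi(1) * g_N
b -= K[:, 0] * g_D                         # Dirichlet lift: u^D = g_D * phi_0
u = np.empty(n)
u[0] = g_D                                 # impose the Dirichlet value
u[1:] = np.linalg.solve(K[1:, 1:], b[1:])  # solve for the homogeneous part

print(np.max(np.abs(u - (g_D + xi**2))))   # ~machine precision at the nodes
```

For this 1D problem, linear elements with an exactly integrated load reproduce the exact solution at the nodes, so the final check is essentially machine precision.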
In the \(h\)-type method, a fixed-order polynomial is used in every element and convergence is achieved by reducing the size of the elements; \(h\) represents the characteristic size of an element.
In the \(p\)-type method, a fixed mesh is used and convergence is achieved by increasing the polynomial order in every element; \(p\) represents the polynomial order in the elements.
The spectral method is the \(p\)-type method in which the whole solution domain is treated as a single element.
The \(hp\)-element method combines attributes of both.
References
1. Taira, K., Brunton, S. L., Dawson, S. T., Rowley, C. W., Colonius, T., McKeon, B. J., ... & Ukeiley, L. S. (2017). Modal analysis of fluid flows: An overview. AIAA Journal, 55(12), 4013-4041.
2. Taira, K., Hemati, M. S., Brunton, S. L., Sun, Y., Duraisamy, K., Bagheri, S., ... & Yeh, C. A. (2020). Modal analysis of fluid flows: Applications and outlook. AIAA Journal, 58(3), 998-1022.
3. Rowley, C. W., & Dawson, S. T. (2017). Model reduction for flow analysis and control. Annual Review of Fluid Mechanics, 49(1), 387-417.
4. Taira, K., & Colonius, T. (2009). Three-dimensional flows around low-aspect-ratio flat-plate wings at low Reynolds numbers. Journal of Fluid Mechanics, 623, 187-207.
5. Colonius, T., & Taira, K. (2008). A fast immersed boundary method using a nullspace approach and multi-domain far-field boundary conditions. Computer Methods in Applied Mechanics and Engineering, 197, 2131-2146.
6. Kajishima, T., & Taira, K. (2017). Computational fluid dynamics: Incompressible turbulent flows. Springer.
7. Munday, P. M., & Taira, K. (2017). Quantifying wall-normal and angular momentum injections in airfoil separation control. AIAA Journal (in review).
8. Holmes, P., Lumley, J. L., Berkooz, G., & Rowley, C. W. (2012). Turbulence, coherent structures, dynamical systems and symmetry (2nd ed.). Cambridge University Press.
9. Berkooz, G., Holmes, P., & Lumley, J. L. (1993). The proper orthogonal decomposition in the analysis of turbulent flows. Annual Review of Fluid Mechanics, 25, 539-575.