
Application of physics-constrained data-driven reduced-order models to shape optimization

Published online by Cambridge University Press:  19 January 2022

Hamid R. Karbasian*
Affiliation:
Department of Mechanical, Industrial and Aerospace Engineering, Concordia University, Montreal, H3G 1M8, Canada
Brian C. Vermeire
Affiliation:
Department of Mechanical, Industrial and Aerospace Engineering, Concordia University, Montreal, H3G 1M8, Canada
*Email address for correspondence: h.karbasian@utoronto.ca

Abstract

This study proposes a novel approach for building surrogate models, in the form of reduced-order models (ROMs), for partial differential equation constrained optimization. A physics-constrained data-driven framework is developed to transform large-scale nonlinear optimization problems into ROM-constrained optimization problems. Unlike conventional methods, which suffer from instability of the forward sensitivity function, the proposed approach maps optimization problems to system dynamics optimization problems in Hilbert space to improve stability, reduce memory requirements, and lower computational cost. The utility of this approach is demonstrated for aerodynamic optimization of an NACA 0012 airfoil at $Re = 1000$. A drag reduction of 9.35 % is obtained at an effective angle of attack of $8^{\circ }$, with negligible impact on lift. Similarly, a drag reduction of 20 % is obtained for fully separated flow at an angle of attack of $25^{\circ }$. Results from these two optimization problems also reveal relationships between optimization in physical space and optimization of dynamical behaviours in Hilbert space.

Type
JFM Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

Optimization arises in a wide range of applications from engineering design (Ravindran Reference Ravindran2000; Bergmann, Cordier & Brancher Reference Bergmann, Cordier and Brancher2005; Rozza & Manzoni Reference Rozza and Manzoni2010) to control systems (Lassila & Rozza Reference Lassila and Rozza2010; Manzoni, Quarteroni & Rozza Reference Manzoni, Quarteroni and Rozza2012). Increasing computational power has recently enabled the application of optimization techniques to industrial-scale problems. These problems are usually governed by large systems of nonlinear partial differential equations (PDEs) with suitable initial and boundary conditions, which are often referred to as full-order models (FOMs). These FOMs arise in computational mechanics applications, such as computational fluid dynamics (CFD) (Orozco & Ghattas Reference Orozco and Ghattas1992; Newman et al. Reference Newman, Taylor, Barnwell, Newman and Hou1999; Nadarajah & Jameson Reference Nadarajah and Jameson2000). In practice, these FOMs act as a constraint during optimization, referred to as PDE-constrained optimization. This type of optimization can be prohibitively expensive, since it requires one or more high-fidelity FOM solutions per design iteration (Yamaleev, Diskin & Nielsen Reference Yamaleev, Diskin and Nielsen2008; Zahr & Persson Reference Zahr and Persson2016). The cost is particularly severe when the FOMs are solved with high-fidelity CFD methods, such as large-eddy simulation and direct numerical simulation. Therefore, the majority of optimization studies are performed using lower-fidelity models, such as the Reynolds-averaged Navier–Stokes approach.

To alleviate the prohibitive cost of PDE-constrained optimization using FOMs, a wide range of studies have developed surrogate modelling techniques (Moarref et al. Reference Moarref, Jovanović, Tropp, Sharma and McKeon2014; Ling & Templeton Reference Ling and Templeton2015; Mohebujjaman, Rebholz & Iliescu Reference Mohebujjaman, Rebholz and Iliescu2018). Surrogate models of nonlinear PDEs are usually referred to as reduced-order models (ROMs) (Oliveira & Patera Reference Oliveira and Patera2007; Bui-Thanh, Willcox & Ghattas Reference Bui-Thanh, Willcox and Ghattas2008a; Hinze & Matthes Reference Hinze and Matthes2011). Dimensionality reduction using ROMs has become an attractive approach, and is employed successfully in optimal control (Bergmann & Cordier Reference Bergmann and Cordier2008; Kunisch & Volkwein Reference Kunisch and Volkwein2008), optimization (Rozza & Manzoni Reference Rozza and Manzoni2010; Zahr & Farhat Reference Zahr and Farhat2015) and sensitivity analysis (Bui-Thanh, Willcox & Ghattas Reference Bui-Thanh, Willcox and Ghattas2008b; Zahr & Farhat Reference Zahr and Farhat2015; Zahr & Persson Reference Zahr and Persson2016). Optimization combined with surrogate models introduces a new approach called ROM-constrained optimization. Generally speaking, ROMs can be embedded within machine learning frameworks (Bright, Lin & Kutz Reference Bright, Lin and Kutz2013; Rowley & Dawson Reference Rowley and Dawson2017; Taira et al. Reference Taira, Brunton, Dawson, Rowley, Colonius, McKeon, Schmidt, Gordeyev, Theofilis and Ukeiley2017), and are based on a projection of FOMs from physical space into a lower-dimensional model space (i.e. Hilbert space, eigenspace, Banach space, etc.) and vice versa (Rowley & Dawson Reference Rowley and Dawson2017; Taira et al. Reference Taira, Brunton, Dawson, Rowley, Colonius, McKeon, Schmidt, Gordeyev, Theofilis and Ukeiley2017, Reference Taira, Hemati, Brunton, Sun, Duraisamy, Bagheri, Dawson and Yeh2020). This projection typically employs trial basis functions. These trial basis functions typically use matrix representations, which define the reduced-order basis (ROB). For example, the ROB can be eigenvectors or bases of a dataset obtained via proper orthogonal decomposition (POD). The ROMs leveraging machine learning features are usually constructed by collecting data initially obtained from FOMs (Brunton & Kutz Reference Brunton and Kutz2019; Brunton, Noack & Koumoutsakos Reference Brunton, Noack and Koumoutsakos2020; Murata, Fukami & Fukagata Reference Murata, Fukami and Fukagata2020). After dimensionality reduction, other machine learning techniques can be applied to develop new models that can predict the behaviour of the dynamical system. Depending on the system's complexity, or nonlinearity, hidden layers can be used to build deep learning algorithms.

Despite recent advancements in machine learning, ROMs specialized for optimization have not yet achieved promising results for unsteady flows. This is because existing ROMs only estimate the state vector of the FOM at specified time intervals, but not its sensitivity to perturbations of design parameters. Furthermore, conventional ROMs can yield non-physical solutions when these design variables are perturbed (Epureanu Reference Epureanu2003; Amsallem & Farhat Reference Amsallem and Farhat2008; Hay, Borggaard & Pelletier Reference Hay, Borggaard and Pelletier2009). This issue arises from the fact that FOMs of nonlinear dynamical systems are highly sensitive to perturbations (Wang, Hu & Blonigan Reference Wang, Hu and Blonigan2014; Karbasian & Vermeire Reference Karbasian and Vermeire2022). Trust-region approaches, such as trust-region POD (Fahl Reference Fahl2000) and optimal system POD (Kunisch & Volkwein Reference Kunisch and Volkwein2008), have been proposed to improve the robustness of ROMs with respect to perturbations of design parameters (Arian, Fahl & Sachs Reference Arian, Fahl and Sachs2000; Carlberg & Farhat Reference Carlberg and Farhat2011). However, these methods still require multiple FOM solutions for sensitivity analysis.

In addition to the issues facing PDE-constrained optimization, ROMs for unsteady optimization often fail to recover the sensitivity function. The main reason for this failure is that the behaviour of the nonlinear dynamical systems governing high-fidelity FOMs leads to unconditionally unstable sensitivity functions for long-term data analysis (which is usually the case in unsteady CFD simulations). Secondly, a typical ROM ignores low-energy ranks that might significantly affect the evolution of the dynamical system, resulting in missing information in the sensitivity functions. Therefore, adding extra terms (or innovatively designing the surrogate model) to compensate for the effect of these low-energy ranks results in a closure model in the form of a ROM. Although the sensitivity function might remain stable over a short period, improper ROM development ultimately leads to inaccurate results.

To the authors’ knowledge, aerodynamic optimization in the presence of unsteady flows has not been considered thoroughly by other researchers. Although there has been some progress in this area, it has been limited to quasi-steady, periodic and laminar flows (Yoon Reference Yoon2016; Rubino et al. Reference Rubino, Pini, Colonna, Albring, Nimmagadda, Economon and Alonso2018; He et al. Reference He, Mader, Martins and Maki2019; Apponsah & Zingg Reference Apponsah and Zingg2020; He & Martins Reference He and Martins2021). Directly optimizing chaotic FOMs has also been considered, but is limited to relatively small-scale systems (Blonigan, Gomez & Wang Reference Blonigan, Gomez and Wang2014; Wang et al. Reference Wang, Hu and Blonigan2014). In this study, we develop a closure model for a ROM by leveraging physics-constrained data-driven principles to overcome these limitations for unsteady optimization of unstable flows. In the proposed algorithm, we derive sensitivity functions from the Navier–Stokes equations, such that the closure model can be represented as a physical (i.e. interpretable) surrogate model in Hilbert space. The proposed algorithm leverages the minimum-residual property to ensure the optimality condition throughout the optimization procedure. This optimality condition means searching for optimal directions in subspaces to minimize the residual in the closure model. This condition is assessed via the least squares Petrov–Galerkin (LSPG) approach. Additionally, in the ROB, subspaces are built via POD modes. It is noteworthy that the trial basis function is updated at each design cycle to improve the predictions of the closure model.

The remainder of this paper is organized as follows. In § 2, a general form of PDE-constrained optimization is defined, and principles of the forward sensitivity function are discussed. Section 3 discusses the intuition behind solving minimization problems in low-dimensional subspaces, and introduces the basic concept of ROM-constrained optimization. In § 4, a closure model in the form of a ROM is developed via the LSPG approach. Section 5 extends this closure model to the forward sensitivity function in Hilbert space. In § 6, we implement the physics-constrained ROM in a platform with which sensitivities are computed. In § 7, different strategies for building the ROB are considered. These strategies help shape the manifolds in Hilbert space. In § 8, unsteady aerodynamic optimization of an NACA 0012 airfoil at high angles of attack is considered, and the resulting designs are discussed in detail. Section 9 provides overall conclusions, and explains the utility of ROM-constrained optimization for future unsteady aerodynamic designs.

2. PDE-constrained optimization

Let us consider a dynamical system represented as a set of PDEs over a smooth time-domain $\mathbb {T}$, as follows:

(2.1a,b)\begin{equation} \frac{\partial \boldsymbol{u}_{a}}{\partial t} = \boldsymbol{f}_{a}(\boldsymbol{u}_{a}, t, \mathcal{S}), \quad \boldsymbol{u}_{a}(0, \mathcal{S})=\boldsymbol{u}_{a, 0}, \end{equation}

where $t \in \mathbb {R}_{+}$ is time with $t \in \mathbb {T}$, $\boldsymbol {u}_{a}(t, \mathcal {S}) \in \mathbb {R}^{\infty }$ denotes the state vector, and $\boldsymbol {u}_{a, 0} \in \mathbb {R}^{\infty }$ is the initial state vector at $t=0$. Additionally, $\mathcal {S}$ is a vector of design variables $\mathcal {S}=[\mathcal {S}_{1}, \mathcal {S}_{2},\ldots, \mathcal {S}_{n_{s}}]^{\top }$, where $n_{s}$ denotes the total number of design variables in the design space, $\mathscr {D}$. Also, $\boldsymbol {f}_{a}: \mathbb {R}^{\infty } \rightarrow \mathbb {R}^{\infty }$ characterizes the dynamical behaviour of the system on the right-hand side of (2.1a,b). In the context of CFD, a spatial discretization can be applied to $\boldsymbol {f}_{a}$, yielding a discrete approximation with $n_{u}$ dimensions, $\boldsymbol {f}: \mathbb {R}^{n_{u}} \rightarrow \mathbb {R}^{n_{u}}$, and a system of ordinary differential equations (ODEs) that approximates the original PDE. The approximate state vector found using this system of ODEs is denoted $\boldsymbol {u}(t, \mathcal {S})$. Numerical error introduced by discretization is usually expressed as an additional residual term

(2.2a,b)\begin{equation} \frac{\partial \boldsymbol{u}}{\partial t}= \boldsymbol{f}(\boldsymbol{u},t, \mathcal{S}) + \boldsymbol{r}(\boldsymbol{u}, t, \mathcal{S}),\quad \boldsymbol{u}(0, \mathcal{S})=\boldsymbol{u}_{0}, \end{equation}

where $\boldsymbol {r}(\boldsymbol {u}, t, \mathcal {S}):\mathbb {R}^{n_{u}} \rightarrow \mathbb {R}^{n_{u}}$ is the residual of the discretized PDEs. In practice, the dynamical behaviour of (2.2a,b) is investigated via a set of observables, which are called objective functions. Consider a scalar objective function dependent on the full state $\boldsymbol {u}$ and design variables $\mathcal {S}$. This can be defined as $\mathcal {J}(\boldsymbol {u}, \mathcal {S}): \mathbb {R}^{n_{u}} \rightarrow \mathbb {R}$. Usually, the objective function takes the form of a time average

(2.3)\begin{equation} \bar{\mathcal{J}}(\mathcal{S})= \frac{1}{T}\int_{0}^{\textrm{T}}\mathcal{J}(\boldsymbol{u}, t, \mathcal{S})\,\textrm{d} t, \end{equation}

where $\bar {\mathcal {J}}$ is the time-averaged objective function and $T \in \mathbb {T}$ is a time averaging period. Using (2.2a,b) and (2.3), we can define a PDE-constrained optimization

(2.4)\begin{equation} \left.\begin{aligned} \underset{\boldsymbol{u} \in \mathbb{R}^{n_{u}}, \mathcal{S} \in \mathscr{D}}{\text{minimize}} & \quad \bar{\mathcal{J}}=\frac{1}{T} \int_{0}^{\textrm{T}} \mathcal{J}(\boldsymbol{u},t, \mathcal{S})\,\textrm{d} t,\\ \text{subject to} & \quad \boldsymbol{r}(\boldsymbol{u}, t, \mathcal{S}) = 0,\\ & \quad \frac{\textrm{d}\boldsymbol{r}}{\textrm{d}\mathcal{S}}(\boldsymbol{u}, t, \mathcal{S}) = 0,\\ & \quad \boldsymbol{C}(\boldsymbol{u}, \mathcal{S}) \leq 0,\\ \end{aligned}\right\} \end{equation}

where $\textrm {d}\boldsymbol {r}/\textrm {d}\mathcal {S}$ is the sensitivity of the residual to the set of design variables, a critical constraint when the dynamical system is nonlinear, and $\boldsymbol {C}(\boldsymbol {u}, \mathcal {S})$ is a set of user-defined constraints (i.e. geometrical constraints). It is worth mentioning that including the sensitivity of the residual in (2.4) is somewhat unusual. However, to keep the formulation consistent and to connect PDE-constrained and ROM-constrained optimization more clearly, we include this sensitivity in the general definition of the optimization problem. To solve large-scale minimization problems of this form via gradient-based methods, the time-averaged sensitivity of (2.3) with respect to $\mathcal {S}$ is required:

(2.5)\begin{equation} \frac{\textrm{d}\mathcal{J}}{\textrm{d}\mathcal{S}} =\left[\frac{\partial \mathcal{J}}{\partial \boldsymbol{u}}\right]^{\top}\boldsymbol{v} + \frac{\partial \mathcal{J}}{\partial \mathcal{S}}, \end{equation}

where $\boldsymbol {v} = \partial \boldsymbol {u}/\partial \mathcal {S} \in \mathbb {R}^{n_{u}}$. Substituting (2.5) into (2.3), the time-averaged sensitivity of the objective function with respect to $\mathcal {S}$ is

(2.6)\begin{equation} \overline{\frac{\textrm{d}\mathcal{J}}{\textrm{d}\mathcal{S}}} =\frac{1}{T}\int_{0}^{\textrm{T}} \left(\left[\frac{\partial \mathcal{J}}{\partial \boldsymbol{u}}\right]^{\top} \boldsymbol{v} + \frac{\partial \mathcal{J}}{\partial \mathcal{S}} \right) \textrm{d} t. \end{equation}

Hence, the sensitivity of the objective function is directly related to the sensitivity of the solution. In order to find this, we use the gradient of the dynamical system from (2.2a,b):

(2.7)\begin{equation} \frac{\partial \delta \boldsymbol{u}}{\partial t} =\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{u}} \delta \boldsymbol{u} + \frac{\partial \boldsymbol{f}}{\partial \mathcal{S}} \delta \mathcal{S} + \frac{\textrm{d}\boldsymbol{r}}{\textrm{d}\mathcal{S}}\delta \mathcal{S}, \end{equation}

where $\delta$ denotes perturbations. Dividing all terms in (2.7) by $\delta \mathcal {S}$, and assuming $\boldsymbol {v} = \partial \boldsymbol {u}/\partial \mathcal {S} \simeq \delta \boldsymbol {u}/\delta \mathcal {S}$, the following expression is obtained:

(2.8a,b)\begin{equation} \frac{\partial \boldsymbol{v}}{\partial t} =\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{u}}\boldsymbol{v} + \frac{\partial \boldsymbol{f}}{\partial \mathcal{S}} + \frac{\textrm{d}\boldsymbol{r}}{\textrm{d}\mathcal{S}},\quad \boldsymbol{v}(0, \mathcal{S})=\boldsymbol{v}_{0}. \end{equation}

Because (2.8a,b) evolves in time (i.e. $t^{(n)} < t^{(n+1)}$, where $n$ is the index of time intervals) from an initial condition $\boldsymbol {v}(0, \mathcal {S})=\boldsymbol {v}_{0}$, it is called the forward sensitivity function. Finally, the sensitivity of the solution to the design variables can be obtained by solving (2.8a,b), and then the gradient of the objective function can be computed via (2.6) for each design cycle. However, (2.8a,b) diverges for chaotic systems because of unstable modes in the dynamical behaviour of the system. In the next section, we illustrate the concept of ROM-constrained optimization to reduce computational costs, and address this stability issue for the forward sensitivity function.
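To make (2.8a,b) concrete, the following minimal sketch integrates the forward sensitivity equation with explicit Euler for the Lorenz system, treating $\sigma$ as a single design variable. The Lorenz system and all names here are illustrative stand-ins for the discretized FOM, and the discretization residual term is taken as zero.

```python
import numpy as np

# Toy FOM: the Lorenz system, with sigma playing the role of the single
# design variable S. This is an illustrative stand-in, not the paper's FOM.
def f(u, S):
    x, y, z = u
    rho, beta = 28.0, 8.0 / 3.0
    return np.array([S * (y - x), x * (rho - z) - y, x * y - beta * z])

def dfdu(u, S):  # Jacobian of f with respect to the state
    x, y, z = u
    rho, beta = 28.0, 8.0 / 3.0
    return np.array([[-S, S, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def dfdS(u, S):  # derivative of f with respect to the design variable
    x, y, z = u
    return np.array([y - x, 0.0, 0.0])

dt, n_steps, S = 1e-3, 20000, 10.0
u = np.array([1.0, 1.0, 1.0])
v = np.zeros(3)  # v = du/dS, initial condition v_0 = 0
for n in range(n_steps):
    v = v + dt * (dfdu(u, S) @ v + dfdS(u, S))  # forward sensitivity (2.8a,b)
    u = u + dt * f(u, S)                        # state equation (2.2a,b)
print(np.linalg.norm(v))  # grows roughly exponentially for chaotic systems
```

For this chaotic system, $\|\boldsymbol {v}\|$ grows without bound as the integration window lengthens, which is precisely why the time-averaged gradient (2.6) cannot be computed reliably from (2.8a,b) alone.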

3. ROM-constrained optimization

In ROM-constrained optimization, the governing equations are projected onto low-dimensional subspaces. Therefore, instead of solving massive PDEs for each optimization iteration, we only solve a compact form of the governing equations, as a set of ROMs, with fewer degrees of freedom and less complexity. Figure 1 shows a general approach for optimization problems. Conventionally, optimization problems are solved in physical space, $\mathscr {P}$. Consider the left-hand side object as the initial design in figure 1. To optimize the shape of this object, we solve the optimization problem subject to a Hamiltonian dynamical system defined as $\boldsymbol {\nabla }\boldsymbol {\cdot } \mathcal {H}(\boldsymbol {u}, \mathcal {S})$. In this example, $\boldsymbol {u}$ and $\mathcal {S}$ represent state and shape parameters, respectively. The right-hand side object in $\mathscr {P}$ is the optimized shape. According to the properties of a Hamiltonian system, the dynamical system for each object has a unique representation in Hilbert space $\mathscr {H}$. Usually, this representation is a set of generalized coordinates, $\boldsymbol {q}$, that define the dynamical behaviour of the corresponding object in $\mathscr {H}$. Therefore, in order to find the representation of the reference object, we project its corresponding state vector from $\mathscr {P}$ to $\mathscr {H}$ using a projection function $\mathbb {P}$. We assume that $\mathbb {F}$ is a surrogate model that represents the exact evolution of the state vector in $\mathscr {H}$, such that $\mathbb {F}: \boldsymbol {q}\times \mathbb {T} \rightarrow \boldsymbol {q}$ with $(\boldsymbol {q}, t) \mapsto \mathbb {F}(\boldsymbol {q}, t)$ for $t \in \mathbb {T}$. Now, if we solve this optimization problem subject to $\mathbb {F}$, we obtain the optimized set of generalized coordinates. Finally, we lift these optimized generalized coordinates back from $\mathscr {H}$ to $\mathscr {P}$ via $\mathbb {P}'$, recovering the same optimized object in $\mathscr {P}$. It is worth mentioning that $\mathbb {F}$ is unknown, and we must build a closure model with rank $r \in \mathbb {R}_{+}$ given by $\mathcal {F}: \mathbb {R}^{r}\rightarrow \mathbb {R}^{r}$, such that $\mathcal {F}:\boldsymbol {q}\times \mathbb {T} \rightarrow \boldsymbol {q}$, in the form of a ROM to approximate $\mathbb {F}$. The closer $\mathcal {F}$ is to $\mathbb {F}$, the closer the optimization results will be to the true optimal design in $\mathscr {P}$.

Figure 1. Optimization of the shape of an arbitrary object subject to a Hamiltonian system in physical space $\mathscr {P}$ versus optimization of its dynamics subject to a surrogate model in Hilbert space $\mathscr {H}$.
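As a minimal illustration of the projection $\mathbb {P}$ and lift $\mathbb {P}'$ sketched in figure 1, consider an orthonormal basis; the dimensions below are illustrative only.

```python
import numpy as np

# Project a state from physical space P to generalized coordinates q in
# Hilbert space H, then lift back. Exact when the state lies in span(Phi).
rng = np.random.default_rng(0)
n_u, r = 1000, 10
Phi = np.linalg.qr(rng.standard_normal((n_u, r)))[0]  # orthonormal ROB
ubar = rng.standard_normal(n_u)                       # reference state

u = ubar + Phi @ rng.standard_normal(r)  # a state in the affine subspace
q = Phi.T @ (u - ubar)                   # projection P
u_lifted = ubar + Phi @ q                # lift P'
assert np.allclose(u, u_lifted)
```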

4. Dimensionality reduction

The full state can be defined as a linear combination of vectors embedded in the matrix of trial basis functions (a combination of subspaces in $\mathscr {H}$ used as projection function $\mathbb {P}$). Therefore, the full state can be given by

(4.1)\begin{equation} \boldsymbol{u}(t, \mathcal{S}) = \bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}(t, \mathcal{S})+\boldsymbol{\varPhi}' \boldsymbol{q}'(t, \mathcal{S}), \end{equation}

where $\tilde {\boldsymbol {\varPhi }}$ and $\boldsymbol {\varPhi }'$ indicate the coarse-scale and fine-scale modes of the dynamical system. Furthermore, $\tilde {\boldsymbol {q}}$ and $\boldsymbol {q}'$ are generalized coordinates corresponding to $\tilde {\boldsymbol {\varPhi }}$ and $\boldsymbol {\varPhi }'$, respectively, and $\bar {\boldsymbol {u}} \in \mathbb {R}^{n_{u}}$ is the vector of reference states. Writing the full state in the form of (4.1) will help distinguish between different types of variables, dominant modes, and the dynamical response of the full state. For instance, in turbulent flow, the total velocity magnitude can be decomposed into different modes to identify vortex shedding patterns. Using this kind of decomposition, it is possible to recognize each mode's effect on the turbulent flow. Since energetic modes are embedded in $\tilde {\boldsymbol {\varPhi }}$, we can truncate (4.1) and approximate the full state by

(4.2)\begin{equation} \boldsymbol{u}(t, \mathcal{S})\approx \bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}(t, \mathcal{S}), \end{equation}

which is the reduced-order description of the full state $\boldsymbol {u}$ in a lower-dimensional subspace. Note that $\tilde {\boldsymbol {\varPhi }}$, as a projection function $\mathbb {P}$, can accurately approximate the full state if the truncated rank $r \in \mathbb {R}_{+}$ is sufficiently large (the rank is the number of modes embedded in $\tilde {\boldsymbol {\varPhi }}$) such that $\tilde {\boldsymbol {\varPhi }}$ can capture the majority of the dynamical system's energy. By substituting (4.2) into (2.2a,b), and constraining the nonlinear equations to be orthogonal to a set of test subspaces $\tilde {\boldsymbol {\varPsi }} \in \mathbb {R}^{n_{u} \times r}$, we can specify the ROM as

(4.3a,b)\begin{equation} \tilde{\boldsymbol{\varPsi}}^{\top}\left(\tilde{\boldsymbol{\varPhi}} \frac{\partial \tilde{\boldsymbol{q}}}{\partial t} - \boldsymbol{f}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S})\right) + \tilde{\boldsymbol{\varPsi}}^{\top} \boldsymbol{r}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S}) = 0,\quad \tilde{\boldsymbol{q}}(0, \mathcal{S})=\tilde{\boldsymbol{q}}_{0}, \end{equation}

where $\tilde {\boldsymbol {q}}_{0}$ is the initial condition of the ROM, and $\tilde {\boldsymbol {\varPsi }}$ is the test basis function, which is used to minimize the distance to the state of the FOM in some norm. Therefore, according to $\tilde {\boldsymbol {\varPsi }}^{\top } \boldsymbol {r}(\bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}}, t, \mathcal {S})\approx 0$, (4.3a,b) can be simplified to

(4.4a,b)\begin{equation} \tilde{\boldsymbol{\varPsi}}^{\top}\left(\tilde{\boldsymbol{\varPhi}} \frac{\partial \tilde{\boldsymbol{q}}}{\partial t} - \boldsymbol{f}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S})\right) = 0,\quad \tilde{\boldsymbol{q}}(0, \mathcal{S})=\tilde{\boldsymbol{q}}_{0}. \end{equation}

If $\tilde {\boldsymbol {\varPsi }}=\tilde {\boldsymbol {\varPhi }}$, then (4.4a,b) represents the Galerkin projection of the FOM (Carlberg, Barone & Antil Reference Carlberg, Barone and Antil2017). For $\tilde {\boldsymbol {\varPsi }}=({\partial \boldsymbol {r}}/{\partial \boldsymbol {u}})\tilde {\boldsymbol {\varPhi }}$, (4.4a,b) is the LSPG projection (Bui-Thanh et al. Reference Bui-Thanh, Willcox and Ghattas2008a,Reference Bui-Thanh, Willcox and Ghattasb; Carlberg, Bou-Mosleh & Farhat Reference Carlberg, Bou-Mosleh and Farhat2011; Carlberg et al. Reference Carlberg, Farhat, Cortial and Amsallem2013, Reference Carlberg, Barone and Antil2017). The distinguishing property of LSPG is that the projection remains optimal for dynamical systems with non-symmetric positive definite Jacobians (Zahr & Farhat Reference Zahr and Farhat2015). Furthermore, it has been shown that LSPG has the minimum-residual property (Zahr & Farhat Reference Zahr and Farhat2015; Carlberg et al. Reference Carlberg, Barone and Antil2017), which can be written as

(4.5)\begin{equation} \tilde{\boldsymbol{q}} = \underset{\boldsymbol{z} \in \mathbb{R}^{r}}{\text{argmin }} \left\Vert \boldsymbol{r}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \boldsymbol{z}, t, \mathcal{S}) \right\Vert_{2}^{2}. \end{equation}

Generally speaking, LSPG adds a term to the Galerkin projection in order to stabilize it numerically. This stabilization becomes essential when FOMs possess nonlinear behaviour in the design space $\mathscr {D}$. Consequently, in this study, the closure model for the ROM is built based on LSPG projection.
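In discrete form, the minimum-residual property (4.5) is typically enforced by a Gauss–Newton-type iteration at each time step. The following sketch shows one possible realization, assuming the caller supplies the FOM residual $\boldsymbol {r}$ and its Jacobian ${\partial \boldsymbol {r}}/{\partial \boldsymbol {u}}$ as callables; the names and convergence settings are illustrative.

```python
import numpy as np

def lspg_step(r, J, Phi, ubar, z0, n_iter=20, tol=1e-10):
    """Gauss-Newton iteration for z = argmin || r(ubar + Phi z) ||_2^2."""
    z = z0.copy()
    for _ in range(n_iter):
        u = ubar + Phi @ z
        Psi = J(u) @ Phi                 # test basis Psi = (dr/du) Phi
        res = r(u)
        # Linearized least-squares problem: min_dz || res + Psi dz ||_2
        dz = np.linalg.lstsq(Psi, -res, rcond=None)[0]
        z += dz
        if np.linalg.norm(dz) < tol:
            break
    return z
```

Each iteration evaluates the FOM residual only at the reconstructed state $\bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \boldsymbol {z}$, and the least-squares solve involves only $r$ unknowns.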

5. Sensitivity analysis

The state sensitivity can be obtained by taking the derivative of (4.2) with respect to $\mathcal {S}$ such that

(5.1)\begin{equation} \boldsymbol{v}(t, \mathcal{S})\approx \frac{\partial \bar{\boldsymbol{u}}}{\partial \mathcal{S}} + \frac{\partial \tilde{\boldsymbol{\varPhi}}}{\partial \mathcal{S}}\tilde{\boldsymbol{q}}(t, \mathcal{S}) + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{h}}(t, \mathcal{S}), \end{equation}

where ${\partial \bar {\boldsymbol {u}}}/{\partial \mathcal {S}} \in \mathbb {R}^{n_{u}}$ is the vector of reference sensitivities. According to (5.1), the solution of the sensitivity $\tilde {\boldsymbol {h}}={\partial \tilde {\boldsymbol {q}}}/{\partial \mathcal {S}} \in \mathbb {R}^{r}$ can be approximated by the sensitivities of the prominent modes embedded in $\tilde {\boldsymbol {\varPhi }}$. Moreover, ${\partial \tilde {\boldsymbol {\varPhi }}}/{\partial \mathcal {S}}$ may be negligible if the perturbation of design parameters does not have a notable influence on the state. Otherwise, ${\partial \tilde {\boldsymbol {\varPhi }}}/{\partial \mathcal {S}}$ will be a required term in the optimization. To develop the sensitivity function, we assume that the ROM takes the same form as (2.2a,b). Hence, a set of ODEs in the form of a ROM can be written as

(5.2a,b)\begin{equation} \frac{\partial \tilde{\boldsymbol{q}}}{\partial t} =\mathcal{F}( \tilde{\boldsymbol{q}}, t, \mathcal{S}) + \mathcal{R}(\tilde{\boldsymbol{q}}, t, \mathcal{S}),\quad \tilde{\boldsymbol{q}}(0, \mathcal{S})= \tilde{\boldsymbol{q}}_{0}, \end{equation}

where $\mathcal {R}:\mathbb {R}^{r} \rightarrow \mathbb {R}^{r}$ and $\mathcal {F}:\mathbb {R}^{r} \rightarrow \mathbb {R}^{r}$ with $\tilde {\boldsymbol {q}} \mapsto \mathcal {F}(\tilde {\boldsymbol {q}})$ are the residual and flux of the closure model, respectively. In the ideal case, the residuals of the FOM and the ROM are zero. However, approximating the state of the FOM via dimensionality reduction (i.e. $\boldsymbol {u} \approx \bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}}$) and frequent projection via $\mathbb {P}$ will introduce non-zero values to the residual $\mathcal {R}$. Petrov–Galerkin projection minimizes $\mathcal {R}(\tilde {\boldsymbol {q}}, t, \mathcal {S})$ by projecting it onto $\boldsymbol {r}(\bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}}, t, \mathcal {S})$ in $\mathscr {P}$ by means of $\mathbb {P} = \tilde {\boldsymbol {\varPsi }}$. Therefore, we can write the Petrov–Galerkin projection of the residual as

(5.3)\begin{equation} \mathcal{R}(\tilde{\boldsymbol{q}}, t, \mathcal{S}) =\tilde{\boldsymbol{\varPsi}}^{\top} \boldsymbol{r}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S}). \end{equation}

Substituting (2.2a,b) into (5.3) yields the same equation as (4.4a,b). Note that this equation in discrete form corresponds to LSPG. Similar to (2.4), to satisfy the constraints of the optimization problem, the derivative of the residual should be set to zero. By assuming that the rank of the ROM is sufficient, we can derive the sensitivity function from (4.4a,b) as

(5.4)\begin{align} \frac{\textrm{d}\mathcal{R}}{\textrm{d}\mathcal{S}} &= \frac{\partial \tilde{\boldsymbol{\varPsi}}^{\top}}{\partial \mathcal{S}} \left(\frac{\partial(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}})}{\partial t}-\boldsymbol{f}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S})\right) \nonumber\\ &\quad +\tilde{\boldsymbol{\varPsi}}^{\top} \left(\frac{\partial \tilde{\boldsymbol{\varPhi}}}{\partial \mathcal{S}} \frac{\partial \tilde{\boldsymbol{q}}}{\partial t} + \tilde{\boldsymbol{\varPhi}} \frac{\partial \tilde{\boldsymbol{h}}}{\partial t} -\frac{\partial \boldsymbol{f}}{\partial \bar{\boldsymbol{u}}} \frac{\partial \bar{\boldsymbol{u}}}{\partial \mathcal{S}} -\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{u}} \frac{\partial \tilde{\boldsymbol{\varPhi}}}{\partial \mathcal{S}} \tilde{\boldsymbol{q}} -\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{u}} \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{h}} -\frac{\partial \boldsymbol{f}}{\partial \mathcal{S}}\right). \end{align}

Also, the vector of reference sensitivity is defined as $\bar {\boldsymbol {v}}= {\partial \bar {\boldsymbol {u}}}/{\partial \mathcal {S}}$. By leveraging the continuous representation of LSPG, we can define the test basis function as $\tilde {\boldsymbol {\varPsi }}=(\partial \boldsymbol {r}/\partial \boldsymbol {u})\tilde {\boldsymbol {\varPhi }}$, and then substitute it into (5.4) to obtain the following equation:

(5.5)\begin{align} \frac{\textrm{d}\mathcal{R}}{\textrm{d}\mathcal{S}} &= \left(\frac{\partial \tilde{\boldsymbol{\varPhi}}^{\top}}{\partial \mathcal{S}} \frac{\partial \boldsymbol{r}}{\partial \boldsymbol{u}}^{\top} +\boldsymbol{\varPhi}^{\top}\frac{\partial}{\partial \mathcal{S}} \frac{\partial \boldsymbol{r}}{\partial \boldsymbol{u}}^{\top}\right)\left(\tilde{\boldsymbol{\varPhi}} \frac{\partial \tilde{\boldsymbol{q}}}{\partial t}-\boldsymbol{f} (\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S})\right) \nonumber\\ &\quad +\tilde{\boldsymbol{\varPsi}}^{\top} \left(\frac{\partial \tilde{\boldsymbol{\varPhi}}}{\partial \mathcal{S}} \frac{\partial \tilde{\boldsymbol{q}}}{\partial t} + \tilde{\boldsymbol{\varPhi}} \frac{\partial \tilde{\boldsymbol{h}}}{\partial t} -\frac{\partial \boldsymbol{f}}{\partial \bar{\boldsymbol{u}}} \bar{\boldsymbol{v}} -\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{u}} \frac{\partial \tilde{\boldsymbol{\varPhi}}}{\partial \mathcal{S}} \tilde{\boldsymbol{q}} -\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{u}} \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{h}} -\frac{\partial \boldsymbol{f}}{\partial \mathcal{S}}\right). \end{align}

The second-order derivative of the residual, $({\partial }/{\partial \mathcal {S}})({\partial \boldsymbol {r}}/{\partial \boldsymbol {u}})^{\top }$, is usually not available or is expensive to compute. Furthermore, ${\partial \tilde {\boldsymbol {\varPhi }}}/{\partial \mathcal {S}}$ cannot be determined from the trial basis function. If the FOM is not very sensitive to the design variables, we can neglect it by setting ${\partial \tilde {\boldsymbol {\varPhi }}}/{\partial \mathcal {S}} = 0$. However, this is not acceptable in most cases. To compute ${\textrm {d}\mathcal {R}}/{\textrm {d}\mathcal {S}}$ efficiently, we need to remove unknown (or expensive) terms from (5.5). To this end, we rewrite (5.4) with reference to (5.2a,b) as

(5.6)\begin{equation} \frac{\textrm{d}\mathcal{R}}{\textrm{d}\mathcal{S}} = \frac{\partial \mathcal{R}}{\partial \tilde{\boldsymbol{q}}} \tilde{\boldsymbol{h}} + \frac{\partial \mathcal{R}}{\partial \mathcal{S}} + \frac{\partial \mathcal{R}}{\partial \varOmega}, \end{equation}

where ${\partial \mathcal {R}}/{\partial \varOmega } \in \mathbb {R}^{r}$ is the derivative of the residual with respect to a source term. The derivatives in (5.6) have the following definitions:

(5.7)\begin{equation} \left.\begin{gathered} \frac{\partial \mathcal{R}}{\partial \tilde{\boldsymbol{q}}}=\frac{\partial}{\partial t} - \frac{\partial \mathcal{F}}{\partial \tilde{\boldsymbol{q}}}, \\ \frac{\partial \mathcal{R}}{\partial \mathcal{S}}={-}\frac{\partial \mathcal{F}}{\partial \mathcal{S}},\\ \frac{\partial \mathcal{R}}{\partial \varOmega}={-}\frac{\partial \mathcal{F}}{\partial \bar{\boldsymbol{u}}} \bar{\boldsymbol{v}}. \end{gathered}\right\} \end{equation}

The difference between (5.5) and (5.6) arises from the fact that (5.5) is derived from a closure model that only includes the state information in its hierarchical structures. Here we assume that $\tilde {\boldsymbol {\varPhi }}$ includes the state information. However, (5.6) assumes that the closure model uses other hierarchical structures and information to approximate the sensitivity functions. In other words, $\tilde {\boldsymbol {h}}$ is approximated by minimizing the residual of the solution of the sensitivity in some norm (which is usually the $\ell ^2$ norm),

(5.8)\begin{equation} \tilde{\boldsymbol{h}} = \underset{\boldsymbol{z} \in \mathbb{R}^{r}}{\text{argmin }} \left\Vert \frac{\partial(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}})}{\partial \mathcal{S}} - \bar{\boldsymbol{v}} - \tilde{\boldsymbol{\varPhi}} \boldsymbol{z}\right\Vert_{2}^{2}, \end{equation}

where $\boldsymbol {v} \approx {\partial (\bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}})}/{\partial \mathcal {S}}= \bar {\boldsymbol {v}} + \tilde {\boldsymbol {\varPhi }}\tilde {\boldsymbol {h}}$. It is worth pointing out that computing $\tilde {\boldsymbol {h}}$ requires embedding more information in $\tilde {\boldsymbol {\varPhi }}$, as shown in (5.8). Further details will be provided in § 7 to show how to revise $\tilde {\boldsymbol {\varPhi }}$ for sensitivity analysis. To solve (5.6), we discretize it using the backward differentiation formula (BDF), a linear multistep scheme. Therefore, a discrete form of (5.6) can be represented as

(5.9)\begin{equation} \frac{\zeta_0}{{\rm \Delta} t} \tilde{\boldsymbol{h}}^{(n)} + \left(\sum_{\textit{j}=1}^{k} \frac{\zeta_j}{{\rm \Delta} t} \tilde{\boldsymbol{h}}^{(n-j)}\right) -\beta_0 \frac{\partial \mathcal{F}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n)}} \tilde{\boldsymbol{h}}^{(n)}+ \frac{\partial \mathcal{R}^{(n)}}{\partial \mathcal{S}} + \frac{\partial \mathcal{R}^{(n)}}{\partial \varOmega^{(n)}} =0, \end{equation}

such that

(5.10)\begin{equation} \lim_{\delta \mathcal{S} \rightarrow 0} \lim_{r \rightarrow n_{u}} \frac{\partial \mathcal{R}}{\partial \mathcal{S}} = 0. \end{equation}

Moreover, $\zeta$ and $\beta$ are coefficients of the temporal scheme. Rearranging (5.9) yields

(5.11)\begin{equation} \left(\frac{\zeta_0}{{\rm \Delta} t}-\beta_0 \frac{\partial \mathcal{F}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n)}}\right) \tilde{\boldsymbol{h}}^{(n)} + \sum_{j=1}^{k} \frac{\zeta_j}{{\rm \Delta} t} \tilde{\boldsymbol{h}}^{(n-j)} + \frac{\partial \mathcal{R}^{(n)}}{\partial \mathcal{S}} + \frac{\partial \mathcal{R}^{(n)}}{\partial \varOmega^{(n)}}= 0. \end{equation}

Looking at (5.11), the first term is the derivative of the residual with respect to the vector of states at the $n$th time step, $\partial \mathcal {R}^{(n)}/\partial \tilde {\boldsymbol {q}}^{(n)}$. Additionally, the second term in this equation is the derivative of the residual with respect to the vector of states at previous time steps, $\forall p \ne n : \{\partial \mathcal {R}^{(n)}/\partial \tilde {\boldsymbol {q}}^{(p)} = \zeta _{j}/{\rm \Delta} t\}$. Therefore, (5.11) can be written as

(5.12)\begin{equation} \frac{\partial \mathcal{R}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n)}} \tilde{\boldsymbol{h}}^{(n)} + \sum_{\textit{j}=1}^{k} \left(\frac{\partial \mathcal{R}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n-j)}}\right) \tilde{\boldsymbol{h}}^{(n-j)} + \frac{\partial \mathcal{R}^{(n)}}{\partial \mathcal{S}} + \frac{\partial \mathcal{R}^{(n)}}{\partial \varOmega^{(n)}} =0. \end{equation}

Expressing all derivatives in terms of the low-dimensional residual allows us to evaluate them using quantities from the high-dimensional space. Therefore, we can obtain these derivatives directly from the residual of the FOM. Using Petrov–Galerkin projection, the relationship between the ROM and the FOM can be expressed as $\mathcal {R}^{(n)} = \tilde {\boldsymbol {\varPsi }}^{(n) \top } \boldsymbol {r}^{(n)}$ at discrete time steps. Hence, the relation between derivatives of the residual can be represented as follows:

(5.13)\begin{equation} \left.\begin{gathered} \frac{\partial \mathcal{R}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n)}} = \left(\frac{\partial \tilde{\boldsymbol{\varPsi}}^{(n) \top}}{\partial \boldsymbol{u}^{(n)}}\boldsymbol{r}^{(n)}+ \tilde{\boldsymbol{\varPsi}}^{(n) \top}\frac{\partial \boldsymbol{r}^{(n)}}{\partial \boldsymbol{u}^{(n)}} \right)\frac{\partial \boldsymbol{u}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n)}},\\ \frac{\partial \mathcal{R}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(p)}} = \left(\frac{\partial \tilde{\boldsymbol{\varPsi}}^{(n) \top}}{\partial \boldsymbol{u}^{(p)}} \boldsymbol{r}^{(n)}+\tilde{\boldsymbol{\varPsi}}^{(n) \top} \frac{\partial \boldsymbol{r}^{(n)}}{\partial \boldsymbol{u}^{(p)}}\right) \frac{\partial \boldsymbol{u}^{(p)}}{\partial \tilde{\boldsymbol{q}}^{(p)}}, \quad n \ne p,\\ \frac{\partial \mathcal{R}^{(n)}}{\partial \mathcal{S}} = \left(\frac{\partial \tilde{\boldsymbol{\varPsi}}^{(n) \top}}{\partial \mathcal{S}}\boldsymbol{r}^{(n)} +\tilde{\boldsymbol{\varPsi}}^{(n) \top}\frac{\partial \boldsymbol{r}^{(n)}}{\partial \mathcal{S}}\right). \end{gathered}\right\} \end{equation}

Furthermore, the derivative of the residual with respect to the source term is also obtained by

(5.14)\begin{equation} \frac{\partial \mathcal{R}^{(n)}}{\partial \varOmega^{(n)}} = \tilde{\boldsymbol{\varPsi}}^{(n)\top}\sum_{\textit{j}=0}^{k} \frac{\partial \boldsymbol{r}^{(n)}}{\partial \boldsymbol{u}^{(n-j)}}\bar{\boldsymbol{v}}. \end{equation}

Applying LSPG to obtain a ROM with enough POD modes in $\mathscr {H}$ drives the residual of the FOM close to zero, such that $\forall \ n: \{\boldsymbol {r}^{(n)} :\tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}} \rightarrow 0 \}$. As a result, the first term in each derivative of (5.13) becomes negligible. On the other hand, $\partial \boldsymbol {u}^{(n)}/\partial \tilde {\boldsymbol {q}}^{(n)}=\tilde {\boldsymbol {\varPhi }}$. Recalling $\tilde {\boldsymbol {\varPsi }}=({\partial \boldsymbol {r}}/{\partial \boldsymbol {u}})\tilde {\boldsymbol {\varPhi }}$, we can simplify (5.13) as follows:

(5.15)\begin{equation} \left.\begin{gathered} \frac{\partial \mathcal{R}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(n)}} = \tilde{\boldsymbol{\varPsi}}^{(n) \top} \tilde{\boldsymbol{\varPsi}}^{(n)},\\ \frac{\partial \mathcal{R}^{(n)}}{\partial \tilde{\boldsymbol{q}}^{(p)}} = \frac{\zeta_{p}}{{\rm \Delta} t}\tilde{\boldsymbol{\varPsi}}^{(n) \top} \tilde{\boldsymbol{\varPhi}}, \quad n \ne p,\\ \frac{\partial \mathcal{R}^{(n)}}{\partial \mathcal{S}} = \tilde{\boldsymbol{\varPsi}}^{(n) \top} \frac{\partial \boldsymbol{r}^{(n)}}{\partial \mathcal{S}}. \end{gathered}\right\}\end{equation}

In the end, by substituting (5.15) into (5.12), the discretized equation can be given by

(5.16)\begin{align} & \tilde{\boldsymbol{\varPsi}}^{(n) \top} \tilde{\boldsymbol{\varPsi}}^{(n)} \tilde{\boldsymbol{h}}^{(n)} + \sum_{\textit{j}=1}^{k} \frac{\zeta_j}{{\rm \Delta} t} \tilde{\boldsymbol{\varPsi}}^{(n) \top} \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{h}}^{(n-j)} + \tilde{\boldsymbol{\varPsi}}^{(n) \top} \frac{\partial \boldsymbol{r}^{(n)}}{\partial \mathcal{S}} \nonumber\\ &\quad + \tilde{\boldsymbol{\varPsi}}^{(n)\top}\left(\sum_{\textit{j}=1}^{k}\zeta_{j}\mathcal{I} + \frac{\partial \boldsymbol{r}^{(n)}}{\partial \boldsymbol{u}^{(n)}}\right)\bar{\boldsymbol{v}} =0, \end{align}

where $\mathcal {I}$ is the identity matrix. From (5.16), all terms can now be obtained from derivatives of the FOM. Moreover, (5.16) lets us advance the solution of the sensitivity in time with large time steps, since this equation is solved implicitly. This capability has some significant advantages for optimization problems: (1) lower storage requirements for a group of FOMs; (2) solving the ROM via an implicit scheme promotes numerical stability (although poor datasets and configurations can still lead to spurious results, so proper configurations should always be used); (3) LSPG performs efficiently for intermediate time steps (Carlberg et al. Reference Carlberg, Barone and Antil2017). Ultimately, the minimization problem of the FOM in (2.4) can be transformed into a new minimization problem for a ROM in lower dimensions as

(5.17)\begin{equation} \left.\begin{aligned} \underset{\tilde{\boldsymbol{q}} \in \mathbb{R}^{r}, \mathcal{S} \in \mathscr{D}}{\text{minimize}} & \quad \bar{\mathcal{J}}=\frac{1}{T} \int_{0}^{\textrm{T}} \mathcal{J}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S})\,\textrm{d} t,\\ \text{subject to} & \\ & \quad \mathcal{R}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S}) = 0,\\ & \quad \frac{\textrm{d}\mathcal{R}}{\textrm{d}\mathcal{S}}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S}) = 0,\\ & \quad \underset{\boldsymbol{z} \in \mathbb{R}^{r}}{\text{argmin }} \left\Vert \frac{\partial(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}})}{\partial \mathcal{S}} - \bar{\boldsymbol{v}} - \tilde{\boldsymbol{\varPhi}} \boldsymbol{z}\right\Vert_{2}^{2},\\ & \quad \boldsymbol{C}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, \mathcal{S}) \leq 0, \end{aligned}\right\} \end{equation}

where the optimizer uses the ROM to search in the design space $\mathscr {D}$. A distinguishing feature of this work is that the optimization in (5.17) is solved in $\mathscr {H}$, as opposed to conventional optimization, which is solved in $\mathscr {P}$. According to (5.17), we need to obtain manifolds in $\mathscr {H}$ to evolve the solution with time. These manifolds are responsible for the dynamical behaviour of the FOM in lower dimensions. Therefore, in the next section, we investigate ways to find such manifolds.
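For concreteness, a single implicit step of (5.16) reduces to a small $r \times r$ linear solve for $\tilde {\boldsymbol {h}}^{(n)}$. The sketch below assumes the surrounding solver supplies the bases, BDF coefficients and FOM derivatives; all inputs are placeholders, and the dense Jacobian is used only for brevity.

```python
import numpy as np

def sensitivity_step(Psi, Phi, h_hist, zeta, dt, drdS, drdu, vbar):
    """One implicit step of (5.16): solve (Psi^T Psi) h_n = -rhs.

    Psi, Phi : test/trial bases, shape (n_u, r)
    h_hist   : [h^{(n-1)}, ..., h^{(n-k)}], reduced sensitivities
    zeta     : BDF coefficients [zeta_0, ..., zeta_k]
    drdS     : dr^{(n)}/dS, shape (n_u,) for a scalar design variable
    drdu     : dr^{(n)}/du^{(n)}, shape (n_u, n_u)
    vbar     : reference sensitivity, shape (n_u,)
    """
    rhs = Psi.T @ drdS
    PsiTPhi = Psi.T @ Phi
    for j, h_prev in enumerate(h_hist, start=1):   # history terms in (5.16)
        rhs += (zeta[j] / dt) * (PsiTPhi @ h_prev)
    rhs += Psi.T @ (sum(zeta[1:]) * vbar + drdu @ vbar)  # source term (5.14)
    return np.linalg.solve(Psi.T @ Psi, -rhs)
```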

6. Physics-constrained ROMs

In this study, to build a closure model in lower dimensions, we explicitly derive the overall structures of manifolds in Hilbert space from the physical governing equations, and then shape them via a training dataset. This strategy is called a physics-constrained data-driven approach, and can result in accurate predictions of the closure model with less training data. Hence, we will leverage this approach to address downsides of conventional approaches, making optimization more robust with reduced requirements for the training dataset. To clarify what we seek when building a closure model, some essential points in this approach will be explained. If a closure model is represented as $\mathcal {F}$ with a solution $\tilde {\boldsymbol {q}}$, then an ideal model should guarantee four criteria, represented as follows:

(i) accurate prediction of the solution, $\tilde {\boldsymbol {q}} \mapsto \boldsymbol {u}$;

(ii) accurate prediction of the dynamical response, $\mathcal {F} \mapsto \boldsymbol {f}$;

(iii) strong numerical stability, $\|\mathcal {F}\| \leq \epsilon$, where $\epsilon$ is a bounded value;

(iv) representation of the derivatives in lower dimensions, $\boldsymbol {\nabla }\boldsymbol {\cdot }\mathcal {F} \cong \boldsymbol {\nabla }\boldsymbol {\cdot }\boldsymbol {f}$.

The first and second criteria state that the dimensionality reduction should not degrade the prediction accuracy of the closure model over a finite time interval $\mathbb {T}$. This accuracy can be improved by adding an adequate number of subspaces (i.e. POD modes) to the ROB. Regarding the third criterion, a rich training dataset can improve prediction accuracy in the closure model, but does not guarantee stability. The Galerkin projection is represented as a continuous form of minimization in the closure model (Carlberg et al. Reference Carlberg, Barone and Antil2017). This projection becomes highly unstable for nonlinear dynamical systems (i.e. in high-Reynolds-number flows), because small flow structures play an essential role in the subsequent flow evolution (Nair et al. Reference Nair, Yeh, Kaiser, Noack, Brunton and Taira2019). To overcome this issue, other implicit projection methods, such as LSPG, can be employed to build a stable closure model (Carlberg et al. Reference Carlberg, Bou-Mosleh and Farhat2011, Reference Carlberg, Barone and Antil2017; Parish, Wentland & Duraisamy Reference Parish, Wentland and Duraisamy2020). Regarding the fourth criterion, derivatives of the closure model should closely approximate those of the FOM in lower dimensions. Most ROMs (linear, autoencoder or data-driven closure models) are only designed to approximate the state vector over $t \in \mathbb {T}$. In other words, there is no general model that represents the Jacobian of the ROM, so conventional ROMs may not be suitable for sensitivity analysis. Besides accurately predicting the state vector, ROMs should project the most prominent features of the Jacobian and related derivatives of the FOM (i.e. ${\partial \boldsymbol {r}^{(n)}}/{\partial \boldsymbol {u}^{(n)}}$ and ${\partial \boldsymbol {r}^{(n)}}/{\partial \mathcal {S}}$) into lower dimensions.

In this study, the proposed algorithm relies on dimensionality reduction and dynamic prediction of a large-scale system using a ROM. Having a well-defined closure model for the ROM, sensitivity analysis of the FOM can then also be projected into lower dimensions. Figure 2 depicts the schematic of the proposed approach to build a ROM for approximating the sensitivity solution. The state vector $\boldsymbol {u}$ from higher dimensions, $\mathscr {P}$, is projected into lower dimensions, $\mathscr {H}_{1}$, via a projection function $\mathbb {P}$, where $\mathscr {H}_{1}$ represents a space embedding the dynamical system in (5.2a,b). In the next step, the closure model is built for the dynamical system by solving $\mathcal {R}(\bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}}, t, \mathcal {S}) = 0$ for $\tilde {\boldsymbol {q}}$. These generalized coordinates, $\tilde {\boldsymbol {q}}$, evolve in $\mathscr {H}_{1}$ with time, and will be used in the next steps to build the Jacobian and other related matrices. If the model architecture and training procedure are set up properly, the generalized coordinates will typically appear as a set of correlations. Subsequently, the steady-state forward sensitivity function is solved for $\boldsymbol {v}_{ss}$, which is in physical space $\mathscr {P}$, over a limited set of time intervals. The collected sensitivities are then projected from $\mathscr {P}$ into $\mathscr {H}_{2}$ via a projection function $\mathbb {P}'$. Note that $\mathscr {H}_{2}$ is a space adjusted to the sensitivity function in (5.16), and the evolution of the sensitivity solution lies in this space. In the next step, $({\textrm {d} \mathcal {R}}/{\textrm {d} \mathcal {S}})(\bar {\boldsymbol {u}} + \tilde {\boldsymbol {\varPhi }} \tilde {\boldsymbol {q}}, t, \mathcal {S}) = 0$ is solved for $\tilde {\boldsymbol {h}}$, which is the sensitivity solution in $\mathscr {H}_{2}$. Note that $\tilde {\boldsymbol {q}}$ and $\tilde {\boldsymbol {h}}$ could have different dimensions. Finally, the sensitivity solutions in $\mathscr {H}_{2}$ can be lifted back from lower to higher dimensions through $\mathbb {P}''$ to reconstruct the solution of the sensitivity in $\mathscr {P}$. An important note is that the ROM should be able to reconstruct the full state vector over $\mathbb {T}$. Also, the proposed model receives only a limited set of the sensitivities, sampled sporadically in time, and predicts the full solution of the sensitivity for all time intervals in $\mathbb {T}$. In order to build a ROM, we need to use a portion of the data obtained from the FOM. The data collected from the FOM are usually vectors of the state, represented as a set of snapshots. It is worth mentioning that the time period considered for each simulation, $T \in \mathbb {T}$, is the range over which we capture snapshots and use them to build the ROB. Therefore, this period needs to be long enough to capture most of the evolutionary behaviour of the dynamical system. Otherwise, some portion of the dynamics will be missed in the ROM, leading to a spurious model. Hence, in this study, we selected $T$ to be long enough, with adequate time intervals, such that the collection of snapshots represents most of the flow features in this period.

Figure 2. Schematic of the physics-constrained data-driven approach. The physics-constrained part uses information from the governing equations to apply strict constraints on the shape of manifolds in Hilbert space. Therefore, the overall structures of these manifolds are known, but their scale and details are unknown. The data-driven part helps fill this gap to find the shapes of these manifolds in Hilbert space.
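In outline, the workflow of figure 2 can be summarized as follows; every callable is a placeholder for a stage described above, so this is a schematic of the steps, not the implementation.

```python
def rom_sensitivity_pipeline(project_state,      # P : build Phi from snapshots
                             solve_closure,      # R(ubar + Phi q) = 0 for q
                             sample_steady_sens, # sporadic v_ss samples in P
                             project_sens,       # P': build Phi_v from v_ss
                             solve_sens,         # dR/dS = 0 for h in H2
                             lift):              # P'': H2 back to P
    Phi = project_state()
    q = solve_closure(Phi)            # generalized coordinates in H1
    v_ss = sample_steady_sens()       # limited steady-state sensitivities
    Phi_v = project_sens(v_ss)
    h = solve_sens(Phi, Phi_v, q)     # sensitivity solution in H2
    return lift(Phi_v, h)             # reconstructed sensitivity in P
```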

7. ROB

Model reduction is usually based on the construction of low-dimensional subspaces in $\mathscr {H}$. Subsequently, a transformation function is applied to project the solution onto these low-dimensional subspaces. For the compressible flow problems considered here, we build snapshots using the conserved variables $\boldsymbol {u} = [ \rho, \rho \textit {u}_{i}, \rho e ]$, where $\rho$ is density, $\textit {u}_{i}$ is the velocity component in the $i$th direction and $e$ is energy. Additionally, other quantities in the form of vectors can enrich the trial basis function in $\mathscr {H}$. These quantities can be solutions of the sensitivity, residuals or fluxes (Zahr & Farhat Reference Zahr and Farhat2015). To define the trial basis function, we first build a ROB, since the trial basis function is usually assembled from a set of ROBs. As mentioned previously, a well-trained ROB helps produce a robust trial basis function that can work in off-design conditions, ensuring the performance of the ROM under perturbations in $\mathscr {D}$. We may execute several FOMs for different sets of design parameters, $\mathcal {S}_{i}=\{\mathcal {S}_{1}, \mathcal {S}_{2},\ldots, \mathcal {S}_{n_{d}} \} \subset \mathscr {D}$, where $n_{d}$ indicates the total number of FOMs. The snapshots for each FOM are then assembled according to

(7.1)\begin{equation} \boldsymbol{U}(\mathcal{S}_{i}) = \begin{bmatrix} | & | & & |\\ \boldsymbol{u}^{(1)}(\mathcal{S}_{i}) - \bar{\boldsymbol{u}} & \boldsymbol{u}^{(2)}(\mathcal{S}_{i}) - \bar{\boldsymbol{u}} & \cdots & \boldsymbol{u}^{(m_{u})}(\mathcal{S}_{i}) - \bar{\boldsymbol{u}}\\ | & | & & | \end{bmatrix} \in \mathbb{R}^{n_{u}\times m_{u}}, \end{equation}

where $m_{u}$ denotes the last snapshot. After building matrices using (7.1), we then collect them in a unified matrix

(7.2)\begin{equation} \begin{array}{llll} \boldsymbol{X}_{state}= [\boldsymbol{U}(\mathcal{S}_{1}) & \boldsymbol{U}(\mathcal{S}_{2}) & \cdots & \boldsymbol{U}(\mathcal{S}_{n_{d}})],\end{array} \end{equation}

where $\boldsymbol {X}_{state}$ is the dataset that will be used to train the ROB. Subsequently, each ROB is built via a set of POD modes

(7.3a,b)\begin{equation} \tilde{\boldsymbol{\varPhi}}_{u} = \boldsymbol{POD}(\boldsymbol{X}_{state}), \quad \text{and} \quad \boldsymbol{U}(\mathcal{S}_{i}) = \tilde{\boldsymbol{\varPhi}}_{u} \mathcal{Q}_{u}(\mathcal{S}_{i}),\quad i=1, 2,\ldots, n_{d}, \end{equation}

where $\tilde {\boldsymbol {\varPhi }}_{u} \in \mathbb {R}^{n_{u}\times r_{state}}$, and $r_{state}$ denotes the rank of the ROB for the state vector. Moreover, $\boldsymbol {POD}$ denotes a function that returns the orthogonal modes (i.e. POD modes). This function can be implemented using snapshot-POD or singular value decomposition. Additionally, $\mathcal {Q}_{u}(\mathcal {S}_{i})$ denotes the generalized coordinates for each set of design parameters, expressed as

(7.4)\begin{equation} \mathcal{Q}_{u}(\mathcal{S}_{i}) = \begin{bmatrix} | & | & & |\\ \tilde{\boldsymbol{q}}_{i}^{(1)} & \tilde{\boldsymbol{q}}_{i}^{(2)} & \cdots & \tilde{\boldsymbol{q}}_{i}^{(m_{u})} \\ | & | & & | \end{bmatrix} \in \mathbb{R}^{r_{state}\times m_{u}}. \end{equation}
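A minimal realization of (7.1)–(7.4), with the $\boldsymbol {POD}$ function implemented via the singular value decomposition, might look as follows; the snapshot arrays are assumed inputs.

```python
import numpy as np

def pod(X, r):
    """Leading r left singular vectors of the snapshot matrix X."""
    Phi, _, _ = np.linalg.svd(X, full_matrices=False)
    return Phi[:, :r]

def build_state_rob(snapshots, ubar, r_state):
    """snapshots[i]: (n_u, m_u) array of states for design S_i."""
    X_state = np.hstack([U - ubar[:, None] for U in snapshots])  # (7.2)
    Phi_u = pod(X_state, r_state)                                # (7.3a)
    # Generalized coordinates (7.4) by orthogonal projection
    Q = [Phi_u.T @ (U - ubar[:, None]) for U in snapshots]
    return Phi_u, Q
```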

To improve the trial basis function, it is also possible to train another ROB with the sensitivity dataset. The idea is that the sensitivity function is solved for different sets of design variables, $\mathcal {S}_{i} \subset [\mathcal {S}_{1}, \mathcal {S}_{2},\ldots, \mathcal {S}_{n_{d}}]$. Since the sensitivity function is usually unstable, we solve the steady-state sensitivity function for samples sporadically selected over different time intervals. Although steady-state solutions of the sensitivity, $\boldsymbol {v}_{ss} \in \mathbb {R}^{n_{u}}$, differ from unsteady ones, $\boldsymbol {v} \in \mathbb {R}^{n_{u}}$, they provide useful information about relevant structures in the dynamical behaviour of the forward sensitivity function. Therefore, we can use these results to approximate manifolds in Hilbert space, so that this function can be solved in time. The snapshots of the solution of the sensitivity for all samples are defined as

(7.5)\begin{equation} \boldsymbol{V}_{ss}(\mathcal{S}_{i}) = \begin{bmatrix} | & | & & |\\ \boldsymbol{v}_{ss}^{(\mathbb{N}(1))}(\mathcal{S}_{i}) - \bar{\boldsymbol{v}}_{ss} & \boldsymbol{v}_{ss}^{(\mathbb{N}(2))}(\mathcal{S}_{i}) - \bar{\boldsymbol{v}}_{ss} & \cdots & \boldsymbol{v}_{ss}^{(\mathbb{N}(m_{p}))}(\mathcal{S}_{i}) - \bar{\boldsymbol{v}}_{ss}\\ | & | & & | \end{bmatrix} \in \mathbb{R}^{n_{u}\times m_{p}}, \end{equation}

where $\mathbb {N}(n')=\{\xi ^{(1)}, \ldots, \xi ^{(m_{p})}\}$ is the list of sampling time intervals, $m_{p}$ is the last sample for each case, and $n'$ is the sampling-time index. Therefore, a dataset for all samples is represented as

(7.6)\begin{equation} \begin{array}{llll}\boldsymbol{X}_{sens}= [\boldsymbol{V}_{ss}(\mathcal{S}_{1}) & \boldsymbol{V}_{ss}(\mathcal{S}_{2}) & \cdots & \boldsymbol{V}_{ss}(\mathcal{S}_{n_{d}})]\end{array}\end{equation}

and, subsequently, the ROB for the solution of the sensitivity can be given by

(7.7a,b)\begin{equation} \tilde{\boldsymbol{\varPhi}}_{v} = \boldsymbol{POD}(\boldsymbol{X}_{sens}), \quad \text{and} \quad \boldsymbol{V}_{ss}(\mathcal{S}_{i}) = \tilde{\boldsymbol{\varPhi}}_{v} \mathcal{Q}_{v}(\mathcal{S}_{i}), \quad i=1,2,\ldots, n_{d}. \end{equation}

Note that the ROB of the solution of the sensitivity could have a different rank, $r_{sens} \in \mathbb {R}_{+}$, such that $\tilde {\boldsymbol {\varPhi }}_{v} \in \mathbb {R}^{n_{u} \times r_{sens}}$. Therefore, the generalized coordinates for the solutions of the sensitivity can be written as

(7.8)\begin{equation} \mathcal{Q}_{v} = \begin{bmatrix} | & | & & |\\ \tilde{\boldsymbol{h}}^{(\mathbb{N}(1))} & \tilde{\boldsymbol{h}}^{(\mathbb{N}(2))} & \cdots & \tilde{\boldsymbol{h}}^{(\mathbb{N}(m_{p}))} \\ | & | & & | \end{bmatrix} \in \mathbb{R}^{r_{sens} \times m_{p}}. \end{equation}

Since the $\boldsymbol {POD}$ function is sensitive to the relative scale of the dataset in the snapshot matrix $\boldsymbol {X}$, it cannot be used directly on the set of states and solutions of the sensitivity at the same time, $\boldsymbol {POD}([\boldsymbol {X}_{state}, \boldsymbol {X}_{sens}])$ (Carlberg & Farhat Reference Carlberg and Farhat2011). To resolve this issue, a Gram–Schmidt-like procedure is used (Zahr & Farhat Reference Zahr and Farhat2015) in which the ROB for state vectors and solutions of the sensitivity are a subset of the trial basis function

(7.9)\begin{equation} \begin{array}{llll}\boldsymbol{\varPhi}= [\tilde{\boldsymbol{\varPhi}}_{u} & \tilde{\boldsymbol{\varPhi}}_{v}] \in \mathbb{R}^{n_{u} \times r},\end{array} \end{equation}

where $r = r_{state}+r_{sens}$. It has been shown that building a trial basis function like (7.9) guarantees the minimum-residual property for the ROM without introducing excessive error (Zahr & Farhat Reference Zahr and Farhat2015). Additionally, this type of trial basis function has been shown to be robust against perturbations in design parameters (Zahr & Farhat Reference Zahr and Farhat2015). It is worth mentioning that the method for building the ROB in (7.6) and (7.7a,b) will remain efficient if all design variables have the same topology. For example, if all design variables are coordinates of control points, this method can be useful. However, adding another variable with a different topology (i.e. angle of attack) may change the ROB significantly. Therefore, we recommend building the ROB as

(7.10)\begin{equation} \tilde{\boldsymbol{\varPhi}}_{v}= \left[\boldsymbol{POD}\left(\boldsymbol{V}_{ss}(\mathcal{S}_{1})\right), \boldsymbol{POD}\left(\boldsymbol{V}_{ss}(\mathcal{S}_{2})\right), \ldots, \boldsymbol{POD}\left(\boldsymbol{V}_{ss}(\mathcal{S}_{n_{d}})\right)\right], \end{equation}

where POD modes for each set of sensitivity snapshots are built separately and then embedded into one unified matrix, $\tilde{\boldsymbol{\varPhi}}_{v}$. In (7.10), the POD modes are rearranged in descending order of their eigenvalues.
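A sketch of the alternative construction (7.10) is given below, continuing the previous sketch (same `pod`, `V_blocks`, `n_d` and `r_sens`): each sensitivity dataset is decomposed separately, the leading modes are concatenated, and the columns are re-sorted by singular value. The equal per-variable mode count and the final re-orthonormalization pass are assumptions of this sketch.

```python
# Separate POD per design variable, then concatenate and re-sort, (7.10).
k = r_sens // n_d                          # assumed equal share per variable
blocks, sigmas = [], []
for V in V_blocks:
    U_i, s_i = pod(V, k)
    blocks.append(U_i)
    sigmas.append(s_i)

Phi_v = np.hstack(blocks)
order = np.argsort(np.concatenate(sigmas))[::-1]
Phi_v = Phi_v[:, order]                    # rearrange modes by eigenvalue

# Modes from different datasets are not mutually orthogonal, so a
# Gram-Schmidt-like pass (here a thin QR) may be applied when an
# orthonormal basis is required, as discussed above.
Phi_v, _ = np.linalg.qr(Phi_v)
```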

Besides the aforementioned procedure, the following method can also be used when only one specific application is required:

(7.11)\begin{equation} \tilde{\boldsymbol{\varPhi}}= \begin{cases} \tilde{\boldsymbol{\varPhi}}_{u}, & \text{if reconstruction of the state is required,}\\ \tilde{\boldsymbol{\varPhi}}_{v}, & \text{if forward sensitivity analysis is required.} \end{cases} \end{equation}

According to (7.11), the trial basis function could be the same as the ROB. If we use the ROM to approximate the state vector of the FOM, then $\tilde {\boldsymbol {\varPhi }}_{u}$ can be chosen since it still ensures the optimality condition over $\mathbb {T}$. Additionally, if we want to consider sensitivity analysis, then $\tilde {\boldsymbol {\varPhi }}_{v}$ can be used to obtain the sensitivity function of the closure model.

8. Application

In this section, the numerical model developed for the ROM is used for unsteady aerodynamic shape optimization. The objective is drag minimization of an NACA 0012 airfoil at $Re=1000$ and $M_{\infty }=0.1$. This is performed with multiple constraints that keep the optimization within $\mathscr{D}$. Two different effective angles of attack, $\alpha_{eff}$, are selected to investigate the performance of the optimization procedure for prestall and post-stall conditions. In this study, $\alpha_{eff}=8^{\circ}$ and $\alpha_{eff}=25^{\circ}$ correspond to moderate (prestall) and massive (post-stall) flow separation, respectively. We deliberately chose these conditions, where the Navier–Stokes equations exhibit strongly nonlinear dynamics, to demonstrate the robustness of the optimization approach proposed in this study. The ROM-constrained minimization in continuous form can be written as

(8.1)\begin{equation} \left.\begin{aligned} \underset{\tilde{\boldsymbol{q}} \in \mathbb{R}^{r}, \mathcal{S} \in \mathscr{D}}{\text{minimize}} & \quad \bar{\mathcal{J}} = \frac{1}{T}\int_{0}^{\textrm{T}}\beta (C_{D})^{2}\,\textrm{d} t + \frac{1}{T}\int_{0}^{\textrm{T}} \left(C_{L} - \overline{C}_{L, target} \right)^{2}\textrm{d} t,\\ \text{subject to} & \\ & \quad\mathcal{R}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S}) = 0,\\ & \quad\frac{\textrm{d}\mathcal{R}}{\textrm{d}\mathcal{S}}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}, t, \mathcal{S}) = 0,\\ & \quad\boldsymbol{h}=\underset{\boldsymbol{z} \in \mathbb{R}^{r}}{\text{argmin }} \left\Vert \frac{\partial (\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}})}{\partial \mathcal{S}} - \bar{\boldsymbol{v}} - \tilde{\boldsymbol{\varPhi}} \boldsymbol{z}\right\Vert_{2}^{2},\\ & \quad\boldsymbol{C}_{geom.}(\mathcal{S}) \leq 0,\\ & \quad B_{l}\leq \boldsymbol{C}_{bound}(\mathcal{S}) \leq B_{u},\\ \end{aligned}\right\} \end{equation}

where $C_{L}=\text{Lift}/(0.5\rho c u_{\infty}^{2})$ and $C_{D}=\text{Drag}/(0.5\rho c u_{\infty}^{2})$ are the lift and drag coefficients, respectively, $\rho$ is the fluid density, $u_{\infty}$ is the free stream velocity and $c$ is the airfoil chord length. Moreover, $\boldsymbol{C}_{geom.}(\mathcal{S})$ and $\boldsymbol{C}_{bound}(\mathcal{S})$ correspond to geometrical constraints and bounds included in the constraint function $\boldsymbol{C}(\mathcal{S})$. Additionally, $B_{u}$ and $B_{l}$ are the upper and lower limits of $\boldsymbol{C}_{bound}$, respectively, and $\beta$ is a weighting factor. This weighting is important when combining the drag and lift terms into a single objective function. The second term of the objective function guides the optimization toward a target lift coefficient, $\overline{C}_{L, target}$. The fully discrete form of (8.1) is given by

(8.2)\begin{equation} \left.\begin{aligned} \underset{\tilde{\boldsymbol{q}} \in \mathbb{R}^{r}, \mathcal{S} \in \mathscr{D}}{\text{minimize}} & \quad \bar{\mathcal{J}} = \frac{1}{m_{u}}\sum_{n=0}^{m_{u}}\beta \left(C_{D}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}^{(n)}, t^{(n)}, \mathcal{S})\right)^{2}\\ & \quad + \frac{1}{m_{u}}\sum_{n=0}^{m_{u}} \left(C_{L}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}^{(n)}, t^{(n)}, \mathcal{S}) - \overline{C}_{L, target} \right)^{2}, \\ \text{subject to} & \\ & \quad\mathcal{R}^{(n)}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}^{(n)}, t^{(n)}, \mathcal{S}) = 0,\\ & \quad\frac{\textrm{d}\mathcal{R}^{(n)}}{\textrm{d}\mathcal{S}}(\bar{\boldsymbol{u}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{q}}^{(n)}, t^{(n)}, \mathcal{S}) = 0,\\ & \quad\boldsymbol{v}_{ss}^{(\mathbb{N}(n'))} = \underset{\tiny {\boldsymbol{z} \in \mathbb{R}^{n_{u}}}}{\text{argmin}} \left\Vert \frac{\partial \boldsymbol{r}^{(\mathbb{N}(n'))}}{\partial \boldsymbol{u}^{(\mathbb{N}(n'))}} \boldsymbol{z}+\frac{\partial \boldsymbol{r}^{(\mathbb{N}(n'))}}{\partial \mathcal{S}} \right\Vert_{2}^{2},\\ & \quad r_{state} = \underset{z \in \mathbb{R}_{+} \le n_{u}}{\text{argmin }} \left(\beta z + \left\Vert\sum_{n=0}^{m_{u}} \boldsymbol{u}^{(n)} - \bar{\boldsymbol{u}} - \tilde{\boldsymbol{\varPhi}}_{u}\tilde{\boldsymbol{q}}^{(n)} \right\Vert_{2}^{2}\right),\\ & \quad r_{sens} = \underset{z \in \mathbb{R}_{+} \le n_{u}}{\text{argmin }} \left( \beta z + \left\Vert\sum_{n'=1}^{m_{p}} \boldsymbol{v}_{ss}^{(\mathbb{N}(n'))} - \bar{\boldsymbol{v}}_{ss} - \tilde{\boldsymbol{\varPhi}}_{v}\tilde{\boldsymbol{h}}^{(\mathbb{N}(n'))} \right\Vert_{2}^{2}\right),\\ & \quad \boldsymbol{C}_{geom.}(\mathcal{S}) \leq 0,\\ & \quad B_{l}\leq \boldsymbol{C}_{bound}(\mathcal{S}) \leq B_{u}, \end{aligned}\right\}\end{equation}

where the argmin statements of (8.1) are expanded in (8.2) to show the training process and the rank selection criteria. The shape of the airfoil is changed using B-spline shape functions, with the control points shown in figure 3. These control points define smooth perturbations to the suction and pressure surfaces of the airfoil. Additionally, three fixed points at the leading edge, and another at the trailing edge, are used as constraints to avoid defective geometries in these regions. These fixed points are also responsible for keeping the angle of attack constant. As mentioned above, all of these constraints are included in $\boldsymbol{C}(\mathcal{S})$. A sketch of such a parameterization is given below.
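To make the parameterization concrete, the sketch below builds a smooth surface perturbation from movable control points using a clamped cubic B-spline, with pinned end coefficients standing in for the fixed leading- and trailing-edge points. The degree, knot placement and control-point count are illustrative assumptions, not the exact construction used here.

```python
import numpy as np
from scipy.interpolate import BSpline

def surface_perturbation(S, x, degree=3):
    """Smooth perturbation dy(x), x in [0, 1], from movable control
    amplitudes S; zero end coefficients pin the leading and trailing
    edges, mimicking the fixed points in figure 3."""
    c = np.concatenate(([0.0], S, [0.0]))
    n = c.size
    # clamped (open uniform) knot vector: len(t) = n + degree + 1
    t = np.concatenate((np.zeros(degree),
                        np.linspace(0.0, 1.0, n - degree + 1),
                        np.ones(degree)))
    return BSpline(t, c, degree)(x)

x = np.linspace(0.0, 1.0, 200)                    # chordwise stations
S_suction = np.array([0.01, -0.02, 0.015, 0.0])   # illustrative amplitudes
dy = surface_perturbation(S_suction, x)           # added to baseline surface
```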

Figure 3. Geometrical definition of the airfoil using control and fixed points.

8.1. Computational set-up

A two-dimensional structured computational domain with a C-topology is used. To solve the compressible flow field, in-house CFD software, the high-order unstructured solver (HORUS) version 0.2.0, is used. The spatial discretization in HORUS uses the flux reconstruction approach (Huynh Reference Huynh2007), and we set the solution polynomial degree to $p_{s}=2$, which recovers third-order accuracy. The second-order BDF (BDF2) is used for temporal discretization. The Prandtl number and Mach number are set to $Pr=0.72$ and $M_{\infty}=0.1$, respectively. Since the governing equations are solved implicitly, the time step is chosen to be ${\rm \Delta} t=0.05t^{*}$, where $t^{*}=c/u_{\infty}$ is the convective time.
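For reference, a generic statement of this scheme: with $\boldsymbol{f}$ denoting the spatially discretized right-hand side, the implicit BDF2 update solved at each step takes the standard form

\begin{equation*} \frac{3\boldsymbol{u}^{(n+1)} - 4\boldsymbol{u}^{(n)} + \boldsymbol{u}^{(n-1)}}{2{\rm \Delta} t} = \boldsymbol{f}\big(\boldsymbol{u}^{(n+1)}, t^{(n+1)}\big), \end{equation*}

which is second-order accurate and A-stable, consistent with the implicit treatment and the time step chosen above.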

The number of elements for the initial mesh is approximately $3.7\times 10^3$. With a solution polynomial degree $p_{s}=2$, the total number of solution points is $n_u=1.35\times 10^5$. Figure 4 shows a comparison of time-averaged lift and drag coefficients at several angles of attack, showing good agreement with available reference data. The baseline NACA 0012 airfoil was then modified using sequential least squares programming (SLSQP) from the SciPy package (Virtanen Reference Virtanen2020) as the optimizer. Additionally, the singular value decomposition, posed as an eigenvalue problem, is solved in parallel via SLEPc (Hernandez, Roman & Vidal Reference Hernandez, Roman and Vidal2005), and PETSc (Dalcin et al. Reference Dalcin, Paz, Kler and Cosimo2011; Balay Reference Balay2020), compiled with OpenMPI (Dalcín, Paz & Storti Reference Dalcín, Paz and Storti2005), is used for parallelization. We simulated each FOM case for $40t^{*}$, which was observed to be sufficient to obtain statistically converged results. Data snapshots were then collected over a period of $T=15t^{*}$. The number of subspaces (the rank of the ROB) is a crucial parameter that should be assessed before starting optimization, since an inappropriate rank may bias the ROM and produce a poor dynamical model. Therefore, a ROB was built for the reference case (NACA 0012 airfoil at $Re=1000$), and the arrangement and accumulation of singular values were analysed according to (8.2). It is observed that with a rank of $r_{state}=50$ the ROB retains most of the interpretive information about the flow structures. Additionally, $m_{p}=20$ samples for each control point, P(1) to P(8), are collected to develop the ROB of the sensitivity function with a rank of $r_{sens}=80$. These derivatives are computed with a second-order central finite difference scheme, and the sensitivity function is discretized in time via the first-order BDF (BDF1) scheme. It is worth mentioning that $r_{sens}$ should be chosen carefully: unnecessary modes in $\tilde{\boldsymbol{\varPhi}}_{v}$ increase the bias of the sensitivity, whereas $\tilde{\boldsymbol{\varPhi}}_{v}$ should have a high enough rank to minimize the detrimental effect of variance on the results. Therefore, the residual of the sensitivity function was monitored to ensure it was lower than $10^{-4}$. A sketch of a rank-selection rule of this kind is given below. Using this approach, we will now consider shape optimization at two different angles of attack.
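The following is a minimal sketch of a penalized rank-selection rule in the spirit of the argmin statements in (8.2); the energy-based surrogate for the reconstruction error and the penalty weight `beta` are illustrative assumptions.

```python
import numpy as np

def select_rank(s, beta=1e-3):
    """Smallest rank trading basis size against the POD energy left
    out of the truncation; a proxy for the penalized argmin in (8.2)."""
    energy = np.cumsum(s**2) / np.sum(s**2)
    cost = beta * np.arange(1, s.size + 1) + (1.0 - energy)
    return int(np.argmin(cost)) + 1

s = np.exp(-0.1 * np.arange(200))   # stand-in singular-value decay
print(select_rank(s))               # rank balancing size vs. accuracy
```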

Figure 4. Comparison of present numerical simulations with reference data (Liu et al. Reference Liu, Li, Zhang, Wang and Liu2012; Kurtulus Reference Kurtulus2015; Meena, Taira & Asai Reference Meena, Taira and Asai2017; Di Ilio et al. Reference Di Ilio, Chiappini, Ubertini, Bella and Succi2018): (a) time-averaged lift coefficient; (b) time-averaged drag coefficient.

8.2. Shape optimization of an NACA0012 at $\alpha _{eff}=8^{\circ }$

8.2.1. Set-up

An NACA 0012 airfoil at $\alpha _{eff}=8^{\circ }$ is in the prestall condition with $\overline {C}_{L}=0.323$ and $\overline {C}_{D}=0.14$. For drag minimization we select $\overline {C}_{L, target}=0.32$ and $\beta =10$ in (8.2). We set the effective angle of attack as a variable that can be changed by unlocking the fixed points at the leading edge. In this case, $\boldsymbol {C}(\mathcal {S})$ includes 10 geometrical constraints with 16 bounds. Additionally, one extra constraint is considered to limit $\overline {C}_{L}$. The minimization is then advanced for 10 design iterations.
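For illustration, a constrained SLSQP driver of the kind described above might be wired as follows; the objective, gradient and constraint callables are hypothetical stand-ins for the ROM evaluations of (8.2), and only the overall structure (bounds, inequality constraints, 10 design iterations) mirrors this set-up.

```python
import numpy as np
from scipy.optimize import minimize

def rom_objective(S):      # stand-in for the ROM evaluation of (8.2)
    return float(S @ S)

def rom_gradient(S):       # stand-in for the ROM-based total gradient
    return 2.0 * S

def geom_margin(S):        # SLSQP expects fun(S) >= 0, i.e. -C_geom(S)
    return 0.05 - np.abs(S)

S0 = np.zeros(8)           # one amplitude per control point P(1)-P(8)
res = minimize(rom_objective, S0, jac=rom_gradient, method='SLSQP',
               bounds=[(-0.05, 0.05)] * S0.size,       # B_l, B_u
               constraints=[{'type': 'ineq', 'fun': geom_margin}],
               options={'maxiter': 10})                # 10 design iterations
```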

8.2.2. Optimization results

Figure 5 shows the optimization results. The time-averaged objective function, $\bar {\mathcal {J}}$, continuously decreases until the fourth iteration. After that, the constraints slow down reduction of $\bar {\mathcal {J}}$. Moreover, time-averaged gradients of the objective function are displayed in figure 5(b). All gradients converge to zero, except one that remains constant at approximately 0.03. This cannot be driven lower due to the imposed constraints. Figure 5(c) illustrates how the time-averaged lift and drag coefficients are changed during optimization. The drag coefficient is reduced by approximately 9.35 %, while producing 9 % higher lift than the reference airfoil.

Figure 5. Optimization progress in the shape optimization of the airfoil at $Re=1000$ and $\alpha _{eff}=8^{\circ }$: (a) objective function; (b) gradients and (c) lift and drag coefficients.

Figure 6 compares the reference and optimized designs in terms of geometry and flow characteristics. The pressure changes rapidly at the leading edge of the optimized airfoil. These variations on the suction side produce increased lift, while the rest of the optimized airfoil makes no notable contribution to lift generation. Furthermore, the optimized airfoil has a slightly lower nose than the reference airfoil, which reduces the effective angle of attack. Interestingly, the optimized airfoil produces higher lift at a lower effective angle of attack.

Figure 6. The geometry of the airfoil and the time-averaged pressure coefficient along the airfoil surface: (a) reference case and (b) optimized case.

8.3. Flow structures

The slender leading edge of the optimized airfoil plays a notable role in setting the separation point on the suction side. As shown in figure 7(a), the separation point, where the flow detaches from the surface due to an adverse pressure gradient, is at $x\simeq 0.5c$ for the reference airfoil. Downstream of the separation point, the flow confined on the suction side generates a large pressure difference and, subsequently, higher drag. This confined area is also directly related to the wake of the airfoil. However, the optimized airfoil's nose induces a sudden adverse pressure gradient near the leading edge, which leads to laminar separation bubble (LSB) formation, as shown in figure 7(b). Consequently, the separation point is delayed to approximately $x\sim 0.8c$. This delayed flow separation produces a smaller wake behind the airfoil, which reduces drag.

Figure 7. Velocity contours with superimposed streamlines for $Re=1000$ and $\alpha _{eff}=8^{\circ }$: (a) reference and (b) optimized airfoils.

8.4. Shape optimization of an NACA0012 at $\alpha _{eff}=25^{\circ }$

8.4.1. Set-up

There are different reports on the stall point of an NACA 0012 at $Re=1000$. Liu et al. (Reference Liu, Li, Zhang, Wang and Liu2012) reported a stall angle of attack of $27^{\circ}$ with $\overline{C}_{L}=1.28$. Kurtulus (Reference Kurtulus2015) reported a stall lift coefficient of $\overline{C}_{L}=1.25$ with a stall angle of $26^{\circ}$, and Di Ilio et al. (Reference Di Ilio, Chiappini, Ubertini, Bella and Succi2018) detected the same stall angle, but with $\overline{C}_{L}=1.1$. The present study shows that stall occurs at $27^{\circ}$ with $\overline{C}_{L}=1.25$. Irregular vortex shedding is expected due to massive flow separation on the airfoil's suction side. Our objective is to minimize the drag coefficient of the airfoil at $\alpha_{eff}=25^{\circ}$. The lift and drag coefficients for the NACA 0012 at this angle of attack are $\overline{C}_{L}=1.17$ and $\overline{C}_D=0.66$, respectively.

To define the optimization problem, we selected $\beta=2$ and $\overline{C}_{L,target}=1.25$ to force the objective function to keep the lift coefficient relatively constant while reducing drag. Having $\overline{C}_{L, target}$ slightly higher than $\overline{C}_{L}=1.17$ was found to prevent the magnitude of the lift coefficient from dropping significantly in early design iterations. Additionally, $\boldsymbol{C}(\mathcal{S})$ includes 12 geometrical constraints and 16 bounds for the control points. Limiting the lift coefficient and the angle of attack are two other constraints. Therefore, this optimization problem contains 30 constraints in total.

8.4.2. Optimization results

Figures 8 and 9 show how the design changes during optimization. In figure 8(a), the time-averaged objective function, $\bar{\mathcal{J}}$, decreases continuously until the 10th optimization iteration. Thereafter, the objective function decreases slightly or oscillates as the design parameters approach the local optimum point in the design space. As shown in figure 8(b), gradients of the objective function either plateau or converge to zero. According to the gradient-based formulation, gradients of the objective function should ideally be zero at the local/global optimum. However, bounding the control points with geometrical constraints may prevent gradients from reaching exactly zero. Figure 8(c) also shows the variation of the time-averaged lift and drag coefficients. The lift coefficient drops at the first optimization iteration and later rises toward the target value $\overline{C}_{L, target}$. The drag coefficient diminishes significantly.

Figure 8. Optimization progress in the shape optimization of the airfoil at $Re=1000$ and $\alpha _{eff}=25^{\circ }$: (a) objective function; (b) gradients and (c) lift and drag coefficients.

Figure 9. (a)  Instantaneous lift and (b)  drag coefficients for both reference and optimized cases.

Figure 9 depicts the lift and drag coefficients versus non-dimensional time $t/t^{*}$. From figure 9(a), the time-averaged lift coefficient reduces slightly from $\overline {C}_{L, ref.}=1.17$ to $\overline {C}_{L, opt.}=1.16$. However, a significant difference between the time-averaged drag coefficients yields an approximately $20\,\%$ reduction in drag. Figure 10 shows the distribution of the time-averaged pressure coefficient along the airfoil surface, where the shaded area shows pressure variations over time. From figure 10(a), the pressure on the reference airfoil suction side fluctuates across the majority of the chord length, particularly near the trailing edge. However, pressure fluctuations for the optimized design are significantly reduced, as shown in figure 10(b). Because the intensity of large-scale coherent structures in the flow field tends to amplify pressure fluctuations on the suction side, this indicates that the optimized airfoil creates weaker wake vortices.

Figure 10. The geometry of the airfoil and the time-averaged pressure coefficient (red) along the airfoil surface. The shaded area also displays the variations of the pressure coefficient over time. (a) Reference case and (b) optimized case.

The spatial integral of the time-averaged pressure on the surface yields the time-averaged pressure force exerted on the airfoil. The majority of this pressure force originates in the vicinity of the leading edge. However, in the optimized case, the time-averaged pressure force near the trailing edge is significantly reduced. The pressure curves in this region ($x/c=0.8\sim 1$) are similar on both surfaces, resulting in minimal lift and drag production there. Despite generating significantly less lift near the trailing edge, the airfoil is still able to satisfy the lift coefficient constraint. This is achieved via the shear layer produced at the leading edge of the airfoil. As shown in figure 10(b), sharp variations in the pressure coefficient around the airfoil's leading edge arise from an intense, but relatively stable, shear layer. Furthermore, the airfoil's suction surface is relatively flat and tangent to the flow, leading to the enhanced lift. In contrast, the sharp difference of the pressure coefficient at the leading edge of the reference airfoil is less intense than that of the optimized one (figure 10a). The pressure on the suction side near the nose of the reference airfoil fluctuates with a large amplitude, indicating an unstable shear layer at the leading edge.

8.4.3. Flow structures

Here we will explore the behaviour of coherent structures in the wakes of the reference and optimized designs. Figures 11 and 12 show time histories of the lift and drag coefficients, and contours of vorticity at several instants during a shedding cycle. Green arrows show the direction of the jet formed at the interface of two counter-rotating vortices. In frame ‘A’, the first trailing edge vortex (TEV), $\varGamma_{1}^{+}$, leaves the airfoil surface and travels downstream. Moreover, the first leading edge vortex (LEV), $\varGamma_{1}^{-}$, increases in intensity and envelops the suction surface of the airfoil. Hence, the pressure coefficient on the suction surface reaches a minimum, yielding high lift. The shedding of the LEV and TEV manifests as a von Kármán street in the wake of the airfoil. In frame ‘B’, the aerodynamic forces decrease, since $\varGamma_{2}^{-}$ is not strong enough to produce a notable low-pressure zone on the suction surface of the airfoil. Furthermore, $\varGamma_{2}^{+}$ does not allow $\varGamma_{2}^{-}$ to grow. According to Kelvin's circulation theorem, after $\varGamma_{1}^{-}$ separates, the second TEV, $\varGamma_{2}^{+}$, forms and gains energy by absorbing the kinetic energy coming from $\varGamma_{2}^{-}$ and the shear layer at the trailing edge of the airfoil. This energy transfer indicates that if two counter-rotating vortices are located beside each other, the low-energy vortex absorbs energy from the high-energy vortex through shear. In frame ‘C’, the pair vortex, $\varGamma^{*}$, has developed on the suction surface. This counter-rotating vortex weakens the connection of $\varGamma_{2}^{-}$ with the airfoil surface. It is observed that $\varGamma^{*}$ in the reference case plays an important role in the quasi-periodic behaviour of vortical structures in the wake. Different intensities and growth rates for $\varGamma^{*}$ lead to different vortex shedding patterns, which influences the LEV and TEV dynamics. This is evident in the time variations of the lift and drag coefficients.

Figure 11. Flow structures around the reference airfoil.

Figure 12. Flow structures around the optimized airfoil.

Figure 12 shows different instants during one shedding cycle for the optimized design. In frame ‘A’, $\varGamma_{1}^{+}$ has detached from the airfoil, while $\varGamma_{1}^{-}$ envelops the suction surface. When $\varGamma_{1}^{-}$ reaches its maximum intensity, the lift is maximized. In frame ‘B’, a shear layer forms an LSB, which initiates $\varGamma^{*}$. As $\varGamma^{*}$ increases in intensity, it ejects $\varGamma_{1}^{-}$ away from the surface with the help of $\varGamma_{2}^{+}$ (as shown in frame ‘C’). It is observed that the LEV and TEV enter a harmonic vortex shedding pattern for the optimized design, where the aerodynamic loads repeat cyclically. In contrast, the flow field of the reference case is more complicated in this post-stall regime, where at least two shedding frequencies appear. Hence, we observe that the optimized design has a simpler shedding pattern, with harmonic interaction between the LEV and TEV.

To quantify the strength of these vortices, a control surface is placed around the airfoil and the integral of $\boldsymbol{\nabla} \times \boldsymbol{u}$ over the enclosed region is computed, yielding the circulation. The control surface boundaries were placed far enough from the airfoil that the circulation was not sensitive to them. Figure 13(a) shows the total circulation magnitude of the LEV over $15t^{*}$ for the reference case, $\varGamma_{ref.}^{-}$, and the optimized case, $\varGamma_{opt.}^{-}$. The strength of $\varGamma_{ref.}^{-}$ is notably higher than that of $\varGamma_{opt.}^{-}$. Furthermore, the oscillation of $\varGamma_{ref.}^{-}$ is greater, while $\varGamma_{opt.}^{-}$ has smaller oscillations with nearly constant peaks. The same observation is made from figure 13(b), showing the strength of $\varGamma_{ref.}^{+}$ and $\varGamma_{opt.}^{+}$. This indicates that the optimized airfoil produces a weaker LEV, because it guides the flow over the surface more smoothly, which leads to simpler interaction between the LEV and TEV structures in the wake.
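A sketch of this circulation computation follows: on a uniform grid, the circulation is evaluated as the area integral of the vorticity over the control region. The grid, fields and region mask are illustrative stand-ins for the solver output.

```python
import numpy as np

def circulation(u, v, dx, dy, mask=None):
    """Gamma = integral of (dv/dx - du/dy) dA over the control region."""
    omega = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    if mask is not None:
        omega = np.where(mask, omega, 0.0)   # restrict to the vortex region
    return omega.sum() * dx * dy

ny, nx, dx = 200, 300, 0.01
rng = np.random.default_rng(1)
u = rng.standard_normal((ny, nx))   # stand-in velocity components
v = rng.standard_normal((ny, nx))
Gamma = circulation(u, v, dx, dx)
```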

Figure 13. Time variation of the total strength of core vortices (i.e. LEV and TEV) over $15t^{*}$ for both reference and optimized cases: (a) instantaneous circulation generated by LEV; (b) instantaneous circulation generated by TEV.

8.4.4. Sensitivity analysis

This section analyses the sensitivity of the state vector and the objective function with respect to the design parameters. According to (8.1), the total gradient of the objective function, $\bar {\mathcal {J}}$, can be written as

(8.3)\begin{equation} \overline{\frac{\textrm{d}\mathcal{J}}{\textrm{d}\mathcal{S}}}= \frac{2}{T} \int_{0}^{\textrm{T}}\beta \frac{\textrm{d} C_{D}}{\textrm{d}\mathcal{S}} C_{D}\,\textrm{d} t +\frac{2}{T} \int_{0}^{\textrm{T}} \frac{\textrm{d} C_{L}}{\textrm{d}\mathcal{S}}\left(C_{L}-\overline{C}_{L,target}\right) \textrm{d} t, \end{equation}

where

(8.4)\begin{equation} \left\{\begin{gathered} \frac{\textrm{d} C_{L}}{\textrm{d}\mathcal{S}} = \frac{\partial C_{L}}{\partial \boldsymbol{u}}\boldsymbol{v} + \frac{\partial C_{L}}{\partial \mathcal{S}},\\ \frac{\textrm{d} C_{D}}{\textrm{d}\mathcal{S}} = \frac{\partial C_{D}}{\partial \boldsymbol{u}}\boldsymbol{v} + \frac{\partial C_{D}}{\partial \mathcal{S}} \end{gathered}\right\} \end{equation}

and $\boldsymbol{v}$ is approximated by $\boldsymbol{v} \approx \bar{\boldsymbol{v}} + \tilde{\boldsymbol{\varPhi}} \tilde{\boldsymbol{h}}$. Since the design objective is drag minimization, we will isolate this term. The total sensitivity of the drag coefficient, $\textrm{d} C_{D}/\textrm{d}\mathcal{S}$, depends on the geometrical sensitivity, $\partial C_{D}/\partial \mathcal{S}$, and the state sensitivity, $(\partial C_{D}/\partial \boldsymbol{u})\boldsymbol{v}$. The geometrical sensitivity depends only on the shape of the airfoil, independent of the flow field, while the state sensitivity depends only on the flow field, independent of perturbations in the airfoil geometry. Therefore, these two sensitivities will be explored separately.
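A minimal sketch of assembling the discrete counterpart of (8.3)–(8.4) for a single design variable follows; the force sensitivities with respect to the state and the reconstructed sensitivity are assumed inputs (random stand-ins below), not the paper's implementation.

```python
import numpy as np

def total_gradient(CD, CL, dCD_du, dCL_du, dCD_dS, dCL_dS,
                   v_bar, Phi, H, CL_target, beta):
    """Discrete quadrature of (8.3) for one design variable, with the
    sensitivity reconstructed as v = v_bar + Phi @ h at each snapshot."""
    grad = 0.0
    for n in range(CD.size):
        v = v_bar + Phi @ H[:, n]
        dCD = dCD_du[n] @ v + dCD_dS[n]     # dC_D/dS, as in (8.4)
        dCL = dCL_du[n] @ v + dCL_dS[n]     # dC_L/dS
        grad += 2.0*beta*dCD*CD[n] + 2.0*dCL*(CL[n] - CL_target)
    return grad / CD.size

m, n_u, r = 300, 2_000, 130                 # illustrative sizes
rng = np.random.default_rng(2)
CD, CL = rng.random(m), rng.random(m)
g = total_gradient(CD, CL, rng.random((m, n_u)), rng.random((m, n_u)),
                   rng.random(m), rng.random(m), rng.random(n_u),
                   rng.random((n_u, r)), rng.random((r, m)),
                   CL_target=1.25, beta=2.0)
```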

Figure 14 displays the sensitivity of the drag coefficient with respect to control points (shown in figure 3), where each column corresponds to a control point. The first, second and third rows show the time variation of the geometrical sensitivity, state sensitivity and total sensitivity. From figure 10, the pressure coefficient has large fluctuations near the trailing edge, which suggests that the aerodynamic loads are highly sensitive here (P(1) and P(8)). This implies that the trailing edge has a significant influence on TEV formation, affecting the drag coefficient. Furthermore, the drag coefficient is sensitive to the suction surface of the airfoil (P(2) to P(4)), where sensitivities fluctuate with the flow. However, since the flow field on the pressure side of the airfoil is relatively steady, the sensitivities for P(5) and P(6) are also relatively steady. Moreover, the time variation of $\textrm {d} C_{D}/\textrm {d}\mathcal {S}$ reveals that the reference airfoil is more sensitive than the optimized design.

Figure 14. Instantaneous gradients of the drag coefficient (geometrical, state and total sensitivities) with respect to the control points.

Figure 15 shows the primary mode of the sensitivity, corresponding to the fluid momentum in the $x$ direction, $\tilde{\boldsymbol{\varPhi}}_{x}$. For the reference case, P(1) has a notable effect on the TEV and drag reduction. Also, P(1) affects a wide range of secondary structures in the wake. For the optimized case, P(1) shows less sensitivity to the TEV. On the other hand, the optimized airfoil has higher sensitivity to the LEV. In general, for both the reference and optimized cases, P(2) and P(3) control the shear layer at the leading edge, which directly changes the LEV and the large pressure difference on the suction surface of the airfoil. However, P(5) to P(7) indicate that the pressure surface of the airfoil does not significantly affect vortical structures in the wake of the airfoil, whereas P(8) influences these vortical structures on the suction surface. The conclusion drawn from figure 15 is that control points in the vicinity of the trailing edge have a significant effect on drag minimization, while control points adjacent to the leading edge have a notable influence on lift.

Figure 15. The primary modes of the solutions of the sensitivity for both reference and optimized cases.

8.4.5. Dynamics

In the previous sections, we explained how optimization changes the airfoil shape and, hence, flow structures in the wake. Then, we focused on the behaviour of the sensitivity solution to understand each control point's effect on the flow field. Here, we consider the optimization procedure from a different perspective in Hilbert space. We transform the optimization problem from $\mathscr {P}$ to $\mathscr {H}$, and then we identify the characteristics of the underlying dynamical system.

Figure 16 shows a discrete Fourier transform of the generalized coordinates. Figure 16(a) shows the Strouhal number distribution for the first 10 modes, indexed by their rank $r$. The Strouhal number is defined as $St=f_{s} l_{c}/u_{\infty}$, where $f_{s}$ and $l_{c}$ are the shedding frequency and the characteristic length, taken here to be the wake width, respectively. Dominant Strouhal numbers are evident in modes with high spectrum magnitudes. For the reference case, several dominant shedding modes are observed. However, the optimized case has only three dominant modes, as shown in figure 16(b). This suggests that the wake is significantly simpler in nature for the optimized case. Figures 16(c) to 16(g) display the amplitude spectrum versus Strouhal number for the first five dominant modes of the reference case. The frequencies detected in the first four modes are $St_{1}=0.2$ and $St_{2}=0.1$. Figures 16(h) and 16(i) also show that the dominant Strouhal number is $St_{1}=0.2$ for the optimized case. Yarusevych, Sullivan & Kawall (Reference Yarusevych, Sullivan and Kawall2009) performed extensive experiments on the wake of this airfoil and reported a Strouhal number of $St\approx 0.2$ at $\alpha_{eff}=10^{\circ}$. Additionally, similar observations can be found in the results of Huang & Lin (Reference Huang and Lin1995), who considered wake structures of an NACA 0012 airfoil and noted that the Strouhal number of the dominant vortex shedding remains approximately constant at $St\approx 0.2$ for angles of attack above $\alpha_{eff}=25^{\circ}$.
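A sketch of this spectral analysis is given below: the discrete Fourier transform of one generalized coordinate, with the frequency axis rescaled to a Strouhal number. The signal, the characteristic length and the sampling interval are illustrative stand-ins.

```python
import numpy as np

dt_star = 0.05                      # sampling interval in units of t*
l_c, u_inf = 1.0, 1.0               # wake width and free stream (assumed)
t = np.arange(0.0, 15.0, dt_star)
q_i = np.sin(2*np.pi*0.2*t) + 0.3*np.sin(2*np.pi*0.1*t)   # stand-in mode

amp = np.abs(np.fft.rfft(q_i - q_i.mean())) / t.size      # amplitude spectrum
St = np.fft.rfftfreq(t.size, d=dt_star) * l_c / u_inf     # Strouhal axis
print(St[np.argmax(amp)])           # dominant Strouhal number (0.2 here)
```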

Figure 16. Variation of frequency for different modes in the reference and optimized cases.

Correlations can be defined as interactions between the generalized coordinates, $q_{i} \in \boldsymbol{q}=[q_1, q_2, \ldots, q_r]^{\top}$, to understand the dynamical behaviour of the system. Figure 17 shows correlations for the first 10 modes in the reference and optimized cases. These 10 modes correspond to five ‘pair’ modes in the dynamical system, where a pair comprises two subsequent modes that show the same, or similar, dynamical behaviour with a certain offset. Figures 17(a), 17(c), 17(e), 17(g) and 17(i) show correlations of ‘pair’ modes; the remaining figures show correlations between ‘non-pair’ modes. Each correlation loop, specifically in ‘pair’ modes, belongs to a specific vortical structure. Figure 17(a) shows that the correlation of the reference case has two loops, which indicates that two different dominant patterns in the vortex shedding occur one after another. However, the optimized case has only one loop, indicating that one dominant pattern occurs cyclically. Higher-rank correlations also contain more loops, which indicates that each dominant pattern has different forms that repeat harmonically in different shedding cycles. It is worth pointing out that as the dynamical system experiences higher nonlinearity, the number of loops in the corresponding correlations increases. Interestingly, in the optimized case, higher-rank correlations have multiple similar loops, indicating linear-like behaviour of the vortical structures in the wake of the optimized airfoil.
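The sketch below reproduces the idea of such a correlation plot with two synthetic signals mimicking a ‘pair’ mode (same frequency, quarter-period offset), which traces a single closed loop; real generalized coordinates from the ROM would be used in place of the stand-ins.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 30.0, 600)
q1 = np.sin(2*np.pi*0.2*t)          # stand-in generalized coordinate
q2 = np.cos(2*np.pi*0.2*t)          # paired mode: q1 offset by T/4

fig, ax = plt.subplots()
ax.plot(q1, q2, lw=0.8)             # a single loop => one dominant pattern
ax.set_xlabel('$q_1$')
ax.set_ylabel('$q_2$')
plt.show()
```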

Figure 17. Correlations between modes: reference (blue) and optimized (red) cases.

Figures 18 and 19 show correlations coloured by sensitivity magnitudes, $h_{i} \in \boldsymbol {h}=[h_1, h_2, \ldots, h_r]^{\top }$, for the reference and optimized cases, respectively. The maximum sensitivity belongs to the first rank and, as the rank increases, the sensitivity magnitude reduces. The primary weight of the sensitivity belongs to prominent vortical structures in the wake of the airfoil. As we discussed earlier, the sensitivity of the optimized case is less than that of the reference case, which is observed by comparing sensitivity magnitudes in figures 18 and 19. It is worth mentioning that the maximum sensitivity magnitude for each mode happens when the generalized coordinates in correlations move to another loop. This exchange occurs when an LEV or TEV sheds downstream. Therefore, based on these observations, the sensitivity of the generalized coordinates in Hilbert space is strongly related to the sensitivity of the state vector in physical space. Consequently, applying optimization to the dynamical system in Hilbert space is analogous to optimization in physical space.

Figure 18. Correlations of modes coloured by sensitivity magnitude for the reference case.

Figure 19. Correlations of modes coloured by sensitivity magnitude for the optimized case.

9. Conclusions

In this study, we proposed a new approach for PDE-constrained optimization of nonlinear systems. The key idea is that, instead of directly optimizing FOMs in physical space, we transform the physical governing equations into a non-physical (Hilbert) space, where the dynamics of the system evolve on manifolds. This allows us to optimize the evolutionary behaviour of dynamical systems in lower dimensions. Hence, in optimization problems, the optimizer is able to change the dynamical behaviour of the system such that its effect in physical space minimizes the objective function. To this end, we developed a closure model in the form of a ROM. This closure model was explicitly derived from the FOM using the LSPG approach, and leverages the minimum-residual property over a discrete temporal domain. Using this ROM, we omitted low-energy unstable modes, while maintaining accuracy of the ROM. Additionally, sampling techniques were considered for building the trial and test basis functions, which shape the manifolds in Hilbert space. This procedure is referred to as a physics-constrained data-driven approach. The main feature of a ROM developed using this approach is that it provides derivatives of the FOM (i.e. the Jacobian) in lower dimensions. Furthermore, sequential least-squares minimizations were used to solve this closure model, leading to a robust framework specialized for unsteady optimization.

Shape optimization of an NACA 0012 airfoil in the presence of flow separation at $Re=1000$ was then considered. It was shown that the proposed framework results in a significant improvement in the aerodynamic performance of the airfoil, with a drag reduction of approximately 20 % observed at $\alpha_{eff}=25^{\circ}$. Unlike the reference case, which exhibits strongly nonlinear dynamics, the results illustrate that the dynamical system of the optimized case exhibits linear-like behaviour. We therefore expect that the proposed optimization approach can be applied to large-scale nonlinear problems, such as unsteady flows, while significantly reducing computational cost and data storage requirements. Future work will focus on applying this approach to larger and more complicated turbulent flows.

Supplementary material

Supplementary material is available at https://doi.org/10.1017/jfm.2021.1051.

Funding

The authors acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC): RGPAS-2017-507988, RGPIN-2017-06773. This research was enabled in part by support provided by Calcul Quebec (www.calculquebec.ca), WestGrid (www.westgrid.ca), SciNet (www.scinethpc.ca) and Compute Canada (www.computecanada.ca) via a Resources for Research Groups allocation. H.K. also acknowledges support from Concordia University via the Concordia Graduate Fellowship and Concordia University International Tuition Award of Excellence.

Data statement

Data relating to the results in this manuscript can be downloaded from the publication's website under a CC BY-NC-ND 4.0 license.

Declaration of interests

The authors report no conflict of interest.

References


Amsallem, D. & Farhat, C. 2008 Interpolation method for adapting reduced-order models and application to aeroelasticity. AIAA J. 46 (7), 1803–1813.
Apponsah, K.P. & Zingg, D.W. 2020 Aerodynamic shape optimization for unsteady flows: some benchmark problems. In AIAA Scitech 2020 Forum, p. 0541.
Arian, E., Fahl, M. & Sachs, E.W. 2000 Trust-region proper orthogonal decomposition for flow control. Tech. Rep. Institute for Computer Applications in Science and Engineering, Hampton, VA.
Balay, S., et al. 2020 PETSc users manual. Tech. Rep. ANL-95/11 – Revision 3.14. Argonne National Laboratory.
Bergmann, M. & Cordier, L. 2008 Optimal control of the cylinder wake in the laminar regime by trust-region methods and POD reduced-order models. J. Comput. Phys. 227 (16), 7813–7840.
Bergmann, M., Cordier, L. & Brancher, J.-P. 2005 Optimal rotary control of the cylinder wake using proper orthogonal decomposition reduced-order model. Phys. Fluids 17 (9), 097101.
Blonigan, P.J., Gomez, S.A. & Wang, Q. 2014 Least squares shadowing for sensitivity analysis of turbulent fluid flows. In 52nd Aerospace Sciences Meeting, p. 1426.
Bright, I., Lin, G. & Kutz, J.N. 2013 Compressive sensing based machine learning strategy for characterizing the flow around a cylinder with limited pressure measurements. Phys. Fluids 25 (12), 127102.
Brunton, S.L. & Kutz, J.N. 2019 Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press.
Brunton, S.L., Noack, B.R. & Koumoutsakos, P. 2020 Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, 477–508.
Bui-Thanh, T., Willcox, K. & Ghattas, O. 2008a Model reduction for large-scale systems with high-dimensional parametric input space. SIAM J. Sci. Comput. 30 (6), 3270–3288.
Bui-Thanh, T., Willcox, K. & Ghattas, O. 2008b Parametric reduced-order models for probabilistic analysis of unsteady aerodynamic applications. AIAA J. 46 (10), 2520–2529.
Carlberg, K., Barone, M. & Antil, H. 2017 Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction. J. Comput. Phys. 330, 693–734.
Carlberg, K., Bou-Mosleh, C. & Farhat, C. 2011 Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. Intl J. Numer. Meth. Engng 86 (2), 155–181.
Carlberg, K. & Farhat, C. 2011 A low-cost, goal-oriented ‘compact proper orthogonal decomposition’ basis for model reduction of static systems. Intl J. Numer. Meth. Engng 86 (3), 381–402.
Carlberg, K., Farhat, C., Cortial, J. & Amsallem, D. 2013 The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J. Comput. Phys. 242, 623–647.
Dalcín, L., Paz, R. & Storti, M. 2005 MPI for Python. J. Parallel Distrib. Comput. 65 (9), 1108–1115.
Dalcin, L.D., Paz, R.R., Kler, P.A. & Cosimo, A. 2011 Parallel distributed computing using Python. Adv. Water Resour. 34 (9), 1124–1139.
Di Ilio, G., Chiappini, D., Ubertini, S., Bella, G. & Succi, S. 2018 Fluid flow around NACA 0012 airfoil at low-Reynolds numbers with hybrid lattice Boltzmann method. Comput. Fluids 166, 200–208.
Epureanu, B.I. 2003 A parametric analysis of reduced order models of viscous flows in turbomachinery. J. Fluids Struct. 17 (7), 971–982.
Fahl, M. 2000 Trust-region methods for flow control based on reduced order modelling. PhD thesis, Universität Trier.
Hay, A., Borggaard, J.T. & Pelletier, D. 2009 Local improvements to reduced-order models using sensitivity analysis of the proper orthogonal decomposition. J. Fluid Mech. 629, 41–72.
He, P., Mader, C.A., Martins, J.R.R.A. & Maki, K.J. 2019 Aerothermal optimization of a ribbed U-bend cooling channel using the adjoint method. Intl J. Heat Mass Transfer 140, 152–172.
He, P. & Martins, J. 2021 A hybrid time-spectral approach for aerodynamic shape optimization with unsteady flow. In AIAA Scitech 2021 Forum, p. 0278.
Hernandez, V., Roman, J.E. & Vidal, V. 2005 SLEPc: a scalable and flexible toolkit for the solution of eigenvalue problems. ACM Trans. Math. Softw. 31 (3), 351–362.
Hinze, M. & Matthes, U. 2011 Model order reduction for networks of ODE and PDE systems. In IFIP Conference on System Modeling and Optimization, pp. 92–101. Springer.
Huang, R.F. & Lin, C.L. 1995 Vortex shedding and shear-layer instability of wing at low-Reynolds numbers. AIAA J. 33 (8), 1398–1403.
Huynh, H.T. 2007 A flux reconstruction approach to high-order schemes including discontinuous Galerkin methods. In 18th AIAA Computational Fluid Dynamics Conference, p. 4079.
Karbasian, H.R. & Vermeire, B.C. 2022 Gradient-free aerodynamic shape optimization using large eddy simulation. Comput. Fluids 232, 105185.
Kunisch, K. & Volkwein, S. 2008 Proper orthogonal decomposition for optimality systems. ESAIM 42 (1), 1–23.
Kurtulus, D.F. 2015 On the unsteady behavior of the flow around NACA 0012 airfoil with steady external conditions at $Re= 1000$. Intl J. Micro Air Veh. 7 (3), 301–326.
Lassila, T. & Rozza, G. 2010 Parametric free-form shape design with PDE models and reduced basis method. Comput. Meth. Appl. Mech. Engng 199 (23–24), 1583–1592.
Ling, J. & Templeton, J. 2015 Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier–Stokes uncertainty. Phys. Fluids 27 (8), 085103.
Liu, Y., Li, K., Zhang, J., Wang, H. & Liu, L. 2012 Numerical bifurcation analysis of static stall of airfoil and dynamic stall under unsteady perturbation. Commun. Nonlinear Sci. Numer. Simul. 17 (8), 3427–3434.
Manzoni, A., Quarteroni, A. & Rozza, G. 2012 Shape optimization for viscous flows by reduced basis methods and free-form deformation. Intl J. Numer. Meth. Fluids 70 (5), 646–670.
Meena, M., Taira, K. & Asai, K. 2017 Airfoil-wake modification with Gurney flap at low Reynolds number. AIAA J. 56 (4), 1348–1359.
Moarref, R., Jovanović, M.R., Tropp, J.A., Sharma, A.S. & McKeon, B.J. 2014 A low-order decomposition of turbulent channel flow via resolvent analysis and convex optimization. Phys. Fluids 26 (5), 051701.
Mohebujjaman, M., Rebholz, L.G. & Iliescu, T. 2018 Physically constrained data-driven correction for reduced-order modeling of fluid flows. Intl J. Numer. Meth. Fluids 89 (3), 103–122.
Murata, T., Fukami, K. & Fukagata, K. 2020 Nonlinear mode decomposition with convolutional neural networks for fluid dynamics. J. Fluid Mech. 882.
Nadarajah, S. & Jameson, A. 2000 A comparison of the continuous and discrete adjoint approach to automatic aerodynamic optimization. In 38th Aerospace Sciences Meeting and Exhibit, p. 667.
Nair, A.G., Yeh, C.-A., Kaiser, E., Noack, B.R., Brunton, S.L. & Taira, K. 2019 Cluster-based feedback control of turbulent post-stall separated flows. J. Fluid Mech. 875, 345–375.
Newman, J.C. III, Taylor, A.C. III, Barnwell, R.W., Newman, P.A. & Hou, G.J.-W. 1999 Overview of sensitivity analysis and shape optimization for complex aerodynamic configurations. J. Aircraft 36 (1), 87–96.
Oliveira, I.B. & Patera, A.T. 2007 Reduced-basis techniques for rapid reliable optimization of systems described by affinely parametrized coercive elliptic partial differential equations. Optim. Engng 8 (1), 43–65.
Orozco, C.E. & Ghattas, O.N. 1992 Massively parallel aerodynamic shape optimization. Comput. Syst. Engng 3 (1–4), 311–320.
Parish, E.J., Wentland, C.R. & Duraisamy, K. 2020 The adjoint Petrov–Galerkin method for non-linear model reduction. Comput. Meth. Appl. Mech. Engng 365, 112991.
Ravindran, S.S. 2000 Reduced-order adaptive controllers for fluid flows using POD. J. Sci. Comput. 15 (4), 457–478.
Rowley, C.W. & Dawson, S.T.M. 2017 Model reduction for flow analysis and control. Annu. Rev. Fluid Mech. 49, 387–417.
Rozza, G. & Manzoni, A. 2010 Model order reduction by geometrical parametrization for shape optimization in computational fluid dynamics. In Proceedings of the ECCOMAS CFD, V European Conference on Computational Fluid Dynamics, pp. 1–20.
Rubino, A., Pini, M., Colonna, P., Albring, T., Nimmagadda, S., Economon, T. & Alonso, J. 2018 Adjoint-based fluid dynamic design optimization in quasi-periodic unsteady flow problems using a harmonic balance method. J. Comput. Phys. 372, 220–235.
Taira, K., Brunton, S.L., Dawson, S.T.M., Rowley, C.W., Colonius, T., McKeon, B.J., Schmidt, O.T., Gordeyev, S., Theofilis, V. & Ukeiley, L.S. 2017 Modal analysis of fluid flows: an overview. AIAA J. 55 (12), 4013–4041.
Taira, K., Hemati, M.S., Brunton, S.L., Sun, Y., Duraisamy, K., Bagheri, S., Dawson, S.T.M. & Yeh, C.-A. 2020 Modal analysis of fluid flows: applications and outlook. AIAA J. 58 (3), 998–1022.
Virtanen, P., et al. 2020 SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Meth. 17, 261–272.
Wang, Q., Hu, R. & Blonigan, P. 2014 Least squares shadowing sensitivity analysis of chaotic limit cycle oscillations. J. Comput. Phys. 267, 210–224.
Yamaleev, N., Diskin, B. & Nielsen, E. 2008 Adjoint-based methodology for time-dependent optimization. In 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, p. 5857.
Yarusevych, S., Sullivan, P.E. & Kawall, J.G. 2009 On vortex shedding from an airfoil in low-Reynolds-number flows. J. Fluid Mech. 632, 245.
Yoon, G.H. 2016 Topology optimization for turbulent flow with Spalart–Allmaras model. Comput. Meth. Appl. Mech. Engng 303, 288–311.
Zahr, M.J. & Farhat, C. 2015 Progressive construction of a parametric reduced-order model for PDE-constrained optimization. Intl J. Numer. Meth. Engng 102 (5), 1111–1135.
Zahr, M.J. & Persson, P.-O. 2016 An adjoint method for a high-order discretization of deforming domain conservation laws for optimization of flow problems. J. Comput. Phys. 326, 516–543.