Introduction

Computer-aided physical design has become an important tool in many fields including photonics1,2, mechanical design3, circuit design4,5, and thermal design6,7. In many cases, the design problem is formulated as a constrained nonconvex optimization problem which is then approximately minimized using local optimization methods such as ADMM8, evolutionary algorithms5, and the method of moving asymptotes6, among many others.

More generally, a physical design problem can be phrased in the following way: we are allowed to choose some design parameters (e.g., the permittivity in photonic design or the conductances in diffusion design) at each point in a domain, within some limits, in order to minimize an objective function of the field (this can be, e.g., the electric field in photonic design, or a vector containing the potentials, flows, and potential differences in diffusion design). The constraints specify the physics of the problem, connecting the design variables to the field variables (e.g., Maxwell’s equations in photonics, or a diffusion equation such as the heat equation in diffusion design). We note that, in many cases, the physics constraints are linear equations in the field variables (when the design parameters are held constant), and linear equations in the design parameters (when the fields are held constant), which has led to some heuristics with good performance8.

There has been recent interest in understanding global properties of solutions of physical design problems: lower bounds on optimal design objectives in photonic design have been studied via convex relaxations derived from physical arguments9,10, duality theory11,12,13, and other approaches14. We instead analyze a convex restriction (see15, Sect. 2.1) of the physical design problem, potentially providing another approach for analyzing properties of global solutions and for constructing fast heuristics.

In this paper, we consider a simple (but very general) formulation of a class of physical design problems which includes problems in thermal design, photonic inverse design with scalar fields and convex objectives, and some types of control problems. This formulation offers some insights into the properties of global solutions for these problems. For example, in many practical cases, problems with linear objectives can be shown to have optimal extremal designs (in the case of physical design) or bang-bang controls (in the case of control). As another example, we observe that it suffices to know only the sign of a subset of variables in order to globally solve the problem efficiently, even though the original problem is NP-hard. The formulation also suggests a heuristic which appears to have good performance for many kinds of physical design problems, and we give numerical examples of this heuristic applied to a few different problems.

General problem formulation

We consider a problem of the form

$$\begin{array}{ll} \text {minimize} &\quad f(x, u, v) \\ \text {subject} \; \text{to} &\quad (x, u, v) \in \mathscr {C} \\ &\quad u = \mathbf{diag}(\theta ) v \\ &\quad \theta ^\text{min}\le \theta \le \theta ^\text{max}, \end{array}$$
(1)

where \(f: {\text{ R }}^n \times {\text{ R }}^m \times {\text{ R }}^m \rightarrow {\text{ R }}\) is a convex function over our variables \(x \in {\text{ R }}^n\) and \(u, v \in {\text{ R }}^m\), \(\mathscr {C}\subseteq {\text{ R }}^n \times {\text{ R }}^m \times {\text{ R }}^m\) is a convex constraint set, and \(\theta \in {\text{ R }}^m\) is our design variable whose limits are \(\theta ^\text{min}, \theta ^\text{max}\in {\text{ R }}^m\). While apparently simple, many physical design problems can be expressed as instances of problem (1); we show a few examples in the “Applications” section. We call \((x, u, v)\) the field (corresponding to, e.g., the electric field in photonic design) and \(\theta \) the design parameters (corresponding to, e.g., the permittivity in photonic design). We say that \(\theta \) is extremal whenever \(\theta _i \in \{\theta ^\text{min}_i, \theta ^\text{max}_i\}\) for each \(i=1, \dots , m\). The physics of the problem is encoded in the constraints \((x, u, v) \in \mathscr {C}\) and \(u = \mathbf{diag}(\theta )v\).

In this problem, the convex set \(\mathscr {C}\) can be any convex set specifying constraints on the variables \((x, u, v)\), such as linear equality constraints. On the other hand, the design parameters \(\theta \) enter in a very specific way: as a diagonal term relating u and v. Another way to say this is that each design parameter \(\theta _i\) is the ratio of two field parameters, \(u_i\) and \(v_i\).

We note that problem (1) is convex in \((x, u, v)\) whenever \(\theta \) is fixed, and convex in \((x, u, \theta )\) whenever v is fixed. In practice, heuristics that exploit this observation to approximately minimize instances of (1) have been quite successful16.

Absolute upper bound formulation

Problem (1) is equivalent to

$$\begin{array}{ll} \text {minimize}& \quad f(x, u, v)\\ \text {subject}\; \text{to}& \quad (x, u, v) \in \mathscr {C}\\& \quad u = \mathbf{diag}({\bar{\theta }})v + \mathbf{diag}(\rho ) w\\& \quad |w| \le |v|, \end{array}$$
(2)

where the absolute value is taken elementwise. The variables of problem (2) are \(x \in {\text{ R }}^n\) and \(u, v, w \in {\text{ R }}^m\), while \({\bar{\theta }} = (\theta ^\text{max}+ \theta ^\text{min})/2\) and \(\rho = (\theta ^\text{max}- \theta ^\text{min})/2\) are constants. Note that \({\bar{\theta }}\) is the midpoint of the interval of allowed parameter values, while \(\rho \) is its radius, i.e., half the width of the interval.

The equivalence between problems (1) and (2) can be seen by noting that, for every \((x, u, v, w)\) feasible for problem (2), we can set

$$\begin{aligned} \theta _i = {\left\{ \begin{array}{ll} {\bar{\theta }}_i + \rho _i w_i/v_i & v_i \ne 0,\\ {\bar{\theta }}_i & \text {otherwise}, \end{array}\right. } \end{aligned}$$
(3)

for \(i=1, \dots , m\). Then, \((x, u, v, \theta )\) is feasible for (1), with the same objective value. Note that, if \(v_i = 0\), any choice of \(\theta _i \in [\theta ^\text{min}_i, \theta ^\text{max}_i]\) would suffice, since in that case \(w_i = 0\) and therefore \(u_i = 0 = \theta _i v_i\) for any such choice.

Similarly, for any \((x, u, v, \theta )\) that is feasible for (1), we can set

$$\begin{aligned} w_i = \left( \frac{\theta _i - {\bar{\theta }}_i}{\rho _i}\right) v_i, \quad i=1, \dots , m, \end{aligned}$$

and then \((x, u, v, w)\) is feasible for problem (2) with the same objective value.
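To make the change of variables concrete, the following short Julia snippet checks the correspondence between (1) and (2) on randomly generated data; the data, names, and parameter limits are illustrative only.

```julia
# A minimal numerical check of the change of variables between (1) and (2);
# all data and names here are illustrative.
using Random

Random.seed!(1)
m = 5
θmin, θmax = fill(1.0, m), fill(2.0, m)
θbar, ρ = (θmax + θmin) / 2, (θmax - θmin) / 2

# An arbitrary feasible design and field for problem (1).
θ = θmin + rand(m) .* (θmax - θmin)
v = randn(m)
u = θ .* v                                   # u = diag(θ) v

# Map to the variables of the absolute-upper-bound formulation (2).
w = ((θ - θbar) ./ ρ) .* v
@assert all(abs.(w) .<= abs.(v) .+ 1e-12)    # |w| ≤ |v|
@assert u ≈ θbar .* v + ρ .* w               # u = diag(θ̄) v + diag(ρ) w

# And back: recover θ from (v, w) via (3).
θrec = [v[i] == 0 ? θbar[i] : θbar[i] + ρ[i] * w[i] / v[i] for i in 1:m]
@assert θrec ≈ θ
```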

We will refer to problem (2) as the absolute-upper-bound formulation of problem (1). This problem, like problem (1), is nonconvex due to the inequality \(|w| \le |v|\), and is hard to solve exactly.

NP-hardness

We can reduce any mixed-integer convex program (MICP) to an instance of (2), implying that problem (2) is hard, since any instance of an NP-complete problem is easily reducible to an instance of the MICP problem17.

The reduction follows since we can force v to be binary in problem (2). First, choose \({\bar{\theta }} = 0\) and \(\rho = \mathbf{1}\) (and therefore \(u=w\)), and add \(u = \mathbf{1}\) to the constraint set. This immediately implies that \(\mathbf{1}\le |v|\). Adding the convex constraint \(|v| \le \mathbf{1}\) to the constraint set \(\mathscr {C}\) then yields \(v \in \{\pm 1\}^m\), as required. Since \(\mathscr {C}\) and f can be otherwise freely chosen, the result follows.

Known signs

If the signs of an optimal \(v^\star \) are known for problem (2), then the problem becomes convex. We can see this as follows. If \(s = \mathbf{sign}(v^\star ) \in \{\pm 1\}^m\) is known, then we can solve the following convex problem (see18, Sect. 4):

$$\begin{array}{ll} \mathrm{minimize}& \quad f(x, u, v)\\ \text{subject} \; \text{to}& \quad (x, u, v) \in \mathscr {C}\\& \quad u = \mathbf{diag}({\bar{\theta }})v + \mathbf{diag}(\rho ) w\\& \quad |w| \le s\circ v, \end{array}$$
(4)

where \(s\circ v\) is the elementwise product of s and v. Note that \(v^\star \) (and its associated values of \(x^\star \), \(u^\star \), and \(w^\star \)) are feasible for this instance of (4) since \(|v^\star | = s\circ v^\star \). Since, additionally, any point feasible for (4) is also feasible for (2) (as \(s \circ v \le |v|\)), a solution of this instance of (4) must be globally optimal for (2).
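As an illustration, the following Julia/JuMP sketch sets up problem (4) for the special case where \(\mathscr {C}\) is an affine set \(\{(x, u, v) : F_x x + F_u u + F_v v = g\}\) and f is the convex quadratic \(\Vert Px - q\Vert _2^2\). The data names are assumptions made for the sketch, and we use the open-source SCS solver here for illustration (the code accompanying the paper uses Mosek).

```julia
# A hedged sketch of the known-signs convex problem (4) with an affine set C and a
# quadratic objective; Fx, Fu, Fv, g, P, q are illustrative problem data.
using JuMP, SCS

function solve_with_signs(s, θbar, ρ, Fx, Fu, Fv, g, P, q)
    n, m = size(Fx, 2), length(θbar)
    model = Model(SCS.Optimizer)
    set_silent(model)
    @variable(model, x[1:n])
    @variable(model, u[1:m])
    @variable(model, v[1:m])
    @variable(model, w[1:m])
    @constraint(model, Fx * x + Fu * u + Fv * v .== g)    # (x, u, v) ∈ C
    @constraint(model, u .== θbar .* v .+ ρ .* w)         # u = diag(θ̄) v + diag(ρ) w
    @constraint(model,  w .<= s .* v)                      # |w| ≤ s ∘ v, written as
    @constraint(model, -w .<= s .* v)                      # two linear inequalities
    @objective(model, Min, sum((P * x - q) .^ 2))
    optimize!(model)
    return objective_value(model), value.(x), value.(u), value.(v), value.(w)
end
```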

Global solution

Note that problem (4) generates a family of optimization problems over the set of possible signs, \(s \in \{\pm 1\}^m\). This suggests a simple, if inefficient, way to globally solve problem (2) and therefore problem (1): solve problem (4) for the \(2^m\) possible signs, \(s \in \{\pm 1\}^m\), to obtain optimal values \(p^\star (s)\) for each set of signs s. A solution \((x^\star , u^\star , v^\star , w^\star )\) for any optimal set of signs, \(s^\star \in \mathrm{argmin}_{s \in \{\pm 1\}^m} p^\star (s)\), is then a solution to (2) and therefore to (1).

Of course, this algorithm may not be useful in practice for anything but the smallest values of m, but it implies that solving problem (1) requires solving only a finite number of convex problems.
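For very small m, this enumeration can be written directly; the sketch below reuses solve_with_signs from above and is purely illustrative.

```julia
# Brute-force global solution by enumerating all 2^m sign vectors; practical only
# for very small m. Reuses solve_with_signs from the sketch above.
function solve_global(θbar, ρ, Fx, Fu, Fv, g, P, q)
    m = length(θbar)
    best_p, best = Inf, nothing
    for bits in 0:(2^m - 1)
        s = [((bits >> (i - 1)) & 1) == 1 ? 1.0 : -1.0 for i in 1:m]
        res = solve_with_signs(s, θbar, ρ, Fx, Fu, Fv, g, P, q)
        if res[1] < best_p
            best_p, best = res[1], res
        end
    end
    return best_p, best
end
```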

Extremality principle

The rewriting given in (4) also yields an interesting insight. If problem (4) is a feasible linear program and \(\mathscr {C}\) is an affine set with \(\{u \mid (x, u, v) \in \mathscr {C}\} = {\text{ R }}^m\), i.e., for each \(u \in {\text{ R }}^m\) there exists a \(v \in {\text{ R }}^m\) and an \(x \in {\text{ R }}^n\) such that \((x, u, v) \in \mathscr {C}\), then there exists a solution of (4) such that all entries of the inequality \(|w| \le s\circ v\) hold at equality (see, e.g.,19, Sect. 2.6). This rewriting then implies that there exists an optimal design for which \(\theta \) is extremal, by (3). A numerical example of this principle is found in the “Thermal design” section.

Sign flip descent

Since problem (4) generates a family of optimization problems parametrized by the sign vector \(s \in \{\pm 1\}^m\), we can view the original physical design problem (1) as a problem of choosing an optimal Boolean vector. A simple way of approximately optimizing (2) is: at each iteration i, start with some sign vector \(s^i \in \{\pm 1\}^m\) and solve (4) to obtain an optimal value \(p^i\). We then consider a rule for proposing a new sign vector, say \({\tilde{s}}^i \in \{\pm 1\}^m\), for which we again solve (4) and then obtain a new optimal value \({\tilde{p}}^i\). If \({\tilde{p}}^i < p^i\), we then keep this new sign vector, i.e., we set \(s^{i+1} = {\tilde{s}}^i\), and repeat the procedure; otherwise, we discard \({\tilde{s}}^i\) by setting \(s^{i+1} = s^i\), and repeat the procedure, proposing a new sign vector in the next iteration. This is outlined in algorithm 1.

Algorithm 1: Sign flip descent.

By construction, any algorithm of the form of algorithm 1 is a descent algorithm, since each iterate is feasible and the objective value is nonincreasing from one iteration to the next. We outline two possible rules for proposing new sets of signs at each iteration.
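A generic sketch of algorithm 1 in Julia is given below; the functions `solve` (mapping a sign vector to an optimal value and field) and `propose` (suggesting a new candidate sign vector) are assumed to be supplied, for instance a thin wrapper around solve_with_signs above together with one of the two rules described next.

```julia
# A hedged sketch of algorithm 1 (sign flip descent). `solve(s)` is assumed to return
# (optimal value, field) for problem (4) with signs s, and `propose(s, field, k)` is
# assumed to return a new candidate sign vector.
function sign_flip_descent(solve, propose, s0; maxiter = 100, ϵ = 1e-4)
    s = copy(s0)
    p, field = solve(s)
    for k in 1:maxiter
        s_new = propose(s, field, k)
        s_new == s && break                    # nothing left to flip
        p_new, field_new = solve(s_new)
        if p_new < p                           # keep the proposal only if it improves
            improvement = p - p_new
            s, p, field = s_new, p_new, field_new
            improvement < ϵ && break           # improvements have become negligible
        end
    end
    return s, p, field
end
```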

Greedy sign rule

A simple rule for choosing signs is to begin at iteration k with some set of signs \(s^k\). We then define a new set of signs \({\tilde{s}}^k\) with \({\tilde{s}}^k = s^k\) except at the kth entry, where \({\tilde{s}}^k_k = -s^k_k\) (or, if \(k > m\), at the entry with index \(1 + ((k-1) \bmod m)\), i.e., the entries are flipped, one-by-one, in a round-robin fashion). We stop whenever flipping any single entry of \(s^k\) fails to yield a lower objective value.

The greedy sign rule has two useful properties. First, the rule guarantees local optimality in the following sense: if algorithm 1 returns \(s^\star \), then changing any one sign of \(s^\star \) will not decrease the objective value. Second, the rule terminates in finite time, since the corresponding algorithm is a descent algorithm and there are a finite number of possible sign vectors. On the other hand, the algorithm is often slow for anything but the smallest designs: to reach the terminating condition, we have to solve at least m convex optimization problems.
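A sketch of the greedy (round-robin) proposal rule, written to match the interface of sign_flip_descent above, is given below; the current field is ignored by this rule.

```julia
# Greedy sign rule: flip a single entry per iteration, cycling through the entries.
function greedy_propose(s, _field, k)
    m = length(s)
    i = 1 + (k - 1) % m        # entry 1 + ((k - 1) mod m), i.e., round-robin
    s_new = copy(s)
    s_new[i] = -s_new[i]
    return s_new
end
```

Note that, as written, the generic loop above stops only when a proposal equals the current sign vector or after maxiter iterations; implementing the exact stopping criterion of the greedy rule (no single flip improves the objective) would additionally require tracking m consecutive rejected proposals, which we omit here.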

Field-based rule

Another simple rule that appears to work very well in practice is based on the observation that, for many choices of the sign vector \(s^k\), the solution of (4) has many entries of v equal to zero (i.e., many entries of the inequality \(|w| \le s^k \circ v\) are tight at zero). If \(v_i\) is zero for some index \(i=1, \dots , m\), this suggests that the sign \(s_i^k\) may have been set incorrectly: in this case, we propose a new vector \({\tilde{s}}^k\) which is equal to \(s^k\) at all entries i for which \(v_i\) is nonzero and has the opposite sign at all entries i for which \(v_i\) is zero.

Note that this new proposed vector will always have an optimal value \({\tilde{p}}^k\) which is at least as small as the optimal value for \(s^k\), i.e., \({\tilde{p}}^k \le p^k\), since the previous solution remains feasible: whenever \(v_i = 0\), the constraint forces \(w_i = 0\), which is feasible for either choice of sign. This observation, coupled with the proposed rule, suggests that we should stop whenever there are no signs left to flip, or whenever the iterations stop decreasing as quickly as desired, i.e., whenever \(p^k - p^{k+1} < \varepsilon \).

While this rule does not necessarily guarantee local optimality, it always terminates in finite time with the given stopping conditions and appears to work well in practice, requiring far fewer than m iterations to terminate (in contrast to the greedy sign rule), as shown in the “Numerical examples” section.
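The field-based rule can be sketched as follows, again matching the interface used above; the tolerance for deciding that an entry of v is numerically zero is an assumption made for the sketch.

```julia
# Field-based rule: flip the sign at every entry where the current field v is
# (numerically) zero, and keep the sign everywhere else.
function field_propose(s, v, _k; tol = 1e-8)
    s_new = copy(s)
    for i in eachindex(v)
        if abs(v[i]) <= tol
            s_new[i] = -s_new[i]
        end
    end
    return s_new
end
```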

Applications

We describe a few interesting design problems that reduce to problems of the form of (1).

Diagonal physical design

As in, e.g.,11, many physical design problems can be written in the following way:

$$\begin{array}{ll} \text {minimize} & \quad f(z)\\ \text {subject} \; \text{to} & \quad (A + \mathbf{diag}(\theta ))z = b\\ & \quad \theta ^\text{min} \le \theta \le \theta ^\text{max}, \end{array}$$
(5)

where \(A \in {\text{ R }}^{n\times n}\) describes the physics of the problem, while \(b \in {\text{ R }}^n\) describes the excitation, and \(\theta \in {\text{ R }}^n\) are the design parameters of the system, chosen to minimize some convex objective function \(f: {\text{ R }}^n \rightarrow {\text{ R }}\) of the field \(z \in {\text{ R }}^n\). Our variables in this problem are the field z and the design parameters \(\theta \).

We can write a problem of the form of (5) as a problem of the form of (1) by introducing a new variable u with constraint \(u = \mathbf{diag}(\theta )z\) and rewriting the equality constraint of (5) with this new variable, \(Az + u = b\). As the set of \((z, u)\) satisfying \(Az + u = b\) forms a convex (in fact, affine) set, the resulting problem,

$$\begin{array}{ll} \text {minimize}& \quad f(z)\\ \text {subject} \; \text{to}& \quad Az + u = b\\& \quad u = \mathbf{diag}(\theta )z\\& \quad \theta ^\text{min} \le \theta \le \theta ^\text{max}, \end{array}$$

is of the form of (1), and can easily be rewritten in the form of (2).
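A Julia/JuMP sketch of the known-signs convex problem (4) specialized to this diagonal form is shown below; the objective is passed in as a function of the JuMP variable vector z, the solver choice is illustrative, and the names are assumptions made for the sketch.

```julia
# Known-signs convex problem for the diagonal physical design form (5):
#   minimize f(z)  s.t.  A z + u = b,  u = diag(θ̄) z + diag(ρ) w,  |w| ≤ s ∘ z.
using JuMP, SCS

function solve_diagonal_design(s, A, b, θbar, ρ, f)
    n = length(b)
    model = Model(SCS.Optimizer)
    set_silent(model)
    @variable(model, z[1:n])
    @variable(model, u[1:n])
    @variable(model, w[1:n])
    @constraint(model, A * z + u .== b)            # physics: (A + diag(θ)) z = b
    @constraint(model, u .== θbar .* z .+ ρ .* w)
    @constraint(model,  w .<= s .* z)              # |w| ≤ s ∘ z, as two linear inequalities
    @constraint(model, -w .<= s .* z)
    @objective(model, Min, f(z))
    optimize!(model)
    return objective_value(model), value.(z), value.(w)
end
```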

Static diffusion design

Consider a flow problem on a graph \(G = (V, E)\) where we choose the conductance \(g_k \in {\text{ R }}\) across each edge \(k \in E\), constrained to satisfy \(g^\text{min}_k \le g_k \le g^\text{max}_k\), to minimize some function \(f: {\text{ R }}^{|V|} \rightarrow {\text{ R }}\) of the potentials \(e \in {\text{ R }}^{|V|}\), given some sources \(s \in {\text{ R }}^{|V|}\).

To compactly write the conditions this system must satisfy, let the matrix \(A \in {\text{ R }}^{|V|\times |E|}\) be the incidence matrix of the graph G, defined as (see20, Sect. 7.3):

$$\begin{aligned} A_{ij} = {\left\{ \begin{array}{ll} +1 & \text{edge } j \text{ points to node } i\\ -1 & \text{edge } j \text{ points from node } i\\ 0 & \text{otherwise}. \end{array}\right. } \end{aligned}$$

We can then write the steady-state diffusion equation as

$$\begin{aligned} A\mathbf{diag}(g)A^Te = s, \end{aligned}$$
(6)

where \(A\mathbf{diag}(g)A^T\) can be recognized as the graph Laplacian of G with edge weights g. This equation can also be seen as the discrete form of the heat equation on a graph G21.
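For concreteness, the following Julia snippet builds the incidence matrix of an \(m \times m\) grid graph (the graph used in the thermal design example below) and forms the weighted Laplacian appearing in (6); the node ordering and edge orientations are arbitrary choices made for the sketch.

```julia
# Incidence matrix of an m×m grid graph, nodes indexed column-major; the orientation
# of each edge is arbitrary. A*diag(g)*A' is then the weighted graph Laplacian in (6).
using SparseArrays, LinearAlgebra

function grid_incidence(m)
    node(i, j) = (j - 1) * m + i
    rows, cols, vals = Int[], Int[], Float64[]
    k = 0
    for j in 1:m, i in 1:m
        for (i2, j2) in ((i + 1, j), (i, j + 1))      # right and down neighbors
            if i2 <= m && j2 <= m
                k += 1
                push!(rows, node(i, j));   push!(cols, k); push!(vals, -1.0)  # edge leaves (i, j)
                push!(rows, node(i2, j2)); push!(cols, k); push!(vals,  1.0)  # edge enters (i2, j2)
            end
        end
    end
    return sparse(rows, cols, vals, m^2, k)           # |V| × |E| incidence matrix
end

A = grid_incidence(11)
g = ones(size(A, 2))
L = A * Diagonal(g) * A'                              # graph Laplacian with edge weights g
```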

The corresponding optimization problem is then an instance of (1):

$$\begin{array}{ll}\text {minimize}&\quad f(e)\\ {\text {subject}} \; \text{to}& \quad v = A^Te\\ & \quad Aw = s\\ & \quad w = {\mathbf{diag}}(g)v\\ & \quad g^\text{min}\le g \le g^{\text{max}}, \end{array}$$
(7)

where we have introduced two new variables \(w, v \in {\text{ R }}^{|E|}\), in addition to the potentials \(e \in {\text{ R }}^{|V|}\) and the conductances \(g \in {\text{ R }}^{|E|}\). As before, \(A \in {\text{ R }}^{|V|\times |E|}\) is the incidence matrix and \(s \in {\text{ R }}^{|V|}\) are the sources at each node. (In the thermal design example below, the objective is \(f(e) = c^Te\), where \(c \in {\text{ R }}^{|V|}\) is a vector such that \(c^Te\) is the average temperature over the desired region.)
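A Julia/JuMP sketch of the known-signs convex version of (7) is given below. The sign vector s refers to the potential differences \(v = A^Te\) (the flow directions). Because the potentials are only determined up to an additive constant, the sketch grounds one node; this is an assumption made here for illustration, and the names, solver, and objective interface are likewise illustrative.

```julia
# Known-signs convex version of the static diffusion design problem (7).
using JuMP, SCS, LinearAlgebra

function solve_diffusion_signs(s, A, src, f, gmin, gmax; ground = 1)
    nv, ne = size(A)
    gbar, ρ = (gmax .+ gmin) ./ 2, (gmax .- gmin) ./ 2
    model = Model(SCS.Optimizer)
    set_silent(model)
    @variable(model, e[1:nv])          # potentials (temperatures)
    @variable(model, v[1:ne])          # potential differences across edges
    @variable(model, flow[1:ne])       # w in (7): the flow diag(g) v on each edge
    @variable(model, q[1:ne])          # slack variable of the absolute-upper-bound form
    @constraint(model, v .== A' * e)
    @constraint(model, A * flow .== src)
    @constraint(model, flow .== gbar .* v .+ ρ .* q)
    @constraint(model,  q .<= s .* v)
    @constraint(model, -q .<= s .* v)
    @constraint(model, e[ground] == 0) # ground one node (assumption; see the text above)
    @objective(model, Min, f(e))
    optimize!(model)
    return objective_value(model), value.(e), value.(v)
end
```

Here the conductances are recovered from the solution via (3), with q playing the role of w and the bounds \((g^\text{min}, g^\text{max})\) playing the role of \((\theta ^\text{min}, \theta ^\text{max})\).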

Dynamic diffusion control

Similarly to the “Static diffusion design” section, we can consider the time-varying generalization of (6) given by

$$\begin{aligned} Ce_{t+1} = Ce_t - h A\mathbf{diag}(g_t) A^Te_t + hBu_t, \end{aligned}$$

at each time \(t=1, \dots , T-1\), with step size \(h > 0\). Here, \(c \in {\text{ R }}^{|V|}_{++}\) is the heat capacity of each node and \(C = \mathbf{diag}(c)\), while \(u_t \in {\text{ R }}^{n}\) are the inputs given to the system, \(B \in {\text{ R }}^{|V| \times n}\) is a matrix mapping these inputs to the power added to or removed from each node, \(g_t \in {\text{ R }}^{|E|}\) are the conductances at each edge, and \(e_t \in {\text{ R }}^{|V|}\) are the temperatures at the nodes.

In this case, we can minimize any convex function of the temperatures and inputs by appropriately choosing the conductances and inputs:

$$\begin{array}{ll} \text {minimize} &\quad f(e, u)\\ \text {subject} \; \text{to} & \quad Ce_{t+1} = Ce_t - hAw_t + hBu_t, \quad t\in [T-1]\\ &\quad v_t = A^Te_t,\quad t\in [T-1]\\ & \quad w_t = \mathbf{diag}(g_t)v_t, \quad t\in [T-1]\\ & \quad g^\text{min}\le g_t \le g^\text{max}, \quad t\in [T-1], \end{array}$$
(8)

where, as before, we have introduced the variables \(v_t, w_t \in {\text{ R }}^{|E|}\) for each \(t \in [T-1]\), with \([T-1] = \{1, \dots , T-1\}\).

We can see problem (8) as a nontraditional control problem. A particular example is the following: we have a set of rooms with temperatures \(e_t\) at time t which we wish to keep within some comfortable temperature range. We are allowed to open and close vents (equivalently, change the conductances \(g_t\) at each time t) and to turn heat pumps on and off (via the control variable \(u_t\)), paying a cost for the latter. A simple question is then: what set of actions minimizes the input cost while keeping the temperatures \(e_t\) within the specified bounds? We show a simple example of this in the “Temperature control” section, and a sketch of the corresponding convex subproblem is given below.
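The sketch below sets up, in Julia/JuMP, the known-signs convex subproblem for this scenario: the temperatures are kept within a comfort band, the trajectory is made periodic, and a simple quadratic proxy is used for the input cost. The fixed ambient-temperature node used in the “Temperature control” example is omitted for brevity, and all names, the solver, and the objective are assumptions made for the sketch.

```julia
# Known-signs convex subproblem for the dynamic diffusion control problem (8);
# s is an |E| × (T-1) matrix of signs for the potential differences v_t.
using JuMP, SCS, LinearAlgebra

function solve_control_signs(s, A, B, C, h, T, gmin, gmax, emin, emax)
    nv, ne = size(A)
    ni = size(B, 2)
    gbar, ρ = (gmax .+ gmin) ./ 2, (gmax .- gmin) ./ 2
    model = Model(SCS.Optimizer)
    set_silent(model)
    @variable(model, e[1:nv, 1:T])        # temperatures
    @variable(model, u[1:ni, 1:T-1])      # heat pump inputs
    @variable(model, v[1:ne, 1:T-1])      # potential differences
    @variable(model, flow[1:ne, 1:T-1])   # w_t in (8): the flow diag(g_t) v_t
    @variable(model, q[1:ne, 1:T-1])      # slack of the absolute-upper-bound form
    for t in 1:T-1
        @constraint(model, C * e[:, t+1] .== C * e[:, t] - h * A * flow[:, t] + h * B * u[:, t])
        @constraint(model, v[:, t] .== A' * e[:, t])
        @constraint(model, flow[:, t] .== gbar .* v[:, t] .+ ρ .* q[:, t])
        @constraint(model,  q[:, t] .<= s[:, t] .* v[:, t])
        @constraint(model, -q[:, t] .<= s[:, t] .* v[:, t])
    end
    @constraint(model, e .>= emin)        # comfort band on the temperatures
    @constraint(model, e .<= emax)
    @constraint(model, e[:, 1] .== e[:, T])   # periodicity
    @objective(model, Min, h * sum(u .^ 2))   # a simple proxy for the input cost in (11)
    optimize!(model)
    return objective_value(model), value.(e), value.(u)
end
```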

Numerical examples

Julia22 code for all examples in this section is available in the following Github repository: angeris/pd-heuristic. We use the JuMP modeling language23 to interface with Mosek24. All times reported are on a 2015 2.9 GHz dual-core MacBook Pro.

Photonic design

In this example, we wish to choose the speed of a wave satisfying Helmholtz’s equation at each point in some domain \(\Omega \subseteq {\text{ R }}^2\) in order to minimize a convex function of the field.

Helmholtz’s equation

More specifically, the speed of the wave \(c : \Omega \rightarrow {\text{ R }}_{++}\) is chosen such that the field \(\psi : \Omega \rightarrow {\text{ R }}\) at a specific frequency \(\omega \in {\text{ R }}_+\) with excitation \(\phi : \Omega \rightarrow {\text{ R }}\) satisfies Helmholtz’s equation,

$$\begin{aligned} \nabla ^2 \psi (x, y) + \left( \frac{\omega }{c(x, y)}\right) ^2\psi (x, y) = \phi (x, y), \end{aligned}$$
(9)

at each point \((x, y) \in \Omega \). Additionally, we require that the chosen speeds are bounded such that \(0 < c^\text{min}(x, y) \le c(x, y) \le c^\text{max}(x, y)\) at each point \((x, y) \in \Omega \), and we assume Dirichlet boundary conditions such that \(\psi (x, y) = 0\) for \((x, y) \in \partial \Omega \), i.e., we require the field to be zero at every point on the boundary of the domain. In electromagnetics, this condition corresponds to having a perfect conductor at the boundary.

In this case (as in11, Sect. 5.1), we will work with a discretized form of (9) where \(z \in {\text{ R }}^n\) is the discretized field (\(\psi \)), \(b \in {\text{ R }}^n\) is the discretized excitation (\(\phi \)), \(\theta \in {\text{ R }}^n\) is the discretization of the coefficient \((\omega /c)^2\) determined by the speed of the wave (c), and \(A \in {\text{ R }}^{n\times n}\) is the discretized version of the Laplacian operator (\(\nabla ^2\)), such that

$$\begin{aligned} Az + \mathbf{diag}(\theta )z = b, \end{aligned}$$
(10)

approximates (9) at each point \((x_i, y_i) \in \Omega \) for \(i=1,\dots , n\). We assume that the discretization is such that \(\Omega \) is a \(1 \times 1\) box.
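One possible construction of the discretized Laplacian A, using second-order finite differences and Kronecker products, is sketched below; this is a standard construction under the stated assumptions (Dirichlet boundary conditions on the unit box, N interior points per side) and is not necessarily the exact discretization used in the accompanying code.

```julia
# Finite-difference 2D Laplacian on the unit box with Dirichlet boundary conditions.
using LinearAlgebra, SparseArrays

function laplacian_2d(N)
    h = 1 / (N + 1)                       # grid spacing, assuming a unit box
    D = spdiagm(-1 => ones(N - 1), 0 => -2 * ones(N), 1 => ones(N - 1)) / h^2
    Id = sparse(I, N, N)
    return kron(Id, D) + kron(D, Id)      # Laplacian on the N×N interior grid
end

A = laplacian_2d(101)                     # n = 101 × 101 = 10201, as in the example
```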

Problem data

In this case, the problem data are given by \(\omega = 4\pi \), with \(n = 101 \times 101 = 10201\), while the convex objective function \(f: {\text{ R }}^n \rightarrow {\text{ R }}\) is given by

$$\begin{aligned} f(z) = \sum _{i \in B} z_i^2, \end{aligned}$$

where \(B \subseteq \{1, \dots , n\}\) is the box indicated in Fig. 1, and the excitation b is defined as

$$\begin{aligned} b_i = {\left\{ \begin{array}{ll} 1 & i \in S\\ 0 & \text {otherwise}, \end{array}\right. } \end{aligned}$$

for each \(i=1, \dots , n\), where \(S \subseteq \{1, \dots , n\}\) is the box indicated in Fig. 1. Here, \(\theta ^\text{min}= 1\) and \(\theta ^\text{max}= 2\). We set the tolerance parameter of the algorithm to \(\varepsilon = 10^{-4}\). We initialize the algorithm by finding a solution to Eq. (10) with \(\theta = (\theta ^\text{max}+ \theta ^\text{min})/2\) and use the signs of this solution as the initial sign vector.
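A sketch of this initialization in Julia is given below, assuming the index sets S_box and B_box (corresponding to S and B in Fig. 1) and the matrix A from the Laplacian sketch above are available; these names are illustrative.

```julia
# Initialization for the photonic example: solve (10) with the midpoint design and
# take the signs of the resulting field as the initial sign vector.
using LinearAlgebra, SparseArrays

n = 101^2
θmin, θmax = 1.0, 2.0
θbar = (θmax + θmin) / 2

b = zeros(n); b[collect(S_box)] .= 1.0         # excitation supported on S
z0 = (A + θbar * I) \ b                         # field for the midpoint design θ = θ̄
s0 = [zi >= 0 ? 1.0 : -1.0 for zi in z0]        # initial sign vector

# The objective used in this example, written as a function of a JuMP variable vector,
# which can be passed to solve_diagonal_design above at each sign-flip iteration.
f_obj(z) = sum(z[i]^2 for i in B_box)
```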

Figure 1
figure 1

Approximately optimal photonic design. The leftmost figure specifies \(S \subseteq \{1, \dots , n\}\) in purple, the center figure specifies \(B \subseteq \{1, \dots , n\}\), while the rightmost figure gives the design, \(\theta \).

Numerical results

With the given problem data, the algorithm terminates after 102 iterations with a total time of about 4 minutes, roughly 2 seconds per iteration. This time could be substantially reduced, since the current implementation does not warm-start the iterations, essentially solving each subproblem from scratch. The final design is shown in Fig. 1 and the corresponding field is shown in Fig. 2.

Figure 2
figure 2

Field for approximately optimal design.

Thermal design

In this design problem, as in the “Static diffusion design” section, we seek to set the conductances on a graph in order to minimize the average temperature of a subset of points in the center of a 2D grid of size \(m\times m\), given a heat source and a heat sink at opposite corners of the 2D grid. This is an instance of the diffusion problem where \(A \in {\text{ R }}^{|V| \times |E|}\) is the incidence matrix of the grid and \(s \in {\text{ R }}^{|V|}\) are the heat sources and sinks. This problem can be written as an instance of (7) where the potentials \(e \in {\text{ R }}^{|V|}\) are the temperatures at each point in the grid.

Problem data

Our convex objective function \(f: {\text{ R }}^{|V|} \rightarrow {\text{ R }}\) is given by

$$\begin{aligned} f(e) = c^Te, \end{aligned}$$

where \(c \in {\text{ R }}^{|V|}\) is a vector such that \(c_i = 1\) if vertex i lies in the center square of size \(\lfloor (m-1)/4\rfloor \times \lfloor (m-1)/4\rfloor \) and \(c_i = 0\) otherwise. A heat source is placed at the bottom-left corner of the grid and a heat sink at the top-right corner. We set the minimal and maximal conductances as \(g^\text{min}= 1\) and \(g^\text{max}= 10\) at each edge.

We approximately optimize the conductances in this problem by using the field-based heuristic described in the “Sign flip descent” section. The directions are initialized by solving the problem with uniform conductances.

Numerical results

A small example is given in Fig. 3 with \(m=11\) (which shows the chosen directions of flow), while a relatively large design is given in Fig. 4 with \(m=51\). In both figures, thick edges indicate that conductance is maximized at that edge while thin edges indicate that conductance is minimized (see the extremality principle in the “General problem formulation” section for more details). The color of each node indicates the potential value, with red values indicating a higher potential and blue values indicating a lower one. We note that our heuristic recovers tendril-like patterns similar to those found in, e.g.,7, Sect. 4.

With the provided data, the heuristic terminates after 7 iterations, taking a total time of around 0.4 seconds in the case with \(m=11\), with an objective value of about 0.115. The case with \(m=51\) terminates after 14 iterations, taking a total time of around 20.5 seconds, with an objective value of approximately 0.239.

Figure 3
figure 3

Approximately optimal design for \(m=11\). Arrows indicate the direction of flow used for this design, colors indicate the temperature at each node, while edge thickness indicates the conductance at each edge. The grey box indicates the center square.

Figure 4
figure 4

Approximately optimal design for \(m=51\).

Temperature control

In this example, we wish to keep the temperature of two rooms within a range of desired temperatures by appropriately opening and closing vents (to the outside and between rooms) and by turning heat pumps on and off at specified times, while minimizing the total power consumption. We will also require that the controls and the temperatures be periodic.

Problem data

We can write this as an instance of problem (8) with

$$\begin{aligned} B = 0.2 I, \quad C = \mathbf{diag}((0.3, 0.1)), \quad g^\text{min}= 1, \quad g^\text{max}= 10, \end{aligned}$$

and A is the incidence matrix of the graph shown in Fig. 5, while

$$\begin{aligned} (e_t)_3 = 70 + 20\sin \left( \frac{4\pi t}{T}\right) , \quad t=1, \dots , T, \end{aligned}$$

where \(T = 300\). Since we will require that the room temperatures be periodic, we then have

$$\begin{aligned} (e_1)_1 = (e_T)_1, \quad (e_1)_2 = (e_T)_2. \end{aligned}$$

Finally, we require that the room temperatures remain within a specified range,

$$\begin{aligned} 65 \le (e_t)_1, (e_t)_2 \le 75, \quad \quad t = 1, \dots , T, \end{aligned}$$

while minimizing

$$\begin{aligned} f(e, u) = h\Vert u\Vert _2 + \eta h \sum _{t=1}^{T-1}\Vert e_{t+1} - e_t\Vert _2, \end{aligned}$$
(11)

where \(h=1/T\) and \(\eta = 10^{-4}\) is a small regularization parameter that ensures the resulting trajectories are smooth.

Figure 5
figure 5

Graph set up for the temperature control problem. Here, \((e_t)_3\) is the ambient temperature at time t, while \((e_t)_1\) and \((e_t)_2\) are the temperatures of rooms 1 and 2, respectively. The \(g_t\) are the conductances of the indicated edges.

We initialize the problem with the signs given by assuming that \(g_t = (g^\text{min}+ g^\text{max})/2\) for all \(t=1, \dots , T-1\) and using the heat pumps \(u_t\) to ensure the temperature in the rooms remains above 65 and below 75.

Numerical results

We approximately optimize this instance using the field-based heuristic outlined in “Sign flip descent”, with the result shown in Fig. 6. With the provided data, the heuristic terminates in 3 iterations, with a total time of around 1.56 s. The final approximately optimized problem has an objective value of around 836.

Figure 6
figure 6

Approximately optimal control.

Conclusion

This paper presented a new problem formulation for a general class of physical design problems, along with an associated heuristic that appears to perform well on many different kinds of physical design problems. Additionally, the formulation implies a few interesting facts, most notably that this class of problems can be efficiently solved even when only the signs of an optimal solution are known and that, in a few important cases, there exist globally optimal extremal designs.

Future work

Several notable classes of problems are not captured by the formulation given in (1), the most important being designs whose parameters are constrained to be equal across several entries. This means that, at the moment, a direct application to photonic design in three dimensions, the usual photonic design problem with complex fields, circuit design with complex impedances, or multi-scenario physical design is not possible with the current formulation. We suspect a suitable generalization of (1) might yield similarly interesting insights and, potentially, new heuristics for physical design.