Interpreting the dual Riccati equation through the LQ reproducing kernel

In this study, we provide an interpretation of the dual differential Riccati equation of Linear-Quadratic (LQ) optimal control problems. Adopting a novel viewpoint, we show that LQ optimal control can be seen as a regression problem over the space of controlled trajectories, and that the latter has a very natural structure as a reproducing kernel Hilbert space (RKHS). The dual Riccati equation then describes the evolution of the values of the LQ reproducing kernel when the initial time changes. This unveils new connections between control theory and kernel methods, a field widely used in machine learning.


Introduction
We consider the problem of finite-dimensional time-varying linear quadratic (LQ) optimal control with finite horizon and quadratic terminal cost:

\[ V(t_0, x_0) := \min_{x(\cdot),\,u(\cdot)} \; x(T)^\top J_T\, x(T) + \int_{t_0}^{T} \left[ x(t)^\top Q(t)\, x(t) + u(t)^\top R(t)\, u(t) \right] \mathrm{d}t \tag{1} \]

\[ \text{s.t.}\quad x'(t) = A(t)\,x(t) + B(t)\,u(t), \ \text{a.e. in } [t_0, T], \tag{1a} \]

\[ x(t_0) = x_0, \tag{1b} \]

where the state x(t) ∈ R^N and the control u(t) ∈ R^M. We shall henceforth assume that J_T ≻ 0, and that Q(t) ⪰ 0 and R(t) ⪰ r Id_M for some r > 0 and a.e. t ∈ [t_0, T]. Associated with (1) is the differential Riccati equation

\[ J'(t, T) = J(t,T)\,B(t)R(t)^{-1}B(t)^\top J(t,T) - A(t)^\top J(t,T) - J(t,T)\,A(t) - Q(t), \qquad J(T, T) = J_T, \tag{2} \]

whose solution J(·, T) satisfies V(t_0, x_0) = x_0^⊤ J(t_0, T) x_0. It is well known (e.g. [3, pp. 31, 408]) that, under the above positivity assumptions, J(t, T) is a symmetric positive definite matrix, whose inverse M(t, T) := J(t, T)^{-1} satisfies a dual Riccati equation: differentiating M(t, T) J(t, T) = Id_N gives M' = -M J' M, so that

\[ M'(t, T) = A(t)\,M(t,T) + M(t,T)\,A(t)^\top + M(t,T)\,Q(t)\,M(t,T) - B(t)R(t)^{-1}B(t)^\top, \qquad M(T, T) = J_T^{-1}. \tag{3} \]

This inverse matrix has been used as a tool to obtain a representation formula in infinite-dimensional LQ control [2], but it has not yet received the interest it deserves. Whereas the solution of (2) is equal to the Hessian of the value function V(t_0, ·), i.e. J(t_0, T) = ∂_{x,x} V(t_0, ·), we show (Theorem 4 below) that the solution of (3) is equal to the diagonal element of a matrix-valued reproducing kernel K(·, ·) naturally associated with (1). Owing to this interpretation, the dual Riccati equation (3) is thus no less fundamental, and it effectively allows one to reverse the perspective between the adjoint vector and the optimal trajectory.

We first need to bring trajectories to the fore in (1). In his seminal book, Luenberger [4, p. 255] already discussed that an optimal control problem such as (1) can be seen as optimizing either over the set of controls u(·), or jointly over the set of trajectories x(·) and controls u(·), connected through the dynamic constraint (1a). Luenberger also alluded, without details, to a third possibility: that of optimizing directly over the controlled trajectories. We follow this last viewpoint and consequently introduce the vector space S_{[t_0,T]} of controlled trajectories of the linear system:

\[ S_{[t_0,T]} := \left\{ x(\cdot) \,:\, \exists\, u(\cdot) \in L^2([t_0,T]; R^M) \text{ such that } x'(t) = A(t)\,x(t) + B(t)\,u(t) \ \text{a.e. in } [t_0,T] \right\}. \tag{4} \]

There is not necessarily a unique choice of u(·) for a given x(·) ∈ S_{[t_0,T]}. Therefore, with each x(·) ∈ S_{[t_0,T]}, we associate the control u(·) having minimal norm, based on the pseudoinverse B(t)^⊖ of B(t) for the R^M-norm ‖·‖_{R(t)} := ‖R(t)^{1/2}·‖:

\[ u(t) := B(t)^{\ominus}\big( x'(t) - A(t)\,x(t) \big). \tag{5} \]

Problem (1) then induces a natural inner product over S_{[t_0,T]}. As a matter of fact, for x_1(·), x_2(·) ∈ S_{[t_0,T]} with minimal-norm controls u_1(·), u_2(·), the expression

\[ \langle x_1, x_2 \rangle_K := x_1(T)^\top J_T\, x_2(T) + \int_{t_0}^{T} \left[ x_1(t)^\top Q(t)\, x_2(t) + u_1(t)^\top R(t)\, u_2(t) \right] \mathrm{d}t \tag{6} \]

is bilinear and symmetric over S_{[t_0,T]}, and it is positive definite: if ‖x‖_K^2 = 0, then, as R(t) ≻ 0, u(·) ≡ 0 a.e. and, as J_T ≻ 0, x(T) = 0; since x(·) then solves the homogeneous equation x'(t) = A(t)x(t) with x(T) = 0, we get x(·) ≡ 0. Therefore

\[ V(t_0, x_0) = \min_{x(\cdot) \in S_{[t_0,T]},\ x(t_0) = x_0} \ \| x(\cdot) \|_K^2. \tag{7} \]

In other words, the value function V(t_0, x_0) of (1) coincides with the optimal value of a constrained norm minimization over S_{[t_0,T]}. The solution of (7) can be made explicit because (S_{[t_0,T]}, ⟨·, ·⟩_K) is not an arbitrary Hilbert space, but a vector-valued reproducing kernel Hilbert space (vRKHS).
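The minimal-norm control (5) relies on the pseudoinverse B(t)^⊖ taken with respect to the weighted norm ‖·‖_{R(t)}. The following small sketch (ours, not from the paper; the matrices are arbitrary and frozen at a single time instant) shows one standard way to realize this weighted pseudoinverse, and checks the minimality property: any other control producing the same velocity x'(t) - A(t)x(t) has a strictly larger R-norm.

```python
# Sketch (illustrative): the R-weighted pseudoinverse B^- picks, among all u
# with B u = v, the control minimizing ||R^{1/2} u||.
import numpy as np

rng = np.random.default_rng(0)
N, M = 2, 3                                      # more controls than states
B = rng.standard_normal((N, M))
R = np.diag([1.0, 4.0, 9.0])                     # R > 0, frozen at one time t
R_half_inv = np.diag(1.0 / np.sqrt(np.diag(R)))
B_pinv_R = R_half_inv @ np.linalg.pinv(B @ R_half_inv)   # B^- for the R-norm

v = rng.standard_normal(N)                       # stands for x'(t) - A(t) x(t)
u_star = B_pinv_R @ v
assert np.allclose(B @ u_star, v)                # u* reproduces the required velocity

w = np.linalg.svd(B)[2][N:].T @ rng.standard_normal(M - N)  # random element of ker(B)
assert np.allclose(B @ w, 0.0)
assert np.isclose(u_star @ R @ w, 0.0)           # u* is R-orthogonal to ker(B) ...
assert u_star @ R @ u_star < (u_star + w) @ R @ (u_star + w)  # ... hence R-minimal
```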
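To make the duality between (2) and (3) concrete, here is a minimal numerical sketch (ours, not from the paper): it integrates the Riccati equation (2) and the dual Riccati equation (3) backward from T for an arbitrary illustrative system (the matrices A, B, Q, R, J_T below are placeholder choices) and checks that M(t, T) = J(t, T)^{-1} along the whole trajectory.

```python
# Sanity check (illustrative): integrate the Riccati equation (2) and the dual
# Riccati equation (3) backward from t = T and verify that M(t, T) remains the
# inverse of J(t, T).
import numpy as np
from scipy.integrate import solve_ivp

N, T = 2, 1.0
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # arbitrary illustrative system
B = np.array([[0.0], [1.0]])
Q = np.eye(N)
R_inv = np.array([[1.0]])                 # R = Id_1, so R^{-1} = Id_1
J_T = np.eye(N)

def riccati(t, j):                        # (2): J' = J B R^{-1} B' J - A' J - J A - Q
    J = j.reshape(N, N)
    return (J @ B @ R_inv @ B.T @ J - A.T @ J - J @ A - Q).ravel()

def dual_riccati(t, m):                   # (3): M' = A M + M A' + M Q M - B R^{-1} B'
    M = m.reshape(N, N)
    return (A @ M + M @ A.T + M @ Q @ M - B @ R_inv @ B.T).ravel()

ts = np.linspace(T, 0.0, 51)              # backward in time, from T down to t_0 = 0
solJ = solve_ivp(riccati, (T, 0.0), J_T.ravel(), t_eval=ts, rtol=1e-10, atol=1e-12)
solM = solve_ivp(dual_riccati, (T, 0.0), np.linalg.inv(J_T).ravel(),
                 t_eval=ts, rtol=1e-10, atol=1e-12)

for j, m in zip(solJ.y.T, solM.y.T):      # M(t,T) = J(t,T)^{-1} at every time step
    assert np.allclose(np.linalg.inv(j.reshape(N, N)), m.reshape(N, N), atol=1e-6)
print("M(t, T) = J(t, T)^{-1} holds along [t_0, T]")
```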

Vector spaces of linear controlled trajectories as vRKHSs
Definition 1 (vRKHS). Let T be a non-empty set. A Hilbert space (H_K(T), ⟨·, ·⟩_K) of R^N-valued functions defined over T is called a vector-valued reproducing kernel Hilbert space (vRKHS) if there exists a matrix-valued kernel K_T : T × T → R^{N×N} such that the reproducing property holds: for all t ∈ T and p ∈ R^N, K_T(·, t)p ∈ H_K(T) and, for all f ∈ H_K(T), p^⊤ f(t) = ⟨f, K_T(·, t)p⟩_K.

Remark. It is well known that, by Riesz's theorem, an equivalent definition of a vRKHS is that, for every t ∈ T and p ∈ R^N, the evaluation functional f ∈ H_K(T) ↦ p^⊤ f(t) ∈ R is continuous. There is also a one-to-one correspondence between the kernel K_T and the vRKHS (H_K(T), ⟨·, ·⟩_K) (see e.g. [5, Theorem 2.6]). Moreover, by symmetry of the scalar product, the matrix-valued kernel enjoys a Hermitian symmetry, i.e. K_T(s, t) = K_T(t, s)^⊤ for any s, t ∈ T.
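As a concrete scalar instance of Definition 1 (a classical example, not taken from the paper): on the space H = {f : [0,1] → R, f(0) = 0, f' ∈ L²} with ⟨f, g⟩ := ∫₀¹ f'(s) g'(s) ds, the reproducing kernel is K(s, t) = min(s, t), since ⟨K(·, t), f⟩ = ∫₀ᵗ f'(s) ds = f(t). The snippet below verifies this numerically.

```python
# Numerical check (illustrative): reproducing property of K(s, t) = min(s, t)
# for <f, g> = \int_0^1 f'(s) g'(s) ds on {f : f(0) = 0, f' in L^2}.
import numpy as np

s = np.linspace(0.0, 1.0, 20001)
ds = s[1] - s[0]
f = np.sin(3.0 * s) * s                  # any smooth f with f(0) = 0
fp = np.gradient(f, s)                   # f'

for t in (0.25, 0.5, 0.9):
    kp = (s < t).astype(float)           # d/ds min(s, t) = 1 for s < t, else 0
    inner = np.sum(kp * fp) * ds         # <K(., t), f>
    assert abs(inner - np.sin(3.0 * t) * t) < 1e-3   # equals f(t) up to quadrature error
print("reproducing property verified at sampled points")
```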
Lemma 2. The space (S_{[t_0,T]}, ⟨·, ·⟩_K) is a vRKHS of R^N-valued functions over T = [t_0, T]; we denote its kernel by K_{[t_0,T]}(·, ·).

Proof of Lemma 2. The proof is identical to that of Lemma 1 in [1], where S_{[t_0,T]} was equipped with the inner product ⟨x_1, x_2⟩ = x_1(t_0)^⊤ x_2(t_0) + ∫_{t_0}^{T} u_1(t)^⊤ R(t) u_2(t) dt instead of (6). □

Owing to Lemma 2, we can look for a "representer theorem", i.e. a necessary condition ensuring that the solutions of an optimization problem such as (7) enjoy a finite expansion in terms of the kernel.

Theorem 3 (Representer theorem). Let (H_K(T), ⟨·, ·⟩_K) be a vRKHS over a set T. Given points t_1, ..., t_m ∈ T, vectors p_1, ..., p_m ∈ R^N, a function L : R^m → R ∪ {+∞} and a strictly increasing Ω : R_+ → R, consider

\[ \min_{f \in H_K(T)} \ L\big( p_1^\top f(t_1), \ldots, p_m^\top f(t_m) \big) + \Omega\big( \| f \|_K \big). \tag{8} \]

Then, for any minimizer f̄, there exist coefficients p̄_1, ..., p̄_m ∈ R^N such that f̄ = Σ_{i=1}^{m} K_T(·, t_i) p̄_i.
Taking L(e_1^⊤ x(t_0), ..., e_N^⊤ x(t_0)) := χ_{x_0}(x(t_0)) and Ω(y) = y², with e_i the i-th basis vector of R^N and χ_{x_0} the indicator function of x_0, we apply Theorem 3 to (7). Since ‖·‖_K² is strongly convex and there exists x(·) ∈ S_{[t_0,T]} satisfying x(t_0) = x_0, the solution of (7) is unique and can be written as

\[ \bar{x}(t) = K_{[t_0,T]}(t, t_0)\, \bar{p}_0, \qquad \bar{p}_0 := K_{[t_0,T]}(t_0, t_0)^{\ominus} x_0, \tag{9} \]

where ⊖ denotes the pseudoinverse of K_{[t_0,T]}(t_0, t_0) for the R^N-seminorm ‖K_{[t_0,T]}(t_0, t_0)^{1/2} ·‖. Thus, owing to the reproducing property,

\[ V(t_0, x_0) = \| \bar{x}(\cdot) \|_K^2 = \bar{p}_0^\top K_{[t_0,T]}(t_0, t_0)\, \bar{p}_0 = x_0^\top K_{[t_0,T]}(t_0, t_0)^{\ominus} x_0. \]

So we conjecture that K_{[t_0,T]}(t_0, t_0) = J(t_0, T)^{-1}. We actually have a stronger result:

Theorem 4. For all t ∈ [t_0, T], K_{[t,T]}(t, t) = M(t, T) = J(t, T)^{-1}. In other words, the diagonal K_d : t ↦ K_{[t,T]}(t, t) of the LQ kernels is the solution of the dual Riccati equation (3).

The proof of Theorem 4 (in Section 3 below) boils down to identifying the reproducing kernel of (S_{[t_0,T]}, ⟨·, ·⟩_K). Informally, the inverse relation comes from inverting the graph of the (x, p)-relation. As a matter of fact, consider the solution p(t) of the adjoint equation

\[ p'(t) = -A(t)^\top p(t) + Q(t)\, \bar{x}(t), \qquad p(T) = -J_T\, \bar{x}(T). \tag{10} \]

Then we have p(t) = -J(t, T) x̄(t). In other words, the solution J(·, T) of the differential Riccati equation (2) maps the optimal trajectory x̄(·) to its adjoint vector p(·). Conversely, since x̄(t) = K_{[t_0,T]}(t, t_0) p̄_0, the kernel K_{[t_0,T]}(·, t_0) maps an initial covector p̄_0 ∈ R^N to the optimal trajectory x̄(·). This effectively inverts the graph of the relation between x̄(·) and p(·).

The inversion performed is related to yet another change of perspective, from an online and differential approach to an offline and integral one. Through Pontryagin's Maximum Principle (PMP), it is well known that the optimal control ū(·) satisfies

\[ \bar{u}(t) = R(t)^{-1} B(t)^\top p(t) = -R(t)^{-1} B(t)^\top J(t, T)\, \bar{x}(t) =: G(t)\, \bar{x}(t). \]

Hence, based on J(t, T), one has a closed feedback loop with gain matrix G(t), and one knows the control to apply based only on the present time and state. However, the optimal trajectory x̄(·) is then not encoded as simply as in the kernel formula x̄(t) = K_{[t_0,T]}(t, t_0) p̄_0: it has to be derived through numerical approximations of the dynamics (1a). Conversely, the kernel K_{[t_0,T]} performs the integration of the Hamiltonian system (1a)-(10) and sparsely encodes x̄(·) over [t_0, T] by the single covector p̄_0. This sparsity partly stems from the smaller number of constraints in (7).

More generally, for a given interval [t_0, T], Theorem 3 states that, to encode the optimal trajectories, one needs at most as many covectors p̄_i as there are points t_i where the trajectory is evaluated in the optimization problem. This is a classical property of "kernel machines", frequently leveraged in classification tasks (e.g. SVMs in [7]). This result was exploited in [1] to tackle affine state constraints. From the PMP perspective, it resulted in focusing only on the measures supported on the constraint boundary. Unlike the adjoint vector p(t) associated with the equality constraint (1a), which never vanishes except for abnormal trajectories, the covectors corresponding to inequality constraints are null whenever the constraint is not active. This leads to an extremely sparse encoding of the optimal trajectory, specified only by the covectors active on the time interval [t_0, T]. Offline computation of the kernel is indeed well suited for path-planning problems. The kernel formalism, however, conflicts with the online perspective, since varying t_0 changes the domain of K_{[t_0,T]}. As the correspondence between the kernel K_T and the vRKHS (H_K(T), ⟨·, ·⟩_K) is one-to-one (e.g. [5, Theorem 2.6]), varying T = [t_0, T] or modifying the inner product changes the kernel.
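Theorem 4 can be probed numerically. The sketch below (ours, not from the paper) discretizes S_{[t_0,T]} with an explicit Euler scheme, parametrizes a trajectory by z = (x_0, u_0, ..., u_{n-1}), assembles the Gram matrix G of ‖·‖_K² in these coordinates, and reads off the discrete kernel value K(t_0, t_0) = E_0 G^{-1} E_0^⊤, where E_0 is the evaluation map z ↦ x(t_0); the names z, G, E_0 are our own. It should match J(t_0, T)^{-1} up to the O(h) discretization error.

```python
# Discretized check (illustrative) of K_{[t0,T]}(t0, t0) = J(t0, T)^{-1}.
import numpy as np
from scipy.integrate import solve_ivp

N, M, T, n = 2, 1, 1.0, 500
h = T / n
A = np.array([[0.0, 1.0], [-1.0, 0.0]]); B = np.array([[0.0], [1.0]])
Q = np.eye(N); R = np.array([[1.0]]); J_T = np.eye(N)

dim = N + n * M                            # z = (x_0, u_0, ..., u_{n-1})
def U(k):                                  # linear map z -> u_k
    Uk = np.zeros((M, dim)); Uk[:, N + k * M : N + (k + 1) * M] = np.eye(M)
    return Uk

X = [np.hstack([np.eye(N), np.zeros((N, n * M))])]   # linear map z -> x_0
for k in range(n):                         # Euler scheme for (1a): x_{k+1} = x_k + h (A x_k + B u_k)
    X.append(X[k] + h * (A @ X[k] + B @ U(k)))

G = X[n].T @ J_T @ X[n]                    # Gram matrix of ||x||_K^2, cf. (6)
for k in range(n):
    G += h * (X[k].T @ Q @ X[k] + U(k).T @ R @ U(k))

K00 = X[0] @ np.linalg.solve(G, X[0].T)    # discrete K(t0, t0) = E_0 G^{-1} E_0^T

def riccati(t, j):                         # reference J(t0, T) from (2)
    J = j.reshape(N, N)
    return (J @ B @ np.linalg.inv(R) @ B.T @ J - A.T @ J - J @ A - Q).ravel()
J0 = solve_ivp(riccati, (T, 0.0), J_T.ravel(), rtol=1e-10).y[:, -1].reshape(N, N)

print(np.linalg.norm(K00 - np.linalg.inv(J0)))   # small, shrinks as O(h)
```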
In general, restricting the domain leads to complicated relations between a vRKHS and its kernel [6, pp. 78-80]. In our case, the dual Riccati equation (3) precisely describes how the values of the LQ kernel change when varying t_0.

Proof of Theorem 4
The proof corresponds to the identification of the reproducing kernel of (S_{[t_0,T]}, ⟨·, ·⟩_K). Since we shall proceed with a fixed initial time t_0, we drop the corresponding index and set K(·, ·) := K_{[t_0,T]}(·, ·). By existence and uniqueness of the reproducing kernel, it suffices to exhibit a function K(·, ·) which satisfies the requirements of Definition 1.
Let us denote by Φ_A(t, s) ∈ R^{N×N} the state-transition matrix of z'(τ) = A(τ) z(τ), defined from s to t, i.e. z(t) = Φ_A(t, s) z(s). The key property used throughout this section is the variation of constants, a.k.a. Duhamel's principle, stating that, for any absolutely continuous x(·) such that x'(t) = A(t)x(t) + B(t)u(t) a.e., we have, for any σ, t ∈ [t_0, T],

\[ x(t) = \Phi_A(t, \sigma)\, x(\sigma) + \int_{\sigma}^{t} \Phi_A(t, \tau)\, B(\tau)\, u(\tau)\, \mathrm{d}\tau. \tag{11} \]

Fix t ∈ [t_0, T] and p ∈ R^N. By Definition 1 and the definition (6) of the inner product, the candidate kernel must be such that K(·, t)p ∈ S_{[t_0,T]} and, for every x(·) ∈ S_{[t_0,T]} with minimal-norm control u(·),

\[ p^\top x(t) = \langle K(\cdot, t)p,\, x(\cdot) \rangle_K = \big( K(T, t)p \big)^\top J_T\, x(T) + \int_{t_0}^{T} \Big[ \big( K(s, t)p \big)^\top Q(s)\, x(s) + u_K(s)^\top R(s)\, u(s) \Big] \mathrm{d}s, \tag{12} \]

where u_K(·) denotes the minimal-norm control of K(·, t)p. By the Hermitian symmetry of K and the variation of constants (11) written for σ = T, we can rewrite (12) as, for all t ∈ [t_0, T] and x(·) ∈ S_{[t_0,T]},

\[ p^\top \Phi_A(t, T)\, x(T) - \int_{t}^{T} p^\top \Phi_A(t, \tau)\, B(\tau)\, u(\tau)\, \mathrm{d}\tau = p^\top K(t, T)\, J_T\, x(T) + \int_{t_0}^{T} \Big[ p^\top K(t, s)\, Q(s)\, x(s) + u_K(s)^\top R(s)\, u(s) \Big] \mathrm{d}s. \tag{13} \]

Expressing x(s) in (13) through (11) with σ = T, and identifying on both sides the terms in x(T) and in u(·), which can be chosen independently, it suffices that

\[ \Phi_A(t, T) = K(t, T)\, J_T + \int_{t_0}^{T} K(t, s)\, Q(s)\, \Phi_A(s, T)\, \mathrm{d}s \tag{14} \]

and that, for a.e. τ ∈ [t_0, T],

\[ u_K(\tau) = R(\tau)^{-1} B(\tau)^\top \left( \int_{t_0}^{\tau} \Phi_A(s, \tau)^\top Q(s)\, K(s, t)\, \mathrm{d}s \; - \; \mathbb{1}_{[t, T]}(\tau)\, \Phi_A(t, \tau)^\top \right) p. \tag{15} \]

Let us solve (14) by variation of the constant J_T. We thus look for a function J(·, T), which we will prove solves (2), such that (14) holds with the terminal time T replaced by any σ ∈ [t, T] and J_T by J(σ, T):

\[ \Phi_A(t, \sigma) = K(t, \sigma)\, J(\sigma, T) + \int_{t_0}^{\sigma} K(t, s)\, Q(s)\, \Phi_A(s, \sigma)\, \mathrm{d}s. \tag{16} \]

So it suffices that J(·, T) solves the differential Riccati equation (2) and that, by symmetry of K, the values K(t, σ) for σ ≤ t are recovered from (16) through K(t, σ) = K(σ, t)^⊤. Applying the construction over the subinterval [t, T], i.e. with t_0 replaced by t, and evaluating (16) at σ = t, the integral term vanishes and we obtain Id_N = K_{[t,T]}(t, t)\, J(t, T), i.e. K_d(t) := K_{[t,T]}(t, t) = J(t, T)^{-1} = M(t, T). This concludes our proof, as K_d(·) solves the dual matrix Riccati equation (3), which has a unique solution.
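As a consistency check of the construction, consider the special case Q ≡ 0 (our own specialization, not spelled out in the text): (14) then gives K(t, T) = Φ_A(t, T) J_T^{-1}, (15) reduces to u_K(τ) = -1_{[t,T]}(τ) R(τ)^{-1} B(τ)^⊤ Φ_A(t, τ)^⊤ p, and substituting into (11) yields the closed form K(s, t) = Φ_A(s, T) J_T^{-1} Φ_A(t, T)^⊤ + ∫_{max(s,t)}^{T} Φ_A(s, r) B(r) R(r)^{-1} B(r)^⊤ Φ_A(t, r)^⊤ dr. The snippet below verifies numerically that the diagonal of this kernel solves the dual Riccati equation (3) with Q = 0.

```python
# Check (illustrative, Q = 0): the diagonal of the closed-form kernel above
# coincides with M(t, T) obtained from the dual Riccati equation (3).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

N, T = 2, 1.0
A = np.array([[0.0, 1.0], [-1.0, 0.0]]); B = np.array([[0.0], [1.0]])
R_inv = np.array([[1.0]]); J_T = np.diag([2.0, 0.5])
Phi = lambda t, s: expm(A * (t - s))       # transition matrix (time-invariant case)

def K(s, t, m=2001):                       # closed-form kernel, trapezoidal quadrature
    taus = np.linspace(max(s, t), T, m)
    vals = np.array([Phi(s, r) @ B @ R_inv @ B.T @ Phi(t, r).T for r in taus])
    dr = taus[1] - taus[0]
    integral = dr * (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0))
    return Phi(s, T) @ np.linalg.inv(J_T) @ Phi(t, T).T + integral

def dual_riccati(t, m):                    # (3) with Q = 0
    Mt = m.reshape(N, N)
    return (A @ Mt + Mt @ A.T - B @ R_inv @ B.T).ravel()

sol = solve_ivp(dual_riccati, (T, 0.0), np.linalg.inv(J_T).ravel(),
                t_eval=[0.7, 0.3, 0.0], rtol=1e-10, atol=1e-12)
for t, m in zip(sol.t, sol.y.T):
    assert np.allclose(K(t, t), m.reshape(N, N), atol=1e-5)
print("K(t, t) = M(t, T) verified for Q = 0")
```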