Introduction

Most quantum systems encountered in practice inevitably interact with their surrounding environment, often making a straightforward application of Schrödinger’s equation impractical1. The main challenge in modeling these “open” quantum systems is the large dimension of the total Hilbert space, since the environment is much larger than the system of interest. Addressing this challenge is important in many disciplines, including solid state and condensed matter physics2,3,4, chemical physics and quantum biology5,6,7,8, quantum optics9,10,11,12, and quantum information science13,14,15. In this work, we provide a unified framework for studying non-Markovian open quantum systems, which will facilitate a better understanding of open quantum dynamics and the development of numerical methods.

Various numerically exact methods have been developed to describe non-Markovian open quantum dynamics. Two of the most commonly used approaches are (1) techniques based on the Feynman–Vernon influence functional path integral (INFPI)16, including the quasiadiabatic path-integral method of Makri and Makarov and its variants17,18,19,20,21,22,23,24,25,26, hierarchical equations of motion (HEOM) methods7,27,28, and the time-evolving matrix product operator and related process tensor-based approaches29,30,31,32,33,34; and (2) the Nakajima–Zwanzig generalized quantum master equation (GQME) techniques1,35,36,37. The INFPI formulation employs the influence functional (\({{{\mathcal{I}}}}\)), which encodes the time-nonlocal influence of the baths on the system. In the GQME formalism, the analogous object to \({{{\mathcal{I}}}}\) is the memory kernel (\({{{\bf{{{{\mathcal{K}}}}}}}}\)), which describes the entire complexity of the bath influence on the reduced system dynamics. It is natural to intuit that \({{{\mathcal{I}}}}\) and \({{{\bf{{{{\mathcal{K}}}}}}}}\) are closely connected and presumably identical in their information content. Despite this, to the best of our knowledge, explicit analytic relationships between the two have yet to be shown.

There have been several works that loosely connect these two frameworks. For instance, there is a body of work on numerically computing \({{{\bf{{{{\mathcal{K}}}}}}}}\) with projection-free inputs using short-time system trajectories based on INFPI or other exact quantum dynamics methods38,39,40,41,42; the obtained \({{{\bf{{{{\mathcal{K}}}}}}}}\) is then used to propagate the system dynamics to longer times. Another notable line of work is the real-time path integral Monte Carlo algorithms for evaluating memory kernels exactly43. These works took advantage of the real-time path integral approaches used to evaluate \({{{\mathcal{I}}}}\)44 to compute the matrix elements needed for the exact memory kernel. Nonetheless, they did not present a direct analytical relationship between the memory kernel and \({{{\mathcal{I}}}}\).

In this work, we present a unifying description of these non-Markovian quantum dynamics frameworks. In particular, we establish an explicit analytic correspondence between \({{{\mathcal{I}}}}\) and \({{{\bf{{{{\mathcal{K}}}}}}}}\). A visual schematic of the main idea of our work is presented in Fig. 1a. Readers interested in the relationship between our work and existing numerical tools are referred to Supplementary Note 3C.

Fig. 1: Unification of open quantum dynamics framework for Class 1.

a An open quantum system, where the environment is characterized by the spectral density J(ω), can be described with the generalized quantum master equation (GQME) and the influence functional path integral (INFPI). The former distills environmental correlations through the memory kernels \({{{\mathcal{K}}}}\), while the latter does so through the influence functionals \({{{\mathcal{I}}}}\). In this work, we show that both are related through Dyck paths and, furthermore, that the Dyck construction can be used to extract J(ω) simply by knowing how the quantum system evolves. b Cumulant expansion of the memory kernel. Examples through Eq. (6) for N = 2 and N = 3. Solid arcs of diameter k filled with all possible arcs of smaller diameters denote the propagator Uk. c Dyck path diagrams. Examples for N = 2 and N = 3 and their corresponding influence function diagrams, which compose \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}\) and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{3}\), respectively. Solid lines denote influence functions I and dashed lines denote \(\tilde{{{{\bf{I}}}}}\).

Results

General setup

We consider a broad range of system-bath Hamiltonians in which the bath is Gaussian and the system-bath Hamiltonian is bilinear. The total Hamiltonian is \(\hat{H}={\hat{H}}_{S}+{\sum }_{j}({\hat{H}}_{B,j}+{\sum }_{\alpha }{\hat{H}}_{I,j,\alpha })\), with subscripts j and α specifying the jth bath and the αth interaction, respectively. While we do not limit the form of \({\hat{H}}_{S}\) in our discussion, we consider a quadratic (i.e., Gaussian) Hamiltonian for the baths, \({\hat{H}}_{B,j}={\sum }_{k}{\omega }_{k,j}{\hat{a}}_{k,j}^{{{\dagger}} }{\hat{a}}_{k,j}\), where \({\hat{a}}_{k,j}\) can be fermionic or bosonic (it is also possible to treat baths consisting of noninteracting spins in a certain limit, see Supplementary Note 3), and a bilinear interaction Hamiltonian, \({\hat{H}}_{I,j,\alpha }={\hat{S}}_{j,\alpha }\otimes {\hat{B}}_{j,\alpha }\), with \({\hat{S}}_{j,\alpha }\) and \({\hat{B}}_{j,\alpha }\) being the system and bath operators, respectively. We also assume that the initial density matrix is separable between the system and each bath. There are four classes of problems that one may commonly encounter under the setup described:

  1. Class 1: With only a single α for all baths j (such cases are henceforth indicated by dropping the subscript α), \(\{{\hat{S}}_{j}\}\) are all diagonalizable and, furthermore, simultaneously diagonalizable. That is, all terms in \(\{{\hat{H}}_{I,j}\}\) commute. The spin-boson model, other models in the same universality class, and Frenkel exciton models for photosynthetic systems belong to this class.

  2. Class 2: No terms in \(\{{\hat{S}}_{j}\}\) commute, but each term in \(\{{\hat{S}}_{j}\}\) is diagonalizable. Generalizing the models in Class 1 to multiple nonadditive baths typically leads to this case. Such systems may arise when considering non-adiabatic dynamics involving strong coupling of electronic degrees of freedom to quantized photonic modes32.

  3. Class 3: There are common baths for some \({\hat{H}}_{I,j,\alpha }\), and \(\{{\hat{S}}_{j,\alpha }\}\) may or may not commute. Examples of such baths arise when considering decoherence in models of coupled qubits45.

  4. Class 4: No terms in \(\{{\hat{S}}_{j}\}\) commute, and the terms in \(\{{\hat{S}}_{j}\}\) are not diagonalizable. The Anderson impurity model46 is representative of this category.

We show that in all four classes one can relate \({{{\mathcal{I}}}}\) and \({{{\bf{{{{\mathcal{K}}}}}}}}\) analytically. Furthermore, we show that one can obtain the bath spectral density from the reduced dynamics. Lastly, for Class 1, we show that a simple diagrammatic structure underlies the relationship between \({{{\mathcal{I}}}}\) and \({{{\bf{{{{\mathcal{K}}}}}}}}\), which allows for efficient construction of \({{{\bf{{{{\mathcal{K}}}}}}}}\) without approximations. We provide more details on Class 1 in the main text; additional details for the other classes are available in the Supplementary Notes. Further, for Class 1 models, we extend this analysis to driven systems, going beyond the time-translationally invariant memory kernels observed for time-independent Hamiltonians.

Path integral formulation

The time evolution of the full system is given by \({\rho }_{{{{\rm{tot}}}}}(t)={e}^{-i\hat{H}t}{\rho }_{{{{\rm{tot}}}}}(0){e}^{i\hat{H}t}\). We discretize time and employ a Trotterized propagator,

$${e}^{-i\hat{H}\Delta t}={e}^{-i{\hat{H}}_{S}\Delta t/2}{e}^{-i{\hat{H}}_{{{{\rm{env}}}}}\Delta t}{e}^{-i{\hat{H}}_{S}\Delta t/2}+O(\Delta {t}^{3}),$$
(1)

where \({\hat{H}}_{{{{\rm{env}}}}}=\hat{H}-{\hat{H}}_{S}\). The initial total density matrix is assumed to factorize as \({\rho }_{{{{\rm{tot}}}}}(0)=\rho (0)\otimes {({Z}_{j}^{-1}\exp [-{\beta }_{j}{\hat{H}}_{B,j}])}^{\otimes j}\), where each bath j is at inverse temperature βj and \({Z}_{j}={{{\rm{Tr}}}}\exp [-{\beta }_{j}{\hat{H}}_{B,j}]\). Then, one can show that the dynamics of the reduced system density matrix, \(\rho (N\Delta t)={\rho }_{N}={{{{\rm{Tr}}}}}_{B}\left[{\rho }_{{{{\rm{tot}}}}}(N\Delta t)\right]\) (partial trace over all baths’ degrees of freedom), follows

$$\langle {x}_{2N}^{+}| {\rho }_{N}| {x}_{2N}^{-}\rangle= \sum\limits_{{x}_{0}^{\pm }\cdots {x}_{2N-1}^{\pm }}{G}_{{x}_{0}^{\pm }{x}_{1}^{\pm }}{G}_{{x}_{1}^{\pm }{x}_{2}^{\pm }}\ldots {G}_{{x}_{2N-1}^{\pm }{x}_{2N}^{\pm }}\\ \times \langle {x}_{0}^{+}| {\rho }_{0}| {x}_{0}^{-}\rangle \prod\limits_{j}{{{{\mathcal{I}}}}}_{j}({x}_{1}^{\pm },\, {x}_{3}^{\pm },\cdots \,,\, {x}_{2N-1}^{\pm }),$$
(2)

where \({G}_{{x}_{m}^{\pm }{x}_{m+1}^{\pm }}=\langle {x}_{m}^{+}| {e}^{-\frac{i{\hat{H}}_{s}\Delta t}{2}}| {x}_{m+1}^{+}\rangle \langle {x}_{m+1}^{-}| {e}^{\frac{i{\hat{H}}_{s}\Delta t}{2}}| {x}_{m}^{-}\rangle\).
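As a concrete illustration, G can be assembled as the Kronecker product of the forward half-step Hilbert-space propagator with the conjugate of the backward one. Below is a minimal numpy sketch; the two-level Hamiltonian and time step are illustrative placeholders, not parameters from this work.

```python
import numpy as np

def half_step_propagator(H_s, dt):
    """Liouville-space factor G of Eq. (2):
    G_{x_m^pm, x_{m+1}^pm} = <x+|exp(-i H_s dt/2)|x'+> <x'-|exp(+i H_s dt/2)|x->,
    which equals U (x) U* with U = exp(-i H_s dt/2) under row-major vectorization."""
    w, V = np.linalg.eigh(H_s)                       # H_s is Hermitian
    U = (V * np.exp(-1j * w * dt / 2)) @ V.conj().T  # matrix exponential via eigendecomposition
    return np.kron(U, U.conj())

# Example: a two-level system with tunneling set to 1 (illustrative)
H_s = np.array([[0.0, 1.0], [1.0, 0.0]])
G = half_step_propagator(H_s, dt=0.1)
```

Since U is unitary, G is unitary on the 4-dimensional Liouville space, and two half steps compose into one full step for a time-independent \({\hat{H}}_{S}\).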

Restricting ourselves to problems in Class 1 (details for other Classes are available in the Supplementary Notes), we consider \({\hat{H}}_{I}=\hat{S}\otimes \hat{B}\), where \(\hat{S}\) is a system operator that is diagonal in the computational basis and \(\hat{B}={\sum }_{k}{\lambda }_{k}({\hat{a}}_{k}^{{{\dagger}} }+{\hat{a}}_{k})\) is a bath operator that is linear in the bath creation and annihilation operators (with the subscripts α and j dropped for clarity). The discussion below can be applied to cases with multiple commuting \(\hat{S}\otimes \hat{B}\) since \({{{\mathcal{I}}}}\) takes a simple product form, see Supplementary Note 1. We can show that the influence functional, \({{{\mathcal{I}}}}\), is pairwise separable,

$${{{\mathcal{I}}}}({x}_{1}^{\pm },\, {x}_{3}^{\pm },\cdots \,,\, {x}_{2N-1}^{\pm })= {\prod}_{n=1}^{N}{I}_{0,\, {x}_{2n-1}^{\pm }}{\prod}_{n=1}^{N-1}{I}_{1,\, {x}_{2n-1}^{\pm }{x}_{2n+1}^{\pm }}\\ \times {\prod}_{n=2}^{N-1}{I}_{2,\, {x}_{2n-3}^{\pm }{x}_{2n+1}^{\pm }}\\ \cdots \times {I}_{N-1,\, {x}_{1}^{\pm }{x}_{2N-1}^{\pm }}$$
(3)

where the influence functions Ik are defined in Supplementary Note 1, and are related to the bath spectral density, \(J(\omega )=\pi {\sum }_{k}{\lambda }_{k}^{2}\delta (\omega -{\omega }_{k})\). For later use, we note that Eq. (2) can be simplified into

$$\langle {x}_{2N}^{+}| {\rho }_{N}| {x}_{2N}^{-}\rangle={\sum}_{{x}_{0}^{\pm }}{({{{{\bf{U}}}}}_{N})}_{{x}_{2N}^{\pm }{x}_{0}^{\pm }}\langle {x}_{0}^{+}| {\rho }_{0}| {x}_{0}^{-}\rangle,$$
(4)

where UN is the system propagator from t  = 0 to t  =  NΔt. It is then straightforward to express UN in terms of {Ik}19,20,21,42,47.
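For the first two steps, carrying out the path sums explicitly gives \({{{{\bf{U}}}}}_{1}={{{\bf{G}}}}\,{{{\rm{diag}}}}({{{{\bf{I}}}}}_{0})\,{{{\bf{G}}}}\) and \({({{{{\bf{U}}}}}_{2})}_{im}={\sum }_{jk}{G}_{ij}{I}_{0,j}{F}_{jk}{I}_{1,jk}{I}_{0,k}{G}_{km}\) with F = GG, consistent with Eqs. (7) and (8) below. A small numpy sketch with placeholder inputs:

```python
import numpy as np

def U1_U2_from_influence(G, I0, I1):
    """First two reduced propagators in terms of influence functions:
    U1 = G diag(I0) G and (U2)_{im} = sum_{jk} G_{ij} I0_j F_{jk} I1_{jk} I0_k G_{km},
    with F = G G. G is the half-step propagator, I0 a vector, I1 a matrix."""
    U1 = G @ np.diag(I0) @ G
    F = G @ G
    M = np.outer(I0, I0) * F * I1   # M_{jk} = I0_j * F_{jk} * I1_{jk} * I0_k
    U2 = G @ M @ G
    return U1, U2
```

A quick sanity check: when I1 is identically one (no two-step bath memory), U2 reduces to U1 U1, i.e., the dynamics composes like a memoryless map.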

The Nakajima–Zwanzig equation

The Nakajima–Zwanzig equation is a time-non-local formulation of the formally exact GQME. Assuming the time-independence of \({\hat{H}}_{S}\), the discretized homogeneous Nakajima–Zwanzig equation takes the form

$${\rho }_{N}={{{\bf{L}}}}{\rho }_{N-1}+\Delta {t}^{2}{\sum}_{m=1}^{N}{{{{\bf{{{{\mathcal{K}}}}}}}}}_{N-m}{\rho }_{m-1},$$
(5)

where \({{{\bf{L}}}}\equiv ({{{\bf{1}}}}-\frac{i}{\hslash }{{{{\mathcal{L}}}}}_{S}\Delta t)\) with \({{{{\mathcal{L}}}}}_{S}\bullet \equiv [{\hat{H}}_{S},\bullet ]\) being the bare system Liouvillian and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{n}\) is the discrete-time memory kernel at time step n. To relate \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) to {Ik}, we inspect the reduced dynamics evolution operator UN as defined in Eq. (4),

$${{{{\bf{U}}}}}_{N}={{{\bf{L}}}}{{{{\bf{U}}}}}_{N-1}+\Delta {t}^{2}{\sum}_{m=1}^{N}{{{{\bf{{{{\mathcal{K}}}}}}}}}_{N-m}{{{{\bf{U}}}}}_{m-1}.$$
(6)

With this relation, one can obtain \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) from the reduced propagators {Uk}. We observe that setting N = 1 yields \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}=\frac{1}{\Delta {t}^{2}}({{{{\bf{U}}}}}_{1}-{{{\bf{L}}}})\), since U0 is the identity. The memory kernel \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}\) accounts for the deviation of the system dynamics from its pure dynamics (decoupled from the bath) within a single time step. Setting N = 2, we get \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{1}=\frac{1}{\Delta {t}^{2}}({{{{\bf{U}}}}}_{2}-{{{{\bf{U}}}}}_{1}{{{{\bf{U}}}}}_{1})\), which intuitively shows that \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{1}\) captures the effect of the bath that cannot be captured within \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}\). Similarly, for N = 3, \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}=\frac{1}{\Delta {t}^{2}}({{{{\bf{U}}}}}_{3}-{{{{\bf{U}}}}}_{2}{{{{\bf{U}}}}}_{1}-{{{{\bf{U}}}}}_{1}{{{{\bf{U}}}}}_{2}+{{{{\bf{U}}}}}_{1}{{{{\bf{U}}}}}_{1}{{{{\bf{U}}}}}_{1}).\) This set of equations is analogous to the cumulant expansions widely used in many-body physics and electronic structure theory48,49: instead of higher-order N-body expectation values, we deal here with higher-order N-time memory kernels. The N-time memory kernel \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) is the N-th order cumulant in the cumulant expansion of the system evolution operator. Unsurprisingly, these recursive relations lead to diagrammatic expansions commonly found in cumulant expansions48, as shown in Fig. 1b.
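The recursion implied by Eq. (6) is straightforward to implement numerically. Below is a minimal numpy sketch that extracts the discrete-time kernels from a list of reduced propagators; the closed forms for \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}\), \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{1}\), and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}\) quoted above emerge automatically. The propagators here would come from any exact method; synthetic matrices suffice to exercise the algebra.

```python
import numpy as np

def memory_kernels(U, L, dt):
    """Extract discrete-time memory kernels K_0, ..., K_{N-1} from reduced
    propagators U = [U_0 = identity, U_1, ..., U_N] by inverting Eq. (6):
    U_N = L U_{N-1} + dt^2 * sum_{m=1}^{N} K_{N-m} U_{m-1}."""
    K = []
    for n in range(1, len(U)):
        # contributions from kernels already known (terms with m = 2, ..., n)
        known = sum(K[n - m] @ U[m - 1] for m in range(2, n + 1))
        K.append((U[n] - L @ U[n - 1]) / dt**2 - known)
    return K
```

Each step solves Eq. (6) for the newest kernel, reusing all previously extracted ones, mirroring how cumulants are generated from moments.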

Relationship between \({{{\bf{{{{\mathcal{K}}}}}}}}\) and I

Using this cumulant generation of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) and by expressing {Uk} in terms of {Ik}, we obtain a direct relationship between \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) and \({\{{{{{\bf{I}}}}}_{k}\}}_{k=0}^{k=N}\). Specifically, we have

$${{{{\mathcal{K}}}}}_{0,ik}=\frac{1}{\Delta {t}^{2}}\left[{\sum}_{j}{G}_{ij}{I}_{0,j}{G}_{jk}-{L}_{ik}\right]$$
(7)
$${{{{\mathcal{K}}}}}_{1,im}=\frac{1}{\Delta {t}^{2}}\sum\limits_{jk}{G}_{ij}{I}_{0,j}{F}_{jk}{\tilde{I}}_{1,jk}{I}_{0,k}{G}_{km}$$
(8)
$${{{{\mathcal{K}}}}}_{2,ip}= \frac{1}{\Delta {t}^{2}}\sum\limits_{jkn}{G}_{ij}{F}_{jk}{F}_{kn}\left({\tilde{I}}_{2,jn}{I}_{1,jk}{I}_{1,kn}\right.\\ +\left.{\tilde{I}}_{1,jk}{\tilde{I}}_{1,kn}\right){I}_{0,j}{I}_{0,k}{I}_{0,n}{G}_{np}$$
(9)
$${{{{\mathcal{K}}}}}_{3,il}= \frac{1}{\Delta {t}^{2}}\sum\limits_{jknp}{G}_{ij}{F}_{jk}{F}_{kn}{F}_{np}{I}_{0,j}{I}_{0,k}{I}_{0,n}{I}_{0,p}{G}_{pl}\\ \left\{{\tilde{I}}_{3,jp}{I}_{2,jn}{I}_{2,kp}{I}_{1,jk}{I}_{1,kn}{I}_{1,np}\right.\\ +{I}_{1,kn}\left({\tilde{I}}_{2,jn}{\tilde{I}}_{2,kp}{I}_{1,jk}{I}_{1,np}+{\tilde{I}}_{2,kp}{\tilde{I}}_{1,jk}{I}_{1,np}\right.\\ \left.\left. +{\tilde{I}}_{2,jn}{\tilde{I}}_{1,np}{I}_{1,jk}\right)+{\tilde{I}}_{1,jk}{\tilde{I}}_{1,kn}{\tilde{I}}_{1,np}\right\}\\ \vdots$$
(10)

where we define F = GG (boldface denotes matrices) and \({\tilde{I}}_{k,ij}={I}_{k,ij}-1\). We emphasize that Eqs. (7) to (10) are exact up to the Trotter discretization error and valid for any coupling strength in the models considered in this work. By definition, kernels \({{{{\mathcal{K}}}}}_{N}\) with smaller N contain shorter memory effects and thus take simpler forms.

This series of equations is part of the main result of this work, showing explicitly how \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) is diagrammatically constructed in terms of the influence functions I0 to IN. The construction also makes the computational effort of computing \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) apparent: an additional time index is summed over for each time step, giving a computational cost that scales exponentially in time as \({{{\mathcal{O}}}}({N}_{\dim }^{2N})\), where \({N}_{\dim }\) is the dimension of the system Hilbert space. In Supplementary Note 3E, we present further details on the general algorithm for calculating higher-order memory kernels, exploiting a non-trivial diagrammatic structure to express them in terms of I and \(\tilde{{{{\bf{I}}}}}\).

It can be inferred from Eqs. (8) to (10) that each term in \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) is uniquely represented by a Dyck path50,51,52 of order N. Hence, one can construct \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) by generating the respective set of Dyck paths and associating each path with a tensor contraction of influence functions. This is illustrated in Fig. 1c and further detailed in Supplementary Note 3E. This observation reveals some new properties of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\). In particular, the number of terms in \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) is given by the N-th Catalan number51,52, \({C}_{N}=\frac{1}{N+1}\left(\begin{array}{c}2N\\ N\end{array}\right)\) (i.e., \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{4}\) has 14 such terms, \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{5}\) has 42, then 132, 429, 1430, 4862, 16796, 58786, …). We note that the Catalan numbers also appeared in ref. 47 when analyzing an approximate numerical INFPI method. See Supplementary Note 3E for more information.
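The counting can be checked directly: a brute-force enumeration of Dyck paths reproduces the Catalan numbers. This is only a sketch of the combinatorics; the association of each path with a tensor contraction of influence functions is detailed in Supplementary Note 3E.

```python
from math import comb

def dyck_paths(n):
    """All Dyck paths of order n: sequences of n up-steps ('U') and n
    down-steps ('D') that never dip below the starting height."""
    out = []
    def grow(path, ups, downs):
        if ups == n and downs == n:
            out.append(''.join(path))
            return
        if ups < n:
            grow(path + ['U'], ups + 1, downs)
        if downs < ups:
            grow(path + ['D'], ups, downs + 1)
    grow([], 0, 0)
    return out

# The number of terms in K_N equals the N-th Catalan number C_N
for n in range(1, 8):
    assert len(dyck_paths(n)) == comb(2 * n, n) // (n + 1)
```

For instance, order 2 yields the two paths UUDD and UDUD, matching the two terms of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}\) in Eq. (9), and order 3 yields the five terms of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{3}\) in Eq. (10).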

Scrutinizing further the relationship between \({{{\bf{{{{\mathcal{K}}}}}}}}\) and I, presented in Supplementary Note 3E, we can observe how \({{{\bf{{{{\mathcal{K}}}}}}}}\) decays asymptotically. As is well known, for typical condensed phase systems, Ik,ij → 1 as k → ∞17,53. Because \({\tilde{I}}_{k,ij}\,\ll \,1\) for large k, terms with larger multiplicities contribute less to \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) and decay exponentially to zero as the multiplicity grows. In fact, for condensed phase systems, the decay of IN and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) is often rapid, which motivated the development of approximate INFPI methods17,18,19,20,53 and other approximate GQME methods37,54,55,56.

With our new insight, approximate INFPI methods can be viewed through the lens of their memory kernel content (and vice versa). As an example, consider the iterative quasiadiabatic path-integral methods17,18,53, in which Ik,ij is set to unity beyond a preset truncation length \({k}_{\max }\). For simplicity, let us take \({k}_{\max }=1\), so that Ik,ij = 1 and \({\tilde{I}}_{k,ij}=0\) for \(k \, > \, {k}_{\max }\). We now inspect what this approximation entails for \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\). First, no approximation is applied to \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}\) and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{1}\). Then, in \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}\) (Eq. (9)),

$$({\tilde{I}}_{2,jn}{I}_{1,jk}{I}_{1,kn}+{\tilde{I}}_{1,jk}{\tilde{I}}_{1,kn})\to {\tilde{I}}_{1,jk}{\tilde{I}}_{1,kn}.$$
(11)

Similarly, in \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{3}\) (Eq. (10)), the only surviving contribution is \({\tilde{I}}_{1,jk}{\tilde{I}}_{1,kn}{\tilde{I}}_{1,np}\). We hope such a direct connection between approximate methods and their memory kernel content will inspire the development of more efficient and accurate methods.

The time-translational structure of the INFPI formulation and its Dyck-diagrammatic structure allow for a recursive deduction of IN from \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\), which is the inverse map of Eqs. (8) to (10). We first observe that

$${{{{\bf{I}}}}}_{0}={{{{\bf{G}}}}}^{-1}(\Delta {t}^{2}{{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}+{{{\bf{L}}}}){{{{\bf{G}}}}}^{-1}$$
(12)

where we obtained I0 from \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{0}\). One can then show that

$${I}_{1,jk}=1+\Delta {t}^{2}\frac{{({{{{\bf{G}}}}}^{-1}{{{{\bf{{{{\mathcal{K}}}}}}}}}_{1}{{{{\bf{G}}}}}^{-1})}_{jk}}{{F}_{jk}{I}_{0,j}{I}_{0,k}},$$
(13)

using \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{1}\) and I0. Similarly, inspecting the expression for \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}\) gives us

$${I}_{2,jn}=1+\frac{\left[\Delta {t}^{2}{({{{{\bf{G}}}}}^{-1}{{{{\bf{{{{\mathcal{K}}}}}}}}}_{2}{{{{\bf{G}}}}}^{-1})}_{jn}-{\sum }_{k}{F}_{jk}{F}_{kn}{\tilde{I}}_{1,jk}{\tilde{I}}_{1,kn}{I}_{0,j}{I}_{0,k}{I}_{0,n}\right]}{{\sum }_{k}{F}_{jk}{F}_{kn}{I}_{1,jk}{I}_{1,kn}{I}_{0,j}{I}_{0,k}{I}_{0,n}},$$
(14)

where \({\tilde{I}}_{1,jk}={I}_{1,jk}-1\) and I0,i are obtained from the previous two relations.
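The first two steps of this inverse map can be carried out in a few lines. Below is a numpy sketch of Eqs. (12) and (13); the half-step propagator G, the Liouvillian factor L, and the kernels are placeholders to be supplied by the user (e.g., from the forward construction of Eqs. (7) and (8)).

```python
import numpy as np

def recover_I0_I1(K0, K1, G, L, dt):
    """Invert Eqs. (12) and (13): recover the influence functions I_0
    (a vector over path variables) and I_1 (a matrix) from the kernels."""
    Ginv = np.linalg.inv(G)
    # Eq. (12): G^{-1} (dt^2 K_0 + L) G^{-1} = diag(I_0)
    I0 = np.diag(Ginv @ (dt**2 * K0 + L) @ Ginv)
    # Eq. (13): I_1 = 1 + dt^2 (G^{-1} K_1 G^{-1})_{jk} / (F_{jk} I0_j I0_k)
    F = G @ G
    I1 = 1.0 + dt**2 * (Ginv @ K1 @ Ginv) / (F * np.outer(I0, I0))
    return I0, I1
```

Round-tripping through Eqs. (7) and (8) and back recovers the input influence functions, confirming that the map is invertible away from the degenerate (purely dephasing) case discussed below.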

Spectral density learning

In Supplementary Note 3F, we present a general recursive procedure, based on the Dyck paths, for obtaining the bath spectral density from Ik. As a result, we achieve the following mapping (read from left to right),

$$\rho \to {{{\bf{U}}}}\to {{{\bf{{{{\mathcal{K}}}}}}}}\to {{{\bf{I}}}}\to J(\omega ).$$
(15)

A remarkable outcome of this analysis is that one can completely characterize the environment (i.e., J(ω)) by inspecting the reduced system dynamics. Such a tool is powerful for engineering quantum systems in experiments where one has access only to the reduced system Hamiltonian and reduced system dynamics but lacks information about the environment. Furthermore, this approach provides an alternative to quantum noise spectroscopy57,58. This type of Hamiltonian learning with access only to subsystem observables has been achieved for other, simpler Hamiltonians59,60. To our knowledge, our work is the first to show this inverse map for the Hamiltonians considered here.

Note that the expression in Eq. (13) can become ill-defined when F is diagonal. This occurs when \({\hat{H}}_{S}\) is diagonal and commutes with \({\hat{H}}_{{{{\rm{env}}}}}\), constituting purely dephasing dynamics. In that case, the reduced system dynamics is governed only by the diagonal elements of I. Similarly, \({{{\bf{{{{\mathcal{K}}}}}}}}\) is diagonal, as clearly seen in our Dyck path construction. As a result, the map \({{{\bf{{{{\mathcal{K}}}}}}}}\leftrightarrow {{{\bf{I}}}}\) is no longer bijective, in that we cannot obtain the off-diagonal elements of I. Regardless, one can still extract J(ω) from only the diagonal elements of I via an inverse cosine transform. One may also worry that Eq. (14) could become ill-conditioned if its denominator vanished while \({\hat{H}}_{S}\) is not diagonal; if that were the case, however, the propagator U2 would vanish, so this condition cannot be satisfied in general. Finally, we remark that generalization to extracting the \({{{{\mathcal{I}}}}}_{\alpha }\) of multiple baths through a single central system is possible and straightforward. See Supplementary Note 3F for more details.

Generalization to driven systems

While the analysis up to this point has considered general time-independent systems, in many scenarios, e.g., of biological or engineering relevance, and particularly for quantum control applications61, a time-dependent description of the system is necessary. In such cases, \({{{\bf{{{{\mathcal{K}}}}}}}}\) loses its time-translational invariance and depends on two time arguments. Consequently, Eq. (6) cannot be applied. To overcome this, we factorize \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N+s,s}\) into time-dependent and time-independent parts. This can be achieved straightforwardly, as follows: one observes that, upon the inclusion of time-dependence in \({\hat{H}}_{S}\), the only terms affected in \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\), Eqs. (7) to (10), are the bare system propagators G and F. We collect the remainder into tensors with N indices, \({T}_{N;{x}_{s+2},{x}_{s+4},...,{x}_{s+2N}}\), which include all the influence of the bath across N time steps. These tensors only need to be computed once and can be reused at later times. Then, one builds the kernels via a contraction over two tensors,

$${{{{\mathcal{K}}}}}_{N+s,s;{x}_{s+2N+2},\, {x}_{s}}=\frac{1}{\Delta {t}^{2}}{\sum}_{\bullet }{P}_{{x}_{s},\bullet,\, {x}_{s+2N+2}}^{N+1+s,s}{T}_{N;\bullet },$$
(16)

where • denotes the indices xs+2, . . . , xs+2N, and the tensor \({P}_{{x}_{s},\bullet,{x}_{s+2N+2}}^{N+1+s,s}\) encapsulates the time-dependence of the system Hamiltonian and is constructed only out of bare system propagators. The tensor TN;• then consists only of influence functions, up to IN. The construction of these tensors is straightforward, with TN;• following the Dyck path construction presented for time-independent system dynamics. On the surface, the TN;• tensor appears to be related to the process tensor33,34: T represents \({{{\bf{{{{\mathcal{K}}}}}}}}\) upon contraction with P, whereas the process tensor constructs U when contracted with P, and writing \({{{\bf{{{{\mathcal{K}}}}}}}}\) in terms of the process tensor subsequently requires a non-trivial rearrangement of terms. The simple relationship between T and \({{{\bf{{{{\mathcal{K}}}}}}}}\) in Eq. (16) is our unique contribution. More detailed analysis and relevant numerical results for open, driven system dynamics are presented in Supplementary Note 3H.

Numerical verification

While the discussion above applies to a generic system linearly coupled to a Gaussian bath (or multiple such baths if they couple additively), we discuss the spin-boson model for further illustration. The spin-boson model is an archetypal model for studying open quantum systems62: a two-level system coupled linearly to a bath of harmonic oscillators. It and its generalizations have been used to understand various quantum phenomena, including transport, chemical reactions, diode effects, and phase transitions63.

We use \({\hat{H}}_{S}=\epsilon {\sigma }_{z}+\Delta {\sigma }_{x}\), coupled via σz to a harmonic bath with spectral density (ω ≥ 0)62

$$J(\omega )=\pi {\sum}_{k}{\lambda }_{k}^{2}\delta (\omega -{\omega }_{k})=\frac{\xi \pi }{2}\frac{{\omega }^{s}}{{\omega }_{c}^{s-1}}{e}^{-\omega /{\omega }_{c}},$$
(17)

where J(−ω)  =  −J(ω), ξ is the Kondo parameter, and s is the Ohmicity. All reference calculations were performed using the HEOM method28,64,65. Details of the HEOM implementation used here are provided in Supplementary Note 7.
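For reference, Eq. (17) is simple to evaluate numerically. The helper below covers the Ohmic (s = 1) and subohmic (s < 1) cases; the parameter values in the example mirror those used in the figures.

```python
import numpy as np

def spectral_density(omega, xi, s, omega_c):
    """Ohmic-family spectral density of Eq. (17), for omega >= 0:
    J(w) = (pi * xi / 2) * w^s / w_c^(s-1) * exp(-w / w_c),
    where xi is the Kondo parameter and s the Ohmicity."""
    omega = np.asarray(omega, dtype=float)
    return 0.5 * np.pi * xi * omega**s / omega_c**(s - 1) * np.exp(-omega / omega_c)

# Example: the Ohmic case (s = 1) peaks at w = w_c
w = np.linspace(0.01, 50.0, 2000)
J = spectral_density(w, xi=0.1, s=1.0, omega_c=7.5)
```

The negative-frequency branch follows from the odd extension J(−ω) = −J(ω) stated above.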

In Fig. 2, we investigate a series of spin-boson models corresponding to weak and intermediate coupling to an Ohmic environment (s = 1) as well as strong coupling to a subohmic environment (s = 0.5). In panels (a, b), we observe that the decay of \({\tilde{{{{\bf{I}}}}}}_{N}\) is rapid for the Ohmic cases. This translates into a similarly rapid decay of the respective \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\), although both \({\tilde{I}}_{N}\) and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) are overall larger in magnitude in the strong coupling regime. This is to be contrasted with the results for the strongly coupled subohmic environment shown in panel (c), where the decay of \({\tilde{{{{\bf{I}}}}}}_{N}\) is slow, accompanied by a similarly slow decay of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\). Interestingly, the rates at which \({\tilde{{{{\bf{I}}}}}}_{N}\) and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) decay are similar, and we observe the decay to be exponential. We also see perfect agreement between \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) constructed from our Dyck diagrammatic method and those obtained by numerically post-processing exact trajectories via the transfer tensor method40. Lastly, we construct \({\tilde{{{{\bf{I}}}}}}_{N}\) from \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) up to N = 16, as exemplified in Eqs. (13) and (14), and observe perfect agreement between these \({\tilde{{{{\bf{I}}}}}}_{N}\) and those computed from the known analytic formula.

Fig. 2: Numerical verification of the Dyck construction.

Operator norm of \({\tilde{{{{\bf{I}}}}}}_{N}\) (light) and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) (dark) as a function of NΔt. Lines denote \({\tilde{{{{\bf{I}}}}}}_{N}\) computed from analytic expressions and \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) from post-processing exact numerical results via the transfer tensor method40. Circles denote \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) from the Dyck diagrammatic method, and crosses are \({\tilde{{{{\bf{I}}}}}}_{N}\) obtained via the inverse map discussed in Eqs. (13) and (14). Dashed lines denote the operator norm of the crest term of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) (the Dyck path diagram with the highest height). Parameters used are: Δ = 1 (other parameters are expressed relative to Δ), ϵ = 0, β = 5, Δt = 0.1, ωc = 7.5, and ξ = 0.1, s = 1 (a); ξ = 0.5, s = 1 (b); ξ = 0.5, s = 0.5 (c).

We note that the term with \({\tilde{{{{\bf{I}}}}}}_{N}\) (multiplicity of 1) contributes the most to the memory kernel \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) for all parameters considered in our work. We refer to this term as the “crest” term; it corresponds to the Dyck path that goes straight up to the top and straight back down, attaining the tallest height. The small difference between the crest term norm and the full memory kernel norm in Fig. 2 indicates that the memory kernel is dominated by the crest term. Since the decay of \({\tilde{{{{\bf{I}}}}}}_{N}\) is directly related to the decay of the bath correlation function, one can also connect the memory kernel decay to the bath correlation function decay. Nonetheless, for stronger system-bath coupling (e.g., Fig. 2b) and for cases with long-lived memory (e.g., Fig. 2c), terms other than the crest term contribute non-negligibly, making a general analysis of the memory kernel decay challenging.

The cost of numerically computing \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) scales exponentially with N. Nevertheless, it is possible to exploit the decay of \({\tilde{{{{\bf{I}}}}}}_{N}\), which is rapid for some environments (e.g., Ohmic baths) and signals a corresponding decay of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\). This allows truncating the summation in Eq. (5), enabling dynamical propagation to long times (with cost linear in time), as is usually done in small matrix path integral19,20 and GQME40 methods. We show in panels (a1) and (b1) of Fig. 3 that this procedure, applied to a problem with a rapidly decaying \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\), quickly converges to the exact value at reasonably low order. On the other hand, for environments with slowly decaying \({\tilde{{{{\bf{I}}}}}}_{N}\), the truncation scheme struggles: for the strongly coupled subohmic environment shown in Fig. 3c1, one would need truncation orders beyond the current computational capabilities of our implementation (about 16) to converge to the exact value. Nonetheless, this illustrates that our direct construction of \({{{{\bf{{{{\mathcal{K}}}}}}}}}_{N}\) can recover exact dynamics if a sufficiently high order is used. Furthermore, the construction is non-perturbative and can be applied to strong coupling problems. We note that describing quantum phase transitions at T = 0 would require capturing the algebraic decay in IN29. Our analysis can, in principle, capture such a slow decay, as our approach is exact, but practical applications will require further optimization of the underlying numerical algorithms.
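The truncated propagation itself is a short loop. Below is a numpy sketch of Eq. (5) with the memory summation cut at a truncation order k_max; the Liouvillian factor L, the kernel list, and the initial (vectorized) state are placeholders to be supplied by the user.

```python
import numpy as np

def propagate(rho0, L, K, dt, n_steps, k_max):
    """Iterate Eq. (5): rho_N = L rho_{N-1} + dt^2 sum_{m} K_{N-m} rho_{m-1},
    keeping only kernels K_0, ..., K_{k_max} (truncated memory).
    rho0 is the vectorized reduced density matrix; K is a list of kernels."""
    rhos = [rho0]
    for n in range(1, n_steps + 1):
        rho = L @ rhos[n - 1]
        for m in range(max(1, n - k_max), n + 1):   # keep terms with n - m <= k_max
            rho = rho + dt**2 * (K[n - m] @ rhos[m - 1])
        rhos.append(rho)
    return rhos
```

With k_max at least as large as the number of available kernels, the truncation is inactive and the propagation reproduces the exact reduced dynamics encoded in Eq. (6) by construction.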

Fig. 3: Dynamics of spin-boson model with truncated Dyck paths.

a1, b1, c1 Magnetization (〈σz(t)〉) dynamics predicted using \({{{\bf{{{{\mathcal{K}}}}}}}}\) constructed via Dyck diagrams with increasing truncation orders (from light to darker colors) compared to exact results (see Supplementary Note 6). a2, b2, c2 Bath spectral densities extracted through the Dyck diagrammatic method with increasing truncation order (from white to black colors) compared to exact spectral densities (dashed), see Supplementary Note 3F for more details. These results come from numerically exact trajectories, initiated from linearly independent initial states \({\rho }_{1}(0)=\frac{1}{2}({{{\bf{1}}}}+{\sigma }_{z}),\,{\rho }_{2}(0)=\frac{1}{2}({{{\bf{1}}}}-{\sigma }_{z}),\,{\rho }_{3}(0)=\frac{1}{2}({{{\bf{1}}}}+{\sigma }_{x}),\,{\rho }_{4}(0)=\frac{1}{2}({{{\bf{1}}}}+{\sigma }_{x}+{\sigma }_{y}+{\sigma }_{z})\). Parameters used are: Δ = 1 (other parameters are expressed relative to Δ), ϵ = 0, β = 5, Δt = 0.1 (a1, b1, c1) or Δt = 0.05 (a2, b2, c2), ωc = 7.5, and ξ = 0.1, s = 1 (a1, a2); ξ = 0.5, s = 1 (b1, b2); ξ = 0.5, s = 0.5 (c1, c2).

Finally, in Fig. 3a2, b2, c2, we show the extraction of the spectral densities J(ω) for three distinct environments. The extracted J(ω) converges to the analytical form as we obtain the influence functions to higher orders. This shows that we can indeed invert the reduced system dynamics to obtain J(ω), given knowledge of the system Hamiltonian, which ultimately characterizes the entire system-bath Hamiltonian. Nonetheless, the accuracy of the resulting J(ω) depends on the highest order of Ik we can numerically extract; since the cost of extracting Ik scales exponentially in k without approximations, there is naturally a limit to the precision of J(ω) in practice. We also show in Supplementary Note 8 and Supplementary Fig. 9 that this procedure can extract highly structured spectral densities. New opportunities await in using approximately inverted Ik and in quantifying the error in the resulting J(ω).

Discussion

In this work, we provide an analytical analysis, along with numerical results, showing the complete equivalence between the memory kernel (\({{{\bf{{{{\mathcal{K}}}}}}}}\)) in the GQME formalism and the influence functions (I) used in INFPI. Our analysis applies to a broad class of general (driven) systems interacting bilinearly with Gaussian baths. Furthermore, we showed that one can extract the bath spectral density from the reduced system dynamics given knowledge of the reduced system Hamiltonian \({\hat{H}}_{S}\). We believe that this unified framework for studying non-Markovian dynamics will facilitate the development of new analytical and numerical methods that combine the strengths of both GQME and INFPI. For example, deep connections between the present work and recent matrix product state (MPS)-based approaches invite ideas for efficiently extracting the environmental spectral density from reduced system dynamics29,31,32,33,34.

Methods

Details pertaining to analytical derivation of results in this work, as well as numerical implementations, are provided in the Supplementary Notes.