Entropy production in Gaussian bosonic transformations using the replica method: application to quantum optics

In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modeling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the von Neumann entropy produced by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution. Here, we overcome this difficulty by using the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to the field of quantum optics, where it enables accessing entropies in a two-mode squeezer or optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and an arbitrary Fock state, which yields a surprisingly simple, previously unknown analytical expression.

In quantum optics and continuous-variable quantum information theory, the so-called Gaussian transformations are ubiquitous [1]. For instance, the coupling between two modes of the electromagnetic field as effected by a beam splitter in bulk optics or an optical coupler in fiber optics is modeled by the (passive) quadratic Hamiltonian H = i(â b̂† - â† b̂), where â and b̂ are the bosonic annihilation operators of the modes. This operation can be shown to preserve the Gaussian character of the quantum state, or more precisely the quadratic exponential form of its characteristic function. The corresponding transformation in phase space is the rotation â → cos θ â + sin θ b̂ and b̂ → cos θ b̂ - sin θ â, where cos²θ is the transmittance. Another generic Gaussian coupling between two modes of the electromagnetic field results from parametric down-conversion in a nonlinear medium, which is modeled by the (active) quadratic Hamiltonian H = i(â b̂ - â† b̂†). It effects the Bogoliubov transformation â → cosh r â + sinh r b̂† and b̂ → cosh r b̂ + sinh r â†, where cosh²r is the parametric amplification gain. It corresponds to two-mode squeezing or parametric amplification in the context of quantum optics, but also describes a much wider range of physical situations, such as the Unruh radiation in an accelerating frame [2-4] or the Hawking radiation emitted by a black hole [5,6].
While they are common in quantum optics and related fields, these Gaussian transformations are nevertheless poorly understood in terms of quantum entropy generation. For example, when amplifying the state of the electromagnetic field using parametric down-conversion, the output state suffers from some intrinsic quantum noise, which is an increasing function of the amplification gain. It is a central problem to characterize this noise and be able to determine the von Neumann entropy of the output state, this being indispensable, for instance, to compute the capacities of Gaussian bosonic channels [7-9]. Unfortunately, the output entropy is generally not accessible for an arbitrary input state because it is difficult (usually impossible) to diagonalize the corresponding density operator in an infinite-dimensional Fock space. With the exception of Gaussian states, e.g., the vacuum state (resulting after amplification in a thermal state whose entropy is given by a well-known formula), few analytical results are available as of today. This problem is also linked to several entropic conjectures on Gaussian optimality in the context of bosonic channels. Notably, determining the capacity of a multiple-access or broadcast Gaussian bosonic channel hinges on being able to prove such entropic conjectures, see, e.g., [10-12].
In this Letter, we demonstrate that the replica method can be successfully exploited to overcome this problem and find the exact analytical expression of the output entropy of Gaussian transformations applied to non-trivial input states. The replica method is well known to be a very useful tool in statistical physics, especially for disordered systems [13], and in quantum field theory [14]. Here, we present a first application of it to the field of quantum optics and show that it enables accessing the entropy generated by a quantum optical amplifier, paving the way towards a quantum entropic characterization of all Gaussian transformations generated by quadratic Hamiltonians. To illustrate the power of this approach, we calculate the output entropy when amplifying a binary superposition of the vacuum and an arbitrary Fock state, which yields a surprisingly simple analytical expression.
Replica method.- Calculating the von Neumann entropy S(ρ) = -tr(ρ ln ρ) of a bosonic mode that is found in state ρ is often an intractable task because it requires finding the infinite vector of eigenvalues of ρ. This can sometimes be circumvented by using the replica method, which relies on the identity ln Z = lim_{n→0+} (Z^n - 1)/n. Using x ln x = lim_{n→1+} (x^n - x)/(n - 1) = (∂/∂n)(x^n)|_{n=1+}, we may reexpress the von Neumann entropy as

S(ρ) = -(∂/∂n) tr(ρ^n)|_{n=1}.   (1)

The trick is to find an analytical expression of tr(ρ^n) as a function of n ∈ N* and then to compute the derivative at n = 1, avoiding the need to diagonalize ρ. This method also makes apparent the connection between the von Neumann entropy and other widely used measures of disorder, such as the Tsallis and Rényi entropies. It has been used with great success in the context of spin glasses and quantum field theory [13-19], being often justified based on the analyticity of tr(ρ^n) and its derivative with respect to n in the neighborhood of n = 1 [16]. In the Supplemental Material [20], we give a justification based on Hausdorff's moment problem and provide some easy but instructive examples from classical probability theory.
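As an aside that may help the reader, the recipe of Eq. (1) is straightforward to test on a finite-dimensional example. The sketch below is our own illustration (not part of the original derivation; the dimension and random seed are arbitrary): it compares the spectral entropy of a random density matrix with the replica derivative, approximated by a central finite difference.

```python
import numpy as np

def random_density_matrix(d, seed=0):
    """A random d-dimensional density matrix: rho = G G† / tr(G G†)."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def tr_rho_n(rho, n):
    """tr(rho^n) for real n, computed from the spectrum (for this check only)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-15]
    return np.sum(ev ** n)

rho = random_density_matrix(5)
ev = np.linalg.eigvalsh(rho)
ev = ev[ev > 1e-15]
S_direct = -np.sum(ev * np.log(ev))        # -tr(rho ln rho) from the spectrum
h = 1e-6                                   # replica recipe, Eq. (1):
S_replica = -(tr_rho_n(rho, 1 + h) - tr_rho_n(rho, 1 - h)) / (2 * h)
print(S_direct, S_replica)                 # the two values agree
```

Of course, this check still diagonalizes ρ; the point of the method described in the text is precisely to obtain tr(ρ^n) without doing so.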
As we shall show in this Letter, using the replica method to access the von Neumann entropy in quantum optical scenarios is highly promising since, dealing with quadratic interactions, it involves Gaussian integrations, or else tricks can be used in order to bring tr(ρ^n) to a calculable form. Let us illustrate this principle with a generic Gaussian state, namely a thermal state

ρ_0 = (1 - |τ|²) Σ_{k=0}^∞ |τ|^{2k} |k⟩⟨k|,   (2)

where τ ≡ tanh ξ and ξ is the squeezing parameter (see below); the mean photon number is N = |τ|²/(1 - |τ|²) = sinh²ξ. Since ρ_0 is in a diagonal form, it is of course straightforward to calculate its entropy, giving the well-known formula S(ρ_0) = g(N) ≡ (N + 1) ln(N + 1) - N ln N. However, we may also start with its non-diagonal representation in the coherent-state basis {|α⟩}, where α is a complex number, namely ρ_0 = (1/(πN)) ∫ d²α e^{-|α|²/N} |α⟩⟨α|. By making the change of variable α → |τ| α and by using ⟨α|β⟩ = e^{-(|α|² + |β|² - 2α*β)/2}, we can write

tr(ρ_0^n) = ((1 - |τ|²)/π)^n ∫ d²α_1 ⋯ d²α_n e^{-ᾱ†Mᾱ},   (3)

where ᾱ = (α_1, …, α_n)^T is a column vector and M is the n × n circulant matrix

M_{jk} = δ_{j,k} - |τ|² δ_{k,j+1},   (4)

the index j + 1 being taken modulo n. Equation (3) is a simple Gaussian integral, which, using the determinant det M = 1 - |τ|^{2n}, can be expressed as

tr(ρ_0^n) = (1 - |τ|²)^n / (1 - |τ|^{2n}).   (5)

Then, we readily find that

S(ρ_0) = -(∂/∂n) [(1 - |τ|²)^n / (1 - |τ|^{2n})]|_{n=1} = -ln(1 - |τ|²) - (|τ|²/(1 - |τ|²)) ln |τ|²,   (6)

which coincides with the above expression S(ρ_0) = g(N) for the entropy of a thermal state, as expected.
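The closed form tr(ρ_0^n) = (1 - |τ|²)^n/(1 - |τ|^{2n}) for a thermal state, and its replica derivative, are easy to verify numerically. A minimal sketch (our own check; the value ξ = 0.8 and the Fock cutoff are arbitrary):

```python
import numpy as np

xi = 0.8                      # squeezing parameter (arbitrary choice)
tau = np.tanh(xi)
N = np.sinh(xi) ** 2          # mean photon number of the thermal state

def moments(n, kmax=2000):
    """tr(rho_0^n) from the geometric spectrum p_k = (1 - tau^2) tau^(2k)."""
    k = np.arange(kmax)
    p = (1 - tau**2) * tau ** (2 * k)
    return np.sum(p ** n)

f = lambda n: (1 - tau**2) ** n / (1 - tau ** (2 * n))   # closed form
for n in range(1, 6):
    assert abs(moments(n) - f(n)) < 1e-12

h = 1e-6                      # replica derivative at n = 1
S_replica = -(f(1 + h) - f(1 - h)) / (2 * h)
g = (N + 1) * np.log(N + 1) - N * np.log(N)              # g(N)
print(S_replica, g)           # identical up to finite-difference error
```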
Amplifying a Fock state.- Consider now the harder problem of expressing the entropy S_m generated by amplifying an arbitrary Fock state |m⟩. Thus, we consider a two-mode squeezer of parameter ξ = |ξ| e^{iφ}, applying the unitary transformation

U = exp(ξ â†b̂† - ξ* â b̂)   (7)

on the initial state |m⟩_a |0⟩_b (subscript a refers to the signal mode, while b refers to the idler mode). It can be shown that the vector of eigenvalues of the reduced output state ρ_m of the signal mode is given by [9,21]

p_k^{(m)} = (1 - |τ|²)^{m+1} C(m+k, k) |τ|^{2k},  k = 0, 1, 2, …,   (8)

where |τ| = tanh |ξ|, from which we find

tr(ρ_m^n) = (1 - |τ|²)^{n(m+1)} Σ_{k=0}^∞ C(m+k, k)^n |τ|^{2kn}.   (9)

Equation (9) can be re-expressed in a closed form as

tr(ρ_m^n) = ((1 - |τ|²)^{n(m+1)} / (m!)^n) Li^{(m)}_{-n}(ζ)/ζ,   (10)

where

ζ ≡ |τ|^{2n},  Li^{(m)}_{-n}(ζ) ≡ Σ_{k=1}^∞ [k(k+1)⋯(k+m-1)]^n ζ^k,   (11)

and Li_{-n}(ζ) denotes the polylogarithm of order -n [22], to which Li^{(m)}_{-n}(ζ) reduces for m = 1. Applying Eq. (1) to Eq. (10) gives

S_m = -(∂/∂n) [((1 - |τ|²)^{n(m+1)} / (m!)^n) Li^{(m)}_{-n}(ζ)/ζ]|_{n=1}.   (12)

Taking into account that Li^{(0)}_{-n}(ζ) = ζ/(1 - ζ), it is easy to check that Eq. (12) gives the correct value for S_0, that is, the entropy of a thermal state ρ_0 as in Eq. (6). Also, a closed expression can be found for S_1 in terms of Eulerian numbers, as the polylogarithm Li_{-n}(ζ) is a well-studied function, see [20]. For m > 1, the function Li^{(m)}_{-n}(ζ) assumes a summation form which is convergent and differentiable with respect to n, yielding an analytical expression for S_m, see [20]. Incidentally, this allows us to verify that S_{m+1} > S_m, ∀m ∈ N, in accordance with what was proven in Ref. [9] using majorization theory.
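The entropy S_m can be evaluated directly from the known spectrum p_k^{(m)} = (1 - |τ|²)^{m+1} C(m+k, k) |τ|^{2k} of the amplified Fock state [9,21], which also gives a quick numerical confirmation of the ordering S_{m+1} > S_m. A sketch (our own illustration; ξ = 0.5 and the summation cutoff are arbitrary):

```python
import numpy as np
from math import lgamma, log

def S_m(m, xi, kmax=500):
    """Entropy of the amplified Fock state |m>, from the spectrum
    p_k = (1 - |tau|^2)^(m+1) * C(m+k, k) * |tau|^(2k), |tau| = tanh|xi|."""
    t2 = np.tanh(xi) ** 2
    k = np.arange(kmax)
    # log of the binomial coefficient C(m+k, k), via log-Gamma for stability
    logC = np.array([lgamma(m + j + 1) - lgamma(j + 1) - lgamma(m + 1) for j in k])
    logp = (m + 1) * log(1 - t2) + logC + k * log(t2)
    return -np.sum(np.exp(logp) * logp)        # -sum p ln p

xi = 0.5
S_vals = [S_m(m, xi) for m in range(5)]
N = np.sinh(xi) ** 2
gN = (N + 1) * np.log(N + 1) - N * np.log(N)
print(S_vals[0], gN)          # m = 0 reproduces the thermal entropy g(N)
print(S_vals)                 # strictly increasing with m
```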
Superposition of Fock states.- We will now show that the same procedure makes it possible to express the entropy analytically in situations where no diagonal form is available for the output state, so that the replica method becomes essential. Consider the amplification of a binary superposition of the type

|ψ⟩ = (|0⟩ + z|m⟩)/√(1 + z²),   (13)

where we take z ∈ R without loss of generality. By using the Baker-Campbell-Hausdorff relation, the unitary transformation (7) can be rewritten in the form

U = e^{τ â†b̂†} e^{-ν(â†â + b̂†b̂ + 1)} e^{-τ* â b̂},   (14)

where ν = ln cosh |ξ| and τ = (ξ/|ξ|) tanh |ξ|, so that the joint output state |Ψ⟩ of the two modes can be expressed in the double coherent-state basis |α⟩_a |β⟩_b [Eq. (15)]. From Eq. (15), we can easily write the reduced output state ρ obtained by tracing |Ψ⟩ over the idler mode, paying attention to the non-orthogonality of coherent states. Using the notation ᾱ = (α_1, …, α_n)^T, we get an integral expression for tr(ρ^n) [Eq. (16)], where the matrix M is defined as in Eq. (4) and the constant c is defined in Eq. (17). In order to bring this back to a Gaussian integral, we use the so-called "sources" trick [23], exploiting the identity x^m e^{-x²} = (∂^m/∂λ^m) e^{-x² + λx}|_{λ=0}. Then, Eq. (16) becomes Eq. (18), where Π_∂λ(n) is a differential operator in the variables λ = (λ_1, …, λ_n)^T. Note here that λ_j and λ_j* are treated as independent variables, instead of their real and imaginary parts. The derivatives with respect to all λ's have been pushed in front of the integrals in Eq. (18), so that we get a Gaussian integral that is immediately calculable, resulting in Eq. (19), where tr(ρ_0^n) is given by Eq. (5) and corresponds to a vacuum input state (z = 0). In Eq. (19), we have defined the circulant matrix N [Eq. (20)], while the operator Π_∂λ(n) can be expanded as

Π_∂λ(n) = Σ_{k=0}^n z^{2k} Π_{2k}(n),   (21)

where each Π_{2k}(n) contains C(n, k)² terms that return a non-zero result when acting on exp(λ†Nλ) and taking the value at λ = 0, see [20] for details. The term with k = 0 in Eq. (21) is simply Π_0(n) = 1, so that taking z = 0 trivially results in tr(ρ^n) = tr(ρ_0^n). The term with k = n gives, when acting on the exponential of Eq. (19), a contribution proportional to Li^{(m)}_{-n}(ζ) [Eq. (22)], where Li^{(m)}_{-n}(ζ) is defined in Eq. (11). Thus, we recognize that this term is connected with the case of an input Fock state |m⟩, something that can also be seen by taking the limit z → ∞ in Eq. (19). If we put all pieces together, we obtain the expression

tr(ρ^n) = (1 + z²)^{-n} [tr(ρ_0^n) + z^{2n} tr(ρ_m^n) + F^{(m)}(n)],   (23)

where ρ_m is the reduced output state resulting from the amplification of |m⟩, and F^{(m)}(n) is defined in [20]. Now, applying Eq. (1) to Eq. (23), we get Eq. (24). Finally, we prove in [20] that the last term of the right-hand side of Eq. (24) vanishes, so that Eq. (24) simplifies into the expression

S = (1/(1 + z²)) S_0 + (z²/(1 + z²)) S_m.   (25)

Intriguingly, the output entropy is thus a simple convex combination of S_0 and S_m, with the exact same weights as if we had lost coherence between the components |0⟩ and |m⟩ of the input superposition. This is schematically pictured in Fig. 1. We have also numerically verified this behavior, which, to our knowledge, had never been observed before. It is illustrated in Fig. 2, where we show that the entropy is a monotonically increasing function of the superposition parameter z (our analytical curve is indistinguishable from the numerical results).
Conclusions.- We have demonstrated the power of the replica method in quantum optics. Such a tool, borrowed from other fields of physics, provides a new angle of attack to access quantum entropic measures for fundamental Gaussian transformations. The entropic characteristics of such transformations can normally be accessed as long as Gaussian states are considered, using the symplectic formalism, but otherwise the problem is generally unsolvable. For instance, it could be proven only recently that the state minimizing the output entropy of an optical amplifier is simply the vacuum state [24]. Although the amplifier is a common Gaussian operation, the difficulty behind this proof was that no diagonal representation of ρ is available, as is most often the case when non-Gaussian states are considered. The replica method should hopefully enable going beyond this proof, as it provides a trick to overcome this difficulty: tr(ρ^n) is expressed for n replicas of state ρ by using Gaussian integrals, without accessing its eigenvalues.
We have illustrated this procedure with the amplification of a state of the form |0⟩ + z|m⟩. This allowed us to unveil a remarkably simple behavior for the entropy of the amplified state, namely that it is a convex combination of the extremal points S_0 and S_m, as expressed by Eq. (25). It must be stressed that this analytical result is non-trivial, as we do not expect similar expressions for the entropy resulting from other superpositions, such as |1⟩ + z|2⟩ or |0⟩ + z|1⟩ + z′|2⟩. Take for instance a coherent state, which is an infinite superposition of Fock states: the entropy of the amplified state is simply S_0, just as for the vacuum state.
We hope that future works will further explore this avenue in order to achieve a better entropic characterization of fundamental quantum optical operations, or even perhaps solve some pending conjectures on bosonic Gaussian channels. More generally, we anticipate that the replica method will become an invaluable tool in quantum optics and continuous-variable quantum information theory.

SUPPLEMENTAL MATERIAL

1. Replica method: justification and examples
The replica method

In quantum mechanics, the von Neumann entropy is defined as

S(ρ) = -tr(ρ ln ρ),   (26)

where ρ is a density operator. In many cases, this definition is not practical as it involves computing the logarithm of a matrix. In other words, we have to find the eigenvalues of the matrix, which is infinite-dimensional for the density operator of a bosonic mode, a task that is often impossible. One can sometimes circumvent this problem by using the replica method, which is described as follows. We introduce the quantity

tr(ρ^n) = ∫ dx_1 ⋯ dx_n ρ(x_1, x_2) ρ(x_2, x_3) ⋯ ρ(x_n, x_1),   (27)

which can be viewed as the trace of n replicas of the density matrix ρ. Here, we denote by ρ(x_i, x_j) the representation of ρ in some continuous basis, although in general the basis that we choose to represent the density matrix upon does not have to be continuous. In the case of a discrete representation, we would have summations instead of integrals in Eq. (27). The Tsallis entropy of order n is a disorder monotone and is defined as

S_T^{(n)}(ρ) = (1 - tr(ρ^n))/(n - 1).   (28)

It is well known that in the limit n → 1, one recovers the von Neumann entropy, that is,

S(ρ) = lim_{n→1} S_T^{(n)}(ρ).   (29)

On the other hand, it holds that

lim_{n→1} (tr(ρ^n) - tr ρ)/(n - 1) = (∂/∂n) tr(ρ^n)|_{n=1}.   (30)

In numerous papers, arguments of analytic continuation are used to justify that Eq. (30) is a correct relation as long as we take the derivative at some integer value of n, here n = 1. Then, from Eqs. (29) and (30), we find the central equation

S(ρ) = -(∂/∂n) tr(ρ^n)|_{n=1}.   (31)

The question concerning the justification of the replica method can be rephrased as whether the integer-order moment sequence tr(ρ^n) of a density operator ρ uniquely determines the density operator itself. If this is true, then, using the moments, one could in principle uniquely determine any expectation value, in particular the von Neumann entropy, since it is the expectation value -E(ln ρ) in view of Eq. (26).
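The limit of Eq. (29) is easy to visualize numerically. A toy check (our own addition, on an arbitrary four-level spectrum):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.15, 0.05])     # spectrum of some density operator
S = -np.sum(p * np.log(p))               # von Neumann entropy

for n in (1.1, 1.01, 1.001, 1.0001):
    tsallis = (1 - np.sum(p ** n)) / (n - 1)   # Tsallis entropy of order n
    print(n, tsallis)                    # approaches S as n -> 1
print(S)
```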

Hausdorff's moment problem
Let us now have recourse to Hausdorff's moment problem in order to justify the replica method. In what follows, we define as moments the quantities m_n = tr(ρ^n). Hausdorff's moment problem states that if X is a random variable in the interval [0, 1], then the integer-order moments uniquely determine the distribution P(X) if and only if the moment sequence is completely monotone, i.e.,

(-1)^k (Δ^k m)_n ≥ 0,  ∀ n, k,

where Δ is the forward-difference operator, (Δm)_n = m_{n+1} - m_n. In the case at hand, we will consider as random variable the eigenvalues λ_i of the (Hermitian) density operator, that is, Λ ∈ [0, 1] with probability distribution P(Λ = λ_i) = λ_i, so that m_n = tr(ρ^n) = Σ_i λ_i λ_i^{n-1} = E(Λ^{n-1}). The moment sequence is easily found to satisfy

(-1)^k (Δ^k m)_n = Σ_i λ_i^n (1 - λ_i)^k ≥ 0.

Therefore, the knowledge of the moments uniquely determines the density operator in the basis of its eigenvectors, or equivalently the eigenvalues of the density operator. Thus, we can in principle find any expectation value from the knowledge of tr(ρ^n) for n ∈ N*. What we need is a recipe to derive the von Neumann entropy from the moments. An obvious choice comes from observing that if we treat n as a real variable, we have

S(ρ) = -dm_n/dn|_{n=1}.   (38)

Here we should make two important remarks. First, the trick played in Eq. (38) may not be unique; there could be other recipes to determine the von Neumann entropy.
What is important is that Hausdorff's moment problem guarantees that these other recipes would give the same result as Eq. (38). Second, extra care should be taken regarding the following fact. During the calculation of tr(ρ^n), we consider n to be a natural number in N*. We only consider n to be real in order to apply the step in Eq. (38), but this does not imply that tr(ρ^x), with x ∈ R, is found by simply substituting the natural variable n with the real variable x. What Hausdorff's moment problem guarantees is that, in principle, tr(ρ^x) could be uniquely determined from tr(ρ^n) with some proper recipe.
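The complete-monotonicity condition behind Hausdorff's moment problem can be checked directly on the moment sequence m_n = tr(ρ^n) = Σ_i λ_i^n, since (-1)^k (Δ^k m)_n = Σ_i λ_i^n (1 - λ_i)^k ≥ 0. A small numerical illustration (our own addition, with an arbitrary spectrum):

```python
import numpy as np

lam = np.array([0.4, 0.25, 0.2, 0.1, 0.05])              # eigenvalues of some rho
m = np.array([np.sum(lam ** n) for n in range(1, 30)])   # m_n = tr(rho^n)

seq = m.copy()
for k in range(1, 10):
    seq = seq[1:] - seq[:-1]                  # apply the forward difference Delta
    assert np.all((-1) ** k * seq >= -1e-12)  # (-1)^k Delta^k m >= 0
print("the moment sequence is completely monotone")
```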

Examples from classical probability theory
To sum up: to calculate the von Neumann entropy using the replica method, we have to find tr(ρ^n) as a function of n ∈ N*, and then we need to calculate the derivative of this quantity with respect to n at n = 1. For the sake of completeness, let us consider two examples from elementary classical probability theory which illustrate very well the validity of the replica method. These examples have no immediate connection with Hausdorff's moment problem, but we find it instructive to see that the replica method works for classical distributions as well.
First, consider the Gaussian distribution over some real variable x with mean value µ and standard deviation σ,

P(x) = (1/√(2πσ²)) e^{-(x-µ)²/(2σ²)}.   (39)

The entropy of this distribution can be found by calculating the moments, which are

m_n = ∫ P(x)^n dx = (2πσ²)^{(1-n)/2} n^{-1/2},   (40)

and then by finding the derivative of Eq. (40) with respect to n at n = 1,

S = -dm_n/dn|_{n=1} = (1/2) ln(2πeσ²),   (41)

which is indeed the well-known entropy of the Gaussian distribution. Now, let us consider the Poisson distribution,

P(k) = e^{-µ} µ^k / k!,  k = 0, 1, 2, ….   (42)

The moments read

m_n = Σ_{k=0}^∞ e^{-nµ} µ^{nk} / (k!)^n,   (43)

and from the latter we find

S = -dm_n/dn|_{n=1} = µ - µ ln µ + e^{-µ} Σ_{k=0}^∞ (µ^k ln k!)/k!,   (44)

which is indeed the entropy of the Poisson distribution. This result reminds us of the simple fact that a non-summation expression for the entropy is not always available, even for a simple classical distribution. As we see in this paper, a similar situation prevails when considering the output entropy of a parametric amplifier that is fed with an arbitrary Fock state |m⟩ with m > 1.
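Both classical examples above can be reproduced numerically, which is a useful sanity check of the replica recipe. A sketch (our own addition; the values of µ and σ are arbitrary):

```python
import numpy as np
from math import lgamma, log

h = 1e-6   # step for the central finite difference at n = 1

# Gaussian: closed-form moments m(n) = (2 pi sigma^2)^((1-n)/2) / sqrt(n)
sigma = 1.7
f = lambda n: (2 * np.pi * sigma**2) ** ((1 - n) / 2) / np.sqrt(n)
S_replica = -(f(1 + h) - f(1 - h)) / (2 * h)
S_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(S_replica, S_exact)

# Poisson(mu): compare the replica derivative of the moments with -sum p ln p
mu = 3.0
ks = np.arange(200)
logp = -mu + ks * log(mu) - np.array([lgamma(k + 1) for k in ks])
moments = lambda n: np.sum(np.exp(n * logp))   # sum_k p_k^n
S_replica_poisson = -(moments(1 + h) - moments(1 - h)) / (2 * h)
S_direct = -np.sum(np.exp(logp) * logp)
print(S_replica_poisson, S_direct)
```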

2. Entropy produced by amplifying a Fock state
In the main text, we consider the amplification of an arbitrary Fock state |m⟩. It is shown that the corresponding output state can be written as

ρ_m = Σ_{k=0}^∞ p_k^{(m)} |m + k⟩⟨m + k|,  p_k^{(m)} = (1 - |τ|²)^{m+1} C(m+k, k) |τ|^{2k},   (45)

so that we obtain the closed expression

tr(ρ_m^n) = ((1 - |τ|²)^{n(m+1)} / (m!)^n) Li^{(m)}_{-n}(ζ)/ζ,  ζ ≡ |τ|^{2n},   (46)

where Li^{(m)}_{-n}(ζ) is the generalization of the polylogarithm defined in the main text. The function Li_{-n}(ζ) is known as the polylogarithm of order -n. It is connected with the Eulerian polynomials in the following way,

Li_{-n}(ζ) = (1/(1 - ζ)^{n+1}) Σ_{k=0}^{n-1} A(n, k) ζ^{n-k},   (47)

where

A(n, k) = Σ_{j=0}^{k} (-1)^j C(n+1, j) (k + 1 - j)^n   (48)

are the Eulerian numbers.
The entropy S_m of the output state ρ_m is obtained by taking the derivative of Eq. (46) with respect to n at n = 1, taking into account that Li^{(m)}_{-n}(ζ) depends on n both through the order -n and through ζ = |τ|^{2n}. This can be written more explicitly as

S_m = (m+1) S_0 + ln m! - (1 - |τ|²)^{m+1} Σ_{k=0}^∞ C(m+k, k) |τ|^{2k} ln[(m+k)!/k!].   (51)

Closed expressions of S_m are hard to extract for m > 1 since the function Li^{(m)}_{-n}(ζ) cannot be written in a non-summation form for m > 1. This is not so unexpected, as it is similar to the case of a Poisson distribution, see Section 1. Note that S_m can also be calculated analytically using the standard definition of the entropy, Eq. (26), but the replica method provides an alternative way to achieve this calculation which is straightforward and remains applicable even when Eq. (26) cannot be exploited, see Section 3.
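The Eulerian-number representation of the polylogarithm quoted above is easy to verify against the defining series Li_{-n}(ζ) = Σ_{k≥1} k^n ζ^k. A short check (our own addition; the value ζ = 0.4 is arbitrary):

```python
import numpy as np
from math import comb

def eulerian(n, k):
    """Eulerian number <n, k>."""
    return sum((-1) ** j * comb(n + 1, j) * (k + 1 - j) ** n for j in range(k + 1))

def li_neg_series(n, z, kmax=3000):
    """Li_{-n}(z) = sum_{k>=1} k^n z^k (series definition, |z| < 1)."""
    k = np.arange(1, kmax, dtype=float)
    return np.sum(k ** n * z ** k)

def li_neg_closed(n, z):
    """Closed form via the Eulerian polynomial, for integer n >= 1."""
    return sum(eulerian(n, k) * z ** (n - k) for k in range(n)) / (1 - z) ** (n + 1)

z = 0.4
for n in range(1, 6):
    assert abs(li_neg_series(n, z) - li_neg_closed(n, z)) < 1e-9
print("series and Eulerian closed form agree")
```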
Incidentally, we may also verify from Eq. (51) that S_{m+1} > S_m, as follows. In Eq. (51), there are three terms. The first two, i.e., (m+1)S_0 and ln m!, are increasing functions of m. The last term is the product of -(1 - |τ|²)^{m+1}, which is an increasing function of m, and σ(m), for which it can also be easily proven that σ(m+1) > σ(m) for m ∈ N, term by term in k. Namely, for some fixed k, the difference Δ(m) of the two terms corresponding to m + 1 and m, respectively, is non-negative, as follows from Pascal's rule,

C(m+1+k, k) = C(m+k, k) + C(m+k, k-1).   (52)

Pascal's rule, and therefore Eq. (52), holds for k ≥ 1. Nevertheless, it is easy to see that for k = 0 the corresponding term is σ(m) = ln m!, which is again an increasing function of m. To summarize, we have proven that -(1 - |τ|²)^{m+1} and σ(m) form sequences of numbers that are in increasing order as m ∈ N increases. Consequently, their product is increasing for m ∈ N as well. Therefore, the last term in Eq. (51) is increasing for m ∈ N, and thus we conclude that S_{m+1} > S_m. This is in accordance with the earlier proof based on majorization theory.
3. Entropy produced by amplifying a superposition of Fock states

Repeating the derivation sketched in the main text, tr(ρ^n) takes the form of Eq. (54), where tr(ρ_0^n) corresponds to a vacuum input state, i.e., z = 0. Here, we define the matrix N as in the main text, N being a circulant matrix. The differential operator Π_∂λ(n) has the form given in Eq. (21) of the main text, where each Π_{2k}(n) contains C(n, k)² terms that give a non-zero result when acting on exp(λ†Nλ) and taking the value at λ = 0. These terms are all the derivatives of even order (2, 4, …, 2n) with respect to the λ's such that the number of λ's equals the number of λ*'s in each derivative. For example, by keeping the terms that return a non-zero result in Eq. (54), we obtain Eq. (57). It can be verified that the term with k = n gives, when acting on the exponential of Eq. (54), a contribution proportional to Li^{(m)}_{-n}(ζ), where Li^{(m)}_{-n}(ζ) is defined in the main text. In other words, this term is connected with the entropy S_m when the input state is the Fock state |m⟩, something that can be seen by taking the limit z → ∞ in Eq. (54). Also, it is not difficult to see that for each operator with k = 0, 1, …, n in Eq. (57), there are exactly C(n, k) terms in which the derivatives appear with respect to conjugate pairs of λ's. An example of such a term is ∂⁶/∂λ_1∂λ_1*∂λ_2∂λ_2*∂λ_3∂λ_3*. These terms give a result with no dependence on |τ| in the numerator when they act on the exponential of Eq. (54). If we extract all these terms and gather them together, substituting c with its definition, we can write Eq. (60). From all this, we obtain the expression of Eq. (61), where ρ_m is the output state resulting from an input state |m⟩. In Eq. (61), we subtracted the term proportional to z^{2n} because it has been counted twice: once in the first term of the form […]^n and a second time as the very last term. In the same equation, F^{(m)}(n) gathers all terms of Eq. (54) except for the first and the last one, but without the terms having no dependence on |τ| in the numerator, which we denote as Π̃_{2k}(n) in the expression of Eq. (62).

Calculation of F^{(m)}(n) and its derivative

In the main text, it is shown that by taking the derivative of Eq.
(61) with respect to n and keeping the value at n = 1, we get Eq. (63). We shall now prove that the last term on the right-hand side of Eq. (63) is equal to zero. It can be found that Eq. (62) assumes a form whose contribution to the derivative at n = 1 vanishes, as advertised.

FIG. 1: The von Neumann entropy S(z), as a function of the superposition parameter z, is pictured by a point belonging to a one-dimensional convex polytope. The two extremal points of the polytope are the entropies S_0 and S_m corresponding to the two extreme cases, i.e., the input states |0⟩ and |m⟩.

FIG. 2: Plot of the von Neumann entropy S(z) as a function of the superposition parameter z for m = 1 and several values of the squeezing parameter ξ. Since S_1 > S_0, Eq. (25) implies that the curve S(z) is always above S_0.