Generalized concurrence in boson sampling

A fundamental question in linear optical quantum computing is to understand the origin of quantum supremacy in the physical system. It is known that multimode linear optical transition amplitudes are given by the permanents of transition operator matrices, whose computation is a hard problem for classical simulation (the boson sampling problem). We can approach this problem by seeking a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named the “Fock state concurrence sum” CS, which is the summation over all the members of the “generalized Fock state concurrence” (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime of all known algorithms directly depends on CS. Therefore, we can state that the Fock state concurrence sum CS behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents will provide a unified viewpoint for interpreting the quantum computing power of linear optics.


Introduction
Computing the matrix permanent has recently drawn attention because this classically hard computational problem is believed to be accessible to quantum devices such as linear optical networks 1, superconducting circuits 2, and trapped ions 3. The corresponding computational complexity is classified as #P-hard 4, and a quantum device would be able to sample a photon distribution of a given unitary photonic network U, which is a function of the permanents of submatrices of U. This is so-called boson sampling 1. So far, only a few classical algorithms are known for matrix permanents, especially for arbitrary matrices with repeated rows and columns.
I recall here the permanent of an M-dimensional square matrix A (Per(A)) with entries A_ij. Per(A) is defined as

Per(A) = ∑_{σ∈S} ∏_{i=1}^{M} A_{i,σ(i)},   (1)

where the set S includes all permutations {σ} of (1, 2, . . ., M). Bold fonts are used, in this paper, for M-dimensional vectors.
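As a concrete reference point, the defining sum in Eq. 1 can be transcribed directly into code. The Python sketch below is the editor's illustration (the paper's own code is MATLAB, given in Methods; the function name is hypothetical); it enumerates all M! permutations:

```python
from itertools import permutations
from math import prod

def per_bruteforce(A):
    """Permanent via the defining sum (Eq. 1): M! permutations,
    each contributing a product of M matrix entries."""
    M = len(A)
    return sum(prod(A[i][s[i]] for i in range(M))
               for s in permutations(range(M)))
```

For a 2 × 2 matrix this reduces to A[0][0]·A[1][1] + A[0][1]·A[1][0], i.e. the determinant with all signs positive.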
The brute force computation of a matrix permanent in Eq. 1 requires M! terms in the summation, and each term is a product of M matrix elements. There are smarter strategies for evaluating matrix permanents: for example, Ryser's algorithm 5 performs the calculation in O(2^M M^2) arithmetic operations and can be optimized further with Gray code, resulting in O(2^M M) arithmetic operations. Glynn 6,7 derived a different formula with a similar computational cost to Ryser's. Although Jerrum et al. suggested a polynomial-time approximation algorithm for the permanents of matrices with non-negative elements 8, algorithms for arbitrary matrices more efficient than Ryser's and Glynn's have not yet been developed. There have also been efforts to develop randomized algorithms for permanents. Gurvits used Glynn's formula to design a randomized algorithm 9, and Aaronson and Hance 10 generalized Gurvits's sampling algorithm to matrices with either repeated columns or repeated rows. Herein, I develop a generalized algorithm for matrices with both repeated rows and repeated columns, in both deterministic and randomized forms, by exploiting a series expansion of a product of variables in terms of collective variables 11, i.e. linear combinations of the variables (see Methods). Kan 11 used this series expansion to develop an algorithm for multivariate normal moments in which the number of terms to be evaluated is drastically reduced. My algorithm is a generalized version of Glynn's in the sense that it reduces to Glynn's original form 6,7 in a particular case. In addition to the series expansion, I use the boson operator algebra to compute the matrix elements of the generalized rotation operator and equate them to matrix permanents 1.
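For comparison, Ryser's inclusion-exclusion formula mentioned above can be sketched as follows. This is a plain O(2^M M^2) Python version without the Gray-code optimization, written by the editor for illustration (the function name is hypothetical):

```python
from itertools import combinations
from math import prod

def per_ryser(A):
    """Ryser's inclusion-exclusion formula:
    Per(A) = (-1)^M * sum over column subsets S of
             (-1)^{|S|} * prod_i sum_{j in S} A[i][j]."""
    M = len(A)
    total = 0
    for r in range(1, M + 1):            # subset sizes 1..M
        for S in combinations(range(M), r):
            total += (-1) ** r * prod(sum(A[i][j] for j in S)
                                      for i in range(M))
    return (-1) ** M * total
```

The Gray-code refinement updates the row sums incrementally between consecutive subsets instead of recomputing them, saving a factor of M.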

Results
In the linear optical network characterized by the unitary matrix U, the transition amplitude between the two Fock states |n_1, n_2, ..., n_M⟩ = |n⟩ and |m_1, m_2, ..., m_M⟩ = |m⟩ is expressed as a matrix permanent, i.e.

⟨m| R(lnU^t) |n⟩ = Per([U]_{n,m}) / √(n! m!),   (2)

where n! ≡ ∏_k n_k! (and likewise m!), and [U]_{n,m} is an N × N submatrix of U constructed by repeating the k-th column of U m_k times and copying the l-th row of the resulting matrix n_l times 1. N is the total number of photons, N = ∑_{k}^{M} n_k = ∑_{k}^{M} m_k, and M is the number of optical modes. The generalized rotation operator R is defined with an M × M arbitrary square matrix together with the boson annihilation (â = (â_1, . . ., â_M)^t) and creation (â† = (â†_1, . . ., â†_M)^t) operators, where [â_i, â†_j] = δ_{i,j}. The action of the rotation operator (R(Φ) = exp((â†)^t Φ â), with R(Φ)R(−Φ) = 1) on the creation operator â†_k reads as follows 12,

R(Φ) â†_k R(−Φ) = ∑_{l=1}^{M} [e^{Φ^t}]_{k,l} â†_l.   (3)

In accordance with the linear optical network, the matrix elements of the generalized rotation operator R(lnA^t) are related to the matrix permanent of an arbitrary square complex matrix A as well, such that

Per([A]_{n,m}) = √(n! m!) ⟨m| R(lnA^t) |n⟩.   (4)

To derive the generalized computational algorithm for the matrix permanent of an arbitrary matrix allowing repeated rows and columns, which can reduce the complexity of the problem, I expand the Fock state |n⟩ in terms of collective boson creation operators with the help of a series expansion introduced by Kan 11 (see also Methods), i.e.

|n⟩ = (1/√(n!)) ∏_{k=1}^{M} (â†_k)^{n_k} |0⟩ = (1/(2^N N! √(n!))) ∑_{v_1=0}^{n_1} · · · ∑_{v_M=0}^{n_M} (−1)^{v_1+···+v_M} [∏_{k=1}^{M} C(n_k, v_k)] ((n − 2v)^t â†)^N |0⟩,   (5)

where C(n_k, v_k) is a binomial coefficient. The Fock state is thus expanded with the collective creation operator (n − 2v)^t â†. After applying the generalized rotation operator R(lnA^t) to Eq. 5 and projecting onto the final Fock state |m⟩, as given in Eq. 4, the matrix permanent is expressed in the following series form,

Per([A]_{n,m}) = (√(m!/n!)/(2^N N!)) ∑_{v} (−1)^{∑_k v_k} [∏_{k} C(n_k, v_k)] ⟨m| ((n − 2v)^t A â†)^N |0⟩ √(n!).   (6)

The matrix elements in Eq. 6 are further simplified with ⟨0| ∏_s â_s^{m_s} (c^t â†)^N |0⟩ = N! ∏_l c_l^{m_l} (valid for ∑_l m_l = N), which yields

Per([A]_{n,m}) = 2^{−N} ∑_{v_1=0}^{n_1} · · · ∑_{v_M=0}^{n_M} (−1)^{v_1+···+v_M} [∏_{k=1}^{M} C(n_k, v_k)] ∏_{l=1}^{M} ([(n − 2v)^t A]_l)^{m_l},   (7)

where [•]_l is the l-th element of the vector. This is my key finding in this paper. The expression can be tested with the MATLAB 13 code provided in Methods. By exploiting the symmetry of the expression 11 (the summand is invariant under v → n − v), the number of terms can be reduced to almost half of that in Eq. 7. When at least one of {n_k} is an odd number, choosing without loss of generality n_1 to be odd, Eq. 7 is simplified as

Per([A]_{n,m}) = 2^{1−N} ∑_{v_1=0}^{(n_1−1)/2} ∑_{v_2=0}^{n_2} · · · ∑_{v_M=0}^{n_M} (−1)^{v_1+···+v_M} [∏_{k=1}^{M} C(n_k, v_k)] ∏_{l=1}^{M} ([(n − 2v)^t A]_l)^{m_l}.   (8)

Even if all {n_k} are even numbers, one can still reduce the number of terms in the summation by separating the first summation into v_1 running from 0 to n_1 − 1 and the single term v_1 = n_1, which makes the upper limit n_1 − 1 an odd number so that the same trick as in Eq. 8 applies. Eq. 8 requires (1/2)∏_k (n_k + 1) terms, and the latter case requires (∏_k (n_k + 1) − 1)/2 terms, respectively.
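The series formula of Eq. 7 translates directly into code. The following Python sketch is the editor's transcription (distinct from the MATLAB code in Methods; names are hypothetical): it builds the collective vector (n − 2v)^t A for every v with 0 ≤ v_k ≤ n_k, so the number of summands is ∏_k (n_k + 1) rather than N! or 2^N.

```python
from itertools import product as cartesian
from math import comb, prod

def per_repeated(A, n, m):
    """Permanent of [A]_{n,m} (row k of A repeated n[k] times, column l
    repeated m[l] times) via the series of Eq. 7:
    2^{-N} * sum_v (-1)^{sum_k v_k} * prod_k C(n_k, v_k)
           * prod_l ([(n - 2v)^t A]_l)^{m_l}."""
    M = len(A)
    N = sum(n)
    assert N == sum(m), "photon number must be conserved"
    total = 0
    for v in cartesian(*(range(nk + 1) for nk in n)):
        # collective row vector c = (n - 2v)^t A
        c = [sum((n[k] - 2 * v[k]) * A[k][l] for k in range(M))
             for l in range(M)]
        total += ((-1) ** sum(v)
                  * prod(comb(n[k], v[k]) for k in range(M))
                  * prod(c[l] ** m[l] for l in range(M)))
    return total / 2 ** N
```

For n = m = (1, 1) this reproduces the ordinary permanent; for n = (2, 0), m = (1, 1) it reproduces the permanent of the matrix with the first row doubled, using only (2 + 1)(0 + 1) = 3 summands.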

According to Eq. 8, the special instance Per(A), where n = m = (1, . . ., 1)^t and N = M, requires 2^{N−1} terms. In this case each element of the vector (n − 2v)^t is either +1 or −1, and all binomial coefficients become 1, so that

Per(A) = 2^{−N} ∑_{v∈{0,1}^N} (−1)^{v_1+···+v_N} ∏_{l=1}^{N} [(n − 2v)^t A]_l,   (9)

which can be expressed equivalently as

Per(A) = 2^{1−N} ∑_{h} (∏_{k} h_k) ∏_{l=1}^{M} (∑_{k=1}^{M} h_k A_{k,l}),   (10)

where the sum is over all M-dimensional vectors h with entries +1 or −1 while h_1 = 1. I have thus shown that my generalized algorithm in Eq. 7 reduces to Glynn's formula 6,7 in Eq. 10 under this special condition.
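Glynn's special case in Eq. 10 can likewise be sketched in a few lines of Python (fixing h_1 = +1 and averaging over the remaining sign vectors; this is the editor's illustration, and the function name is hypothetical):

```python
from itertools import product as cartesian
from math import prod

def per_glynn(A):
    """Glynn's formula (Eq. 10): Per(A) = 2^{1-M} * sum over sign
    vectors h with h_1 = +1 of (prod_k h_k) * prod_l (sum_k h_k A[k][l])."""
    M = len(A)
    total = 0
    for tail in cartesian((1, -1), repeat=M - 1):
        h = (1,) + tail                  # fix h_1 = +1, halving the sum
        total += prod(h) * prod(sum(h[k] * A[k][l] for k in range(M))
                                for l in range(M))
    return total / 2 ** (M - 1)
```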
As pointed out by Gurvits 9,10, the Glynn form 6,7 can be interpreted as an ensemble average over the random vector h due to the prefactor 2^{−(N−1)}, which corresponds to the number of terms in the summation in Eq. 10. Likewise, one can interpret Eq. 7 as a randomized algorithm in which each v_k is randomly generated among (0, . . ., n_k) with probability

p(v) = 2^{−N} ∏_{k=1}^{M} C(n_k, v_k),   (11)

where ∑_{v} ∏_{k} C(n_k, v_k) = 2^N is used. Accordingly, Eq. 7 is rewritten as

Per([A]_{n,m}) = ∑_{v} p(v) G(v),   (12)

where

G(v) = (−1)^{v_1+···+v_M} ∏_{l=1}^{M} ([(n − 2v)^t A]_l)^{m_l}.   (13)

For each random instance of v drawn with probability p(v), G(v) is evaluated, and the matrix permanent is approximated as an average, i.e.

Per([A]_{n,m}) ≈ (1/N_Sample) ∑_{i=1}^{N_Sample} G(v^(i)),   (14)

where N_Sample is the number of samples.
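The randomized estimator can be sketched as follows: drawing each v_k from a binomial distribution with n_k trials and success probability 1/2 reproduces the probability p(v) above, and G(v) is then averaged over samples. This is a Python sketch by the editor under those assumptions; the function names and the seed argument are hypothetical.

```python
import random
from math import prod

def sample_G(A, n, m, rng):
    """One draw of G(v): each v_k ~ Binomial(n_k, 1/2), i.e. probability
    C(n_k, v_k)/2^{n_k}, so that E[G(v)] = Per([A]_{n,m})."""
    M = len(A)
    v = [sum(rng.random() < 0.5 for _ in range(nk)) for nk in n]
    c = [sum((n[k] - 2 * v[k]) * A[k][l] for k in range(M))
         for l in range(M)]
    return (-1) ** sum(v) * prod(c[l] ** m[l] for l in range(M))

def per_estimate(A, n, m, n_sample, seed=0):
    """Monte Carlo average of G(v): approximates Per([A]_{n,m})."""
    rng = random.Random(seed)
    return sum(sample_G(A, n, m, rng) for _ in range(n_sample)) / n_sample
```

The statistical error decreases as 1/√N_Sample, with a prefactor set by the spread of G(v); bounding that spread is exactly the open error-bound question raised in the Discussion.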

Discussion
In this paper, I have presented a newly developed generalized algorithm for the matrix permanent. The algorithm is derived via a series expansion of products of variables, in which products of boson operators are expressed in terms of collective boson operators. The method is applicable both to the direct deterministic calculation of matrix permanents and to randomized approximation. I note that my algorithm can efficiently treat multiply repeated rows and columns of the submatrices of arbitrary complex matrices. The multiplicity can reduce the computational cost: for example, if M = 10 and N = 10, then computing the corresponding matrix permanents requires 2^10 terms and 11 terms in the summation for n = m = (1, . . ., 1)^t and n = m = (10, 0, . . ., 0)^t, respectively. I also note that the computational cost mainly depends on the structure of the input integer vector n, according to Eq. 7. As this example illustrates, when the entropy of n/N increases (towards an even distribution over the modes), the computation becomes more difficult. The same argument can be made for the output mode vector m, due to the symmetry when the input and output modes are swapped via complex conjugation. Therefore, the computational cost depends on the minimum-entropy mode vector. The entropy of the integer vector is defined here as S(n) = −∑_{k}^{M} P_k lnP_k, with P_k = n_k/N. Besides, the error bound of the randomized estimator G(v), if available, could be useful for answering the open question in quantum computing 10 of whether linear optics can be used for the decision problem.
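The term-count and entropy comparison in the example above can be checked in a few lines (a Python sketch by the editor; names are hypothetical):

```python
from math import prod, log

def n_terms(n):
    """Number of summands in the series for Per([A]_{n,m}): prod_k (n_k + 1)."""
    return prod(nk + 1 for nk in n)

def entropy(n):
    """S(n) = -sum_k P_k ln P_k with P_k = n_k / N (zero counts skipped)."""
    N = sum(n)
    return -sum((nk / N) * log(nk / N) for nk in n if nk > 0)

# The two M = 10, N = 10 examples from the text:
bunched = [10] + [0] * 9   # all photons in one mode -> 11 terms, S = 0
spread = [1] * 10          # one photon per mode -> 2^10 terms, S = ln 10
```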