Disturbance-Disturbance uncertainty relation: The statistical distinguishability of quantum states determines disturbance

The Heisenberg uncertainty principle, which underlies many key quantum features, is under close scrutiny regarding its applicability to new scenarios. Using both the Bell-Kochen-Specker theorem, which establishes that observables do not have predetermined values before measurements, and the measurement postulate of quantum mechanics, we propose that, in order to describe the disturbance produced by the measurement process, it is convenient to define disturbance by the changes produced on quantum states. Hence, we propose to quantify disturbance in terms of the square root of the Jensen-Shannon entropy distance between the probability distributions before and after the measurement process. Additionally, disturbance and the statistical distinguishability of states are fundamental concepts of quantum mechanics that have thus far been unrelated; here we show that they are intermingled, and we enquire into whether the statistical distinguishability of states, determined by the statistical fluctuations in the measurement outcomes, is responsible for the magnitude of the disturbance.

The Heisenberg uncertainty principle (HUP) is related in a complex way to other fundamental quantum phenomena and difficult concepts of quantum mechanics. For example, it is closely related to quantum measurement and state preparation 1 . It is also related to the stability of matter 2 , complementarity 3,4 , and entanglement [5][6][7][8][9][10] , and, recently, it was shown that the strength of the uncertainty principle (together with the strength of steering) underlies quantum non-locality 11 . Currently, as far as we are aware, there are at least three generic types of uncertainty principles 1 , where each one has its own uncertainty relation (M.J.W. Hall acknowledges four generic types of uncertainty principles 3 ).
According to the convention adopted in this paper, we should stress that we use the term uncertainty relation to mean the mathematical expression of the uncertainty principle, as done for example by Uffink and Hilgevoord 12 . This convention takes into account the distinction between preparation and measurement [13][14][15] . For example, using this convention it could be interpreted that Busch et al. 1 list three types of HUP: A) It is impossible to prepare states in which two non-commuting observables are simultaneously arbitrarily well localized, B) It is impossible to measure simultaneously two non-commuting observables, and C) It is impossible to measure one observable without disturbing a non-commuting observable. Thus, regarding this convention, the HUP given in A) refers to the preparation of states, B) refers to simultaneous measurement, and C) to the disturbance caused by the measurement process. In this sense, A) and B) are bound up with the different notions of preparation and measurement, respectively. Each one of these types has its own uncertainty relation, for example the Robertson relation σ_A σ_B ≥ |⟨[Â, B̂]⟩|/2 for preparation, and Ozawa's relation ε(Â)η(B̂) + ε(Â)σ_B + σ_A η(B̂) ≥ |⟨[Â, B̂]⟩|/2 16 for noise and disturbance, respectively. Additionally, there exist entropic uncertainty relations (EURs), whose initial purpose was to overcome the state dependence of the uncertainty relations [17][18][19] . For instance, Deutsch's EUR 19 is linked with the preparation of states and does not refer to the disturbance process; on the other hand, the information-disturbance tradeoff 20 refers to the disturbance process and the extraction of information.
It is worth mentioning that we do not claim this to be the best convention; however, it serves to differentiate between the preparation and the measurement processes. It differs from the convention adopted by D'Ariano 21 , where "uncertainty relations" are associated with measurements on an ensemble, whereas the "uncertainty principle" is associated with a sequence of measurements on the same system. In another convention, the term "uncertainty principle" often refers to the information gained and the state change induced by the measurement process, whereas the term "uncertainty relations" relates the statistics of the measured observable to the statistics of a non-commuting one.
One of the above HUP formulations refers to the disturbance process, which is of paramount importance in applications of quantum information; hence, many approaches have been developed to define disturbance. As disturbance is one of the major themes of this paper, it is convenient to review some previously defined disturbance notions. Given the vast and rather extensive bibliography on this topic, in this paper we make an arbitrary short selection of some representative works. In the noise-disturbance relation 16 the effort was focused on precisely defining both noise and disturbance and on differentiating them from the standard deviation. In this approach, Ozawa 16 initially defined disturbance in terms of what he called the disturbance operator, D(B̂) = U†(B̂ ⊗ I)U − B̂ ⊗ I, whose root-mean-square expectation value defines the disturbance η(B̂); see reference 16 for details. This was a state-dependent definition that, some years later, was redefined by Buscemi et al. 22 in terms of the conditional entropy, to get a state-independent definition focused on the loss of correlation introduced by the change in the system's dynamical variables, i.e. disturbance is defined with respect to two system observables. On the other hand, Busch et al. 23 gave a proof of an uncertainty relation for position and momentum based on what they called calibrated error; in this case the disturbance is defined as the root-mean-square deviation from a sharp value of the observable.
Also, disturbance was associated with the possibility of probabilistically undoing the measurement that causes it 21 , and a tradeoff intimately linked to the impossibility of determining the state of a single system was proposed. This led to defining the gained information as the difference between the Shannon entropies before and after the measurement, while disturbance summed up the degree to which the input state is unitarily uncorrelated with the output state; in this sense, disturbance quantifies the inability to approximately reverse a measurement, and it must only be a function of the probabilities of reversing it.
Buscemi et al. 24 acknowledged the fundamental importance for quantum mechanics and quantum information of developing a universal relation between information extraction and disturbance; to accomplish this task, they proposed genuinely quantum quantities to define both quantum information gain and quantum disturbance. Thus, as coherent information (CI) is related to the possibility of constructing a recovery operation, in order to define disturbance they generalised CI (previously used by Maccone to define disturbance 25 ), which is related to undoing the state change.
Additionally, the information-disturbance tradeoff has been extended to continuous variables. In this respect, Paris 26 analyses the information-disturbance tradeoff for continuous variables, presenting a scheme to quantify the information gained and the disturbance induced by coupling the system to a single probe system; here, disturbance is defined in terms of the transmission fidelity 26 .
Recent studies have approached the uncertainty relations from quantum estimation theory, introducing a noise-noise uncertainty relation 27 , which is of great relevance for our work; see also 28 . Following this approach, in a recent work, noise was defined in terms of the classical Fisher information and disturbance in terms of the quantum Fisher information 29 ; also in this work 29 , an information-disturbance relation based on divergences was presented, mentioning the work of Barchielli and Lupieri 30 , where the relative entropies, both classical and quantum, were initially used 30 and then extended towards an arbitrary divergence. However, as is well known, the relative entropy is not symmetric. This latter approach is quite related to the approach carried out in our work.
In the information-disturbance setting, disturbance's definition is classified in at least two ways 31 : (i) how close the initial and final states are, in terms of the average output fidelity, and (ii) how reversible (or coherent) the transformation causing the state change is. More recently, these approaches have been classified into two different types 29 : (a) an information-theoretic approach, and (b) an estimation-theoretic approach. However, in a more general setting, all the previous works can be classified according to the distinct relevant properties focused on the observables, as follows: (I) noise-disturbance uncertainty relations, e.g. in 16,22 , (II) information-disturbance uncertainty relations, e.g. in 21,24,26,29 , and (III) noise-noise uncertainty relations, e.g. in 27,28 . Here, in this work, we will pursue the idea of a new relation: (IV) the disturbance-disturbance uncertainty relation.
On the other hand, another fundamental and truly relevant concept of quantum theory is the statistical distinguishability of states, in the way it was conceived by Wootters 50 . Thus, disturbance and the statistical distinguishability of states 50 are two core concepts of quantum mechanics that, to the best of our knowledge, have so far been unrelated.
In this paper, we propose that disturbance can be characterized by the concept of statistical distinguishability of quantum states. To show this, we use the following two facts: (1) the complete set of the postulates of quantum mechanics (especially the measurement postulate), and (2) the underlying principle 32,51,52 that claims that observables do not possess pre-existing values before measurements (the effort of Einstein to circumvent the uncertainty principle and nonlocality 5 , the Bell-Kochen-Specker theorem to test it 32,[51][52][53][54] , and the experimental works of Aspect et al. 55 lay the theoretical and experimental foundation for this principle), to capture the essence of the disturbance produced on quantum states while measuring an observable. This leads us to pursue an uncertainty relation that captures the relation given in (IV), using the following idea: it is impossible to measure an observable without disturbing simultaneously its probability distribution and the probability distribution of a non-commuting observable. The previous reasons lead us to propose a definition of disturbance based on the distance between two probability distributions (which we also call statistical distributions); the distance will be measured by the square root of the Jensen-Shannon entropy 56 . This definition allows us to uncover a disturbance-disturbance uncertainty relation. Also, this definition could be used in the noise-disturbance uncertainty relation 16,22,23,32,40,41,57 , by adapting our definition of disturbance to define the noise process. On the other hand, our approach could also be generalised to the form of root-mean-square deviation 57 uncertainty relations. Additionally, our definition could be useful with regard to the information-gain-disturbance uncertainty relation 20,21,[24][25][26]35,37,45,47,48 .
Our approach could also be generalised to include more than two observables [58][59][60] ; likewise, as the Jensen-Shannon entropy was generalised to continuous variables 56 , it could also be generalised to the case of continuous variables 26 . However, all this requires further studies and calculations.
The postulates of quantum mechanics are indispensable to clearly understand our treatment, since they establish one of the two facts on which our approach is based. For a modern statement of the postulates of quantum mechanics, see the papers by Paris 61 and Bergou 62 . Here we are going to focus mainly on the measurement postulate only 63 : Measurement postulate (MP): In the measurement process the wave function suffers an abrupt change towards the eigenfunction associated with the determined eigenvalue. That is, if the eigenvalue a_k is obtained when measuring the observable Â, then the wave function collapses as |ψ_i⟩ → |a_k⟩, where |ψ_i⟩ is the state immediately before the measurement 63 .
Additionally, it is worth mentioning the following corollary, which is implied by the quantum measurement postulate: Corollary 1: The measurement postulate allows measurements without collapsing the wave function, since if |ψ_i⟩ = |a_k⟩, then there is no collapse when measuring Â. Instead, immediately after the measurement, the wave function remains in the same initial state |ψ_i⟩ = |a_k⟩ 63 . Then, as this implies that the statistical distribution does not change, we exclude this case in this work.
It is important to mention that, to obtain the usual textbook uncertainty relation, it is necessary to use just a few postulates of quantum mechanics (in particular excluding the MP) and the Schwarz inequality; however, that relation refers to the preparation of states and not to the measurement process 64,65 , because the MP is not used to deduce it. In fact, the deduction of many uncertainty relations does not use the MP. Also, the entropic uncertainty relations [17][18][19] , in terms of the Shannon entropy, can be obtained without using the MP. Consequently, many of the entropic uncertainty relations are also related to the preparation of states only, but not to measurement. As our proposal defines disturbance in terms of an entropy, it will be valid to use it in the description of the tradeoff between noise and disturbance, taking the noise as the statistical distance between the expected distribution and the experimental distribution 57 .
To illustrate our approach better, we use the following example: suppose that you have an initial wave function ψ_i(x); then you measure the observable x̂ and obtain the eigenvalue x_s. Due to the MP, after the measurement the wave function collapses towards the eigenfunction associated with the obtained eigenvalue, i.e. ψ_i(x) → ψ_f(x_s). Consequently, the complementary observable p̂ has evolved from having no predetermined value in the state ψ_i(x) to "getting", again, no predetermined value in the final state ψ_f(x_s). Then, the following questions arise: What was disturbed? Was the disturbance of p̂ from having no predetermined value in ψ_i(x) to getting no predetermined value in ψ_f(x_s)? How can we measure the disturbance between p_i and p_f when neither of them possesses a predetermined value? What is p̂_i, and what is p̂_f? These kinds of questions suggest that it could be useful to consider that the disturbance is on the wave function, as the MP implies. Accordingly, it would be interesting to pursue this approach and associate disturbance with a metric distance between ψ_i(x) and ψ_f(x_s). This goal is what we carry out in this paper. Then, in the subsequent sections, we suppose that disturbance occurs on the system's state; this supposition is based on the following two reasons: i) observables do not have a pre-existing value, and ii) the MP establishes a change on the system's state.
Additionally, notice that although there are three equivalent quantum mechanical pictures, i.e. the Schrödinger, Heisenberg, and Interaction pictures, the postulates are usually stated in the Schrödinger picture only. To the best of our knowledge, there is no statement of the MP in the Heisenberg or Interaction pictures; that is to say, we do not know the equivalent of the MP in the Heisenberg picture.
Our proposal. -The thought experiment we are considering is the following: there is a quantum system prepared in a quantum state, its properties are represented by self-adjoint operators 66 . Then, we carry out a single projective measurement of one property, e.g. Â . Hence, we disturb the state of the system, and due to this single measurement the state of the system collapses towards an eigenstate of the observable Â , as is prescribed by the MP. Therefore, because this disturbance is on the state of the system, there is a new probability distribution associated with B.

Results
In order to capture the disturbance caused by the measurement process, we will compare the distance between the statistical distribution of the observable Â before the single measurement and the statistical distribution of the same observable after that single measurement. Also, we compare the distance between the probability distributions of observable B̂ before and after the measurement of observable Â. Then, we define the disturbance caused by the process of measurement as the distance between the probability distribution before and the probability distribution after the measurement process. Consequently, one of the main goals of this paper is to show how the sum of these distances has an irreducible lower bound. The idea of quantifying disturbance as the distance between probability distributions is not new; it appears clearly stated by Werner 67 as the distance between probability measures given by the largest difference of expectation values, and it was already stated by Busch 68 , see also 23 . Because it obeys the triangle inequality, a good measure of the distance between two discrete probability distributions is the square root of the symmetric Jensen-Shannon entropy 56 :

JS(P, Q) = (1/2) Σ_j p_j log[2p_j/(p_j + q_j)] + (1/2) Σ_j q_j log[2q_j/(p_j + q_j)], (1)

where p_j and q_j are two probability distributions. In this case, we associate p_j with the initial probability distribution, i.e. before the measurement, and q_j with the final probability distribution, i.e. after the measurement. Then, we get the Jensen-Shannon entropy in terms of the eigenstates of the observables. That is to say, for the observable B̂ we have the association p_j = |⟨b_j|ψ⟩|² and q_j = |⟨b_j|a_s⟩|², where |ψ⟩ is the initial state immediately before the measurement and |a_s⟩ is the state after the measurement of observable Â, associated with the eigenvalue a_s obtained in the measurement process. This association refers to the possibility that the resultant metric space can be embedded in a real Hilbert space [69][70][71] .
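As a concrete illustration, the metric character of the square root of the Jensen-Shannon entropy can be checked numerically. The sketch below uses natural logarithms; the distributions P, Q, R are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (natural logarithms) between two
    discrete probability distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0              # 0*log(0) is taken as 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_distance(p, q):
    """Square root of the JS divergence -- symmetric and obeys
    the triangle inequality, hence a true metric."""
    return np.sqrt(js_divergence(p, q))

# Three arbitrary distributions to illustrate the metric axioms.
P = [0.7, 0.2, 0.1]
Q = [0.1, 0.6, 0.3]
R = [0.3, 0.3, 0.4]

print(js_distance(P, Q))   # symmetric and positive
```

A quick check confirms js_distance(P, Q) == js_distance(Q, P) and js_distance(P, Q) ≤ js_distance(P, R) + js_distance(R, Q).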
In order to compare the probability distribution of observable B̂ before the measurement versus the probability distribution of the same observable after that measurement (of Â), we use the Jensen-Shannon entropy. Consequently, we take D_B as the disturbance in the statistical distribution of B̂ because of the measurement of Â, and we find it as:

D_B = √JS(P, Q), with p_j = |⟨b_j|ψ⟩|² and q_j = |⟨b_j|a_s⟩|², (2)

where |⟨b_j|ψ⟩|² is the probability distribution of B̂ before the measurement, and |⟨b_j|a_s⟩|² is the probability distribution of B̂ given that the state after the measurement is an eigenvector of Â, i.e. |a_s⟩. A similar approach is used to define the disturbance of Â, i.e. D_A, as:

D_A = √JS(P, Q), with p_j = |⟨a_j|ψ⟩|² and q_j = |⟨a_j|a_s⟩|² = δ_js; (3)

notice that ⟨a_j|a_s⟩ = δ_js. It is important to emphasise that our framework only applies to projective measurements.
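The definitions of D_A and D_B can be sketched for the simplest case of a spin-1/2 particle, taking Â = S_z and B̂ = S_x; the initial state below is a hypothetical choice for illustration only:

```python
import numpy as np

def js_distance(p, q):
    """Square root of the Jensen-Shannon divergence (natural logs)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Eigenbases: A = S_z with {|z1>, |z2>}, B = S_x with {|x1>, |x2>}.
z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

# Hypothetical initial state, and the post-measurement eigenstate
# |a_s> = |z1> obtained when measuring A.
psi = np.array([np.cos(0.3), np.sin(0.3)])
a_s = z[0]

def probs(basis, state):
    return [abs(b @ state) ** 2 for b in basis]

# D_A: distance between A's distributions before/after measuring A.
D_A = js_distance(probs(z, psi), probs(z, a_s))   # after: (1, 0)
# D_B: distance between B's distributions before/after measuring A.
D_B = js_distance(probs(x, psi), probs(x, a_s))

print(D_A, D_B)
```

Both disturbances are strictly positive here; they vanish only when the initial state is already an eigenstate of the measured observable, the case excluded by Corollary 1.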
In order to find out if there is a minimum, different from zero, of the sum D_A + D_B, we take into account how the distance between the probability distributions, represented by P and Q, behaves when one of the probability distributions tends to the other, i.e. we need to consider the following limit (taking natural logarithms):

lim_{q_j → p_j} JS(P, Q) (4)
= (1/8) Σ_j (p_j − q_j)²/p_j. (5)

This is, up to a constant factor, the χ²-distance between p_j and q_j. However, that distance is not symmetric, and we can express it in terms of the probability distribution of B̂. Due to this asymmetry, the two χ² distances that we can get are the following:

χ²_B = Σ_j (|⟨b_j|ψ⟩|² − |⟨b_j|a_s⟩|²)² / |⟨b_j|ψ⟩|², (6)
χ̃²_B = Σ_j (|⟨b_j|ψ⟩|² − |⟨b_j|a_s⟩|²)² / |⟨b_j|a_s⟩|². (7)

At this point, we are going to find out the value of the χ²-distance given by equations (6) and (7) based on the statistical distinguishability criterion defined by Wootters 50 , consequently proving that it is different from zero. We recall that we are considering the case where the initial state is different from an eigenstate of Â, i.e. |ψ⟩ ≠ |a_j⟩. To find out the value of the χ²-distance in terms of the statistical distinguishability criterion, we make the following consideration: the disturbance should be minimal if the initial state |ψ_N⟩ immediately before the measurement is only slightly different from the final state after the measurement, i.e. if the initial state is the nearest distinguishable neighbour of |a_s⟩ 50,72 . Physically, this means that the measurement process projects the state |ψ_N⟩ = |a_s⟩ + |δa_s⟩ onto the nearest distinguishable neighbour state |a_s⟩, i.e. they are the nearest neighbour statistically distinguishable states 50 . In other words, the disturbance is minimum when the probability distributions (before and after the measurement) of the possible outcomes are the closest statistically distinguishable distributions. Wootters defines the statistical distance to distinguish between preparations of quantum states 50 .
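The small-difference limit of the Jensen-Shannon entropy can be verified numerically: for two close distributions, JS approaches one eighth of the χ²-distance. This assumes natural logarithms; with a different base of the logarithm the constant changes accordingly:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence with natural logarithms."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

eps = 1e-4
p = np.array([0.5, 0.3, 0.2])
delta = np.array([1.0, -0.4, -0.6]) * eps   # perturbation summing to zero
q = p + delta                               # still a valid distribution

chi2 = np.sum((p - q) ** 2 / p)             # chi-squared distance
ratio = js_divergence(p, q) / (chi2 / 8)

print(ratio)   # tends to 1 as eps -> 0
```

Shrinking eps drives the ratio ever closer to 1, which is the content of the limit above.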
Here, we are using his distinguishability criterion to define the disturbance as the number of statistically distinguishable states between the initial state before measurement and the collapsed state after measurement (we take the minimum disturbance as the distance between the nearest neighbour statistically distinguishable states); that is, by taking disturbance as the distance between distinguishable probability distributions. Wootters proves that the statistical distance for preparation of states, which is determined by statistical fluctuations, is equivalent to the distance between pure states, i.e. the angle between rays. This distinguishability criterion determines that two probability distributions are distinguishable in n trials if the following condition is fulfilled:

Σ_i (δp_i)²/p_i ≥ 1/n; (8)

this, in turn, establishes a distance given by 69 :

d(P, P + δP) = √[ (1/8) Σ_i (δp_i)²/p_i ], (9)

where δp_i refers to the difference between the two probability distributions being considered. The authors of reference 70 completed the proof that the Jensen-Shannon entropy fulfills the requirements of a metric distance, and they called it the transmission metric, because it is associated with the rate of transmission of a discrete memoryless channel. Notice that equation (9) (which sets the Wootters distinguishability criterion) is equal to the square root of equation (5); the latter comes from taking the limit p_j → q_j of the Jensen-Shannon entropy, i.e. between the nearest probability distributions. This fact allows us to measure the amount of disturbance by using the distance generated by the Wootters statistical distinguishability of quantum states, i.e. by counting the number of distinguishable states between the states before and after the measurement process. This distinguishability should be defined by the statistical result of measurements that resolve the nearest neighbour states. In fact, Majtey et al. established a distinguishability criterion after n trials based on the Jensen-Shannon entropy 69 ; however, this criterion is equal to the Wootters criterion given in equation (8) for sufficiently close probability distributions, see section 3 of reference 69 for details.
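Wootters' identification of the statistical distance between distributions with the angle between rays can be illustrated numerically for a pair of real qubit states: for every projective measurement, the Bhattacharyya angle between the outcome distributions stays below the Hilbert-space angle, and an optimal measurement saturates it. The states and bases below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def stat_distance(p, q):
    """Bhattacharyya angle between two discrete distributions --
    Wootters' statistical distance."""
    return np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0))

# Two nearby real pure qubit states (angles 0.40 and 0.55 rad).
psi1 = np.array([np.cos(0.40), np.sin(0.40)])
psi2 = np.array([np.cos(0.55), np.sin(0.55)])
hilbert_angle = np.arccos(abs(psi1 @ psi2))   # angle between rays = 0.15

# Scan many projective measurement bases: the statistical distance of
# the outcome distributions never exceeds the angle between the rays.
dists = []
for theta in rng.uniform(0, np.pi, 200):
    b1 = np.array([np.cos(theta), np.sin(theta)])
    b2 = np.array([-np.sin(theta), np.cos(theta)])
    p = np.array([(b1 @ psi1) ** 2, (b2 @ psi1) ** 2])
    q = np.array([(b1 @ psi2) ** 2, (b2 @ psi2) ** 2])
    dists.append(stat_distance(p, q))

print(hilbert_angle, max(dists))   # the maximum saturates the angle
```

The maximization over measurements is exactly what promotes the statistical distance of distributions to a distance between quantum states.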
Therefore, to find out the value of χ² based on the statistical distinguishability criterion of Wootters, we take the state immediately before the measurement as |ψ_N⟩ = |a_s⟩ + |δa_s⟩, normalized. So, we can write equations (6) and (7), respectively, as

χ²_B = Σ_j (|⟨b_j|ψ_N⟩|² − |⟨b_j|a_s⟩|²)² / |⟨b_j|ψ_N⟩|², (10)
χ̃²_B = Σ_j (|⟨b_j|ψ_N⟩|² − |⟨b_j|a_s⟩|²)² / |⟨b_j|a_s⟩|²; (11)

i.e. the norm ‖|δa_s⟩‖² is the distance between the states |ψ_N⟩ and |a_s⟩ when they are the nearest neighbours, and in some sense this distance represents a unit (one statistically distinguishable step) to measure distances between states. Thus, we get the following relations for the distinguishability distance 50,72 between two different probability distributions, caused by the disturbance of the measurement process:

D_B ≥ √(χ²_{B,min}/8), (13)
D_B ≥ √(χ̃²_{B,min}/8), (14)

where D_B is expressed in equation (2); equations (13) and (14) exist because the distance χ² is not symmetric. For practical purposes we can take the minimum of χ²_{B,min} and χ̃²_{B,min}, and we take into consideration only one of these relations. The minimum refers to the quantification of a unit of distance, employing the nearest distinguishable neighbour probability distributions. Notice that, experimentally, the minimum must be chosen in such a way that equations (13) and (14) arise from a calibration process of the measurement apparatus, to find out the nearest statistically distinguishable states of the measured observable Â.
Hence, we can write down our first result as the following equation, which we name the Jensen-Shannon entropy relation for disturbance:

D_B ≥ χ_{B,min} ≡ √(χ²_{B,min}/8). (15)

As a final step to complete our study, we calculate the distance between the probability distributions of observable Â before and after the measurement process. We recall that in our thought experiment we measure the observable Â; then the state of the system collapses towards an eigenstate of the same observable. Then, the probability of finding the system in an eigenstate of Â before the measurement is |⟨a_j|ψ⟩|². In addition, after the measurement we can say with absolute certainty that the system is in an eigenstate of Â, say |a_s⟩, where the probability of finding the system in |a_s⟩ is 1.
Carrying out the same process used to obtain D_B, we know from equations (4) and (5) that the Jensen-Shannon entropy for D_A tends to χ² when one distribution tends to the other, so we have:

χ²_A = Σ_j (|⟨a_j|ψ_N⟩|² − δ_js)² / |⟨a_j|ψ_N⟩|², (16)

where ⟨a_j|ψ_N⟩ = δ_js + δd_j, with δd_j = ⟨a_j|δa_s⟩, and with similar definitions as those given after equations (10) and (11). This expression can be reduced, so that

χ²_{A,min} = Σ_{j≠s} |δd_j|² + (|1 + δd_s|² − 1)² / |1 + δd_s|². (17)

Finally, we obtain the Entropic Uncertainty Relation of Disturbance-Disturbance as the sum of the disturbances of the observables Â and B̂:

D_A + D_B ≥ χ_{A,min} + χ_{B,min}, (18)

where χ_{A,min} ≡ √(χ²_{A,min}/8). In this way, we have found a new Entropic Uncertainty Relation. This relation relates the disturbance caused by the measurement of a system's property to the statistical distinguishability of quantum states. It is important to say that there is no similar relation in the literature, and because of this we need to associate it with a new statement, namely: It is impossible to measure an observable without disturbing simultaneously its probability distribution and the probability distribution of a non-commuting observable.
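A quick numerical sketch of the disturbance-disturbance relation for a spin-1/2 particle (Â = S_z, B̂ = S_x; illustrative choices) shows that the sum D_A + D_B stays strictly positive for every initial state that is not an eigenstate of the measured observable:

```python
import numpy as np

def js_distance(p, q):
    """Square root of the Jensen-Shannon divergence (natural logs)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# A = S_z with eigenbasis {|z1>, |z2>}; B = S_x with {|x1>, |x2>}.
z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
probs = lambda basis, s: np.array([(b @ s) ** 2 for b in basis])

# Sweep initial states |psi> = cos(t)|z1> + sin(t)|z2>, excluding the
# eigenstates of A (t = 0 and t = pi/2); measuring A yields |z1>.
sums = []
for t in np.linspace(0.1, np.pi / 2 - 0.1, 50):
    psi = np.array([np.cos(t), np.sin(t)])
    a_s = z[0]
    D_A = js_distance(probs(z, psi), probs(z, a_s))
    D_B = js_distance(probs(x, psi), probs(x, a_s))
    sums.append(D_A + D_B)

print(min(sums))   # strictly positive across the sweep
```

As the state approaches an eigenstate of Â the sum approaches zero, which is why that case is excluded and why the experimental lower bound must come from a calibration of the nearest statistically distinguishable states.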
Notice that our disturbance-disturbance relation does not apply to the case where the system is in an eigenstate of the measured observable.
In this manner, we have found an uncertainty relation using all the postulates of quantum mechanics, especially the MP. We call it the disturbance-disturbance uncertainty relation (D-D-UR). One of the most important properties of this new D-D-UR is that it is an uncertainty relation measuring distances between probability distributions; see the example given in the second subsection below. Some characteristics. Perhaps, as wishful thinking, one might expect that if you disturb the probability distribution of Â by just a small amount, then the resulting disturbance of the probability distribution of B̂ will be large. This expectation comes from the "preparation" uncertainty relation, i.e. ΔxΔp ≥ ћ/2, where Δp increases by reducing Δx. However, in that case there is a single probability distribution only, and its representation in configuration space is related by a Fourier transform to its momentum representation. It is a mathematical property of two functions related by a Fourier transform that, when the width of one of them decreases, the width of the other one increases. Please note that the Measurement Postulate is not used to deduce the "preparation" uncertainty relation.
In contrast, our disturbance measure is between two probability distributions, one of them before a single measurement and the other one after that single measurement, both of them related by the Measurement Postulate. That is to say, the Fourier transform does not relate them, and it is not possible to expect a priori that they are related in such a way. On the contrary, if the disturbance is on the wave function, what we could expect is: if we make a small disturbance on the wave function representing the system's state, then the statistical distributions of its properties also change by a small amount. This is explained in more detail with the help of Fig. 1: suppose that the initial state φ_i(x) is given by the blue dashed curve in Fig. 1; after the measurement of the observable Â, the wave function collapses towards the wave function in red, φ_f(x) (notice that in this hypothetical case we are trying to consider a situation where the measurement produces a small perturbation on the wave function; of course, one can imagine a better plot with a really minimal perturbation). Then, the statistical distribution of Â suffers a small change; hence the disturbance is small. But the statistical distribution of a complementary observable also changes a little, because the distance between the initial statistical distribution (in blue), before the measurement, and the final distribution (in red), after the measurement, is small, and this small change in the statistical distribution holds for both Â and B̂. Therefore, it is naive to expect that, when the statistical distribution changes a little for both observables, the disturbance on B̂ increases while the disturbance on Â decreases.
It is worth noticing that a "preparation" tradeoff relation is to be expected between two non-commuting observables when the quantum system is in the initial state φ_i(x), i.e. before measurement, or when it is in the final state φ_f(x), i.e. after measurement. In other words, a usual "preparation" tradeoff relation between two observables is expected when the quantum system is in the state φ_i(x) of Fig. 1, because the state φ_i(x) has a related wave function in the momentum representation, φ_i(p), and they are related by φ_i(p) = ∫e^{−ixp/ћ} φ_i(x) dx. Then, if you reduce the width of φ_i(x), the width of φ_i(p) increases. The same applies to φ_f(x) in Fig. 1; also in this case there is a tradeoff relation between two complementary observables after the measurement, due to the relation that exists between φ_f(x) and φ_f(p), i.e. φ_f(p) = ∫e^{−ixp/ћ} φ_f(x) dx; that is, if you reduce the width of φ_f(x), the width of φ_f(p) increases.
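The Fourier-transform width tradeoff invoked above can be checked numerically for a Gaussian wave function (here ћ = 1, so momentum equals wavenumber; the grid parameters are arbitrary):

```python
import numpy as np

# Gaussian wave function on a grid; its momentum-space width scales
# inversely with its position-space width (hbar = 1).
N, L = 4096, 200.0
xg = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = xg[1] - xg[0]
kg = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # wavenumber grid

def widths(sigma):
    """Position and momentum standard deviations of a Gaussian of
    position width sigma."""
    psi = np.exp(-xg ** 2 / (4 * sigma ** 2))
    psi /= np.sqrt(np.sum(abs(psi) ** 2) * dx)      # normalize
    pk = abs(np.fft.fft(psi)) ** 2
    pk /= np.sum(pk)                                 # momentum probs
    sx = np.sqrt(np.sum(xg ** 2 * abs(psi) ** 2) * dx)
    sk = np.sqrt(np.sum(kg ** 2 * pk))
    return sx, sk

sx1, sk1 = widths(1.0)
sx2, sk2 = widths(0.5)       # halve the position width...
print(sx1 * sk1, sx2 * sk2)  # ...and the momentum width doubles
```

Both products come out near the minimum-uncertainty value 1/2, illustrating that narrowing either representation necessarily broadens the other.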
Example of the disturbance-disturbance uncertainty relation. Suppose that we have a particle of spin 1/2 and that the initial state |φ⟩ is a normalized state close to the eigenstate |x_1⟩ of S_x; we measure S_x, obtaining |x_1⟩, and compare the two probability distributions of S_z before and after the measurement. We have written |x_1⟩ = (1/√2)(|z_1⟩ + |z_2⟩), and use the condition δ|c_1|² = −δ|c_2|² due to the requirement of normalization. Remember that δ|c_j| = ⟨z_j|δx_1⟩. At the left of Fig. (2), we can see the square roots of the two χ² distances and, between these, the square root of the Jensen-Shannon entropy (SJS).
On the other hand, we are going to compare the two probability distributions of S_x; in our example, these distributions are |⟨x_1|x_1⟩|² = 1, the distribution after the measurement, and |⟨x_j|φ⟩|², the distribution before the measurement. It is crucial to think carefully about this part of the example, because here we are comparing a delta distribution with a distribution close to the delta distribution. The normalization condition allows positive and negative values of the small fluctuations δ|d_j|², which compensate each other so that the distribution before the measurement remains normalized.
After similar calculations we obtain the corresponding distances, under similar conditions, where δ|d_j| = ⟨x_j|δx_1⟩. At the right of Fig. (2) we see the different χ distances and the square root of the Jensen-Shannon entropy (SJS). We note, from the equations of these quantities, that SJS and the χ distance are not defined for positive δ|d_j|², i.e. they become imaginary; hence these two distances are not suitable in that case, since it does not satisfy the normalization condition.
In Fig. (2) we gave different values to the χ and SJS distances to show their behaviour; however, this does not mean that they actually take different values. In fact, these quantities have a single value, given by the distance between the nearest distinguishable states after n trials. By varying their values we can see how they behave as the capacity to statistically distinguish two states becomes optimal.
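The behaviour shown in the left panel of Fig. 2 can be reproduced with a short computation: for the S_z comparison, the square root of the Jensen-Shannon entropy lies between the two asymmetric χ distances. Here I assume natural logarithms and include the factor 1/8 from the small-difference limit; these conventions may differ from those used for the figure:

```python
import numpy as np

def js_div(p, q):
    """Jensen-Shannon divergence (natural logarithms)."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Compare the S_z distribution of the perturbed state,
# (1/2 + d, 1/2 - d) with d = delta|c_1|^2 and the normalization
# condition delta|c_1|^2 = -delta|c_2|^2, against the
# post-measurement distribution (1/2, 1/2).
ds = np.linspace(-0.24, 0.24, 97)
ds = ds[np.abs(ds) > 1e-9]          # exclude d = 0 (no perturbation)
sjs, chi_q, chi_p = [], [], []
for d in ds:
    p = np.array([0.5 + d, 0.5 - d])   # before the measurement
    q = np.array([0.5, 0.5])           # after the measurement
    sjs.append(np.sqrt(js_div(p, q)))
    chi_q.append(np.sqrt(np.sum((p - q) ** 2 / q) / 8))  # chi^2 w.r.t. q
    chi_p.append(np.sqrt(np.sum((p - q) ** 2 / p) / 8))  # chi^2 w.r.t. p

# The SJS curve runs between the two (asymmetric) chi distances.
print(all(cq <= s <= cp for cq, s, cp in zip(chi_q, sjs, chi_p)))
```

Plotting the three lists against ds gives curves of the same shape as the left panel of Fig. 2, with all three distances converging as δ|c_1|² → 0.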

Discussion
The uncertainty relation that we are presenting differs from the ones already known because it quantifies the disturbance caused in the statistical distributions, whereas others focus on the relations between noise and disturbance in the measurements 22,23,32,40,41 . The usual uncertainty relation of Kennard and Robertson 44 concerns the statistics resulting from the preparation of quantum states, i.e. it limits the prior knowledge of the statistics of the observables and their predictability 3 , whereas the D-D-UR includes in its derivation the process of measurement, by taking into account the MP.
The D-D-UR also differs from the kind of uncertainties related to complementarity 3 , which concern the impossibility of arranging an experiment that could measure the values of complementary observables; certainly, our results could be generalised to include this kind of uncertainty as well.
In a recent work, Shitara et al. 29 discussed an inequality given by Barchielli and Lupieri 30 , which they interpreted as an information-disturbance relation. Then, by choosing two near states as the arguments of the relative entropy, the main results of Shitara et al. 29 coincide with those obtained by Barchielli and Lupieri 30 . In this case, it is worth mentioning that the relative entropy is not symmetric, which means that it is not a proper distance between two probability distributions; this seems to occur with the classical divergence as well. In our work, we restrict ourselves to the square root of the Jensen-Shannon divergence, which is a true metric, since it is symmetric and obeys the triangle inequality.
As a conclusion, to the best of our knowledge, the D-D-UR was not previously proposed and it refers to limitations on knowledge and predictability of the value of an observable B before and after the measurement of another observable Â and vice versa.
Figure 2. The χ distances and SJS when δ|d_1|² takes values in the interval [−0.25, 0.25]; notice that the lines do not reach the origin.