Abstract
The transfer entropy is a well-established measure of information flow, which quantifies directed influence between two stochastic time series and has proved useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that it quantifies how far the dynamics are from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in two quite different settings: thermodynamics of information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of benefit, whereas the conventional transfer entropy characterizes a possible benefit. Our results imply a deep connection between thermodynamics and gambling in the presence of information flow, and suggest that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, biochemical sciences, economics and statistics.
Introduction
In many scientific problems, we consider directed influence between two component parts of a complex system. To extract meaningful influence between component parts, methods of time series analysis have been widely used^{1,2,3}. In particular, time series analysis based on information theory^{4} provides useful methods for detecting directed influence between components. For example, the transfer entropy (TE)^{5,6,7} is one of the most influential informational measures for detecting directed influence between two stochastic time series. The main idea behind TE is that, by conditioning on the history of one time series, an informational measure of correlation between two time series represents the information flow that is actually transferred at the present time. Transfer entropy has been widely adopted in a variety of research areas such as economics^{8}, neural networks^{9,10,11}, biochemical physics^{12,13,14} and statistical physics^{15,16,17,18,19}. Several efforts to improve the measure of TE have also been made^{20,21,22}.
Concepts similar to TE have long been discussed in a variety of fields. In economics, the statistical hypothesis test called the Granger causality (GC) has been used to detect causal relationships between two time series^{23,24}. Indeed, for Gaussian variables, GC is equivalent to TE^{25}. In information theory, a nearly identical measure of information flow called the directed information (DI)^{26,27} has been discussed as a fundamental bound on noisy channel coding under a causal feedback loop. As in the case of GC, DI can be applied to an economic situation^{28,29}, namely gambling with side information^{4,30}.
In recent studies of thermodynamic models implementing Maxwell's demon^{31,32}, which reduces the entropy change in a small subsystem by using information, TE has attracted much attention^{13,14,15,18,33,34,35,36,37,38}. In this context, TE from a small subsystem to the other systems generally gives a lower bound on the entropy change in the subsystem^{15,18,33}. As a tighter bound on the entropy change for Markov jump processes, another directed informational measure called the dynamic information flow (DIF)^{34} has also been discussed^{33,34,35,36,37,38,39,40,41,42,43}.
In this article, we provide a unified perspective on different measures of information flow, i.e., TE, DI, and DIF. By introducing TE for the backward time series^{13,38}, called the backward transfer entropy (BTE), we clarify the relationship between these informational measures. By considering BTE, we also obtain a tighter bound on the entropy change in a small subsystem, even for non-Markovian processes. In the context of time series analysis, BTE has a proper meaning: it is an informational measure for detecting a hidden Markov model. From the viewpoint of statistical hypothesis testing, BTE quantifies an anticausal prediction. These facts imply that BTE would be a useful directed measure of information flow, as TE is.
Furthermore, we also discuss the analogy between thermodynamics of a small system^{32,44,45} and gambling with side information^{4,30}. In considering this analogy, we find that TE and BTE play similar roles in both settings: BTE quantifies a loss of benefit while TE quantifies a possible benefit. Our result reveals a deep connection between two different fields of science, thermodynamics and gambling.
Results
Setting
We consider stochastic dynamics of interacting systems X and Y, which are not necessarily Markov processes. We consider a discrete time k (=1, …, N), and write the state of X (Y) at time k as x_{k} (y_{k}). Let X_k^l (Y_k^l) be the path of system X (Y) from time k − l + 1 to k, where l ≥ 1 is the length of the path. The probability distribution of the composite system at time k is represented by p(X_{k} = x_{k}, Y_{k} = y_{k}), and that of the paths is represented by p(X_k^l = x_k^l, Y_k^l = y_k^l), where capital letters (e.g., X_{k}) represent random variables of the corresponding states (e.g., x_{k}).
The dynamics of the composite system are characterized by the conditional probability such that
where p(A = a|B = b) := p(A = a, B = b)/p(B = b) is the conditional probability of a under the condition of b.
Transfer entropy
Here, we introduce the conventional TE as a measure of directed information flow, defined as the conditional mutual information^{4} between two time series under the condition of one's own past. The mutual information characterizes the static correlation between two systems. The mutual information between X and Y at time k is defined as
I(X_{k}; Y_{k}) := Σ_{x_k, y_k} p(X_{k} = x_{k}, Y_{k} = y_{k}) ln [p(X_{k} = x_{k}, Y_{k} = y_{k})/(p(X_{k} = x_{k})p(Y_{k} = y_{k}))].
This mutual information is a nonnegative quantity, and vanishes if and only if x_{k} and y_{k} are statistically independent (i.e., p(X_{k} = x_{k}, Y_{k} = y_{k}) = p(X_{k} = x_{k})p(Y_{k} = y_{k}))^{4}. It quantifies how much the state y_{k} includes information about x_{k}, or equivalently how much x_{k} includes information about y_{k}. In the same way, the mutual information between two paths X_k^l and Y_{k'}^{l'} is defined analogously.
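As a concrete illustration, the mutual information can be computed directly from a joint distribution. The following Python sketch is not from the paper; the function name and the two example distributions are our own minimal assumptions for demonstration.

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information I(X;Y) in nats from a joint distribution p(x, y).

    p_xy: 2-D array with p_xy[i, j] = p(X = i, Y = j), summing to 1.
    """
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = p_xy > 0                         # avoid log(0) on zero-probability cells
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask])))

# Independent variables: I(X;Y) = 0.
p_indep = np.outer([0.5, 0.5], [0.3, 0.7])
print(mutual_information(p_indep))  # → 0.0 up to floating-point error

# Perfectly correlated fair bits: I(X;Y) = ln 2.
p_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(p_corr))   # → ln 2 ≈ 0.693
```

For independent variables the result vanishes, while two perfectly correlated fair bits share ln 2 nats, as the independence criterion above requires.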
While the mutual information is very useful in a variety of fields of science^{4}, it only represents the statistical correlation between two systems in a symmetric way. In order to characterize the directed information flow from X to Y, Schreiber^{5} introduced TE, defined as
with k ≤ k′. Equation (4) implies that TE is the informational difference about the path of the system X that is newly obtained from the path of the system Y between times k′ and k′ + 1. Thus, TE can be regarded as a directed information flow from X to Y at time k′. This TE can be rewritten as the conditional mutual information^{4} between the path of X and the state of Y under the condition of the history of Y:
which implies that TE is a nonnegative quantity, and vanishes if and only if the transition probability in Y from the history y_{k′}^{k′} to y_{k′+1} does not depend on the time series x_{k}^{l} [see also Fig. 1(a)].
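For discrete data, TE with history length l = 1 can be estimated by the plug-in formula T_{X→Y} = Σ p(y_{k+1}, y_{k}, x_{k}) ln [p(y_{k+1}|y_{k}, x_{k})/p(y_{k+1}|y_{k})]. The Python sketch below is our own illustration of this standard estimator, not code from the paper; the test signal, in which Y copies X with one step of delay, is an arbitrary assumption.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (nats) of TE from X to Y with history length l = 1:
    T_{X->Y} = sum p(y', y, x) ln[ p(y'|y, x) / p(y'|y) ].
    x, y: equal-length sequences of discrete symbols."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))    # counts of (y_{k+1}, y_k, x_k)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))          # counts of (y_k, x_k)
    pairs_yy = Counter(zip(y[1:], y[:-1]))           # counts of (y_{k+1}, y_k)
    singles = Counter(y[:-1])                        # counts of y_k
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_yx[(y0, x0)]           # p(y_{k+1} | y_k, x_k)
        p_cond_y = pairs_yy[(y1, y0)] / singles[y0]  # p(y_{k+1} | y_k)
        te += p_joint * np.log(p_cond_xy / p_cond_y)
    return te

# Y copies X with one step of delay: TE_{X->Y} is large, TE_{Y->X} ≈ 0.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)                                    # y_{k+1} = x_k
print(transfer_entropy(x, y))  # ≈ ln 2
print(transfer_entropy(y, x))  # ≈ 0
```

The asymmetry of the two estimates illustrates the directedness of TE: X fully determines the next state of Y, while Y carries no extra information about the next state of X.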
Backward transfer entropy
Here, we introduce BTE as a novel usage of TE for the backward paths. We first consider the backward paths of the systems X and Y, which are the time-reversed trajectories of X and Y from time N − k + l to N − k + 1. We then introduce the concept of BTE, defined as TE for the backward paths
with m = N − k, m′ = N − k′ and k ≤ k′. In this sense, BTE may represent “the time-reversed directed information flow from the future to the past.” Although BTE is well defined as a conditional mutual information, it is nontrivial whether such a concept makes any sense information-theoretically or physically, because the stochastic dynamics of the composite system do not necessarily have time-reversal symmetry.
To clarify the proper meaning of BTE, we compare BTE with TE [see Fig. 1]. Transfer entropy quantifies the dependence on X_{k} of the transition from Y_{k} to Y_{k+1} [see Fig. 1(a)]. In the same way, BTE quantifies the dependence on Y_{m} of the correlation between X_{m+1} and Y_{m+1} [see Fig. 1(b)]. Thus, BTE quantifies how X_{m+1} depends on Y_{m+1} beyond any dependence on the past state Y_{m}. In the single-step case, BTE is nonnegative and vanishes if and only if a Markov chain Y_{m} → Y_{m+1} → X_{m+1} exists, which implies that the dynamics of X are given by a hidden Markov model. In general, BTE is nonnegative and vanishes if and only if a Markov chain
exists. Therefore, BTE from X to Y quantifies how far the composite dynamics of X and Y are from a hidden Markov model in X.
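Since the single-step BTE equals the conditional mutual information I(X_{m+1}; Y_{m}|Y_{m+1}), the hidden-Markov criterion can be checked directly on a joint distribution. The Python sketch below is our own illustration (the transition matrix T and readout matrices E, E2 are arbitrary assumptions): for a hidden Markov model, where X_{m+1} is a readout of Y_{m+1} alone, the quantity vanishes; if X_{m+1} also depends on Y_{m} directly, it is positive.

```python
import numpy as np

def cond_mutual_information(p_abc):
    """Conditional mutual information I(A; B | C) in nats from a joint p(a, b, c)."""
    p_c = p_abc.sum(axis=(0, 1))
    p_ac = p_abc.sum(axis=1)
    p_bc = p_abc.sum(axis=0)
    cmi = 0.0
    for a in range(p_abc.shape[0]):
        for b in range(p_abc.shape[1]):
            for c in range(p_abc.shape[2]):
                p = p_abc[a, b, c]
                if p > 0:
                    cmi += p * np.log(p * p_c[c] / (p_ac[a, c] * p_bc[b, c]))
    return cmi

# Hidden Markov model: Y is Markov, X_{m+1} is a noisy readout of Y_{m+1} only.
p_y = np.array([0.5, 0.5])               # distribution of y_m
T = np.array([[0.9, 0.1], [0.2, 0.8]])   # T[y_m, y_{m+1}]
E = np.array([[0.7, 0.3], [0.4, 0.6]])   # E[y_{m+1}, x_{m+1}]

# Joint p(x_{m+1}, y_m, y_{m+1}); single-step BTE = I(X_{m+1}; Y_m | Y_{m+1}).
p = np.einsum('i,ij,jk->kij', p_y, T, E)
print(cond_mutual_information(p))        # → 0: Markov chain Y_m -> Y_{m+1} -> X_{m+1}

# If X_{m+1} depends on Y_m directly instead, the chain breaks and BTE > 0.
E2 = np.zeros((2, 2, 2))                 # E2[y_m, y_{m+1}, x_{m+1}]
E2[0] = [[0.9, 0.1], [0.9, 0.1]]
E2[1] = [[0.1, 0.9], [0.1, 0.9]]
p2 = np.einsum('i,ij,ijk->kij', p_y, T, E2)
print(cond_mutual_information(p2))       # > 0: not a hidden Markov model in X
```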
Thermodynamics of information
We next discuss a thermodynamic meaning of BTE. To clarify the interpretation of BTE in nonequilibrium stochastic thermodynamics, we consider the following non-Markovian interacting dynamics
where a nonnegative integer n represents the time delay between X and Y. The stochastic entropy change in the heat bath attached to the system X from time 1 to N in the presence of Y^{15} is defined as
where
is the transition probability of the backward dynamics, which satisfies the normalization of probability. For example, if the systems X and Y do not include any odd variables that change sign under time reversal, the backward probability is given by p_{B}(X_{k} = x_{k}|X_{k+1} = x_{k+1}, Y_{k−n} = y_{k−n}) = p(X_{k+1} = x_{k}|X_{k} = x_{k+1}, Y_{k−n} = y_{k−n}) for k ≥ n + 1 (and p_{B}(X_{k} = x_{k}|X_{k+1} = x_{k+1}, Y_{1} = y_{1}) = p(X_{k+1} = x_{k}|X_{k} = x_{k+1}, Y_{1} = y_{1}) for k ≤ n). This definition of the entropy change in the heat bath, Eq. (9), is well known as the local detailed balance condition or the detailed fluctuation theorem^{45}. We define the entropy change in X and the heat bath as
where the first term is the stochastic Shannon entropy change in X.
For the non-Markovian interacting dynamics Eq. (8), we have the following inequality (see Method);
We add that the term vanishes for the Markovian interacting dynamics (n = 0).
These results [Eqs (12) and (13)] can be interpreted as a generalized second law of thermodynamics for the subsystem X in the presence of information flow from X to Y. If there is no interaction between X and Y, the informational terms vanish, i.e., the TE and BTE terms vanish and I(X_{N}; Y_{N}) = I(X_{1}; Y_{1}) = 0. Thus these results reproduce the conventional second law of thermodynamics, which indicates the nonnegativity of the entropy change in X and the bath^{45}. If there is some interaction between X and Y, the entropy change can be negative, and its lower bound is given by the sum of TE from X to Y and the mutual information between X and Y at the initial time;
which is a nonnegative quantity. In information theory, this quantity is known as DI from X to Y^{27}. Intuitively speaking, it quantifies a kind of thermodynamic benefit, because a negative entropy change is related to work extraction from X in the presence of Y^{32}. Thus, the weaker bound (13) implies that the sum of TE quantifies a possible thermodynamic benefit of X in the presence of Y.
We next consider the sum of TE for the time-reversed trajectories;
which is given by the sum of BTE and the mutual information between X and Y at the final time. The tighter bound (12) can be rewritten as the difference between the sum of TE and the sum of BTE;
This result implies that the possible benefit is reduced by up to the sum of BTE. Thus, the sum of BTE represents a loss of thermodynamic benefit. We add that the tighter bound is not necessarily nonnegative, while the weaker bound is nonnegative.
We here consider the case of Markovian interacting dynamics (n = 0). For Markovian interacting dynamics, we have the following additivity of the tighter bound [see Supplementary Information (SI)]
where the sums of TE and BTE for a single time step are defined correspondingly. This additivity implies that the tighter bound for multiple time steps is equivalent to the sum of the tighter bounds for single time steps. We stress that the tighter bound for a single time step has been derived in ref. 13. We next consider the continuous limit x_{k} = x(t = kΔt), y_{k} = y(t = kΔt), and N = O(Δt^{−1}), where t denotes continuous time, Δt ≪ 1 is an infinitesimal time interval and O is the Landau symbol. Here we clarify the relationship between the tighter bound (16) and DIF^{34} (or the learning rate^{18}). For a bipartite Markov jump process^{18} or two-dimensional Langevin dynamics without any correlation between the thermal noises in X and Y^{15}, we have the following relationship [see also SI]
Thus the bound by TE and BTE is equivalent to the bound by DIF for such systems in the continuous limit.
Gambling with side information
In classical information theory, the formalism of gambling with side information is well known as another perspective on information theory, complementary to data compression and communication over a noisy channel^{4,30}. In gambling with side information, the mutual information between the outcome of the gamble and the side information bounds the gambler's benefit.
This formalism of gambling is similar to the above result in thermodynamics of information. In thermodynamics, a thermodynamic benefit (e.g., work extraction) can be obtained by using information; likewise, the gambler obtains a benefit by using side information. Here we clarify the analogy between gambling and thermodynamics in the presence of information flow. In this analogy, BTE plays a crucial role, as does TE.
We introduce the basic concept of gambling with side information through the horse race^{4,30}. Let y_{k} be the horse that won the kth race. Let f_{k} ≥ 0 and o_{k} ≥ 0 be the bet fraction and the odds in the kth race, respectively. Let m_{k} be the gambler's wealth before the kth race. Let s_{k} be the side information at time k. We consider the set of side information x_{k−1} = {s_{1}, …, s_{k−1}}, which the gambler can access before the kth race. The bet fraction is given by a function f_{k}(y_{k}|x_{k−1}) for k ≥ 2, and by f_{1}(y_{1}|x_{1}) for the first race. This conditional dependence implies that the gambler can decide the bet fraction by considering the past information. We assume the normalizations of the bet fractions, Σ_{y_k} f_{k}(y_{k}|x_{k−1}) = 1 and Σ_{y_1} f_{1}(y_{1}|x_{1}) = 1, which mean that the gambler bets all of his money in every race. We also assume Σ_{y_k} 1/o_{k}(y_{k}) = 1. This condition is satisfied if the odds in every race are fair, i.e., if 1/o_{k}(y_{k}) is given by a probability distribution of Y_{k}.
The gambler's stochastic wealth growth rate in the kth race is given by
with k ≥ 2, which implies that the gambler's wealth changes stochastically according to the bet fraction and the odds. The information theory of gambling with side information indicates that the ensemble average of the total wealth growth is bounded by the sum of TE (or DI) from X to Y^{28,29} (see Method);
where ⟨⋯⟩ indicates the ensemble average, and the remaining term is the Shannon entropy of the sequence of winners. This result (21) implies that the sum of TE can be interpreted as a possible benefit of the gambler.
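The Kelly-type bound can be checked numerically. The sketch below is our own illustration, not from the paper: for i.i.d. races with fair odds and a gambler who bets f(y|x) = p(y|x), the directed-information bound reduces to the mutual information I(X; Y) per race, which the average log-growth attains. The race probabilities are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two-horse race; the winner y is correlated with side information x.
p_y_given_x = np.array([[0.8, 0.2],    # p(y | x = 0)
                        [0.3, 0.7]])   # p(y | x = 1)
p_x = np.array([0.5, 0.5])
p_y = p_x @ p_y_given_x                # marginal winner distribution
odds = 1.0 / p_y                       # fair odds: 1/o_k(y) = p(y)

x = rng.choice(2, size=n, p=p_x)
y = np.array([rng.choice(2, p=p_y_given_x[xi]) for xi in x])

# Kelly gambler bets f(y|x) = p(y|x); log-growth per race = ln[o(y) f(y|x)].
growth = np.log(odds[y] * p_y_given_x[x, y])

# For i.i.d. races, the informational bound per race is I(X;Y).
mi = sum(p_x[i] * p_y_given_x[i, j] * np.log(p_y_given_x[i, j] / p_y[j])
         for i in range(2) for j in range(2))
print(growth.mean(), mi)   # average growth ≈ I(X;Y): the bound is attained
```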
We now discuss the analogy between thermodynamics of information and gambling with side information. The weaker bound in gambling (21) is similar to the weaker bound in thermodynamics of information (16), where the negative entropy change corresponds to the total wealth growth G. On the other hand, the tighter bound in gambling (20) is rather different from the tighter bound by the sum of BTE in thermodynamics of information (16). We show that the tighter bound in gambling is also given by the sum of BTE if we consider the special case in which the bookmaker who decides the odds o_{k} cheats in the horse race: the odds o_{k} are decided from the inaccessible side information x_{k+1} and information about the future races [see also Fig. 2]. In this special case, the fair odds of the kth race are given by the conditional probability given the future information for k ≤ N − 1, and 1/o_{N}(y_{N}) = p(Y_{N} = y_{N}|X_{N} = x_{N}). The inequality (20) can then be rewritten as
which implies that the sum of BTE represents a loss of the gambler's benefit because of the cheating by the bookmaker, who can access the future information anticausally. We stress that Eq. (22) has the same form as the thermodynamic inequality (16) for Markovian interacting dynamics (n = 0). This fact implies that thermodynamics of information can be interpreted as a special case of gambling with side information: the gambler uses the past information and the bookmaker uses the future information. If we regard thermodynamics as gambling, anticausal effects should thus be considered.
Causality
We here show that BTE itself is related to anticausality, without considering gambling. From the viewpoint of statistical hypothesis testing, TE is equivalent to GC for Gaussian variables^{25}. Therefore, it is natural to expect that BTE can be interpreted as a kind of causality test.
Suppose that we consider two linear regression models
where α (α′) is a constant term, A (A′) is a vector of regression coefficients, ⊕ denotes concatenation of vectors, and ε (ε′) is an error term. The Granger causality of X to Y quantifies how much the past time series of X in the first model reduces the prediction error of Y compared with the error in the second model. Performing ordinary least squares to find the regression coefficients A (A′) and α (α′) that minimize the variance of ε (ε′), the standard measure of GC is given by
where var(ε) denotes the variance of the error term ε. Here we assume that the joint probability is Gaussian. Under the Gaussian assumption, TE and GC are equivalent up to a factor of 2,
In the same way, we discuss BTE from the viewpoint of GC, again assuming that the joint probability is Gaussian. Suppose the two linear regression models
where α^{†} (α′^{†}) is a constant term, A^{†} (A′^{†}) is a vector of regression coefficients and ε^{†} (ε′^{†}) is an error term. These linear regression models give a prediction of the past state of Y using the future time series of X and Y. Intuitively speaking, we consider GC of X to Y for the time-reversed playback of the composite dynamics of X and Y. We call this causality test the Granger anticausality of X to Y. Performing ordinary least squares to find A^{†} (A′^{†}) and α^{†} (α′^{†}) that minimize var(ε^{†}) (var(ε′^{†})), we define a measure of the Granger anticausality of X to Y analogously. The backward transfer entropy is equivalent to the Granger anticausality up to a factor of 2,
This fact implies that BTE can be interpreted as a kind of anticausality test. We stress that the composite dynamics of X and Y are not necessarily driven anticausally even if the measure of Granger anticausality is nonzero. Just as GC finds only predictive causality^{23,24}, the Granger anticausality finds only predictive causality for the backward time series.
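The Granger (anti)causality measures above can be computed by ordinary least squares. The sketch below is our own illustration with one lag (the driven AR(1) model and its coefficients are arbitrary assumptions): X drives Y forward in time, so G_{X→Y} > 0 while G_{Y→X} ≈ 0; on the time-reversed series, the future of Y reveals the past of X, so the Granger anticausality of Y to X is positive.

```python
import numpy as np

def granger(x, y):
    """G_{X->Y} = ln[ var(eps') / var(eps) ] with one lag, via least squares."""
    Y1, Y0, X0 = y[1:], y[:-1], x[:-1]
    # Full model: y_{k+1} ~ alpha + a*y_k + b*x_k
    A_full = np.column_stack([np.ones_like(Y0), Y0, X0])
    res_full = Y1 - A_full @ np.linalg.lstsq(A_full, Y1, rcond=None)[0]
    # Restricted model: y_{k+1} ~ alpha' + a'*y_k
    A_res = np.column_stack([np.ones_like(Y0), Y0])
    res_res = Y1 - A_res @ np.linalg.lstsq(A_res, Y1, rcond=None)[0]
    return np.log(res_res.var() / res_full.var())

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=n)
y = np.zeros(n)
for k in range(n - 1):                   # X drives Y with one step of delay
    y[k + 1] = 0.5 * y[k] + 0.8 * x[k] + 0.1 * rng.normal()

print(granger(x, y))                     # > 0: X Granger-causes Y
print(granger(y, x))                     # ≈ 0: Y does not Granger-cause X
# Granger anticausality: the same test applied to the time-reversed series.
print(granger(y[::-1], x[::-1]))         # > 0: Granger anticausality of Y to X
```

Note that the dynamics here are perfectly causal; the positive backward measure only reflects predictive power on the reversed series, in line with the remark above.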
Discussion
We have proposed a directed measure of information flow called BTE, which is useful for detecting a hidden Markov model (7) and predictive anticausality (29). In both settings of thermodynamics and gambling, BTE has a profitable meaning: it detects a loss of possible benefit in the inequalities (16) and (22).
The concept of BTE can provide a clear perspective in studies of biochemical sensors and thermodynamics of information, because the difference between TE and DIF has recently attracted attention in these fields^{14,35}. In ref. 14, Hartich et al. proposed a novel informational measure for biochemical sensors called the sensory capacity, which is defined as the ratio between TE and DIF.
where we used I(X_{k+1}; Y_{k+1}) = I(X_{k}; Y_{k}) in a stationary state. This fact indicates that the ratio between TE and BTE could be useful for quantifying the performance of a biochemical sensor. Using this expression (29), we can show that the maximum value of the sensory capacity, C = 1, is achieved if the Markov chain Y_{k} → Y_{k+1} → X_{k+1} of a hidden Markov model exists. In ref. 35, Horowitz and Sandberg compared the two thermodynamic bounds by TE and DIF for two-dimensional Langevin dynamics. For the Kalman–Bucy filter, which is the optimal controller, they found that DIF is equivalent to TE in a stationary state. This fact can be clarified by the concept of BTE: because the Kalman–Bucy filter can be interpreted as a hidden Markov model, BTE vanishes, and hence DIF is equivalent to TE in a stationary state.
Our results can be interpreted as a generalization of previous works in thermodynamics of information^{46,47,48}. In refs 46 and 47, S. Still et al. discuss prediction in thermodynamics for Markovian interacting dynamics. In our results, we show the connection between thermodynamics of information and predictive causality from the viewpoint of GC. Thus, our results give new insight into these works on prediction in thermodynamics. In ref. 48, G. Diana and M. Esposito introduced the time-reversed mutual information for Markovian interacting dynamics. We introduce BTE, which is TE evaluated on the time-reversed series. Thus, our result provides a similar description of thermodynamics in terms of BTE, even for non-Markovian interacting dynamics.
We point out the time symmetry of the generalized second law (12). For Markovian interacting dynamics, the equality in Eq. (12) holds if the dynamics of X have local reversibility (see SI). Here we consider a time-reversal transformation and assume local reversibility such that the backward probability p_{B}(A = a|B = b) equals the original probability p(A = a|B = b) for any random variables A and B. Under this time-reversal transformation, the generalized second law Eq. (12) changes sign. Thus, the generalized second law (12) has the same time symmetry as the conventional second law, i.e., ΔS_{tot} ≥ 0, even for non-Markovian interacting dynamics, where ΔS_{tot} is the entropy change in the total system. In other words, the generalized second law (12) provides the arrow of time, as the conventional second law does. This fact may indicate that BTE is as useful as TE in physical situations where time symmetry plays a crucial role in physical laws.
We also point out that this paper clarifies the analogy between thermodynamics of information and gambling. An analogy between gambling and thermodynamics was proposed in ref. 49; however, the analogy between Eqs (16) and (22) is different from the one in ref. 49. In ref. 49, D. A. Vinkler et al. discuss the particular case of work extraction in the Szilard engine, regarded as gambling. On the other hand, our result provides an analogy between the general laws of thermodynamics of information and gambling. To exploit this analogy, we may apply the theory of gambling, for example portfolio theory^{50,51}, to thermodynamic situations in general. We also stress that gambling with side information directly connects with data compression in information theory^{4}. Therefore, the generalized second law of thermodynamics may directly connect with data compression in information theory. In such applications, BTE would play a nontrivial role in the theory of gambling, where the odds should be decided anticausally.
Finally, we discuss the usage of BTE in time series analysis. In principle, we prepare the backward time series from the original time series data, and calculate BTE as the TE of the backward series. From BTE, we can estimate how far the dynamics of two time series are from a hidden Markov model, or detect predictive causality for the backward time series. In physical situations, we can also assess thermodynamic performance by comparing BTE with TE. If the sum of BTE from the target system to the other systems is larger than the sum of TE from the target system to the other systems, the target system can appear to violate the second law of thermodynamics because of the inequality (16), where the other systems play a role similar to Maxwell's demon. Therefore, BTE could be useful for detecting phenomena of Maxwell's demon in several settings such as Brownian particles^{52,53}, electric devices^{54,55}, and biochemical networks^{13,56,57,58,59,60}.
Method
The outline of the derivation of inequality (12)
We here show the outline of the derivation of the generalized second law (12) [see also SI for details]. In SI, we show that the quantity on the left-hand side of inequality (12) can be rewritten as a Kullback–Leibler divergence D_{KL}(ρ‖π)^{4}, where ρ and π are nonnegative functions that satisfy the appropriate normalization conditions. Due to the nonnegativity of the Kullback–Leibler divergence, we obtain the inequality (12). We add that the integrated fluctuation theorem corresponding to the inequality (12) is also valid.
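The integrated fluctuation theorem ⟨e^{−σ}⟩ = 1 invoked here can be checked by Monte Carlo for a simple system. The sketch below is our own illustration with a single two-state Markov chain X and no feedback (the transition matrix and initial distribution are arbitrary assumptions); σ is the log-ratio of forward and backward path probabilities, with the backward process started from the final-time marginal.

```python
import numpy as np

rng = np.random.default_rng(3)
T = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition matrix T[x_k, x_{k+1}]
p1 = np.array([0.9, 0.1])                # initial distribution, far from stationary
N = 5

# Final-time marginal p_N = p1 T^{N-1}, used to start the backward process.
pN = p1 @ np.linalg.matrix_power(T, N - 1)

def sample_sigma():
    """Entropy production sigma = ln P_F(path) - ln P_B(time-reversed path)."""
    x = [rng.choice(2, p=p1)]
    for _ in range(N - 1):
        x.append(rng.choice(2, p=T[x[-1]]))
    log_f = np.log(p1[x[0]]) + sum(np.log(T[x[k], x[k + 1]]) for k in range(N - 1))
    log_b = np.log(pN[x[-1]]) + sum(np.log(T[x[k + 1], x[k]]) for k in range(N - 1))
    return log_f - log_b

sigma = np.array([sample_sigma() for _ in range(50_000)])
print(sigma.mean())              # second law: <sigma> >= 0
print(np.exp(-sigma).mean())     # integral fluctuation theorem: ≈ 1
```

The second-law inequality then follows from Jensen's inequality applied to ⟨e^{−σ}⟩ = 1, mirroring the Kullback–Leibler argument in the text.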
The outline of the derivation of inequality (20)
We here show the outline of the derivation of the gambling inequality (20) [see also SI for details]. The relevant quantity can be rewritten as a Kullback–Leibler divergence D_{KL}(ρ‖π), where ρ and π are nonnegative functions that satisfy the appropriate normalization conditions. Due to the nonnegativity of the Kullback–Leibler divergence, we have the inequality (20).
Additional Information
How to cite this article: Ito, S. Backward transfer entropy: Informational measure for detecting hidden Markov models and its interpretations in thermodynamics, gambling and causality. Sci. Rep. 6, 36831; doi: 10.1038/srep36831 (2016).
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Hamilton, J. D. “Time series analysis” (Princeton: Princeton university press, 1994).
Gao, Z. K. & Jin, N. D. A directed weighted complex network for characterizing chaotic dynamics from time series. Nonlinear Analysis: Real World Applications, 13, 947–952 (2012).
Ahmed, M. U. & Mandic, D. P. Multivariate multiscale entropy analysis. IEEE Signal Processing Letters, 19, 91–94 (2012).
Cover, T. M. & Thomas, J. A. “Elements of Information Theory” (John Wiley and Sons, New York, 1991).
Schreiber, T. Measuring information transfer. Phys. Rev. Lett. 85, 461 (2000).
Kaiser, A. & Schreiber, T. Information transfer in continuous processes. Physica D 166, 43–62 (2002).
Hlaváčková-Schindler, K., Paluš, M., Vejmelka, M. & Bhattacharya, J. Causality detection based on information-theoretic approaches in time series analysis. Phys. Rep. 441, 1–46 (2007).
Marschinski, R. & Kantz, H. Analyzing the information flow between financial time series. Eur. Phys. J. B 30, 275 (2002).
Lungarella, M. & Sporns, O. Mapping information flow in sensorimotor networks. PLoS Comput. Biol, 2, e144 (2006).
Vicente, R., Wibral, M., Lindner, M. & Pipa, G. Transfer entropy – a model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci. 30, 45–67 (2011).
Wibral, M. et al. Measuring information-transfer delays. PloS one, 8(2), e55809 (2013).
Bauer, M., Cox, J. W., Caveness, M. H., Downs, J. J. & Thornhill, N. F. Finding the direction of disturbance propagation in a chemical process using transfer entropy. IEEE Trans. Control Syst. Techn. 15, 12–21 (2007).
Ito, S. & Sagawa, T. Maxwell’s demon in biochemical signal transduction with feedback loop. Nat. Commun. 6, 7498 (2015).
Hartich, D., Barato, A. C. & Seifert, U. Sensory capacity: An information theoretical measure of the performance of a sensor. Phys. Rev. E 93, 022116 (2016).
Ito, S. & Sagawa, T. Information thermodynamics on causal networks. Phys. Rev. Lett. 111, 180603 (2013).
Prokopenko, M., Lizier, J. T. & Price, D. C. On thermodynamic interpretation of transfer entropy. Entropy 15, 524–543 (2013).
Barnett, L., Lizier, J. T., Harr, M., Seth, A. K. & Bossomaier, T. Information flow in a kinetic Ising model peaks in the disordered phase. Phys. Rev. Lett. 111, 177203 (2013).
Hartich, D., Barato, A. C. & Seifert, U. Stochastic thermodynamics of bipartite systems: transfer entropy inequalities and a Maxwell’s demon interpretation. J. Stat. Mech. P02016 (2014).
Prokopenko, M. & Einav, I. Information thermodynamics of nearequilibrium computation. Phys. Rev. E 91, 062143 (2015).
Wibral, M., Vicente, R. & Lizier, J. T. (Eds.). Directed information measures in neuroscience (Heidelberg: Springer. 2014).
Staniek, M. & Lehnertz, K. Symbolic transfer entropy. Phys. Rev. Lett. 100, 158101 (2008).
Williams, P. L. & Beer, R. D. “Generalized measures of information transfer.” arXiv preprint arXiv:1102.1507 (2011).
Granger, C. W., Ghysels, E., Swanson, N. R. & Watson, M. W. “Essays in econometrics: collected papers of Clive WJ Granger” (Cambridge University Press 2001).
Granger, C. W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 424–438 (1969).
Barnett, L., Barrett, A. B. & Seth, A. K. Granger causality and transfer entropy are equivalent for Gaussian variables. Phys. Rev. Lett. 103, 238701 (2009).
Marko, H. The bidirectional communication theory – a generalization of information theory. IEEE Trans. Commun. 21, 1345 (1973).
Massey, J. Causality, feedback and directed information. In Proc. Int. Symp. Inf. Theory Applic. 303–305 (1990).
Hirono, Y. & Hidaka, Y. Jarzynskitype equalities in gambling: role of information in capital growth. J. Stat. Phys. 161, 721 (2015).
Permuter, H. H., Kim, Y. H. & Weissman, T. On directed information and gambling. In Proc. International Symposium on Information Theory (ISIT), 1403 (2008).
Kelly, J. A new interpretation of information rate. Bell Syst. Tech. J. 35, 917–926 (1956).
Sagawa, T. & Ueda, M. Generalized Jarzynski equality under nonequilibrium feedback control. Phys. Rev. Lett. 104, 090602 (2010).
Parrondo, J. M., Horowitz, J. M. & Sagawa, T. Thermodynamics of information. Nat. Phys. 11, 131–139 (2015).
Allahverdyan, A. E., Janzing, D. & Mahler, G. Thermodynamic efficiency of information and heat flow. J. Stat. Mech. P09011 (2009).
Horowitz, J. M. & Esposito, M. Thermodynamics with continuous information flow. Phys. Rev. X, 4, 031015 (2014).
Horowitz, J. M. & Sandberg, H. Secondlawlike inequalities with information and their interpretations. New J. Phys. 16, 125007 (2014).
Horowitz, J. M. Multipartite information flow for multiple Maxwell demons. J. Stat. Mech. P03006 (2015).
Ito, S. & Sagawa, T. “Information flow and entropy production on Bayesian networks” arXiv:1506.08519 (2015); a book chapter in M. Dehmer, F. Emmert-Streib, Z. Chen & Y. Shi (Eds.) “Mathematical Foundations and Applications of Graph Entropy” (Wiley-VCH Verlag, Weinheim, 2016).
Ito, S. Information thermodynamics on causal networks and its application to biochemical signal transduction (Springer Japan, 2016).
Shiraishi, N. & Sagawa, T. Fluctuation theorem for partially masked nonequilibrium dynamics. Phys. Rev. E. 91, 012130 (2015).
Shiraishi, N., Ito, S., Kawaguchi, K. & Sagawa, T. Role of measurementfeedback separation in autonomous Maxwell’s demons. New J. Phys. 17, 045012 (2015).
Rosinberg, M. L., Munakata, T. & Tarjus, G. Stochastic thermodynamics of Langevin systems under timedelayed feedback control: Secondlawlike inequalities. Phys. Rev. E 91, 042114 (2015).
Cafaro, C., Ali, S. A. & Giffin, A. Thermodynamic aspects of information transfer in complex dynamical systems. Phys. Rev. E, 93, 022114 (2016).
Yamamoto, S., Ito, S., Shiraishi, N. & Sagawa, T. Linear Irreversible Thermodynamics and Onsager Reciprocity for Informationdriven Engines. arXiv:1604.07988 (2016).
Sekimoto, K. Stochastic Energetics (Springer, 2010).
Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 75, 126001 (2012).
Still, S., Sivak, D. A., Bell, A. J. & Crooks, G. E. Thermodynamics of prediction. Phys. Rev. Lett. 109, 120604 (2012).
Still, S. Information bottleneck approach to predictive inference. Entropy, 16, 968–989 (2014).
Diana, G. & Esposito, M. Mutual entropy production in bipartite systems. Journal of Statistical Mechanics: Theory and Experiment, P04010 (2014).
Vinkler, D. A., Permuter, H. H. & Merhav, N. Analogy between gambling and measurementbased work extraction. J. Stat. Mech. P043403 (2016).
Cover, T. M. & Ordentlich, E. Universal portfolios with side information. IEEE Trans. Inform. Theory 42, 348–363 (1996).
Permuter, H. H., Kim, Y. H. & Weissman, T. Interpretations of directed information in portfolio theory, data compression, and hypothesis testing. IEEE Trans. Inform. Theory 57, 3248–3259 (2011).
Toyabe, S., Sagawa, T., Ueda, M., Muneyuki, E. & Sano, M. Experimental demonstration of informationtoenergy conversion and validation of the generalized Jarzynski equality. Nat. Phys. 6, 988 (2010).
Bérut, A., Arakelyan, A., Petrosyan, A., Ciliberto, S., Dillenschneider, R. & Lutz, E. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature 483, 187 (2012).
Koski, J. V., Maisi, V. F., Sagawa, T. & Pekola, J. P. Experimental observation of the role of mutual information in the nonequilibrium dynamics of a Maxwell demon. Phys. Rev. Lett. 113, 030601 (2014).
Kutvonen, A., Koski, J. & AlaNissila, T. Thermodynamics and efficiency of an autonomous onchip Maxwell’s demon. Sci. Rep. 6, 21126 (2016).
Sartori, P., Granger, L., Lee, C. F. & Horowitz, J. M. Thermodynamic costs of information processing in sensory adaptation. PLoS Comput Biol, 10, e1003974 (2014).
Barato, A. C., Hartich, D. & Seifert, U. Efficiency of cellular information processing. New J. Phys. 16, 103024 (2014).
Bo, S., Del Giudice, M. & Celani, A. Thermodynamic limits to information harvesting by sensory systems. J. Stat. Mech. P01014 (2015).
Ouldridge, T. E., Govern, C. C. & Wolde, P. R. T. The thermodynamics of computational copying in biochemical systems. arXiv:1503.00909 (2015).
McGrath, T., Jones, N. S., Wolde, P. R. T. & Ouldridge, T. E., A biochemical machine for the interconversion of mutual information and work. arXiv:1604.05474 (2016).
Acknowledgements
We are grateful to Takumi Matsumoto, Takahiro Sagawa, Naoto Shiraishi and Shumpei Yamamoto for the fruitful discussion on a range of issues of thermodynamics. This work was supported by JSPS KAKENHI Grant Numbers JP16K17780 and JP15J07404.
Author information
Authors and Affiliations
Contributions
S.I. constructed the theory, carried out the analytical calculations, and wrote the paper.
Ethics declarations
Competing interests
The author declares no competing financial interests.
Electronic supplementary material
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/