Occam’s Quantum Strop: Synchronizing and Compressing Classical Cryptic Processes via a Quantum Channel

A stochastic process’ statistical complexity stands out as a fundamental property: the minimum information required to synchronize one process generator to another. How much information is required, though, when synchronizing over a quantum channel? Recent work demonstrated that representing causal similarity as quantum state-indistinguishability provides a quantum advantage. We generalize this to synchronization and offer a sequence of constructions that exploit extended causal structures, finding a substantial increase of the quantum advantage. We demonstrate that maximum compression is determined by the process’ cryptic order–a classical, topological property closely allied to Markov order, itself a measure of historical dependence. We introduce an efficient algorithm that computes the quantum advantage and close by noting that the advantage comes at a cost–one trades off prediction for generation complexity.

Discovering and describing correlation and pattern are critical to progress in the physical sciences. Observing the weather in California last summer, we find a long series of sunny days interrupted only rarely by rain, a pattern now all too familiar to residents. Analogously, a one-dimensional spin system in a magnetic field might have most of its spins "up" with just a few "down", defects determined by the details of spin coupling and thermal fluctuations. Though nominally the same pattern, the domains of these systems span the macroscopic to the microscopic, the multi-layer to the pure. Despite the gap, can we meaningfully compare these two patterns?
To exist on an equal descriptive footing, they must each be abstracted from their physical embodiment by, for example, expressing their generating mechanisms via minimal probabilistic encodings. Measures of unpredictability, memory, and structure then naturally arise as information-theoretic properties of these encodings. Indeed, the fundamental interpretation of (Shannon) information is as a rate of encoding such sequences. This recasts the informational properties as answers to distinct communication problems. For instance, a process' structure becomes the problem of two observers, Alice and Bob, synchronizing their predictions of the process.
However, what if the communication between Alice and Bob is not classical? What if Alice instead sends qubits to Bob-that is, they synchronize over a quantum channel? Does this change the communication requirements? More generally, does quantum communication enhance our understanding of what "pattern" is in the first place? What if the original process is itself quantum? More practically, is the quantum encoding more compact?
A provocative answer to the last question appeared recently [1][2][3] suggesting that a quantum representation can compress a stochastic process beyond its known classical limits 4 . In the following, we introduce a new construction for quantum channels that improves and broadens that result to any memoryful stochastic process, is highly computationally efficient, and points toward optimal quantum compression. Importantly, we draw out the connection between quantum compressibility and process cryptic order-a purely classical property that was only recently discovered 5 . Finally, we discuss the subtle way in which the quantum framing of pattern and structure differs from the classical.

Synchronizing Classical Processes
To frame these questions precisely, we focus on patterns generated by discrete-valued, discrete-time stationary stochastic processes. There is a broad literature that addresses such emergent patterns [6][7][8] . In particular, a process’ ε-machine allows one to directly calculate its measures of unpredictability, memory, and structure.
For example, the most basic question about unpredictability is, how much uncertainty about the next future observation remains given complete knowledge of the infinite past? This is measured by the well-known Shannon entropy rate h μ 9-12 :

$h_\mu = \lim_{L \to \infty} H[X_{0:L}] / L$,

where $X_L$ denotes the symbol random variable (r.v.) at time L, $X_{0:L}$ denotes the length-L block of symbol r.v.s $X_0, \ldots, X_{L-1}$, and $H = -\sum_i p_i \log p_i$ is the Shannon entropy (in bits, using log base 2) of the probability distribution $\{p_i\}$ 13 . A process’ ε-machine allows us to directly calculate this in closed form as the state-averaged branching uncertainty:

$h_\mu = \sum_i \pi_i \, H[X_0 \mid \mathcal{S}_0 = \sigma_i]$,

where π i denotes the stationary distribution over the causal states. This form is possible due to the ε-machine’s unifilarity: in each state σ, each symbol x leads to at most one successor state σ′ .
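As a minimal sketch of the state-averaged branching uncertainty, the snippet below uses an assumed dictionary encoding of an ε-machine; the example machine is the order-1 Golden Mean Process discussed later in the text (no two consecutive 0s):

```python
import numpy as np

# Illustrative encoding (an assumption, not the paper's notation):
# {state: {symbol: (probability, successor state)}}.
machine = {"A": {"1": (0.5, "A"), "0": (0.5, "B")}, "B": {"1": (1.0, "A")}}

def stationary(machine):
    """Stationary distribution pi over causal states (left Perron vector)."""
    states = sorted(machine)
    T = np.zeros((len(states), len(states)))
    for i, s in enumerate(states):
        for p, nxt in machine[s].values():
            T[i, states.index(nxt)] += p
    vals, vecs = np.linalg.eig(T.T)
    v = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1))]))
    return dict(zip(states, v / v.sum()))

def entropy_rate(machine):
    """h_mu = sum_i pi_i H[X | sigma_i]: state-averaged branching uncertainty."""
    pi = stationary(machine)
    return sum(
        pi[s] * -sum(p * np.log2(p) for p, _ in edges.values() if p > 0)
        for s, edges in machine.items()
    )

print(entropy_rate(machine))   # 2/3 bit per symbol for the Golden Mean
```

Unifilarity is what makes this closed form possible: the branching uncertainty at each state is exactly the per-symbol uncertainty.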
One can ask the complementary question, given knowledge of the infinite past, how much can we reduce our uncertainty about the future? This quantity is the mutual information between the past and future and is known as the excess entropy 9 :

$\mathbf{E} = I[X_{-\infty:0} ; X_{0:\infty}]$.

It is the total amount of future information predictable from the past. Using the ε-machine we can directly calculate it also:

$\mathbf{E} = I[\mathcal{S}^+ ; \mathcal{S}^-]$,

where $\mathcal{S}^+$ and $\mathcal{S}^-$ are the forward (predictive) and reverse (retrodictive) causal states, respectively 5 . This suggests we think of any process as a channel that communicates the past to the future through the present. In this view, E is the information transmission rate through the present "channel". The excess entropy has been applied to capture the total predictable information in such diverse systems as Ising spin models 14 , diffusion in nonlinear potentials 15 , neural spike trains [16][17][18] , and human language 19 .
What memory is necessary to implement predicting E bits of the future given the past? Said differently, what resources are required to instantiate this putative channel? Most basically, this is simply the historical information the process remembers and stores in the present. The minimum necessary such information is that stored in the causal states, the statistical complexity 4 :

$C_\mu = H[\mathcal{S}] = -\sum_i \pi_i \log \pi_i$.

Importantly, it is lower-bounded by the excess entropy: $\mathbf{E} \le C_\mu$. What do these quantities tell us? Perhaps the most surprising observation is that there is a large class of cryptic processes for which E ≪ C μ 5 . The structural mechanism behind this difference is characterized by the cryptic order: the smallest L such that $H[\mathcal{S}_L \mid X_{0:\infty}] = 0$. A related and more familiar property is the Markov order: the smallest L such that $H[\mathcal{S}_L \mid X_{0:L}] = 0$. Markov order reflects a process’ historical dependence. These orders are independent apart from the fact that k ≤ R 20,21 . It is worth pointing out that the equality E = C μ is obtained exactly for cryptic order k = 0 and, furthermore, that this corresponds with counifilarity: for each state σ′ and each symbol x, there is at most one prior state σ that leads to σ′ on a transition generating x 21 .
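As a quick numerical sketch, C μ is just the Shannon entropy of the stationary causal-state distribution. For the order-1 Golden Mean Process (discussed later), the balance equations give π = (2/3, 1/3):

```python
import numpy as np

# Stationary causal-state distribution of the Golden Mean Process,
# obtained from its balance equations: pi_A = 2/3, pi_B = 1/3.
pi = np.array([2/3, 1/3])
C_mu = -np.sum(pi * np.log2(pi))   # statistical complexity C_mu = H[pi]
print(round(C_mu, 4))              # 0.9183
```

For that process E ≈ 0.25 bits at generic parameters, so the bound E ≤ C μ is far from tight; cryptic processes push this gap wide open.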
These properties play a key role in the following communication scenario, where we have a given process’ ε-machine in hand. Alice and Bob each have a copy. Since she has been following the process for some time, using her ε-machine Alice knows that the process is currently in state σ j , say. From this knowledge, she can use her ε-machine to make the optimal probabilistic prediction $\Pr(X_{0:L} \mid \sigma_j)$ about the process’ future (and do so over arbitrarily long horizons L). While Bob is able to produce all such predictions from each of his ε-machine’s states, he does not know which particular state is currently relevant to Alice. We say that Bob and Alice are unsynchronized.
To communicate the relevant state to Bob, Alice must send at least C μ bits of information. More precisely, to communicate this information for an ensemble (size N → ∞) of causal states, she may, by the Shannon noiseless coding theorem 13 , send NC μ bits. Under this interpretation, C μ is a fundamental measure of a process’ structure in that it characterizes not only the correlation between past and future, but also the mechanism of prediction. In the scenario with Alice and Bob, C μ is seen as the communication cost to synchronize. We can also imagine Alice using this channel to communicate with her future self. In this light, C μ is understood as a fundamental measure of a process’ internal memory.
Scientific Reports | 6:20495 | DOI: 10.1038/srep20495

Results
Quantum Synchronization. What if Alice can send qubits to Bob? Consider a communication protocol in which Alice encodes the causal state in a quantum state that is sent to Bob. Bob then extracts the information through measurement of this quantum state. Their communication is implemented via a quantum object, the q-machine, that simulates the original stochastic process. It sports a single parameter that sets the horizon length L of future words incorporated in the quantum-state superpositions it employs. We monitor the q-machine protocol’s efficacy by comparing the quantum-state information transmission rate to the classical causal-state rate (C μ ).
The q-machine M(L) consists of a set $\{|\eta_k(L)\rangle\}$ of pure signal states that are in one-to-one correspondence with the classical causal states $\sigma_k \in \mathcal{S}$. Each signal state $|\eta_k(L)\rangle$ encodes the set of length-L words that may follow σ k , as well as each corresponding conditional probability used for prediction from σ k . Fixing L, we construct quantum states of the form:

$|\eta_k(L)\rangle = \sum_{w^L} \sqrt{\Pr(w^L \mid \sigma_k)}\; |w^L\rangle |\sigma_j\rangle$,

where the sum runs over all length-L words $w^L$. Due to ε-machine unifilarity, a word $w^L$ following a causal state σ k leads to only one subsequent causal state; thus the successor state σ j appearing in each term is well defined. Additionally, insight about the q-machine can be gained through its connection with the classical concatenation machine defined in ref. 22: the q-machine M(L) is equivalent to the q-machine M(1) derived from the Lth concatenation machine.
Having specified the state space, we now describe how the q-machine produces symbol sequences. Given one of the pure quantum signal states, we perform a projective measurement in the $\{|w^L\rangle\}$ basis. This results in a symbol string $w^L = x_0 \cdots x_{L-1}$, which we take as the next L symbols in the generated process. Since the ε-machine is unifilar, the post-measurement conditional state must be some basis state $|\sigma_k\rangle$ of the causal-state register. Subsequent measurement in this basis then indicates the corresponding classical causal state with no uncertainty.
Observe that the probability of a word $w^L$ given quantum state $|\eta_k\rangle$ is equal to the probability of that word given the analogous classical state σ k . Also, the classical knowledge of the subsequent corresponding causal state can be used to prepare a subsequent quantum state for continued symbol generation. Thus, the q-machine generates the desired stochastic process and is, in this sense, equivalent to the classical ε-machine.
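This equivalence can be checked numerically. Below is a sketch, under an assumed dictionary encoding, that builds a signal state of the order-1 Golden Mean Process (discussed later) as an explicit vector in the $|w^L\rangle|\sigma\rangle$ basis and confirms that the squared amplitudes reproduce the classical word distribution:

```python
import itertools
import numpy as np

# Illustrative encoding: {state: {symbol: (probability, successor)}}.
machine = {"A": {"1": (0.5, "A"), "0": (0.5, "B")}, "B": {"1": (1.0, "A")}}
states = sorted(machine)                                   # ["A", "B"]
alphabet = sorted({x for edges in machine.values() for x in edges})

def word_prob(sigma, word):
    """Pr(word | sigma) and the unique successor state (unifilarity)."""
    p = 1.0
    for x in word:
        if x not in machine[sigma]:
            return 0.0, None
        q, sigma = machine[sigma][x]
        p *= q
    return p, sigma

def signal_state(sigma, L):
    """|eta_sigma(L)> as a vector in the |word>|state> product basis."""
    words = ["".join(w) for w in itertools.product(alphabet, repeat=L)]
    vec = np.zeros(len(words) * len(states))
    for i, w in enumerate(words):
        p, succ = word_prob(sigma, w)
        if p > 0:
            vec[i * len(states) + states.index(succ)] = np.sqrt(p)
    return vec, words

eta_A, words = signal_state("A", 2)
# Squared amplitudes, summed over the successor register, give Pr(w | A).
for i, w in enumerate(words):
    amp2 = np.sum(eta_A[i * len(states):(i + 1) * len(states)] ** 2)
    assert np.isclose(amp2, word_prob("A", w)[0])
print("measurement statistics match the classical word distribution")
```

Measuring the word register thus generates the process exactly, and the surviving successor basis state tells the preparer which signal state to prepare next.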
Focus now on the q-machine's initial quantum state: We see this mixed quantum state is composed of pure signal states combined according to the probabilities of each being prepared by Alice (or being realized by the original process that she observes). These are simply the probabilities of each corresponding classical causal state, which we take to be the stationary distribution: p i = π i . In short, quantum state ρ(L) is what Alice must transmit to Bob for him to successfully synchronize. Later, we revisit this scenario to discuss the tradeoffs associated with the q-machine representation. If Alice sends a large number N of these states, she may, according to the quantum noiseless coding theorem 23 , compress this message into NS(ρ(L)) qubits, where S is the von Neumann entropy S(ρ) = tr(ρ log(ρ)). Due to its parallel with C μ , and for convenience, we define the function: Recall that, classically, Alice must send NC μ bits. To the extent that NC q (L) is smaller, the quantum protocol will be more efficient. In this particular sense, the q-machine is a compressed representation of the original process and its ε-machine.
Example Processes: C q (L). Let's now draw out specific consequences of using the q-machine. We explore protocol efficiency by calculating C q (L) for several example processes, each chosen to illustrate distinct properties: the q-machine affords a quantum advantage, further compression can be found at longer horizons L, and the compression rate is minimized at the horizon length k-the cryptic order of the classical process 21 .
For each example, we examine a process family by sweeping one transition probability parameter, illustrating C q (L) and its relation to classical bounds C μ and E. Additionally, we highlight a single representative process at one generic transition probability. Following these examples, we turn to discuss more general properties of q-machine compression that apply quite broadly and how the results alter our notion of quantum structural complexity.
Biased Coins Process. The Biased Coins Process provides a first, simple case that realizes a nontrivial quantum state entropy 1 . There are two biased coins, named A and B. The first generates 0 with probability p; the second, 0 with probability 1 − p. At each step, one coin is flipped, and which coin is flipped depends on the result of the previous flip: if the previous flip yielded a 1, the next flip is made using coin B; if it yielded a 0, the next flip is made using coin A. Its two-causal-state ε-machine is shown in Fig. 1(top).
Consider p ≈ 1/2. The generated sequence is close to that of a fair coin. And, starting with coin A or B makes little difference to the future; there is little to predict about future sequences. This intuition is quantified by the predictable information E ≈ 0, when p is near 1/2. See Fig. 1(left).
In contrast, since the causal states have equal probability, C μ = 1 bit independent of parameter p. (All information measures are quoted in log base 2.) This is because there is always some, albeit very little, predictive advantage to remembering whether the last symbol was 0 or 1. Retaining this advantage, however small, requires the use of an entire (classical) bit. The gap between C μ and E presents an opportunity for large quantum improvement. Only at the exact value p = 1/2 do the two causal states merge; there this advantage disappears, and the process becomes memoryless, or independent, identically distributed (IID). This is reflected in the discontinuity of C μ as p → 1/2, which is sometimes misinterpreted as a deficiency of C μ . Contrariwise, this feature follows naturally from the equivalence relation Eq. (1) and is a signature of symmetry. Now, let’s consider these complexities in the quantum setting, where we monitor communication costs using C q (L). To understand its behavior, we first write down the q-machine’s states. At L = 0 these are simply the orthogonal causal-state basis states $|\sigma_A\rangle$ and $|\sigma_B\rangle$; at L = 1 they are:

$|\eta_A(1)\rangle = \sqrt{p}\,|0\rangle|\sigma_A\rangle + \sqrt{1-p}\,|1\rangle|\sigma_B\rangle$,
$|\eta_B(1)\rangle = \sqrt{1-p}\,|0\rangle|\sigma_A\rangle + \sqrt{p}\,|1\rangle|\sigma_B\rangle$.

The von Neumann entropy of the former is simply the Shannon information of the signal-state distribution; that is, C q (0) = C μ . In the latter, however, the two quantum states have a nonzero overlap (inner product), $\langle\eta_A(1)|\eta_B(1)\rangle = 2\sqrt{p(1-p)}$. This implies that the von Neumann entropy is smaller than the Shannon entropy. (Note that for p ∈ {0, 1/2, 1} these quantities are all equal and equal to zero. This comes from the simplification of process topology caused by state merging dictated by the predictive equivalence relation, Eq. (1).) How do the costs change with sequence length L? To see this, Fig. 1(right) expands the left view for a single value of p. As expected, C q (L) decreases from L = 0 to L = 1. However, it then remains constant for all L ≥ 1. There is no additional quantum state-compression afforded by expanding the q-machine to use longer horizons.
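The mechanism at work here is a standard fact: mixing two equiprobable pure states with real overlap c gives density-matrix eigenvalues (1 ± c)/2, so the von Neumann entropy is the binary entropy H((1 + c)/2), strictly below one bit whenever c > 0. A quick numerical check with illustrative overlap values:

```python
import numpy as np

def H2(x):
    """Binary entropy in bits."""
    return -x*np.log2(x) - (1-x)*np.log2(1-x) if 0 < x < 1 else 0.0

for c in [0.0, 0.3, 0.9]:
    # Two unit vectors with real overlap c, mixed with equal weights.
    a = np.array([1.0, 0.0])
    b = np.array([c, np.sqrt(1 - c**2)])
    rho = 0.5 * np.outer(a, a) + 0.5 * np.outer(b, b)
    evals = np.linalg.eigvalsh(rho)
    S = -sum(l * np.log2(l) for l in evals if l > 1e-12)
    assert np.isclose(S, H2((1 + c) / 2))   # eigenvalues are (1 ± c)/2
print("S = H2((1+c)/2) for all tested overlaps")
```

The larger the overlap, the less distinguishable the signal states and the fewer qubits needed, which is exactly the quantum advantage being tracked by C q (L).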
The Biased Coins Process has been analyzed earlier using a construction equivalent to an L = 1 q-machine 1 , similarly finding that the number of required qubits falls between E and C μ . The explanation there for this compression (C q (1) < C μ ) was the lack of counifilarity in the process’ ε-machine. More specifically, ref. 1 showed that E = C q = C μ if and only if the ε-machine is counifilar, and E < C q < C μ otherwise. The Biased Coins Process is easily seen to be noncounifilar, and so the inequality follows. This previous analysis happens to be sufficient for the Biased Coins Process, since C q (L) does not decrease beyond L = 1. Unfortunately, only this single, two-state process was analyzed previously when, in fact, the space of processes is replete with richly structured behaviors 26 . With this in mind, and to show the power of the q-machine, we step into deeper water and consider a 7-state process that is almost periodic with a random phase-slip.

R-k Golden Mean Process.
The R-k Golden Mean Process is a useful generalization of the Markov order-1 Golden Mean Process that allows for the independent specification of Markov order R and cryptic order k 20,21 . Figure 2(top) illustrates its ε-machine. We take R = 4 and k = 3.
The calculations in Fig. 2(left) show again that C q (L) generically lies between E and C μ , across this family of processes. In contrast with the previous example, C q (L) continues to decrease beyond L = 1. Figure 2(right) illustrates that the successive q-machines continue to reduce the von Neumann entropy: C μ > C q (1) > C q (2) > C q (3). However, there is no further improvement beyond a future-depth of L = 3, the cryptic order: C q (3) = C q (L > 3). It is important to note that the compression improvements at stages L = 2 and L = 3 are significant. Therefore, a length-1 quantum representation misses the majority of the quantum advantage.
To understand these results, we need to sort out how quantum compression stems from noncounifilarity. In short, the latter leads to quantum signal states with nonzero overlap, which allows for super-classical compression. Let’s explain using the current example. There is one noncounifilar state in this process, state A. Both states A and G lead to A on a symbol 1. Due to this, at L = 1, the two q-machine states $|\eta_A(1)\rangle$ and $|\eta_G(1)\rangle$ acquire a nonzero overlap $\langle \eta_A | \eta_G \rangle$. (All other overlaps in the L = 1 q-machine vanish.) As with the Biased Coins Process, this leads to the inequality C q (1) < C μ .
Extending the representation to L = 2 words, we find three mutually nonorthogonal quantum states: $|\eta_A(2)\rangle$, $|\eta_F(2)\rangle$, and $|\eta_G(2)\rangle$. Note that the overlap $\langle \eta_A | \eta_G \rangle$ is unchanged. This is because the conditional futures are identical once the merger on symbol 1 has taken place. That is, the words 11 and 10, which contribute to the L = 2 overlap $\langle \eta_A | \eta_G \rangle$, simply derive from the prefix 1, which was the source of the overlap at L = 1. In order to obtain a change in this or any other overlap, there must be a new merger-inducing prefix (for that state-pair). (See Sec. 5 for computational implications.) Since all quantum amplitudes are positive, each pairwise overlap is a nondecreasing function of L.
At L = 2 we have two such new mergers: 11 for $\langle \eta_A | \eta_F \rangle$ and 11 for $\langle \eta_F | \eta_G \rangle$. This additional increase in pairwise overlaps leads to a second decrease in the von Neumann entropy. (See Sec. 3 for details.) Then, at L = 3, we find three new mergers: 111 for $\langle \eta_A | \eta_E \rangle$, 111 for $\langle \eta_E | \eta_F \rangle$, and 111 for $\langle \eta_E | \eta_G \rangle$. As before, the pre-existing mergers simply acquire suffixes and do not change the degree of overlap.
Importantly, we find that at L = 4 there are no new mergers. That is, any length-4 word that leads to the merging of two states must merge before the fourth symbol. In general, the length at which the last merger occurs is equivalent to the cryptic order 21 . Further, it is known that the von Neumann entropy is a function of pairwise overlaps of signal states 27 . Therefore, a lack of new mergers, and thus constant overlaps, implies that the von Neumann entropy is constant. This demonstrates that C q (L) is constant for L ≥ k, for k the cryptic order.
The R-k Golden Mean Process was selected to highlight the unique role of the cryptic order, by drawing a distinction between it and Markov order. The result emphasizes the physical significance of the cryptic order. In the example, it is not until L = 4 that a naive observer can synchronize to the causal state; this is shown by the Markov order. For example, the word 000 induces two states D and E. Just one more symbol synchronizes to either E (on 0) or F (on 1). Yet recall that synchronization can come about in two ways. A word may either induce a path merger or a path termination. All merger-type synchronizations must occur no later than the last termination-type synchronization. This is equivalently stated: the cryptic order is never greater than the Markov order 21 .
In the current example, we observe this termination-type of synchronization on the symbol following 000. For instance, 0000 does not lead to the merger of paths originating in multiple states. Rather, it eliminates the possibility that the original state might have been B.
It is the final merger-type synchronization at L = 3 that leads to the final unique-prefix quantum merger and, thus, to the ultimate minimization of the von Neumann entropy. So, we see that in the context of the q-machine, the most efficient state compression is accomplished at the process’ cryptic order. (One could certainly continue beyond the cryptic order, but at best this increases implementation cost with no functional benefit.)

Nemo Process. To demonstrate the challenges in quantum compressing typical memoryful stochastic processes, we conclude our set of examples with the seemingly simple three-state Nemo Process, shown in Fig. 3(top). Despite its overt simplicity, both its Markov and cryptic orders are infinite. As one should now anticipate, each increase in the length L affords a smaller and smaller state entropy, yielding the infinite chain of inequalities: $C_\mu > C_q(1) > C_q(2) > C_q(3) > \cdots$. Figure 3(right) verifies this. The sequence approaches the asymptotic value C q (∞) ≃ 1.0332. We also notice that the convergence of C q (L) is richer than in the previous processes: while the sequence monotonically decreases (and at each p), it is not convex in L. For instance, the fourth quantum incremental improvement is greater than the third.
We now turn to discuss the broader theory that underlies the preceding analyses. We first address the convergence properties of C q (L), then the importance of studying the full range of memoryful stochastic processes, and finally the tradeoffs between synchronization, compression, and prediction.

C q (L) Monotonicity. It is important to point out that while we observed nonincreasing C q (L) in our examples, this does not constitute proof. The latter is nontrivial, since ref. 27 showed that each pairwise overlap of signal states can increase while the von Neumann entropy also increases. (This assumes a constant distribution over signal states.) Furthermore, this phenomenon occurs with nonzero measure. Ref. 27 also provided a criterion that can exclude this somewhat nonintuitive behavior. Specifically, if the element-wise ratio matrix R of two Gram matrices of signal states is a positive operator, then strictly increasing overlaps imply a decreasing von Neumann entropy. We note, however, that there exist processes with ε-machines for which the R matrix is nonpositive. At the same time, we have found no example of an increasing C q (L).
So, while it appears that a new criterion is required to settle this issue, the preponderance of numerical evidence suggests that C q (L) is indeed monotonically decreasing. In particular, we verified C q (L) monotonicity for many processes drawn from the topological -machine library 28 . Examining 1000 random samples of two-symbol, N-state processes for 2 ≤ N ≤ 7 yielded no counterexamples. Thus, failing a proof, the survey suggests that this is the dominant behavior.
Infinite Cryptic Order Dominates. The Biased Coins Process, being cryptic order k = 1, is atypical.
Previous exhaustive surveys demonstrated the ubiquity of infinite Markov and cryptic orders within process space. For example, Fig. 4 shows the distribution of different Markov and cryptic orders for processes generated by six-state, binary-alphabet, exactly-synchronizing ε-machines 29 . The overwhelming majority have infinite Markov and cryptic orders. Furthermore, among those with finite cryptic order, orders zero and one are not common. Such surveys, in combination with the apparent monotonic decrease of C q (L), confirm that, when it comes to general claims about compressibility and complexity, it is advantageous to extend analyses to long sequence lengths.
Prediction-Compression Trade Off. Let’s return to Alice and Bob in their attempt to synchronize on a given stochastic process, to explore somewhat subtle trade-offs in compressibility, prediction, and complexity. Figure 5 illustrates the difference in their ability to generate probabilistic predictions about the future given the historical data. There, Alice is in causal state A (signified by $\mathcal{A}$ for Alice). Her prediction "cone" is depicted in light gray. It depicts the span over which she can generate probabilistic predictions conditioned on the current causal state (A). She chooses to map this classical causal state to an L = 3 q-machine state and send it to Bob. (Whether this is part of an ensemble of other such states or not affects the rate of qubit transmission, but not the following argument.) It is important to understand that Bob cannot actually determine the corresponding causal state (at time t = 0). He can, however, make a measurement that results in some symbol sequence of length 3 followed by a definite (classical) causal state. In the figure, he generates the sequence 111 followed by causal state A at time t = 3. This is shown by the blue state-path ending in $\mathcal{B}$ for Bob. Now Bob is in a position to generate corresponding conditional predictions: $\mathcal{B}$’s future cone $\Pr(X_{3:\infty} \mid \mathcal{B})$ (dark gray). As the figure shows, this cone is only a subprediction of Alice’s. That is, it is equivalent to Alice’s prediction conditioned on her observation of 111 or any other word leading to the same state. Now, what can Bob say about times t = 0, 1, 2? The light blue states and dashed edges in the figure show the alternate paths that could have also led to his measurement of the sequence 111 and state A. For instance, Bob can only say that Alice might have been in causal state A, D, or E at time t = 0.
In short, the quantum representation led to Bob's uncertainty about the initial state sequence and, in particular, Alice's prediction. Altogether, we see that the quantum representation gains compressibility at the expense of Bob's predictive power.
What if Alice does not bother to compute k and, wanting to make good use of quantum compressibility, uses an L = 1000 q-machine? Does this necessarily translate into Bob being uncertain about the first 1000 states and, therefore, holding only a highly conditional prediction? In our example, Alice was not quite so enthusiastic and settled for the L = 3 q-machine. We see that Bob can use his current state A at t = 3 and knowledge of the word that led to it to infer that the state at t = 2 must have been A. The figure denotes his knowledge of this state by $\mathcal{B}'$. For other words he may be able to trace farther back. (For instance, 000 can be traced back from D at t = 3 all the way to A at t = 0.) The situation chosen in the figure illustrates the worst-case scenario for this process, where he is able to trace back and discover all but the first 2 states. The worst-case scenario defines the cryptic order k, in this case k = 2. After this tracing back, Bob is then able to make the improved statement, "If Alice observes symbols 11, then her conditional prediction will be $\Pr(X_{0:\infty} \mid A)$". This means that Alice and Bob do not suffer from overcoding–using an L in excess of k.
Finally, one feature that is unaffected by such manipulations is the ability of Alice and Bob to generate a single future instance drawn from the distribution $\Pr(X_{0:\infty} \mid A)$. This helps to emphasize that generation is distinct from prediction. Note that this is true for the q-machine M(L) at any length.

Methods
Let’s explain the computation of C q (L). First, note that the dimension of the q-machine M(L) Hilbert space grows as $|\mathcal{A}|^L$ ($|\mathcal{A}|^{2L}$ for the density operators), where $\mathcal{A}$ is the alphabet. That is, computing C q (L = 20) for the Nemo Process involves finding eigenvalues of a matrix with roughly $10^{12}$ elements. Granted, these matrices are often sparse, but the number of components in each signal state still grows exponentially, with the topological entropy rate of the process. This alone would drive computations for even moderately complex processes (described by moderate-sized ε-machines) beyond the reach of contemporary computers.
Recall, though, that at any L there are still only |S| quantum signal states to consider. Therefore, the embedding of this constant-sized subspace wastes an exponential amount of the embedding space. We desire a computation of C q (L) that is independent of the diverging embedding dimension.
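One standard route to such an embedding-independent computation (a sketch under assumed placeholder values, not necessarily the authors’ implementation) rests on a linear-algebra fact: the nonzero spectrum of $\rho = \sum_i \pi_i |\eta_i\rangle\langle\eta_i|$ equals that of the |S| × |S| Gram matrix $G_{ij} = \sqrt{\pi_i \pi_j}\,\langle\eta_i|\eta_j\rangle$, so only pairwise overlaps are ever needed:

```python
import numpy as np

def vn_entropy(evals):
    """Von Neumann entropy, in bits, from a spectrum."""
    return -sum(l * np.log2(l) for l in evals if l > 1e-12)

# Hypothetical two-state setup: stationary distribution pi and one pairwise
# signal-state overlap c (both values are illustrative placeholders).
pi = np.array([2/3, 1/3])
c = np.sqrt(0.5)              # <eta_1|eta_2>, assumed real and nonnegative

# Large route: rho in the full space, from explicit unit vectors with overlap c.
e1 = np.array([1.0, 0.0])
e2 = np.array([c, np.sqrt(1 - c**2)])
rho = pi[0] * np.outer(e1, e1) + pi[1] * np.outer(e2, e2)

# Small route: the |S| x |S| Gram matrix shares rho's nonzero spectrum.
G = np.sqrt(np.outer(pi, pi)) * np.array([[1.0, c], [c, 1.0]])

assert np.allclose(np.sort(np.linalg.eigvalsh(rho)),
                   np.sort(np.linalg.eigvalsh(G)))
print(vn_entropy(np.linalg.eigvalsh(G)))
```

Since the Gram matrix is fixed at |S| × |S| for every L, the remaining work is tracking how each pairwise overlap grows with L, which is what the pairwise-merger machine below organizes.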
Another source of difficulty is the exponentially increasing number of words with L. However, we need only consider a small subset of these words. Once a merger has occurred between states $|\eta_i\rangle$ and $|\eta_j\rangle$ on word w, subsequent symbols, while maintaining that merger, do not add to the corresponding overlap. That is, the contribution to the overlap $\langle\eta_i|\eta_j\rangle$ by all words with prefix w is complete. To take advantage of these two opportunities for reduction, we compute C q (L) in the following manner. First, we construct the "pairwise-merger machine" (PMM) from the ε-machine. The states of the PMM are unordered pairs of causal states. A pair-state (σ i , σ j ) leads to (σ m , σ n ) on symbol x if σ i leads to σ m on x and σ j leads to σ n on x. (Pairs are unordered, so (σ m , σ n ) = (σ n , σ m ).) If both components in a pair-state lead to the same causal state, then this represents a merger. Of course, these mergers from pair-states occur only when entering noncounifilar states of the ε-machine. If either component state forbids subsequent emission of symbol x, then that edge is omitted. The PMMs for the three example processes are shown in Fig. 6. Now, making use of the PMM, we begin at each noncounifilar state and proceed backward through the pair-state transient structure. At each horizon length, we record the pair-states visited and with what probabilities. This allows computing each increment to each overlap. Importantly, by moving up the transient structure, we avoid keeping track of any further novel overlaps; they are all "behind us". Additionally, the finite number of pair-states gives us a finite structure through which to move; when the end of a branch is reached, its contributions cease. It is worth noting that this pair-state transient structure may contain cycles (as it does for the Nemo Process).

Figure 5. Trading prediction for quantum compression: $\mathcal{A}$ is Alice’s state of predictive knowledge. $\mathcal{B}$ is that for Bob, except when he uses the process’ ε-machine to refine it, in which case his predictive knowledge becomes $\mathcal{B}'$, which can occur at a time no earlier than that determined by the cryptic order k.

Figure 6. Pairwise-merger machines for our three example processes. Pair-states (red) lead to each other or enter the ε-machine at a noncounifilar state. For example, in the R-k Golden Mean (middle), the two pair-states AF and FG both lead to pair-state AG on 0. Then pair-state AG leads to state A, the only noncounifilar state in this ε-machine.
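The PMM construction described above can be sketched from the transition topology alone, since probabilities are not needed to locate mergers. The dictionary encoding and the order-1 Golden Mean input below are illustrative assumptions:

```python
# Build the pairwise-merger machine (PMM) from a unifilar transition
# topology {state: {symbol: successor}}.  Encoding is illustrative; the
# input machine is the order-1 Golden Mean Process.
machine = {"A": {"1": "A", "0": "B"}, "B": {"1": "A"}}

def pmm(machine):
    """Map each unordered pair-state to its symbol-labeled successors.

    A successor that collapses to a single causal state is a merger,
    which can only happen on entry to a noncounifilar state."""
    states = sorted(machine)
    out = {}
    for i, s in enumerate(states):
        for t in states[i + 1:]:
            edges = {}
            for x in set(machine[s]) & set(machine[t]):  # both must allow x
                edges[x] = frozenset({machine[s][x], machine[t][x]})
            out[frozenset({s, t})] = edges
    return out

for pair, edges in pmm(machine).items():
    for x, nxt in sorted(edges.items()):
        label = "merger" if len(nxt) == 1 else "pair-state"
        print(sorted(pair), "--%s-->" % x, sorted(nxt), "(%s)" % label)
        # prints: ['A', 'B'] --1--> ['A'] (merger)
```

Walking this finite pair-state structure backward from the noncounifilar states, while accumulating word probabilities, yields each overlap increment without ever enumerating the full exponential word set.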