A Rewritable, Random-Access DNA-Based Storage System

We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile medium suitable for both ultrahigh-density archival and rewritable storage applications.

Addressing the emerging demands for massive data repositories, and building upon the rapid development of technologies for DNA synthesis and sequencing, a number of laboratories have recently outlined architectures for archival DNA-based storage [1,2,3,4,5]. The architecture in [3] achieved a storage density of 700 TB/gram, while the system described in [4] raised the density to 2.2 PB/gram. The success of the latter method may be largely attributed to three classical coding schemes: Huffman coding, differential coding, and single parity-check coding [4]. Huffman coding was used for data compression, while differential coding was used for eliminating homopolymers (i.e., repeated consecutive bases) in the DNA strings. Parity-checks were used to add controlled redundancy, which in conjunction with four-fold coverage allows for mitigating assembly errors.
Due to dynamic changes in biotechnological systems, none of the three coding schemes represents a suitable solution from the perspective of current DNA sequencer designs: Huffman codes are fixed-to-variable length compressors that can lead to catastrophic error propagation in the presence of sequencing noise; the same is true of differential codes. Homopolymers do not represent a significant source of errors in Illumina sequencing platforms [6], while single parity redundancy or RS codes and differential encoding are inadequate for combating error-inducing sequence patterns such as long substrings with high GC content [6]. As a result, assembly errors are likely, and were observed during the readout process described in [4].
An even more important issue that prohibits the practical widespread use of the schemes described in [3,4] is that accurate partial and random access to data is impossible, as one has to reconstruct the whole text in order to read or retrieve the information encoded even in a few bases. Furthermore, all current designs support read-only storage. The first limitation represents a significant drawback, as one usually needs to accommodate access to specific data sections; the second limitation prevents the use of current DNA storage methods in architectures that call for moderate data editing, for storing frequently updated information and for memorizing the history of edits. Moving from a read-only to a rewritable DNA storage system requires a major implementation paradigm shift, as: 1. Editing in the compressive domain may require rewriting almost the whole information content; 2. Rewriting is complicated by the current DNA data storage format that involves reads of length 100 bps shifted by 25 bps so as to ensure four-fold coverage of the sequence (see Figure 1.1 (a) for an illustration and description of the data format used in [4]); in order to rewrite one base, one needs to selectively access and modify four "consecutive" reads; 3. Addressing methods used in [3,4] only allow for determining the position of a read in a file, but cannot ensure precise selection of reads of interest, as undesired cross-hybridization between the primers and parts of the information blocks may occur.
To overcome the aforementioned issues, we developed a new, random-access and rewritable DNA-based storage architecture based on DNA sequences endowed with specialized address strings that may be used for selective information access and encoding with inherent error-correction capabilities. The addresses are designed to be mutually uncorrelated and to satisfy the error-control running digital sum constraint [7,8]. Given the address sequences, encoding is performed by stringing together properly terminated prefixes of the addresses as dictated by the information sequence. This encoding method represents a special form of prefix-synchronized coding [9]. Given that the addresses are chosen to be uncorrelated and at large Hamming distance from each other, it is highly unlikely for one address to be confused with another address or with another section of the encoded blocks. Furthermore, selection of the blocks to be rewritten is made possible by the prefix encoding format, while rewriting is performed via two DNA editing techniques, the gBlock and OE-PCR (overlap-extension polymerase chain reaction) methods [10,11]. With the latter method, rewriting is done in several steps by using short and cheap primers. The first method is more efficient, but requires synthesizing longer and hence more expensive primers. Both methods were tested on DNA-encoded Wikipedia entries of size 17 KB, corresponding to six universities, where information in one, two and three blocks was rewritten in the DNA-encoded domain. The rewritten blocks were selected, amplified and Sanger sequenced [12] to verify that selection and rewriting are performed with 100% accuracy.

Results
The main feature of our storage architecture that enables highly sensitive random access and accurate rewriting is addressing. The rationale behind the proposed approach is that each block in a random-access system must be equipped with an address that allows for unique selection and amplification via DNA sequence primers.
Instead of storing blocks mimicking the structure and length of reads generated during high-throughput sequencing, we synthesized blocks of length 1000 bps tagged at both ends by specially designed address sequences. Adding addresses to short blocks of length 100 bps would incur a large storage overhead, while synthesizing blocks longer than 1000 bps using current technologies is prohibitively costly.
More precisely, each data block of length 1000 bps was flanked at both ends by two unique, yet different, address blocks of length 20 bps. These addresses are used to provide specificity of access (see Figure 1.1 (b) and the Supplementary Information for details). The remaining 960 bases in a block are divided into 12 sub-blocks of length 80 bps, with each sub-block encoding six words of the text. The "word-encoding" process may be seen as a specialized compaction scheme suitable for rewriting, and it operates as follows. First, the distinct words in the text are counted and tabulated in a dictionary. Each word in the dictionary is converted into a binary sequence of length sufficiently long to allow for encoding of the dictionary. For our current implementation and texts of choice, described in the Supplementary Information section, this length was set to 24. Encodings of six consecutive words are subsequently grouped into binary sequences of length 144. The two-bit string 11 is appended as a word marker to the left-hand side of each binary sequence of length 144, resulting in sequences of length 146 bits. The binary sequences are subsequently translated into DNA blocks of length 80 bps using a new family of DNA prefix-synchronized codes described in the Methods section. Our choice for the number of jointly encoded words is governed by the goal to make rewrites as straightforward as possible and to avoid error propagation due to variable codelengths. Furthermore, as most rewrites include words, rather than individual symbols, the word-encoding method represents an efficient means for content update. Details regarding the counting and grouping procedure may be found in the Supplementary Information.
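As an illustration, the grouping-and-marking step can be sketched in a few lines of Python. The function names, the toy text, and the direct word-to-index assignment are ours for illustration; the final translation of the 146-bit string into an 80 bps DNA sub-block uses the prefix-synchronized codes of the Methods section and is not reproduced here.

```python
def build_dictionary(text):
    """Tabulate the distinct words of the text and assign each a dictionary index."""
    words = text.split()
    dictionary = {w: i for i, w in enumerate(dict.fromkeys(words))}
    return words, dictionary

def encode_group(six_words, dictionary, width=24):
    """Map six consecutive words to a 146-bit string: the two-bit marker '11'
    followed by six fixed-width (24-bit) word indices."""
    assert len(six_words) == 6
    payload = "".join(format(dictionary[w], f"0{width}b") for w in six_words)
    return "11" + payload  # 2 + 6 * 24 = 146 bits

words, dictionary = build_dictionary("the quick brown fox jumps over the lazy dog")
group = encode_group(words[:6], dictionary)
print(len(group))  # 146
```

Twelve such 146-bit groups, each carried by one 80 bps sub-block, account for the 72 words stored per 1000 bps block.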
For three selected access queries, the 1000 bps blocks containing the desired information were identified via primers corresponding to their unique addresses, PCR amplified, Sanger sequenced, and subsequently decoded.
Two methods were used for content rewriting. If the region to be rewritten had length exceeding several hundred base pairs, new sequences with unique primers were synthesized, as this solution represents a less costly alternative to rewriting. When a relatively short substring of the encoded string had to be modified, the corresponding 1000 bps block hosting the string was identified and the changes were introduced via DNA editing.

Figure 1.1: (a) The storage format of [4] consists of DNA strings that cover the encoded compressed text in fragments of length 100 bps. The fragments overlap in 75 bps, thereby providing 4-fold coverage for all except the flanking end bases. This particular fragmenting procedure prevents efficient file editing: if one were to rewrite the "shaded" block, all four fragments containing this block would need to be selected and rewritten at different positions to record the new "shaded" block. (b) The address sequence construction process, using the notions of autocorrelation and cross-correlation of sequences [13]. A sequence is uncorrelated with itself if no proper prefix of the sequence is also a suffix of the same sequence; equivalently, no shift of the sequence overlaps with the sequence itself. Similarly, two different sequences are uncorrelated if no prefix of one sequence matches a suffix of the other. Addresses are chosen to be mutually uncorrelated, and each 1000 bps block is flanked by one address of length 20 on the left and by another address of length 20 on the right (colored ends). (c) Content rewriting via DNA editing: the gBlock method [10] for short rewrites, and the cost-efficient OE-PCR (overlap-extension PCR) method [11] for sequential rewriting of longer blocks.
Both the random access and rewriting protocols were tested experimentally on two jointly stored text files. One text file, of size 4 KB, contained the history of the University of Illinois, Urbana-Champaign (UIUC) based on its Wikipedia entry retrieved on 12/15/2013. The other text file, of size 13 KB, contained the introductory Wikipedia entries of Berkeley, Harvard, MIT, Princeton, and Stanford, retrieved on 04/27/2014.
Encoded information was converted into DNA blocks of length 1000 bps synthesized by IDT (Integrated DNA Technologies), at a cost of $149 per 1000 bps (see http://www.idtdna.com/pages/products/genes/gblocks-gene-fragments). The rewriting experiments encompassed: 1. PCR selection and amplification of one 1000 bps sequence, and simultaneous selection and amplification of three 1000 bps sequences in the pool. All 32 linear 1000 bps fragments were mixed, and the mixture was used as a template for PCR amplification and selection. The results of amplification were verified by confirming sequence lengths of the 1000 bps bands via gel electrophoresis (Figure 1.2 (a)) and by randomly sampling 3-5 sequences from the pools and Sanger sequencing them (Figure 1.2 (b)).
2. Experimental content rewriting via synthesis of edits located at various positions in the 1000 bps blocks. For simplicity of notation, we refer to the blocks in the pool on which we performed selection and editing as B1, B2, and B3. Two primers were synthesized for each rewrite in the blocks, for the forward and reverse directions. In addition, two different editing/mutation techniques were used, gBlock and overlap-extension (OE) PCR. gBlocks are double-stranded genomic fragments used as primers or for the purpose of genome editing, while OE-PCR is a variant of PCR used for specific DNA sequence editing via point editing/mutations or splicing. To demonstrate the plausibility of a cost-efficient method for editing, OE-PCR was implemented with general primers (≤ 60 bps) only. Note that for edits shorter than 40 bps, the mutation sequences were designed as overhangs in primers. Then, the three PCR products were used as templates for the final PCR reaction involving the entire 1000 bps rewrite. Figure 1.1 (c) illustrates the described rewriting process. In addition, a summary of the experiments performed is provided in Table S3. Given that each nucleotide has weight roughly equal to 650 Da (650 × 1.67 × 10^−24 grams), and given that 27,000 + 5,000 = 32,000 bps were needed to encode a file of size 13 + 4 = 17 KB in ASCII format, we estimate a potential storage density of 4.9 × 10^20 B/g. This density significantly surpasses the current state-of-the-art storage density of 2.2 × 10^15 B/g, as we avoid costly multiple coverage, and use larger blocklengths and specialized word-encoding schemes. A performance comparison of the three currently known DNA-based storage media is given in Table S2. We observe that the cost of sequence synthesis in our storage model is significantly higher than the corresponding cost of the prototype in [4], as blocks of length 1000 bps are still difficult to synthesize. This trend is likely to change dramatically in the near future, as within the last seven months, the cost of synthesizing 1000 bps blocks reduced almost 7-fold. Despite its high cost, our system offers exceptionally large storage density and, for the first time, enables random access and content rewriting features. Furthermore, although we used Sanger sequencing methods for our small-scale experiment, for large-scale storage projects Next Generation Sequencing (NGS) technologies will enable significant reductions in readout costs.
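The density estimate quoted above can be reproduced directly from the stated figures; the constants below are taken from the text.

```python
DALTON_IN_GRAMS = 1.67e-24            # one dalton, as used in the text
NT_WEIGHT = 650 * DALTON_IN_GRAMS     # approximate weight of one nucleotide, in grams
TOTAL_NT = 27_000 + 5_000             # bases synthesized for the 13 KB + 4 KB files
STORED_BYTES = 17_000                 # size of the stored text, in bytes

density = STORED_BYTES / (TOTAL_NT * NT_WEIGHT)  # bytes per gram
print(f"{density:.2e} B/g")           # on the order of 4.9e20 B/g
```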

Address Design and Encoding
To encode information on DNA media, we employed a two-step procedure. First, we designed address sequences of short length which satisfy a number of constraints that make them suitable for highly selective random access [13]. Constrained coding ensures that DNA patterns prone to sequencing errors are avoided and that DNA blocks are accurately accessed, amplified and selected without perturbing or accidentally selecting other blocks in the DNA pool. The coding constraints apply to address primer design, but also indirectly govern the properties of the fully encoded DNA information blocks. The design procedure used is semi-analytical, in so far that it combines combinatorial methods with computer search techniques.
We required the address sequences to satisfy the following constraints: • (C1) Constant GC content (close to 50%) of all their prefixes of sufficiently long length. DNA strands with 50% GC content are more stable than DNA strands with lower or higher GC content and have better coverage during sequencing. Since encoding of user information is accomplished via prefix-synchronization, it is important to impose the GC content constraint on the addresses as well as on their prefixes, as the latter requirement also ensures that all fragments of encoded data blocks have balanced GC content.
• (C2) Large mutual Hamming distance, as it reduces the probability of erroneous address selection. Recall that the Hamming distance between two strings of equal length equals the number of positions at which the corresponding symbols disagree. An appropriate choice for the minimum Hamming distance is equal to half of the address sequence length (10 bps in our current implementation, which uses length-20 address primers).
• (C3) Uncorrelatedness of the addresses, which imposes the restriction that prefixes of one address do not appear as suffixes of the same or another address, and vice versa. The motivation for this new constraint comes from the fact that addresses are used to provide unique identities for the blocks, and that their substrings should therefore not appear in "similar form" within other addresses. Here, "similarity" is assessed in terms of hybridization affinity. Furthermore, long undesired prefix-suffix matches may lead to read assembly errors in blocks during joint information retrieval and sequencing.
• (C4) Absence of secondary (folding) structures, as such structures may cause errors in the process of PCR amplification and fragment rewriting.
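Constraints C1 and C2 lend themselves to simple mechanical checks. The sketch below is ours; in particular, the prefix-length and GC-tolerance thresholds are illustrative, since the text does not fix exact numerical values, and the second word is an arbitrary comparison sequence.

```python
def gc_balanced_prefixes(seq, min_len=10, tol=0.2):
    """C1 (sketch): every prefix of length >= min_len has GC content near 50%."""
    for l in range(min_len, len(seq) + 1):
        gc = sum(c in "GC" for c in seq[:l]) / l
        if abs(gc - 0.5) > tol:
            return False
    return True

def hamming(x, y):
    """C2 helper: number of positions at which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

# A length-20 address primer from the text and an arbitrary comparison word;
# C2 asks for pairwise distance of at least 10 for length-20 addresses.
a = "ACTAACTGTGCGACTGATGC"
b = "TGACGTACCAGTTCAGTCAA"
print(gc_balanced_prefixes(a), hamming(a, b) >= 10)  # True True
```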
Addresses satisfying constraints C1-C2 may be constructed via error-correcting codes with small running digital sum [7], adapted for the new storage system. Properties of these codes are discussed in Section 2.2. The novel notion of mutually uncorrelated sequences is introduced in Section 2.3. Constructing addresses that simultaneously satisfy the constraints C1-C4, and determining bounds on the largest number of such sequences, is prohibitively complex [14,15]. To mitigate this problem, we resort to a semi-constructive address design approach, in which balanced error-correcting codes are designed independently, and subsequently expurgated so as to identify a large set of mutually uncorrelated sequences. The resulting sequences are then tested for secondary structure using mfold and Vienna [16]. We conjecture that the number of sequences satisfying C1-C4 grows exponentially with their length: proofs towards establishing this claim include results on the exponential size of codes under each constraint individually.
Given two uncorrelated sequences as flanking addresses of one block, one of the sequences is selected to encode user information via a new implementation of prefix-synchronized encoding [17,16], described in Section 2.4. The asymptotic rate of optimal single-sequence prefix-free codes equals one. Hence, there is no asymptotic coding loss incurred by avoiding the prefixes of one sequence; we only observe a minor coding loss for each finite-length block. For multiple sequences of arbitrary structure, the problem of determining the optimal code rate is significantly more complicated, and the rates have to be evaluated numerically, by solving systems of linear equations [17], as described in Section 2.4 and the Supplementary Information. This system of equations leads to a particularly simple form for the generating function of mutually uncorrelated sequences, as explained in the Supplementary Information.

Balanced Codes and Running Digital Sums
An important criterion for selecting block addresses is to ensure that the corresponding DNA primer sequences have prefixes with a GC content approximately equal to 50%, and that the sequences are at large pairwise Hamming distance. Due to their applications in optical storage, codes that address related issues have been studied in a different form under the name of bounded running digital sum (BRDS) codes [7,8]. A detailed overview of this coding technique may be found in [7].
Consider a sequence a = a_0, a_1, a_2, . . ., a_n over the alphabet {−1, +1}. We refer to S_ℓ(a) = Σ_{i=0}^{ℓ−1} a_i as the running digital sum (RDS) of the sequence a up to length ℓ, for ℓ ≥ 0. Let D_a = max {|S_ℓ(a)| : ℓ ≥ 0} denote the largest value of the running digital sum of the sequence a. For some predetermined value D > 0, a set of sequences {a(i)}, i = 1, . . ., M, is termed a BRDS code with parameter D if D_{a(i)} ≤ D for all i = 1, . . ., M. Note that one can define non-binary BRDS codes in an equivalent manner, with the alphabet usually assumed to be symmetric, {−q, −q + 1, . . ., −1, +1, . . ., q − 1, q}, where q ≥ 1. A set of DNA sequences over {A, T, G, C} may be constructed in a straightforward manner by mapping each +1 symbol into one of the bases {A, T}, and each −1 symbol into one of the bases {G, C}, or vice versa. Alternatively, one can use BRDS codes over an alphabet of size four directly.
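In code, the RDS bound D_a of a DNA sequence, under the mapping +1 → {A, T} and −1 → {G, C} described above, reads as follows (the function name is ours):

```python
def rds_bound(seq):
    """Largest absolute running digital sum D_a of a DNA sequence,
    mapping A, T -> +1 and G, C -> -1."""
    s, worst = 0, 0
    for base in seq:
        s += 1 if base in "AT" else -1
        worst = max(worst, abs(s))
    return worst

print(rds_bound("AGCT"))  # partial sums 1, 0, -1, 0 -> bound 1
print(rds_bound("AATT"))  # partial sums 1, 2, 3, 4  -> bound 4
```

A small bound D keeps every prefix of the sequence close to 50% GC content, which is precisely what constraint C1 asks for.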
To address the constraints C1-C2, one needs to construct a large set of BRDS codewords at sufficiently large Hamming distance from each other. Via the mapping described above, these codewords may be subsequently translated to DNA sequences with a GC content approximately equal to 50% for all sequence prefixes, and at the same Hamming distance as the original sequences.
Let (n, C, d; D) denote the parameters of a BRDS error-correcting code, where C denotes the number of codewords of length n, d denotes the minimum distance of the code, and (log C)/n equals the code rate. For D = 1 and d = 2, the best known BRDS code has parameters (n, 2^{n/2}, 2; 1), while for D = 2 and d = 1, codes with parameters (n, 3^{n/2}, 1; 2) exist. For D = 2 and d = 2, the best known BRDS code has parameters (n, 2 · 3^{(n/2)−1}, 2; 2) [8]. Note that each of these codes has an exponentially large number of codewords, among which a (sufficiently) large number of sequences satisfy the required correlation property C3, discussed next, and the folding property C4. Codewords satisfying constraints C3-C4 were found by expurgating the BRDS codes via computer search.

Sequence Correlation
We describe next the notion of autocorrelation of a sequence and introduce the related notion of mutual correlation of sequences.
It was shown in [17] that the autocorrelation function is the crucial mathematical concept for studying sequences avoiding forbidden strings and substrings. In the storage context, forbidden strings correspond to the addresses of the blocks in the pool. In order to accommodate the need for selective retrieval of a DNA block without accidentally selecting any undesirable blocks, we find it necessary to also introduce the notion of mutually uncorrelated sequences.
Let X and Y be two words, possibly of different lengths, over some alphabet of size q > 1. The correlation of X and Y, denoted by X • Y, is a binary string of the same length as X. The i-th bit (from the left) of X • Y is determined by placing Y under X so that the leftmost character of Y is under the i-th character (from the left) of X, and checking whether the characters in the overlapping segments of X and Y are identical. If they are identical, the i-th bit of X • Y is set to 1; otherwise, it is set to 0. For example, for X = CATCATC and Y = ATCATCGG, X • Y = 0100100: the only placements of Y under X for which the overlapping segments agree are under the second and fifth characters of X.
Note that, in general, X • Y ≠ Y • X, and that the two correlation vectors may be of different lengths. In the example above, we have Y • X = 00000000. The autocorrelation of a word X equals X • X; for X = CATCATC, X • X = 1001001.
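The correlation function is simple to compute; the sketch below (our code) reproduces the examples from the text.

```python
def correlation(x, y):
    """Correlation x . y: bit i (0-indexed here) is 1 iff, placing y with its
    leftmost character under position i of x, the overlapping segments of
    x and y agree."""
    bits = []
    for i in range(len(x)):
        m = min(len(x) - i, len(y))  # length of the overlapping segment
        bits.append("1" if x[i:i + m] == y[:m] else "0")
    return "".join(bits)

X, Y = "CATCATC", "ATCATCGG"
print(correlation(X, Y))  # 0100100
print(correlation(Y, X))  # 00000000
print(correlation(X, X))  # 1001001 (the autocorrelation of X)
```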
Intuitively, correlation captures the extent to which prefixes of sequences overlap with suffixes of the same or other sequences. Furthermore, the notion of mutual uncorrelatedness may be relaxed by requiring that only sufficiently long prefixes do not match sufficiently long suffixes of other sequences. Sequences with this property, and at sufficiently large Hamming distance, eliminate undesired address cross-hybridization during selection as well as cross-sequence assembly errors.
We proved the following bound on the size of the largest set of mutually uncorrelated sequences of length n over an alphabet of size q = 4. The bounds show that there exist exponentially many mutually uncorrelated sequences for any choice of n, and the lower bound is constructive. Furthermore, the construction used in the bound "preserves" the Hamming distance (see the Supplementary Information).
Theorem 2. Suppose that {X_1, . . ., X_m} is a set of m pairwise mutually uncorrelated sequences of length n, and let u(n) denote the largest possible value of m for a given n. Then u(n) is bounded above and below by functions that grow exponentially in n. As an illustration, for n = 20, the lower bound equals 972. The proof of the theorem is given in the Supplementary Information.
It remains an open problem to determine the largest number of address sequences that jointly satisfy the constraints C1-C4. We conjecture that the number of such sequences is exponential in n, as the numbers of words that satisfy C1-C2, C3 and C4 [15] are each exponential. Exponentially large families of address sequences are important indicators of the scalability of the system, and they also influence the rate of information encoding in DNA.
By casting the address sequence design problem as a simple and efficient greedy search procedure, we were able to identify 1149 sequences of length n = 20 that satisfy constraints C1-C4, out of which 32 pairs were used for block addressing. Another means to generate large sets of sequences satisfying the constraints is via approximate solvers for the largest independent set problem [18]. Examples of sequences constructed in the aforementioned manner and used in our experiments are listed in the Supplementary Information.
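A stripped-down version of such a greedy expurgation is sketched below. It screens random candidates for self- and mutual uncorrelatedness and for the minimum Hamming distance of C2, and omits the GC-balance and folding checks (C1 and C4) for brevity; all parameter values and function names are illustrative.

```python
import random

def correlation(x, y):
    """Bit i is 1 iff the overlap of y, placed under position i of x, agrees."""
    return "".join(
        "1" if x[i:i + min(len(x) - i, len(y))] == y[:min(len(x) - i, len(y))]
        else "0"
        for i in range(len(x))
    )

def self_uncorrelated(x):
    # Autocorrelation 10...0: no proper prefix of x is also a suffix of x.
    return correlation(x, x) == "1" + "0" * (len(x) - 1)

def mutually_uncorrelated(x, y):
    # No prefix of one word appears as a suffix of the other, in either order.
    return set(correlation(x, y)) == {"0"} and set(correlation(y, x)) == {"0"}

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def greedy_address_set(n=20, d_min=10, trials=2000, seed=1):
    """Greedily keep random candidates compatible with everything kept so far."""
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        cand = "".join(rng.choice("ACGT") for _ in range(n))
        if self_uncorrelated(cand) and all(
            mutually_uncorrelated(cand, s) and hamming(cand, s) >= d_min
            for s in kept
        ):
            kept.append(cand)
    return kept

addresses = greedy_address_set()
print(len(addresses))
```

The same skeleton extends to the full constraint set by adding the C1 and C4 predicates to the acceptance test.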

Prefix-Synchronized DNA Codes
In the previous sections, we described how to construct address sequences that can serve as unique identifiers of the blocks they are associated with. We also pointed out that once such address sequences are identified, user information has to be encoded so as to avoid the appearance of any of the addresses, of sufficiently long substrings of the addresses, or of substrings similar to the addresses in the resulting DNA codeword blocks. For this purpose, we developed new prefix-synchronized encoding schemes based on [14].
To address the problem at hand, we start by introducing comma-free and prefix-synchronized codes, which allow for constructing codewords that avoid address patterns. A block code C comprising a set of codewords of length N over an alphabet of size q is called comma-free if and only if, for any pair of not necessarily distinct codewords a_1 a_2 . . . a_N and b_1 b_2 . . . b_N in C, none of the shifted concatenations a_{i+1} . . . a_N b_1 . . . b_i, 0 < i < N, is a codeword [17]. Comma-free codes enable efficient synchronization protocols, as one is able to determine the starting positions of codewords without ambiguity. A major drawback of comma-free codes is the need to implement an exhaustive search procedure over sequence sets to decide whether or not a given string of length n should be used as a codeword. This difficulty can be overcome by using a special family of comma-free codes, introduced by Gilbert [9] under the name of prefix-synchronized codes. Prefix-synchronized codes have the property that every codeword starts with a prefix P = p_1 p_2 . . . p_n, which is followed by a constrained sequence c_1 c_2 . . . c_s. Moreover, for any codeword p_1 p_2 . . . p_n c_1 c_2 . . . c_s of length n + s, the prefix P does not appear as a substring of p_2 . . . p_n c_1 c_2 . . . c_s p_1 p_2 . . . p_{n−1}. More precisely, the constrained sequences of prefix-synchronized codes avoid the pattern P, which is used as the address.
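The defining property is mechanical to verify. In the sketch below (our code), P is the self-uncorrelated address AGCTG used in the Supplementary example, and the two constrained suffixes are illustrative.

```python
def is_valid_psc_word(P, c):
    """A codeword P + c is prefix-synchronized iff P does not reappear inside
    p2 ... pn c1 ... cs p1 ... p(n-1), i.e. anywhere in P + c + P other than
    at the very start and the very end."""
    w = P + c + P
    return P not in w[1:-1]

P = "AGCTG"
print(is_valid_psc_word(P, "AAAA"))    # True: P never reappears
print(is_valid_psc_word(P, "AGCTGA"))  # False: the suffix contains P itself
```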
Due to the choice of mutually uncorrelated addresses at large Hamming distance, we encode each information block by avoiding only one of the address sequences, used for that particular block.
To explain how to perform encoding, assume that P = p_1 p_2 . . . p_n ∈ {A, T, G, C}^n is a self-uncorrelated sequence. This guarantees that p_1 ≠ p_n. Without loss of generality, let p_1 = A and p_n = G, and define P_i = {A, C, T} \ {p_i} for all 1 ≤ i ≤ n. In addition, assume that the elements of P_i are arranged in increasing order, say using the lexicographical ordering A ≺ C ≺ T. We subsequently use p_{i,j} to denote the j-th smallest element in P_i, for 1 ≤ j ≤ |P_i|. For example, if P_i = {C, T}, then p_{i,1} = C and p_{i,2} = T.
Next, we define a sequence of integers G_{n,1}, G_{n,2}, . . . through a recursive formula. For an integer ℓ ≥ 0 and y < 3^ℓ, let θ_ℓ(y) ∈ {A, T, C}^ℓ be a length-ℓ ternary representation of y. Conversely, for each W ∈ {A, T, C}^ℓ, let θ_ℓ^{−1}(W) be the integer y such that θ_ℓ(y) = W. Every integer 0 ≤ x < G_{n,ℓ} can be mapped into a sequence of n + ℓ symbols over {A, T, C, G} via an encoding algorithm that consists of two parts: EncodePSC(P, ℓ, x) and CodePSC(P, ℓ, x).
The steps of the encoding procedure are listed in Algorithm 1, where C_P = {EncodePSC(P, ℓ, x) | 0 ≤ x < G_{n,ℓ}}, and where n denotes the length of the sequence P. The decoding steps are described in the same chart.
Theorem 3. C_P is a prefix-synchronized code.
A simple example describing the encoding and decoding procedure for the short address string P = AGCTG, which can easily be verified to be self-uncorrelated, is provided in the Supplementary Information.
The previously described EncodePSC(P, ℓ, x) algorithm imposes no limitations on the length of a prefix used for encoding. This feature may lead to unwanted cross-hybridization between address primers used for selection and the prefixes of addresses encoding the information. One approach to mitigate this problem is to "perturb" long prefixes in the encoded information in a controlled manner. For small-scale random access/rewriting experiments, the recommended approach is to first select all prefixes of length greater than some predefined threshold. Afterwards, the first and last quarter of the bases of these long prefixes are used unchanged, while the central portion of the prefix string is cyclically shifted by half of its length. For example, for the address primer ACTAACTGTGCGACTGATGC, the prefix ACTAACTGTGCGACTG produced by EncodePSC(P, ℓ, x) maps to ACTAATGCCTGGACTG. The process of shifting applied to this string is illustrated below:

ACTAA CTGTGC GACTG → (cyclically shift the middle block by 3) → ACTAA TGCCTG GACTG
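The shift can be implemented as below. Note a small discrepancy in the text: the description keeps "the first and last quarter" fixed, while the worked example splits the 16-base prefix as 5 + 6 + 5; the sketch follows the worked example and exposes the split sizes as illustrative parameters.

```python
def perturb_prefix(prefix, head=5, tail=5):
    """Keep the first `head` and last `tail` bases unchanged and cyclically
    shift the middle portion to the left by half of its length."""
    middle = prefix[head:len(prefix) - tail]
    k = len(middle) // 2
    return prefix[:head] + middle[k:] + middle[:k] + prefix[len(prefix) - tail:]

print(perturb_prefix("ACTAACTGTGCGACTG"))  # ACTAATGCCTGGACTG
```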
For an arbitrary choice of the addresses, this scheme may not allow for unique decoding of EncodePSC(P, ℓ, x). However, there exist simple conditions that can be checked to eliminate primers that do not allow this transform to be "unique". Given the address primers created for our random access/rewriting experiments, we were able to uniquely map each modified prefix to its original prefix and therefore uniquely decode the readouts.
As a final remark, we would like to point out that prefix-synchronized coding also supports error-detection and limited error-correction. Error-correction is achieved by checking whether each substring of the sequence represents a prefix or "shifted" prefix of the given address sequence, and making proper changes when needed.

Discussion
We described a new DNA-based storage architecture that enables accurate random access and cost-efficient rewriting. The key component of our implementation is a new collection of coding schemes and the adaptation of random-access enabling codes from classical storage systems. In particular, we encoded information within blocks with unique addresses that are prohibited from appearing anywhere else in the encoded information, thereby removing any undesirable cross-hybridization problems during the process of selection and amplification. We also performed four access and rewriting experiments without readout errors, as confirmed by post-selection and post-rewriting Sanger sequencing. The current drawback of our scheme is high cost, as synthesizing long DNA blocks is expensive. Cost considerations also limited the scope of our experiments and the size of the prototype, as we aimed to stay within a budget comparable to that used for other existing architectures. Nevertheless, the benefits of random access and other unique features of the proposed system compensate for this high cost, which we predict will decrease rapidly in the very near future.

Acknowledgments
= 27 DNA blocks of length 1000 bps, as we grouped six words into fragments and combined 12 fragments for prefix-synchronized encoding. Table S1 provides the word counts in the files and the encoding lengths (in bits) of the outlined procedure.
Assume that instead of using a prefix-synchronized code, we used classical ASCII encoding without compression to encode the same Wikipedia pages. The total number of characters in the text equals 12,874, and each character is mapped to a binary string of length 7. Hence, one would need 12874 × 7 = 90118 bits to represent the data, which is equivalent to ⌈90118/(2 × 960)⌉ = 47 DNA blocks of length 1000 bps if we set aside two unique address flags for the blocks. As one can see, prefix-synchronized codes offer an almost 1.7-fold improvement in description length compared to ASCII encoding. This comes at the cost of storing a larger dictionary, as one encodes words rather than symbols of the alphabet. For the working example, one would require roughly 70-times larger dictionaries, as there are 1933 words with an average of 5.1 symbols per word. This increase in dictionary size is not a significant problem, as only one copy of the dictionary is ever needed.

Table S1. Comparison between character-based and word-based encoding. Note that the number of bits per distinct symbol for the word-encoding case is computed as the ceiling of the logarithm of the number of distinct symbols, plus one, where the extra bit is used to prevent very small integers from being used in prefix-synchronized coding. Such integers may produce long runs of the first symbol in the address, which should be avoided. Furthermore, to ensure fixed-length encoding, and hence avoid catastrophic error propagation, we doubled the number of bits used for encoding to 24.
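The block-count comparison can be checked with a few lines of arithmetic, following the text's accounting of two bits per base and 960 information bases per block:

```python
import math

chars = 12_874                     # characters in the stored text
ascii_bits = chars * 7             # 7-bit ASCII, no compression
payload_bits_per_block = 2 * 960   # 960 information bases per block, 2 bits/base

ascii_blocks = math.ceil(ascii_bits / payload_bits_per_block)
print(ascii_bits, ascii_blocks, round(ascii_blocks / 27, 1))  # 90118 47 1.7
```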

Proofs of Theorems
Proof of Theorem 2. The proof consists of two parts. First, we prove the upper bound on u(n) in Lemma 1, and then proceed to prove a lower bound in Lemma 2. Recall that u(n) denotes the largest possible size of a set of mutually uncorrelated words of length n.

Lemma 1. Let u(n) denote the largest size of a set of distinct mutually uncorrelated sequences of length n. Then

Proof: To prove the lemma, let us introduce some terminology. Let d_H(·, ·) stand for the Hamming distance between two words, and define the Hamming ball of radius d around a point W in {A, T, G, C}^n as

To prove the result, assume without loss of generality that W starts with the symbol A, i.e., W = A W_2 ... W_n. Next, consider two scenarios regarding the structure of W = A W_2 ... W_n:

• W_n ≠ A: In this case, any word W′ in B(W, d) that starts with W_n or ends with A is an element of C(W, d).
• W_n = A: In this case, any word W′ in B(W, d) which starts or ends with A is also an element of C(W, d). Using an argument similar to the one described for the previous scenario, one can show that

Moreover, it is straightforward to see that

For any mutually uncorrelated set {X_1, ..., X_m} of size m, we have

At the same time, the previous claim suggests that

which completes the proof.

Lemma 2. Let u(n) denote the largest size of a set of distinct mutually uncorrelated sequences of length n. Then

Proof: For simplicity, assume that m is even. Given a mutually uncorrelated set {X_1, ..., X_m}, with words of length n over the alphabet {A, T, G, C}, partition {X_1, ..., X_m} into two arbitrary sets A and B of equal size, say A = {X_1, ..., X_{m/2}}. Note that this bound is constructive, and the concatenation procedure preserves normalized minimum Hamming distances.
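To make the definition of mutual uncorrelation concrete, the following brute-force sketch greedily builds a (not necessarily maximum) mutually uncorrelated set. The function names are ours, and the greedy scan only yields a lower bound on u(n), not the exact value studied in the lemmas:

```python
from itertools import product

def correlated(x, y):
    # x is correlated with y if a proper prefix of x equals a suffix of y.
    return any(x[:k] == y[-k:] for k in range(1, len(x)))

def compatible(w, chosen):
    # w must be self-uncorrelated and uncorrelated with every chosen word,
    # in both directions.
    return (not correlated(w, w)
            and all(not correlated(w, c) and not correlated(c, w)
                    for c in chosen))

def greedy_mu_set(n, alphabet="ATGC"):
    # Scan all 4^n words in a fixed order, keeping every word that stays
    # mutually uncorrelated with the set built so far.
    chosen = []
    for tup in product(alphabet, repeat=n):
        w = "".join(tup)
        if compatible(w, chosen):
            chosen.append(w)
    return chosen

print(len(greedy_mu_set(4)))  # size of a greedily built set for n = 4
```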
We now turn our attention to prefix-synchronized coding, and describe a number of results relevant for our subsequent discussion.

Theorem 5 ([17]). Given a positive integer N, choose the unique integer
Then, the maximal prefix-synchronized code of length N has cardinality

for a prefix of the form 10...0. Note that the above results indicate that codes avoiding one address sequence represent an exponentially large family of binary sequences. We prove a similar result for the case of 4-ary sequences that avoid a set of m mutually uncorrelated sequences. To establish the claim, we need the following definitions. Let g(0), g(1), ... be an integer sequence over a finite alphabet. Define the generating function of the sequence

Theorem 6. Suppose that {X_1, ..., X_m} is a set of mutually uncorrelated sequences of length n over the alphabet {A, T, C, G}. Let f(N), with f(0) = 1, be the number of strings of length N over {A, T, C, G} that do not contain substrings in {X_1, ..., X_m}. Then

where F(z) is the generating function of the sequence {f(N)}.
Proof of Theorem 6. The result is a direct consequence of Theorem 4.1 of [17]. For 1 ≤ i ≤ m, let f_i(n) denote the number of strings of length n over {A, T, C, G} that contain no element of {X_1, ..., X_m}, except for a single copy of X_i at the right-hand end of the string. Let F_i(z) be the generating function of f_i(n). Then, we have the following system of equations for the two sets of aforementioned functions:

The result follows by substituting (2.2) into the first line of (2.3).
As the dominant pole of the generating function is close to 4, the number of sequences avoiding a set of mutually uncorrelated sequences grows roughly as 4^N.
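The counting argument can be checked computationally. The sketch below counts strings avoiding a single pattern via a string-matching-automaton dynamic program (the theorem's setting restricted to m = 1 for brevity), and unranks integers into such strings, which is the combinatorial core of encoding data as pattern-avoiding DNA strings. This is an illustration, not the paper's CodePSC construction, and all function names are ours:

```python
def failure_automaton(P, alphabet="ACGT"):
    # State = length of the longest prefix of P that is a suffix of the
    # text scanned so far; reaching state len(P) means P occurred.
    n = len(P)
    delta = []
    for state in range(n):
        row = {}
        for c in alphabet:
            s = P[:state] + c
            k = min(len(s), n)
            while k > 0 and s[-k:] != P[:k]:
                k -= 1
            row[c] = k
        delta.append(row)
    return delta

def count_avoiding(P, N, alphabet="ACGT"):
    # f(N): number of length-N strings over the alphabet avoiding P.
    delta = failure_automaton(P, alphabet)
    n = len(P)
    counts = [1] * n  # completions of length 0 from each state
    for _ in range(N):
        counts = [sum(counts[delta[s][c]] for c in alphabet
                      if delta[s][c] < n) for s in range(n)]
    return counts[0]

def unrank_avoiding(P, N, x, alphabet="ACGT"):
    # Map integer x to the x-th (lexicographic) length-N string avoiding P.
    delta = failure_automaton(P, alphabet)
    n = len(P)
    comp = [[1] * n]  # comp[r][s]: completions of length r from state s
    for r in range(1, N + 1):
        comp.append([sum(comp[r - 1][delta[s][c]] for c in alphabet
                         if delta[s][c] < n) for s in range(n)])
    out, state = [], 0
    for r in range(N, 0, -1):
        for c in alphabet:
            nxt = delta[state][c]
            if nxt == n:
                continue
            if x < comp[r - 1][nxt]:
                out.append(c)
                state = nxt
                break
            x -= comp[r - 1][nxt]
    return "".join(out)

# 63 of the 64 length-3 strings avoid "ACG"; the first one is "AAA".
print(count_avoiding("ACG", 3), unrank_avoiding("ACG", 3, 0))
```

The ratio count_avoiding(P, N + 1) / count_avoiding(P, N) approaches a constant slightly below 4, matching the dominant-pole observation above.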
Proof of Theorem 3. Since P is self-uncorrelated, we need to show that this string is not contained in the output of CodePSC(P, ℓ, x), where the output equals CodePSC(P, ℓ, x) = P_{t_1−1} p_{t_1,s_1} ... P_{t_r−1} p_{t_r,s_r} θ_{t_0}(·), for some input θ_{t_0}(·), and 1 ≤ t_0, t_1, ..., t_r < n. Consequently, if P were a substring of the output of CodePSC(P, ℓ, x), then the last symbol of P (recall that we assumed this symbol to be G) would have to appear in one of the following three positions:

• The symbol appears in P_{t_i−1}, for a unique 1 ≤ i ≤ r: In this case, there exists a suffix of P appearing as a prefix of P_{t_i−1}. This contradicts our assumption that P is self-uncorrelated.
• The symbol appears in p_{t_i,s_i}, for a unique 1 ≤ i ≤ r: This contradicts our assumption that p_{t_i,s_i} ≠ G.
• The symbol appears in θ_{t_0}(·): This contradicts the fact that θ_{t_0}(·) uses only symbols from {A, T, C}.

Therefore, the string P does not appear as a substring in the output of CodePSC(P, ℓ, x), which completes the proof.
Proof of Theorem 4. It suffices to show that the output of CodePSC(P, ℓ, x) is uniquely decodable. We use an inductive argument to establish this result. For the basis step, by the definition of the output of CodePSC, it is straightforward to show that CodePSC(P, ℓ, x) returns the encoding θ_ℓ(x), which represents a one-to-one mapping from 0 ≤ x < 3^ℓ to {A, T, C}^ℓ whenever ℓ < n. For the inductive step, we assume that the result is true for all ℓ < r, where r ≥ n, and show that it is consequently true for ℓ = r.
For ℓ = r, CodePSC(P, ℓ, x) returns

for some integer values s, b, and for some 1 ≤ t < n, where

Therefore, x is uniquely decodable if and only if s, t and b are unique. Since sequences of the form P_{t−1} p_{t,s} are prefix-free, one can uniquely identify both t and s. Moreover, ℓ − t < r, hence by the induction hypothesis it follows that b is also uniquely decodable from CodePSC(P, ℓ − t, b). Hence, x can be uniquely decoded.

Designation of primer | Sequence

Table S2. List of primers for rewriting (editing) the blocks B1, B2 and B3. The primers for the gBlock method are listed separately from those used with the OE-PCR method. In the latter case, the labels of DNA fragments SU and SD stand for sample upstream and sample downstream. In OE-PCR, we linked two or three DNA fragments into the final PCR products; when two fragments were linked, the first fragment was labeled UP (U), while the second fragment was labeled DOWN (D); when three fragments were combined, the second fragment was labeled MIDDLE (M).

Address Sequences
Consider the following set of strings of length 20:

ACTAACTGTGCGACTGATGC
ACACTATCGAGCTGACACGT
AGTCAGCAGTAGTCAGTCAG
ACTGAGCTGAGCGTATATCG
ACTCAGCTACGACTCACATG

with GC content equal to 50%, i.e., 10 GC bases each. The sequences are mutually uncorrelated and at Hamming distance at least 10 from each other. The sequences do not exhibit secondary structures at room temperature, as verified by the mfold and Vienna packages. We used these addresses for a very small-scale, proof-of-concept random access/rewriting experiment on a 4 KB file.
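The stated address properties can be checked programmatically. A minimal sketch of the relevant predicates follows; the uncorrelation test itself is illustrated on short toy strings, and the GC-content and pairwise-distance checks are applied to the listed addresses:

```python
from itertools import combinations

ADDRESSES = [
    "ACTAACTGTGCGACTGATGC",
    "ACACTATCGAGCTGACACGT",
    "AGTCAGCAGTAGTCAGTCAG",
    "ACTGAGCTGAGCGTATATCG",
    "ACTCAGCTACGACTCACATG",
]

def correlated(x, y):
    # x is correlated with y if a proper prefix of x matches a suffix of y.
    return any(x[:k] == y[-k:] for k in range(1, len(x)))

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

# Toy illustration of the uncorrelation predicate:
assert correlated("ACA", "ACA")      # prefix "A" is also a suffix
assert not correlated("AAC", "AAC")  # no proper prefix equals a suffix

# Properties of the listed addresses:
assert all(sum(c in "GC" for c in a) == 10 for a in ADDRESSES)  # 50% GC
assert all(hamming(x, y) >= 10 for x, y in combinations(ADDRESSES, 2))
```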
In the large-scale random access/rewriting experiment described in Section 5, we used different address sequences for the two flanking ends of the 1000 bps blocks. The sequences we synthesized include:

The pairs of sequences were used to flank the two ends of the data blocks. Only the addresses on the left were used for subsequent prefix-synchronized coding.
The sequences on the left-hand side of the pairing have "interleaved" {G, C} and {A, T} bases - for example, they all start with CTCT. This ensures a "GC balancing" property for the prefixes of the addresses.

Encoding and Decoding Example
In this section, we illustrate the encoding and decoding procedure for the short address string P = AGCTG, which can easily be verified to be self-uncorrelated.
More precisely, we explain how to compute a sequence of integers G_{n,1}, G_{n,2}, ..., G_{n,7}, described in the main body of the paper. As before, n denotes the length of the address string, which in this case equals five.
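As a quick sanity check, the self-uncorrelation of P = AGCTG can be verified directly from the definition:

```python
P = "AGCTG"
# Self-uncorrelated: no proper prefix of P is also a suffix of P.
assert not any(P[:k] == P[-k:] for k in range(1, len(P)))
print("AGCTG is self-uncorrelated")
```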
One has

As a proof of concept, we performed a number of selection and editing experiments. These include selecting individual blocks and rewriting one of their sections, and selecting three blocks and rewriting three sections in each: two close to the flanking ends, and one in the middle. The edits involved information about the budget of the institutions in a given year of operation. Detailed information about the original sequences and their rewritten forms is given in the following sections.

The gBlock method
Since a gBlock of length longer than 500 bps was needed, it was more costly to synthesize the gBlock and perform rewriting than to directly re-synthesize the whole block. Hence, the gBlock method was not used in this case.

The OE-PCR based method
One pair of primers was designed to PCR amplify the first portion of the sequence B1-M.For the forward direction, the primer was 5'AATTACTAAGCGACCTTCTC3' while for the reverse direction, the primer was 5'CGTGCACTCATAACCCATATTTCAAGAGCTAGCTATTCCTCTCCCTTAAAAGTAAATGAC3'.
The second part of the sequence was PCR amplified by using the forward direction primer 5'GGGAGAGGAATAGCTAGCTCTTGAAATATGGGTTATGAGTGCACGATCATCACATAAC3' and reverse direction primer 5'ACTTATTGCGACTTCTAAGG3'.
Both PCR reactions used the sequence B1 as a template. The two PCR products are shown in Fig. S4, indicating that correct-length products were isolated in each reaction.

OE-PCR was performed in a 50 µl reaction volume containing the two aforementioned PCR products without primers for the first 5 cycles, and the products with primers (B1 primers in Table S2) for the subsequent 30 cycles. A single band of the correct size of 1000 bps was obtained (see Fig. S4).

B2 mutation B2-M synthesis
The unedited B2_original (B2) sequence is of the form:

The gBlock method
A 177 bps sequence, containing the entire edited region and the B2 string, was gBlock synthesized by IDT. Another part of B2 was PCR amplified using the forward primer 5'GAAGCACAGTGTTGCTGCGTG3' and the reverse primer 5'AAACGATCCCCTGACAGAGC3'. The B2 sequence served as a template. See Fig. S4 for an illustration.

The OE-PCR based method
Overlap Extension PCR (OE-PCR) was performed in a 50 µl reaction volume containing the above 177 bps gBlock product and PCR products without primers for the first 5 cycles, and with the B2 forward and reverse primers listed in Table S2 for the subsequent 30 cycles.
The PCR product was deposited on a gel substrate and the correct 1000 bps band was obtained, as shown in Fig. S5. One pair of primers was designed to PCR amplify the first part of the sequence B2-M, with forward primer
The second part was PCR amplified by the forward primer
Both PCRs used B2 as a template. Two PCR products are shown in Fig. S5.

Fig. S7. Scheme for generating the B3 edits using standard 60 bps primers.

B3 mutation B3-M synthesis
The unedited original B3 sequence equals:

Two sequences, a 560 bps sequence containing the first mutation region and a second 560 bps sequence containing the second mutation region, were gBlock synthesized by IDT. There was a 60 bps overlap between the two gBlocks.

The OE-PCR method
OE-PCR was performed in a 50 µl reaction volume containing the above two 560 bps gBlock products without primers for the first 5 cycles, and additional B3 forward and reverse primers listed in Table S2 for the subsequent 30 cycles. The PCR product was deposited on a gel substrate and the correct 1000 bps band was obtained. One pair of primers was designed to PCR amplify the first part of the sequence B3-M, using

5'ATAATAGGCCTGATGATCTC3'
in the forward direction and
The second part was PCR amplified in the forward direction by using the primer

5'GTGTACACAGTTCAAGCTTAGATTGAGAGTGAGTAGATGTTGATGCGAGGCGAAAGATGT3'
and in the reverse direction by using the primer 5'GACTTCCCCCCTATAATCCATTAATGCTAGATCAAGCCGCATATACTATGTTGCAAATAC3'.
The third part was PCR amplified by the forward direction primer 5'GCGGCTTGATCTAGCATTAATGGATTATAGGGGGGAAGTCGCTGCTGGTACTCTG3'. All three PCRs used the sequence B3 as the template. All three PCR products are shown in Fig. S8. OE-PCR was performed in a 50 µl reaction volume containing the above three PCR products without primers for the first 5 cycles and with the B3 primers listed in Table S2 for the subsequent 30 cycles. A single band of the correct size of 1000 bps was obtained (see Fig. S9).
Correctness of the synthesized edited regions was confirmed via Sanger DNA sequencing as follows. The PCR products of the gBlock method and the OE-PCR method were named B1-M-gBlock, B2-M-gBlock, B3-M-gBlock and B1-M-PCR, B2-M-PCR, B3-M-PCR, respectively. All final mutation/edit PCR products were purified using the QIAGEN Gel Purification Kit. The purified 1000 bps edited sequences were blunt-ligated into the vector pCR™-Blunt (Fig. S10) using the Zero Blunt PCR Cloning Kit, following the manufacturer's protocol. Five colonies of each pCR-Blunt mutation construct were sent to ACTG, Inc. Sequencing was performed using two universal primers: M13F_20 (for the reverse direction) and M13R (for the forward direction). Bi-directional sequencing was performed in order to ensure that the entire 1000 bps block was completely covered.

Hybrid DNA-Based and Classical Storage
In our small-scale experiments, Sanger sequencing produced two erroneous symbols in one strand, which we were able to correct using prefix matching. One possible problem that may arise in large-scale DNA-storage systems involving millions of blocks is erroneous sequencing that cannot be corrected via prefix matching. In current high-throughput sequencing technologies, such as Illumina HiSeq or MiSeq, the dominant sources of errors are substitutions. Due to our word-grouping scheme, such substitution errors cannot cause catastrophic error propagation, but they may nevertheless accumulate as the number of rewrite cycles increases. In this case, prefix matching may not suffice to correct the errors, and more sophisticated coding schemes need to be used. Unfortunately, adding parity-check symbols into the prefix-encoded data stream may cause problems, as the parities may violate the prefix properties and disbalance the GC content. Furthermore, every time rewriting is performed, the parity-checks need to be updated, which incurs additional cost for maintaining the system. A simple solution to this problem is a hybrid scheme, in which the bulk of the information is stored in DNA media, while only the parity-checks are stored on a classical device, such as flash memory. Given that the current error rate of short-read sequencing technologies roughly equals 1%, the most suitable codes for performing this type of coding are low-density parity-check (LDPC) codes [19]. These codes offer excellent performance in the presence of a large number of errors and are decodable in linear time.
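The hybrid split can be sketched with a toy example. Here a simple XOR parity over 2-bit base encodings stands in for the LDPC codes recommended above, and all names are illustrative; the point is only the architecture: payload in "DNA" strings, parity on a classical side-channel:

```python
# Toy sketch of the hybrid scheme: payload lives in DNA blocks, parity
# lives on a classical device. XOR parity is a stand-in for LDPC coding.
BASE2BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def parity_profile(block):
    # 2-bit XOR accumulator over the block; stored on flash, not in DNA.
    acc = 0
    for base in block:
        acc ^= BASE2BITS[base]
    return acc

def flag_suspect_blocks(stored_parities, blocks):
    # Compare recomputed parities against the classical copy to flag
    # which blocks need re-reading or further error correction.
    return [i for i, b in enumerate(blocks)
            if parity_profile(b) != stored_parities[i]]

blocks = ["ACGTACGT", "GGCCATTA"]
parities = [parity_profile(b) for b in blocks]  # classical side-channel
# Simulate a substitution error while sequencing the second block:
corrupted = [blocks[0], "GGCCATTG"]
print(flag_suspect_blocks(parities, corrupted))  # -> [1]
```

Unlike the parities themselves, this check never modifies the DNA stream, so the prefix and GC-balancing properties of the encoded blocks are untouched by rewrites.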

Figure 1.1. (a) The scheme of [4] uses a storage format consisting of DNA strings that cover the encoded compressed text in fragments of length 100 bps. The fragments overlap in 75 bps, thereby providing 4-fold coverage for all except the flanking end bases. This particular fragmenting procedure prevents efficient file editing: if one were to rewrite the "shaded" block, all four fragments containing this block would need to be selected and rewritten at different positions to record the new "shaded" block. (b) The address sequence construction process, using the notions of autocorrelation and cross-correlation of sequences [13]. A sequence is uncorrelated with itself if no proper prefix of the sequence is also a suffix of the same sequence; equivalently, no shift of the sequence overlaps with the sequence itself. Similarly, two different sequences are uncorrelated if no prefix of one sequence matches a suffix of the other. Addresses are chosen to be mutually uncorrelated, and each 1000 bps block is flanked by an address of length 20 on the left and by another address of length 20 on the right (colored ends). (c) Content rewriting via DNA editing: the gBlock method [10] for short rewrites, and the cost-efficient OE-PCR (Overlap Extension PCR) method [11] for sequential rewriting of longer blocks.

Figure 1.2. (a) Gel electrophoresis results for three blocks, indicating that the lengths of the three selected and amplified sequences are tightly concentrated around 1000 bps. (b) Output of the Sanger sequencer, where all bases shaded in yellow correspond to correct readouts. The sequencing results confirmed that the desired sequences were selected, amplified, and rewritten with 100% accuracy.
Furthermore, let C(W, d) = {W′ ∈ {A, T, G, C}^n : W′ ∈ B(W, d), and W′ and W are correlated} denote the set of sequences correlated with W that are at Hamming distance at most d from W. We claim that for n ≥ d + 2 ≥ 4, one has

5.1 B1 mutation B1-M synthesis

The unedited B1_original (B1) sequence is of the form: bases written in red represent the regions we edited.

Fig. S3. Illustration of the process of generating the B1 edit/mutation using general primers.
Fig. S4. A schematic depiction of the process of generating the B2 mutation using standard 60 bps primers.

Table 2. Comparison of storage densities for the DNA-encoded information expressed in B/g (bytes per gram), file size, synthesis cost, and random access features of three known DNA storage technologies. Note that the density does not reflect the entropy of the information source, as the text files are encoded in ASCII format, which is a redundant representation system.