Resolution limit of image analysis algorithms

The resolution of an imaging system is a key property that, despite many advances in optical imaging methods, remains difficult to define and apply. Rayleigh’s and Abbe’s resolution criteria were developed for observations with the human eye. However, modern imaging data is typically acquired on highly sensitive cameras and often requires complex image processing algorithms to analyze. Currently, no approaches are available for evaluating the resolving capability of such image processing algorithms that are now central to the analysis of imaging data, particularly location-based imaging data. Using methods of spatial statistics, we develop a novel algorithmic resolution limit to evaluate the resolving capabilities of location-based image processing algorithms. We show how insufficient algorithmic resolution can impact the outcome of location-based image analysis and present an approach to account for algorithmic resolution in the analysis of spatial location patterns.

The work presented in this paper, including the algorithmic resolution limit, the probabilistic resolution and the adjusted K_2α-function, is derived through applying spatial statistical theory to imaging. In this supplementary material, we discuss this approach and formulate the mathematical framework.
We begin in Supplementary Note 2 with a review of key concepts and definitions relating to spatial point processes and patterns that will be needed to develop the mathematical framework. The paper focuses on fluorescence microscopy imaging, which we here consider to be the attempted recovery of an underlying spatial point pattern for the true locations of fluorophores. In Supplementary Note 3, we consider a mathematical formulation of the imaging process with regards to recovering the spatial point pattern.
From this, we are able to reason about image resolution and define the algorithmic resolution limit, the core concept of our paper, in Supplementary Note 5. Quantifying the effect this resolution limit has on the recovery of a spatial point pattern is explored in Supplementary Note 6, where we derive the probability that an event/object (in this case, a photon-emitting fluorophore) is affected by algorithmic resolution, which we term the probabilistic resolution. In Supplementary Note 7, we apply these ideas to quantify the effect of algorithmic resolution in single molecule localization microscopy. This in turn allows us to define both a probabilistic resolution and what we term the super-resolution limit. We consider in Supplementary Note 8 a number of different scenarios in which we apply these resolution measures, one of which is a tubulin data set in which we demonstrate how probabilistic resolution and the super-resolution limit can be computed.
Key to the core ideas of this paper is the ability to estimate the algorithmic resolution limit for a particular algorithm of interest from an estimate of a pair correlation function. In Supplementary Note 9, we give details for an estimator that performs this task and discuss its statistical properties.
The definition of the algorithmic resolution limit presented here does not depend on the density of the events/objects being analyzed. In Supplementary Note 10, we explore what effect event density and signal-to-noise ratio have on the algorithmic resolution limit. As predicted, we demonstrate with three different algorithms that the algorithmic resolution limit appears constant across a range of densities. However, a breakdown in this behavior is noticeable at high densities for some algorithms.
It will be shown that limited algorithmic resolution fundamentally changes the properties of the spatial point pattern. The altered, observed point pattern is then typically analysed with standard statistical techniques, the most common of which is Ripley's K-function. In Supplementary Note 11 we propose an adaptation of Ripley's K-function, the K_2α-function, which accounts for unwanted artefacts introduced by limited algorithmic resolution. Further discussions and mathematical proofs are contained in the Appendices in Supplementary Note 12.

Supplementary Note 2: Spatial point processes
In this section we provide some key definitions used for characterizing spatial point processes and spatial point patterns. Although it is sometimes unavoidable, we refrain as far as possible from the formal measure-theoretic approach. Those interested in this treatment are directed to Cressie (1993), from which much of this theory is taken.
A spatial point process is a stochastic mechanism that generates a random collection of points Φ = {s_1, s_2, ...}, termed events, each belonging to X, a locally compact subset of R^d (typically d = 2 or 3).
The set of events Φ is called a spatial point pattern, and can alternatively be represented through a locally finite counting measure ξ on X where ξ(B) is the number of events within a set B ⊆ R d . Formally, B ∈ B where B is a Borel σ-algebra on X.
Formally, let (Ω, F, P) be a probability space, let Ξ be a collection of locally finite counting measures on X and let N be the smallest σ-algebra generated from sets of the form {ξ ∈ Ξ : ξ(B) = n}, for all B ∈ B and all n ∈ {0, 1, 2, ...}. A spatial point process N , defined on X, is a measurable mapping from (Ω, F) to (Ξ, N ). A spatial point process N is therefore a random counting measure. It is typical to use notation N (B) to be the random non-negative integer stating the number of events within a set B ∈ B.
For example, N (ds) indicates the random number of events in an infinitesimal ball ds centred at s. We will restrict ourselves to orderly processes where the number of events in ds can only be 0 or 1.
A spatial point process can be characterized by its moment measures. The kth moment measure for a collection of k sets B_1, ..., B_k ∈ B is

µ^(k)(B_1 × B_2 × · · · × B_k) = E{N(B_1)N(B_2) · · · N(B_k)}.   (1)

From this we can define the first and second-order intensity functions.
Definition 1 (First-order intensity). The first-order intensity, more commonly known as the intensity, of a spatial point process N at a point s ∈ X is

λ(s) = lim_{ν(ds)→0} E{N(ds)}/ν(ds),

where ν(·) is the Lebesgue measure.
We interpret the first-order intensity as being the expected density of events per unit volume generated by the process at a particular s ∈ X. Equivalently, we have the relationship λ(s)ν(ds) = P(N (ds) = 1).
Definition 2 (Second-order intensity). The second-order intensity of a spatial point process N at points s, u ∈ X is

γ(s, u) = lim_{ν(ds),ν(du)→0} E{N(ds)N(du)}/(ν(ds)ν(du)).

The second-order intensity characterizes the correlation structure of the process. For such purposes, the auto-covariance is given as c(s, u) = γ(s, u) − λ(s)λ(u) and the auto-correlation as c(s, u)/(λ(s)λ(u)). A related measure to characterize second-order interactions in a point process is the pair correlation function.
Definition 3 (Pair correlation function). The pair correlation function of a spatial point process N at points s, u ∈ X is

g(s, u) = γ(s, u)/(λ(s)λ(u)).
Two important notions that arise from these definitions are that of stationarity and isotropy.
Definition 4 (Second-order stationarity and isotropy). A point process is second-order stationary (from here on referred to as stationary) if γ(s, u) = γ(s − u) and isotropic if γ(s, u) = γ(‖s − u‖).
For a stationary and isotropic process, γ(·) and g(·) simply become functions of r ∈ R, the Euclidean distance between two points.
It is essential to define the Poisson process. Many definitions exist; we choose the following for its simplicity.

Supplementary Figure 1. Example realizations of a completely spatially random, clustered and inhibited point pattern.
Definition 5 (Poisson process). Spatial point process N, defined on X ⊆ R^d, is Poisson if the following hold:
1. For every B ∈ B, the number of events N(B) is Poisson distributed with expected value µ(B) = ∫_B λ(s)dν(s), i.e. P(N(B) = n) = exp(−µ(B))µ(B)^n/n!.
2. For any collection of disjoint sets B_1, ..., B_k ∈ B, the counts N(B_1), ..., N(B_k) are independent.
Poisson processes for which λ(s) is constant on X are called homogeneous and are both stationary and isotropic. Poisson processes possess the important memoryless property. That is to say, all events are independent of each other and there is no correlation across space. For this reason, homogeneous Poisson processes (HPP) are commonly termed completely spatially random (CSR). In general, the deviations from complete spatial randomness come in two forms. In clustered patterns, events lie closer to each other than one would expect for CSR data. For inhibited patterns, events lie further away from each other than one would expect for CSR data. Example realizations of CSR, clustered and inhibited patterns are shown in Supplementary Figure 1. These ideas can be rigorously formulated by reasoning about spatial correlation structure within the data.
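As a purely illustrative sketch of these ideas, a realization of an HPP (i.e. CSR data) on a rectangular window can be simulated by first drawing a Poisson number of events and then placing them independently and uniformly. The function name and windowing choices here are ours, not part of the paper's analysis pipeline.

```python
import math
import random

def simulate_hpp(intensity, width, height, rng):
    """Simulate a homogeneous Poisson process on [0, width] x [0, height].

    The number of events is Poisson with mean intensity * area; conditional
    on that count, events are placed independently and uniformly (this is
    the standard two-step construction of an HPP on a bounded window)."""
    mean = intensity * width * height
    # Poisson draw by CDF inversion (adequate for moderate means).
    n, p = 0, math.exp(-mean)
    cdf, target = p, rng.random()
    while cdf < target:
        n += 1
        p *= mean / n
        cdf += p
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

rng = random.Random(1)
pattern = simulate_hpp(intensity=100.0, width=1.0, height=1.0, rng=rng)
```

Clustered and inhibited patterns can then be produced by, respectively, scattering offspring around HPP parents or deleting events with close neighbors, as discussed later in this note.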
Ripley's K-function is used extensively across the sciences, including microscopy (Lagache et al., 2013; Rossy et al., 2014), as a method for analyzing spatial point patterns, primarily to detect and characterize clustering behavior. Its widespread use lies in its interpretability and the ease with which it can be estimated with robust, well studied estimators.
Definition 6 (Ripley's K-function). For a stationary and isotropic process, with pair correlation function g(s, u) = g(r), Ripley's K-function is defined as

K(r) = ∫_0^r 2πr′ g(r′) dr′ = λ^{−1} E{number of events within distance r of an arbitrary event}.
The pair correlation function and Ripley's K-function are used to characterize the notion of spatial randomness, clustering and regularity. Complete spatial randomness has the property that g(r) = 1 for all r > 0, and consequently K(r) = πr². If g(r) > 1 for some r then that indicates we are more likely to see an event at a distance r from an arbitrary event than under complete spatial randomness. If g(r′) > 1 for all r′ ∈ (0, r] then K(r) > πr². Such second-order behavior gives rise to clusters. Alternatively, if g(r) < 1 then events are less likely to be found at a distance r from one another than one would expect under complete spatial randomness, and if g(r′) < 1 for all r′ ∈ (0, r] then K(r) < πr². It is processes of this type that are called inhibited (or regular).
It is common to define L(r) = (K(r)/π)^{1/2}, which linearises the behavior of the K-function such that L(r) = r for an HPP, is greater than r for a clustered process and less than r for a regular process. Further to this, it is common to compute and analyze the L(r) − r function, which is equal to zero for an HPP, greater than zero for a clustered process, and less than zero for a regular process.
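The estimators above can be sketched in a few lines. This is the naive estimator without edge correction (edge-corrected estimators are the rigorous choice in practice); the function names are ours.

```python
import math

def ripley_k(points, area, r):
    """Naive estimate of Ripley's K-function at distance r, with no edge
    correction: K(r) = area / (n(n-1)) * #(ordered pairs within distance r)."""
    n = len(points)
    pairs = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                if math.hypot(dx, dy) <= r:
                    pairs += 1
    return area * pairs / (n * (n - 1))

def l_minus_r(points, area, r):
    """L(r) - r: approximately 0 for CSR, > 0 for clustering, < 0 for inhibition."""
    return math.sqrt(ripley_k(points, area, r) / math.pi) - r
```

For example, three events at (0, 0), (0, 0.5) and (0.5, 0) in a unit-area window give K(0.6) = 2/3, below π(0.6)² ≈ 1.13, so L(r) − r is negative at r = 0.6, as expected for mutually distant events.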
We further introduce another important summary property of a point process, the nearest neighbor distribution function.
Definition 7 (Nearest neighbor distribution function). For a stationary process, let D denote the distance from an arbitrary event to the nearest other event. The nearest neighbor distribution (NND) function is defined as

G(r) = P(D ≤ r).

Letting b(s, r) ⊂ X denote a ball of radius r centred at s, an equivalent expression for the NND function is G(r) = 1 − P(N(b(s, r)) = 1 | N(ds) = 1), where P(N(b(s, r)) = 1 | N(ds) = 1) can be read as the probability that apart from the event at s there are no further events in b(s, r). For a stationary process, this does not depend on the location s. For an HPP we have G(r) = 1 − exp(−λπr²).
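An empirical counterpart of the NND function is simply the fraction of events whose nearest neighbor lies within r. The following is an illustrative sketch (function names ours); it ignores edge correction, which matters for rigorous estimation.

```python
import math

def nn_distances(points):
    """Distance from each event to its nearest other event."""
    out = []
    for i, (xi, yi) in enumerate(points):
        out.append(min(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(points) if j != i))
    return out

def empirical_G(points, r):
    """Empirical NND function: fraction of events whose nearest neighbor
    lies within distance r."""
    d = nn_distances(points)
    return sum(1 for v in d if v <= r) / len(d)
```

For three events at (0, 0), (3, 0) and (0, 4), the nearest-neighbor distances are 3, 3 and 4, so the empirical G at r = 3.5 is 2/3.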
In what follows, it will become necessary to consider Cox processes, sometimes referred to as doubly stochastic processes.
Definition 8 (Cox process). Point process N is a Cox process driven by a random intensity function Λ(s) if, conditioning on Λ(s) = λ(s), N is an inhomogeneous Poisson process with intensity λ(s).
To understand the Cox process, it can be useful to consider that a realization firstly involves a realization λ(s) of the random intensity function Λ(s), and then a realization of a Poisson process with intensity λ(s).
The first and second-order properties of a Cox process are intrinsically linked to the first and second order properties of the random intensity Λ(s), with a Cox process N being second-order stationary (as defined in Definition 4) if and only if E{Λ(s)} is constant for all s ∈ X and E{Λ(s)Λ(u)} depends only on s − u.
Cox processes represent a broad class of spatial point processes, including certain types of cluster process and fibre driven Cox processes (Illian et al., 2008, p. 383): point processes where events are generated on randomly placed fibres (curves). These ideas and their use in this setting will be discussed further in Supplementary Note 8.
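The two-step construction of a Cox realization (first realize the intensity, then realize a Poisson process given that intensity) can be sketched with a toy example in which Λ is spatially constant, i.e. a mixed Poisson process. The Gamma mixing distribution and parameter values are arbitrary illustrative choices of ours, not models used in the paper.

```python
import math
import random

def poisson_count(mean, rng):
    """Poisson draw by CDF inversion (adequate for moderate means)."""
    n, p = 0, math.exp(-mean)
    cdf, target = p, rng.random()
    while cdf < target:
        n += 1
        p *= mean / n
        cdf += p
    return n

def simulate_mixed_poisson_cox(rng, area=1.0):
    """Two-step Cox realization with a spatially constant random intensity:
    step 1 draws Lambda (here Gamma-distributed, so E[Lambda] = 50), step 2
    realizes an HPP with that intensity on the unit square."""
    lam = rng.gammavariate(2.0, 25.0)
    n = poisson_count(lam * area, rng)
    return [(rng.random(), rng.random()) for _ in range(n)]

rng = random.Random(7)
counts = [len(simulate_mixed_poisson_cox(rng)) for _ in range(300)]
mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / (len(counts) - 1)
# A diagnostic of the doubly stochastic structure: Cox counts are
# overdispersed relative to Poisson, i.e. Var(N) > E(N).
```

The overdispersion of the counts across realizations reflects the extra randomness contributed by Λ, which is exactly what distinguishes a Cox process from a Poisson process.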

Supplementary Note 3: Imaging point processes
We consider imaging a collection of objects whose positions we model as a spatial point pattern. The key philosophy of the presented work is to treat the imaging procedure as the attempted recovery of this spatial point pattern, and we judge an imaging system by how well it performs this recovery. In this section we consider what effect the imaging procedure, which in its most general form includes an optical system and a sequence of one or more image processing algorithms, has on the point pattern we are attempting to image. The fundamental property which we wish to consider is that of resolution, i.e. how well the imaging procedure deals with objects (events) that are close to one another. In doing so we begin by defining the object point process N_O, which generates the object point pattern Φ_O, the imaging operator I, which maps the object process to the image point process N_I, and the image point pattern Φ_I generated by N_I. We refer the reader to Supplementary Figure 2 which illustrates this definition. We will call I a stationary and isotropic imaging operator if it preserves stationarity and isotropy, i.e. N_I is stationary and isotropic if N_O is stationary and isotropic. For the purposes of this paper, we will assume all imaging operators are stationary and isotropic.
We propose that resolution is a property of an imaging operator and consider two possible criteria that can be used as a metric for it. We begin by examining an idealized imaging operator in which there exists a hard detection resolution limit such that within it two events are not individually resolved, and outside they are resolved. We then consider a more applicable resolution limit, termed the algorithmic resolution limit.

Supplementary Note 4: Detection resolution limit
We consider the following idealized imaging operator which we denote I δ .
1. If two events from N O are within a distance δ > 0 of each other we observe just one of the events.
2. The event we observe is that which is most "prominent". (Prominence could, for example, be the intensity of the imaged event).
A mathematical formulation of this point process is as follows.
Model 1 (Imaging operator I_δ). Let N_O be a point process:
1. each event s ∈ Φ_O generated by the process N_O is also assigned a random mark Z(s) with distribution function F(·) (which acts as its prominence/intensity),
2. an event s is removed if there exists another event u from N_O with ‖s − u‖ < δ and Z(s) < Z(u).
This process was originally formulated in Matérn (1960) for the case of N O being an HPP and marks Z ∼ Uniform(0, 1), and is known as the Matérn II process. It was shown in Stoyan and Stoyan (1985) that the process is invariant to the choice of distribution function F , so generalizes to this application.
When N_O is an HPP with intensity λ_O, the intensity of the remaining process N_Iδ is given in Diggle (1983) as

λ_Iδ = (1 − exp(−λ_O πδ²))/(πδ²),

and the second-order intensity of N_Iδ is

γ_Iδ(r) = 0 for r ≤ δ,
γ_Iδ(r) = [2U_δ(r)(1 − exp(−λ_O πδ²)) − 2πδ²(1 − exp(−λ_O U_δ(r)))] / [πδ² U_δ(r)(U_δ(r) − πδ²)] for r > δ,

where U_δ(r) = 2πδ² − 2δ² arccos(r/(2δ)) + (1/2) r(4δ² − r²)^{1/2} is the area of the union of two circles of radius δ whose centers are a distance r apart (with U_δ(r) = 2πδ² for r ≥ 2δ). This states that when N_O is an HPP, and hence N_O(ds) is uncorrelated with N_O(du) for all s ≠ u, the imaging operator introduces correlation into the image process N_Iδ that occurs over a distance of 2δ. Examples of the theoretical Ripley's K-function and pair correlation function for models of this type can be found in Supplementary Figure 3B.
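Model 1 is straightforward to simulate, which offers a useful check of the hardcore behavior and of the known Matérn II intensity. The sketch below is ours; for simplicity it conditions on a fixed number of events rather than drawing a Poisson count, and it ignores edge effects, which slightly inflate the retained count near the boundary.

```python
import math
import random

def apply_matern_ii(points, delta, rng):
    """Apply the idealized imaging operator I_delta (Matern II thinning):
    each event receives a uniform prominence mark; an event is removed if
    some neighbor within delta carries a larger mark."""
    marks = [rng.random() for _ in points]
    kept = []
    for i, (xi, yi) in enumerate(points):
        removed = any(
            math.hypot(xi - xj, yi - yj) < delta and marks[i] < marks[j]
            for j, (xj, yj) in enumerate(points) if j != i)
        if not removed:
            kept.append((xi, yi))
    return kept

def matern_ii_intensity(lam_o, delta):
    """Known theoretical intensity of the imaged process when N_O is an HPP."""
    return (1.0 - math.exp(-lam_o * math.pi * delta ** 2)) / (math.pi * delta ** 2)

rng = random.Random(3)
lam_o, delta = 200.0, 0.05
pts = [(rng.random(), rng.random()) for _ in range(int(lam_o))]  # ~HPP on unit square
img = apply_matern_ii(pts, delta, rng)
# No two retained events can be closer than delta: of any close pair, the
# lower-marked event is always deleted.
min_sep = min(math.hypot(a[0] - b[0], a[1] - b[1])
              for i, a in enumerate(img) for b in img[:i])
```

With λ_O = 200 and δ = 0.05 the theoretical retained intensity is roughly 101 events per unit area, and the empirical count is close to this.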
For a general N_O imaged with this idealized imaging operator I_δ, the second-order intensity γ_δ is intractable. However, we can say with certainty that γ_Iδ(r) = 0 for all r < δ, from which we can also say the pair correlation function g_Iδ(r), Ripley's K-function K_Iδ(r) and NND function G_Iδ(r) have the form g_Iδ(r) = 0, K_Iδ(r) = 0 and G_Iδ(r) = 0 for all r < δ. In this idealized case, we call δ the detection resolution limit (DRL) and can mathematically formulate it with the following definition.
Definition 10 (Detection resolution limit). The detection resolution limit (DRL) of the imaging operator I_δ is δ: no two events in the pattern Φ_Iδ, generated by process N_Iδ, can be within a distance δ of each other.
Supplementary Figure 3. A. Left: an object point pattern Φ_O with a circle of radius δ/2 around each event. Events whose circles intersect with another circle (shown in red) will be affected by resolution. We associate with each event a random mark Z_i (we label only the marks for the events in red). The red events form a resolution limited sub-pattern (RLSP) of size 4 (see Definition 13). Right: the resulting image point pattern Φ_Iδ in the case where Z_1 < Z_2 < Z_3 < Z_4. Irrespective of the values of the marks, events in Φ_I will always be further than δ away from any other event. B. Left: the theoretical pair correlation function and Right: the theoretical L(r) − r function for an idealized detection resolution limited HPP subject to detection resolution limits of 250 nm (blue) and 500 nm (red). C. Single estimate (blue) and averaged estimate (red) of Left: the pair correlation function and Right: the L(r) − r function for an HPP localized with Algorithm 5. See Methods for simulation and analysis details. D. Single estimate (blue) and averaged estimate (red) of Left: the pair correlation function and Right: the L(r) − r function for an HPP localized with Algorithm 2. The black curve represents the averaged ground truth. See the Methods for simulation and analysis details. E. Four segments of simulated images. True locations of emitters (events in Φ_O) are marked with green circles and estimated locations of emitters (events in Φ_I) are marked with red crosses. i. Two emitters well separated in space and not affected by resolution. ii. Two emitters in close enough proximity to each other as to be affected by resolution. iii.-iv. Three emitters, each close enough to one of the other emitters as to be affected by resolution. See Methods for details of simulations. Algorithm 1 was used as the analysis method.

Supplementary Note 5: Algorithmic resolution
In an experimental setting, although we sometimes see the clear presence of a DRL (for example in the case of Algorithm 5, see Supplementary Figure 3C), it is also common to see behavior that is uncharacteristic of this modelling assumption and where a DRL is not apparent (for example when using Algorithm 2, see Supplementary Figure 3D). In general, therefore, the DRL is an inappropriate measure of resolution and we seek to give a resolution limit that has physical meaning, can be readily estimated from data and effectively characterizes the performance of the image processing algorithms used, i.e. the imaging operator. We do this through defining the algorithmic resolution limit.
We motivate the algorithmic resolution limit by considering an idealized but more general imaging operator than that considered in Supplementary Note 4. Let N_O be a general object process generating object point pattern Φ_O. We consider the following rules for the stationary and isotropic imaging operator I_α.

Model 2 (Imaging operator I_α). Let N_O be a point process. If an event s ∈ Φ_O generated by N_O has one or more events within a distance α of it, then s ∈ Φ_Iα with probability p < 1.
Here, I_α is a more general class of imaging operator than I_δ. Indeed, the class of operators of type I_δ is a subclass of operators of type I_α, where α = δ. We saw in Supplementary Note 4 that when applied to an HPP (i.e. a completely spatially random process that has no correlation across distance), the imaging operator I_δ introduces correlation into the image process N_Iδ that ranges over a distance of 2δ. Building on this, we now have the following result for operators of type I_α as defined in Model 2.
Proposition 1. Let N_O be an HPP and let I_α be the stationary and isotropic imaging operator stated in Model 2. Then the correlation introduced into N_Iα by I_α acts over the interval (0, 2α), i.e. g_Iα(r) ≠ 1 for some r ∈ (0, 2α) and g_Iα(r) = 1 for all r ≥ 2α.

The proof to this proposition is given in Supplementary Note 12.3. In both Model 1 and Model 2, the correlation introduced into N_I by the respective imaging operator acts over twice the distance at which resolution has an effect (δ and α, respectively). This correlation manifests itself in the pair correlation function. For an HPP N_O, the pair correlation function is a constant 1 for all r > 0 (indicating zero correlation for all r > 0); however, for N_I it will deviate from this constant 1 over the interval (0, 2α) (indicating positive or negative correlation). While I_α is a specific model for the imaging operator, and algorithms used in real life are unlikely to conform exactly to this model, we use this as justification for our definition of the imaging operator. A more realistic, yet analytically intractable, model for the imaging operator is given consideration in Supplementary Note 12.1. It provides further justification for the definition of the algorithmic resolution limit we now present.
To formally define the algorithmic resolution limit we introduce the radius of correlation of an imaging operator.
Definition 11. Let Φ_O be a point pattern generated by HPP N_O, and let I be a stationary and isotropic imaging operator that acts on N_O to produce N_I with associated pattern Φ_I. The radius of correlation ρ of I is given as

ρ = inf{r ≥ 0 : g_I(r′) = 1 for all r′ > r}.

An equivalent definition can be given in terms of the auto-covariance: ρ is the smallest value such that c_I(r) = 0 for all r > ρ. This distance is sometimes referred to as the range of correlation of N_I (Illian et al., 2008), and given N_I is the composition mapping of I on N_O, where N_O is completely spatially random, we can interpret it as being the range of correlation introduced by I into N_I. We now formally define the algorithmic resolution limit (ARL).
Definition 12. Let I be a stationary and isotropic imaging operator with radius of correlation ρ. The algorithmic resolution limit (ARL) α of I is

α = ρ/2.

Through defining the ARL of an imaging operator I as half its radius of correlation, we are able to generalize the resolution limit to imaging operators that display a wide range of characteristics and artefacts.
We briefly note that all localizations are subject to measurement error; however, the ARL naturally accounts for this. It is shown in Supplementary Note 12.2 that the general spatial structure of a point process is unaffected by localization errors. In particular, a CSR process remains CSR under localization error. Therefore, if an imaging operator perfectly resolves the HPP but with independent and identically distributed (iid) errors on all localizations, its ARL is still 0. An estimation procedure for the ARL is presented in Supplementary Note 9.
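The estimator of Supplementary Note 9 is the rigorous route to the ARL. Purely to illustrate Definitions 11 and 12, a naive plug-in rule reads the radius of correlation off an estimated pair correlation function as the largest distance at which the estimate deviates from 1 by more than some tolerance, and halves it. The function name and tolerance are ours, and this sketch sidesteps the statistical questions (estimator variance, choice of tolerance) addressed in Supplementary Note 9.

```python
def estimate_arl(r_grid, g_hat, tol=0.05):
    """Naive plug-in ARL estimate: the radius of correlation rho is taken as
    the largest distance at which the estimated pair correlation deviates
    from 1 by more than tol; the ARL is half of rho (Definition 12)."""
    rho = 0.0
    for r, g in zip(r_grid, g_hat):
        if abs(g - 1.0) > tol:
            rho = r
    return rho / 2.0

# Synthetic example: correlation present up to r = 0.4, so rho = 0.4 and
# the ARL should come out as 0.2.
r_grid = [k / 20 for k in range(1, 21)]
g_hat = [0.3 if r <= 0.4 else 1.0 for r in r_grid]
alpha_hat = estimate_arl(r_grid, g_hat)
```

For an estimate that is identically 1 (no correlation introduced by the operator), the rule returns an ARL of 0, consistent with the remark above about iid localization errors.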
Supplementary Note 6: Quantifying the effect of resolution

Resolution limited sub-pattern
Supplementary Figure 4. Illustration of how a resolution limit α gives rise to resolved events, unresolved events and resolution limited sub-patterns.
Definition 13 (Resolution limited sub-pattern). Given a point pattern Φ_O and a resolution limit α > 0, a resolution limited sub-pattern (RLSP) is a maximal subset S ⊆ Φ_O of two or more events such that every event in S is within a distance α of at least one other event in S.

An event that belongs to an RLSP is affected by resolution due to its proximity to another event in that same RLSP. All events that do not belong to an RLSP are not affected by resolution and are resolved by imaging operator I. For a given α, the number of RLSPs, M(α), is a random variable that is realized with each realization of the point pattern. For a pattern Φ_O and α > 0, the corresponding RLSPs S_1, ..., S_{M(α)} will, by construction, be disjoint, each contain at least two events, and be such that any two events belonging to different RLSPs are more than α apart. As an aside, we can draw analogies here with random geometric graphs (Penrose, 2003). Random geometric graphs are graphical models where nodes are randomly distributed in space and an edge is formed between two nodes if and only if the distance between those two nodes is less than a pre-defined neighborhood distance. Under this formulation, we can consider an event of the pattern to be a node and the resolution limit α to be the neighborhood distance. Two events are unresolvable if they are within distance α of each other, i.e. if an edge exists between them. An RLSP, in graphical models terminology, would therefore be a component of order greater than one. The work of Penrose (2003) looks at distributions for the number of components; however, the results are asymptotic and highly technical and we elect to leave out such analysis.
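Following the random geometric graph analogy, RLSPs can be computed as connected components of the graph whose edges join events closer than α; a simple union-find suffices. This sketch (function names ours) treats events exactly at distance α as connected, a boundary convention that does not matter for continuously distributed events.

```python
import math

def find_rlsps(points, alpha):
    """Partition a point pattern into connected components of the geometric
    graph with neighborhood distance alpha. Components of size >= 2 are the
    RLSPs; singleton components are resolved events."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.hypot(points[i][0] - points[j][0],
                          points[i][1] - points[j][1]) <= alpha:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    comps = list(groups.values())
    rlsps = [c for c in comps if len(c) >= 2]
    resolved = [c[0] for c in comps if len(c) == 1]
    return rlsps, resolved

pts = [(0, 0), (0.1, 0), (5, 5), (5, 5.05), (5.1, 5), (10, 10)]
rlsps, resolved = find_rlsps(pts, alpha=0.2)
```

In this toy pattern there are two RLSPs (of sizes 2 and 3) and one resolved event, so M(α) = 2.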

The resolved and unresolved processes
Events which are further than a distance α from any other event can be completely resolved and belong to the resolved point pattern Φ_R ⊆ Φ_O. We say the mechanism generating this is the resolved point process N_R. The resolved events are all those that do not belong to an RLSP, and therefore Φ_R = Φ_O \ ∪_{i=1}^{M(α)} S_i. Events that are within a distance α of another event are affected by resolution and form the unresolved point pattern Φ_U, generated by the unresolved point process N_U. For a stationary process N_O with intensity function λ_O, the intensity λ_R of the resolved process N_R is given as

λ_R = λ_O (1 − G_O(α)),   (17)

where G_O(·) is the NND function of N_O. Recognising that λ_O = λ_R + λ_U, where λ_U is the intensity of the unresolved process N_U, we have

λ_U = λ_O G_O(α).   (18)

Supplementary Figure 4 illustrates the notion of RLSPs and resolved and unresolved processes.

Homogeneous Poisson object process
In the situation where the object process N_O is an HPP, the resolved process N_R takes the form of a well studied model of point process. This model was first proposed by Matérn (1960) and given formal treatment in Cressie (1993) and Diggle (1983). It is known commonly as Matérn's Model I, or Matérn's hardcore process I. For an HPP object process N_O with intensity λ_O, if we exclude any events that have a neighbor within some predefined hardcore distance, then the remaining process is Matérn I. This is exactly what we have in the case of the resolved process N_R, where α is the hardcore distance. With the NND function of N_O given as G(r) = 1 − exp(−λ_O πr²), from (17) and (18), the intensities λ_R and λ_U of N_R and N_U, respectively, become

λ_R = λ_O exp(−λ_O πα²),   λ_U = λ_O (1 − exp(−λ_O πα²)).

The second-order intensity function of N_R is given in Diggle (1983) as

γ_R(r) = 0 for r ≤ α,   γ_R(r) = λ_O² exp(−λ_O U_α(r)) for r > α,

where R_α(r) = 2α² arccos(r/(2α)) − (1/2) r(4α² − r²)^{1/2} is the area of the intersection and U_α(r) = 2πα² − R_α(r) is the area of the union of two circles of radius α with centers a distance r apart.
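These Matérn I quantities are easy to evaluate numerically; the sketch below (function names ours) also makes explicit that the second-order intensity reduces to λ_R² once r exceeds 2α, i.e. the correlation introduced by resolution vanishes beyond twice the resolution limit.

```python
import math

def matern_i_intensities(lam_o, alpha):
    """Intensities of the resolved and unresolved processes when N_O is an
    HPP with intensity lam_o and the resolution limit is alpha."""
    lam_r = lam_o * math.exp(-lam_o * math.pi * alpha ** 2)
    return lam_r, lam_o - lam_r

def union_area(alpha, r):
    """Area of the union of two circles of radius alpha at distance r."""
    if r >= 2 * alpha:
        return 2 * math.pi * alpha ** 2  # disjoint circles
    inter = (2 * alpha ** 2 * math.acos(r / (2 * alpha))
             - 0.5 * r * math.sqrt(4 * alpha ** 2 - r ** 2))
    return 2 * math.pi * alpha ** 2 - inter

def matern_i_gamma(lam_o, alpha, r):
    """Second-order intensity of the resolved (Matern I) process."""
    if r <= alpha:
        return 0.0
    return lam_o ** 2 * math.exp(-lam_o * union_area(alpha, r))
```

For λ_O = 1 and α = 0.5, λ_R = exp(−π/4) ≈ 0.456 and λ_U ≈ 0.544, and γ_R(r) equals λ_R² for any r ≥ 1 = 2α.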

Probabilistic resolution
We can now make statements of probability regarding whether an event from the object process N O is affected by resolution or not. Reasoning about resolution in terms of probabilities is a useful exercise because the effect of resolution on a point pattern Φ O is fundamentally dependent on the nature of the point process N O . To motivate this idea, consider an inhibited point process. For example, suppose that N O is itself a Matérn I process with hardcore distance greater than α. All events from N O will be further than α away from each other and the entire point pattern will be perfectly resolved. In contrast, consider a heavily clustered process where each event has a high probability of having another event within distance α of it, then a large proportion of events will be affected by resolution and the point pattern will be poorly resolved. It is this difference in the resolvability of a point pattern that we wish to capture with probabilistic resolution.
To formalize this, we say an event s ∈ Φ O is affected by resolution if, for a resolution limit α, s ∈ Φ U .
Conversely, we say it is unaffected by resolution if s ∈ Φ R . We capture this with the probabilistic resolution.
Definition 14. Given a resolution limit α, the probabilistic resolution of a point process N_O is the probability of an event being unaffected by resolution,

p_{N_O,α} = P(s ∈ Φ_R | s ∈ Φ_O).

Proposition 2. The probabilistic resolution of a point process N_O imaged under resolution limit α is

p_{N_O,α} = 1 − G_O(α),

where G_O(·) is the NND function of N_O. The result of Proposition 2 is obtained directly from Supplementary Equation (17). Clearly, the probability of an event being affected by resolution is G_O(α).

Supplementary Note 7: Quantifying the effect of resolution in single molecule localization microscopy
We now demonstrate how probabilistic resolution can be used to provide a measure of resolution in single molecule localization microscopy (SMLM). In an SMLM experiment only a sparse subset of the events (molecules) from the object process N O are in the photon-emitting (On) state in any given frame, and therefore the probability an event is affected by resolution (i.e. is within a distance α of another molecule in the On state) is lower than it would be if all the molecules were on at the same time. We can use the concept of resolution limited point patterns to rigorously quantify this probability and use Definition 14 to provide a measure of resolution.

Idealized SMLM experiment
Consider an SMLM experiment in which the super-resolved image is formed through the capture of n_F successive frames. We represent this as a random partition {Φ^1, ..., Φ^{n_F}} of Φ_O, with the modelling assumptions:
1. every molecule appears in exactly one frame,
2. the molecules are evenly divided amongst frames,
3. molecules are independently divided among frames.
We say the spatial point pattern in each frame is q-thinned, where q = n_F^{−1}, i.e. the probability that an event (molecule) appears in a specific frame is q. For each frame pattern Φ^i, under stationary and isotropic imaging operator I with ARL α, we observe the image point pattern Φ_I^i. Associated with this is the corresponding resolved pattern Φ_R^i and the unresolved pattern Φ_U^i. The collection of resolved molecules is Φ_R = ∪_{i=1}^{n_F} Φ_R^i, and the super-resolved image pattern is formed by taking the superimposition of resolution limited point patterns across the n_F frames, i.e. Φ_I = ∪_{i=1}^{n_F} Φ_I^i.

Supplementary Figure 5. Illustration of SMLM in which a pattern Φ_O is imaged across 3 frames. Each frame is a thinned version of Φ_O that is subject to an imaging operator with resolution limit α. The resulting super-resolved image pattern Φ_I is the superimposition of individual frames.
The probabilistic resolution p_{N_O,α,q} of a super-resolved point process, now indicating its dependency on thinning rate q, is as defined in Definition 14, i.e. P(s ∈ Φ_R | s ∈ Φ_O).
Proposition 3. The probabilistic resolution p_{N_O,α,q} of a super-resolved process is equal to p_{N_q,α} = 1 − G_q(α), where G_q(·) is the NND function for the process N_q, a classically q-thinned version of N_O.
(A classically q-thinned version of N_O is one in which each event generated by N_O is independently kept with probability q or discarded with probability 1 − q.) This result follows from the fact that the probability an event in Φ_O is unaffected by resolution is the probability it is unaffected by resolution in the frame it appears in, i.e. P(s ∈ Φ_R | s ∈ Φ_O) = P(s ∈ Φ_R^i | s ∈ Φ^i) = 1 − G_q(α). One corollary of this result is that if one wishes to attain a minimum probabilistic resolution of p using an ARL of α, then one must ensure p ≤ 1 − G_q(α). The maximum value of q required to achieve this is

q_max = sup{q ∈ (0, 1] : G_q(α) ≤ 1 − p}.   (29)

We are able to verify the core idea behind SMLM procedures with the following result.
Proposition 4. Let N_O be a spatial point process. Consider a single molecule localization microscopy experiment with assumptions 1-3 as outlined above. As the number of frames n_F → ∞ and q → 0 such that q = n_F^{−1}, the probabilistic resolution p_{N_O,α,q} → 1.

This proposition states that as we tend the thinning of our pattern to zero while increasing the number of frames we image across, we tend to perfect probabilistic resolution, i.e. we guarantee that no event will be affected by resolution. The proof of this is found in Supplementary Note 12.4.
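The frame-splitting mechanism is simple to simulate, which gives an empirical check of Propositions 3 and 4 for an HPP object process. In this sketch (ours) molecules are assigned to frames independently and uniformly, which matches the classical thinning view rather than an exactly even division; the theoretical comparison value exp(−qλπα²) is the HPP case of Proposition 3.

```python
import math
import random

def resolved_fraction(points, n_frames, alpha, rng):
    """Assign each event independently and uniformly to one of n_frames
    frames, then return the fraction of events with no same-frame neighbor
    within alpha, i.e. the empirical probabilistic resolution."""
    frames = [rng.randrange(n_frames) for _ in points]
    ok = 0
    for i, (xi, yi) in enumerate(points):
        crowded = any(
            frames[j] == frames[i]
            and math.hypot(xi - xj, yi - yj) <= alpha
            for j, (xj, yj) in enumerate(points) if j != i)
        ok += not crowded
    return ok / len(points)

rng = random.Random(11)
pts = [(rng.random(), rng.random()) for _ in range(1000)]  # ~HPP, lam = 1000
p_single = resolved_fraction(pts, 1, 0.05, rng)     # all molecules on at once
p_multi = resolved_fraction(pts, 100, 0.05, rng)    # SMLM-style splitting
# HPP theory (Proposition 3): p = exp(-q * lam * pi * alpha^2), q = 1/n_frames.
p_theory = math.exp(-(1 / 100) * 1000 * math.pi * 0.05 ** 2)
```

Splitting the same pattern across 100 frames lifts the resolved fraction from nearly 0 to roughly 0.92, the direction and magnitude predicted by Propositions 3 and 4.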
We now make use of this probabilistic reasoning about resolution to define the super-resolution limit for a point process.
Definition 15. For a point process N_O imaged across n_F = q^{−1} frames subject to an algorithmic resolution limit α, the super-resolution limit ς_{N_O,α,q} is the α* that solves

p_{N_O,α*} = p_{N_O,α,q}, i.e. G_O(α*) = G_q(α).

In other words, if N_O was imaged in a single frame, one would require a resolution limit of ς_{N_O,α,q} to achieve the same probabilistic resolution as under the SMLM procedure with thinning rate q and ARL α.

Proposition 5. The super-resolution limit is given by ς_{N_O,α,q} = G_O^{−1}(G_q(α)). This follows directly from Propositions 2 and 3 and Definition 15.

Supplementary Note 8: Computing probabilistic resolution
Probabilistic resolution and the super-resolution limit cannot be estimated directly from images and must instead be computed for specific scenarios of interest. For example, one might ask: given that I want to image a point pattern of this type, what conditions do I need to obtain a certain probabilistic resolution?
Such computations require knowledge of the NND function for the data generating process of interest.
The NND function is analytically tractable only for a very small set of idealized examples, and therefore so are the probabilistic resolution and super-resolution limit. We begin by looking at three such examples.
We will then show how it can be estimated through simulations for more general cases.

Homogeneous Poisson Process
For an HPP N O with intensity λ O , the NND distribution function is G O (r) = 1 − exp(−λ O πr 2 ). An HPP imaged with ARL α therefore has a probabilistic resolution p N O ,α = exp(−λ O πα 2 ). For example, an HPP with λ = 1 events per µm 2 subject to ARL of α = 0.5µm has p N O ,α = 0.456.
With regards to an SMLM experiment, for HPP N O with intensity λ O , thinned process N q is an HPP with intensity qλ O . In this circumstance, the probabilistic resolution is p N O ,α,q = exp(−qλ O πα 2 ), and from Proposition 5 the super-resolution limit is ς N O ,α,q = √ qα. Further to this, Supplementary Equation (29) informs us that to ensure a probabilistic resolution of at least p one requires q ≤ −(λ O πα 2 ) −1 ln p.
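The closed-form expressions for the HPP case are straightforward to compute. The following is a minimal sketch; the function names are our own and not part of the paper's notation.

```python
import math

def hpp_prob_resolution(lam, alpha, q=1.0):
    """p = exp(-q * lam * pi * alpha^2): the probability an event of a
    (q-thinned) HPP with intensity lam has no neighbour within alpha."""
    return math.exp(-q * lam * math.pi * alpha ** 2)

def hpp_super_resolution_limit(alpha, q):
    """Super-resolution limit for an HPP: sqrt(q) * alpha."""
    return math.sqrt(q) * alpha

def hpp_max_thinning(lam, alpha, p):
    """Largest q ensuring a probabilistic resolution of at least p:
    q <= -ln(p) / (lam * pi * alpha^2)."""
    return -math.log(p) / (lam * math.pi * alpha ** 2)

# Example from the text: lam = 1 event per um^2, alpha = 0.5 um.
print(round(hpp_prob_resolution(1.0, 0.5), 3))  # 0.456
```

Note that `hpp_max_thinning` inverts the probabilistic resolution formula, so feeding its output back into `hpp_prob_resolution` recovers the target p exactly.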

Cluster Process
We now look at the case where N O is a cluster process. The specific type of cluster process we consider is a Thomas process. This is a process in which parent events are generated according to an HPP with intensity κ, and each parent has a random number of offspring which is Poisson(ζ) distributed. These offspring are randomly placed around the parent according to a circular Gaussian distribution centred at the parent with covariance σ 2 I. Process N O comprises only the offspring events. We note that the Thomas process is a type of stationary Cox process (Cressie, 1993, p. 663) and therefore the NND, as defined in Definition 7, is valid here. The NND function is given by Afshang et al. (2016) in terms of the first-order Marcum Q-function, defined as Q 1 (a, b) = ∫_b^∞ y exp(−(y 2 + a 2 )/2) I 0 (ay) dy, where I 0 (·) is the 0th order modified Bessel function, and the Rayleigh density f V 0 (v 0 ) = (v 0 /σ 2 ) exp(−v 0 2 /(2σ 2 )). For a given resolution limit α and process parameters κ, ζ and σ, the NND function G O (α), and hence p N O ,α , can be computed through numerical integration.
For example, consider imaging a Thomas cluster process with a density of 10 clusters per 30µm 2 , an expected 100 events per cluster and the events in each cluster distributed with a covariance matrix equal to σ 2 I 2 , σ = 0.05µm (a typical simulation study from Rubin-Delanchy et al. (2015); Griffié et al.
(2017) -see Supplementary Figure 6A for an example realization). For an ARL of α = 0.5µm we have G O (α) = 1 to machine precision, which in turn gives a probabilistic resolution of p N O ,0.5 = 0 to machine precision. Suppose we now image it using an SMLM procedure with q = 1 × 10 −3 . The q-thinned process can now be considered a Thomas cluster process with the same cluster density but with an expected 0.1 events per cluster. This gives a probabilistic resolution of p N O ,0.5,1×10 −3 = 0.882 (the value of the curve at q = 1 × 10 −3 in Supplementary Figure 6B) and a super-resolution limit of 3.56 × 10 −3 µm. This calculation is illustrated in Supplementary Figure 6C.
Supplementary Equation (31) can further be used to numerically compute a bound on q that attains a certain level of probabilistic resolution. For example, if we wish to achieve a probabilistic resolution of at least 0.99, we require q ≤ 7.97 × 10 −5 .
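When the numerical integration is inconvenient, the probabilistic resolution of a thinned Thomas process can also be approximated by direct simulation. The sketch below uses our own (hypothetical) parameter values, not the exact values of the example above, and a simple padded-window scheme to limit edge effects.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_thomas(kappa, zeta, sigma, size, pad):
    """Realize a Thomas process on [0, size]^2; parents are simulated on a
    padded window so clusters straddling the boundary are represented."""
    lo, hi = -pad, size + pad
    n_parents = rng.poisson(kappa * (hi - lo) ** 2)
    parents = rng.uniform(lo, hi, size=(n_parents, 2))
    offspring = [p + rng.normal(0.0, sigma, size=(rng.poisson(zeta), 2))
                 for p in parents]
    pts = np.concatenate(offspring) if offspring else np.empty((0, 2))
    return pts[np.all((pts >= 0) & (pts <= size), axis=1)]

def prob_resolution_mc(pts, alpha, q, n_rep=200):
    """Monte Carlo estimate of the probability that a retained event has no
    retained neighbour within alpha after independent q-thinning."""
    hits = total = 0
    for _ in range(n_rep):
        keep = pts[rng.random(len(pts)) < q]
        if len(keep) < 2:
            continue
        d2 = ((keep[:, None, :] - keep[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        hits += (np.sqrt(d2.min(axis=1)) > alpha).sum()
        total += len(keep)
    return hits / total if total else float("nan")
```

For small q the thinned pattern is sparse and the estimated probabilistic resolution approaches one, mirroring the behaviour described above.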

Matérn I Hardcore Process
Suppose N O is a Matérn I hardcore process (Matérn, 1960) with hardcore distance greater than α, i.e.
no two events will be within α of each other with probability 1. This implies an NND function G O (α) = 0 and therefore a probabilistic resolution of p N O ,α = 1. Under an SMLM procedure the probabilistic resolution will remain 1 for any q, and therefore the super-resolution limit is 0 for all q. This equates to the pattern being perfectly resolvable. Now suppose the hardcore distance is less than α. An expression for the NND function is given in Al-Hourani et al. (2016). Due to its complexity, we elect not to include it here; however, interested readers can follow the same procedure used in Supplementary Note 8.2 for the Thomas cluster process to compute quantities of interest.

Estimating the NND function and probabilistic resolution
The above examples have tractable expressions for the NND function from which the probabilistic resolution and super-resolution limit can be calculated. Expressions for the NND function quickly become intractable as we move to more complex point process models. Here, we illustrate how probabilistic resolution and super-resolution limits can be estimated under such circumstances.
The NND function of a point process can be estimated from a realized point pattern {s 1 , ..., s n } on some region of interest X ⊂ R 2 . To do so, we use the following estimator from (Cressie, 1993, p. 614), there called the Ĝ 3 estimator: Ĝ 3 (r) = Σ i I(r i,X ≤ r, d i > r) / Σ i I(d i > r). Here, r i,X and d i denote the distance from the ith event to, respectively, the nearest event in X and the nearest boundary of X, and I(B) is the indicator function of event B: I(B) = 1 if B is true and 0 otherwise.
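A minimal implementation of this border-corrected estimator, assuming a rectangular window [0, w x ] × [0, w y ] (the function name is our own):

```python
import numpy as np

def g3_estimator(points, window, r):
    """Border-corrected NND estimator: among events farther than r from the
    window boundary, the fraction whose nearest-neighbour distance is <= r."""
    pts = np.asarray(points, dtype=float)
    wx, wy = window
    # nearest-neighbour distance of every event
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nnd = np.sqrt(d2.min(axis=1))
    # distance to the nearest window boundary
    bd = np.minimum.reduce([pts[:, 0], wx - pts[:, 0],
                            pts[:, 1], wy - pts[:, 1]])
    interior = bd > r
    if not interior.any():
        return np.nan          # no event is far enough from the boundary
    return np.mean(nnd[interior] <= r)

# toy usage: four corners of a unit square plus the centre
pts = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9), (0.5, 0.5)]
```

Only events whose distance to the boundary exceeds r contribute, which removes the bias caused by unobserved neighbours outside the window.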
The procedure for computing p N O ,α,q for a point process N O of interest for a given α and q is therefore as follows.

1. Simulate a realization of N O on a region of interest X.
2. Compute the estimator Ĝ O (r) from this realization.
3. q-thin the realization to obtain a realization of N q .
4. Compute the estimator Ĝ q (α) from the thinned realization.
5. The probabilistic resolution p N O ,α,q is estimated as 1 − Ĝ q (α).
6. The super-resolution limit can be estimated by computing Ĝ O (r) over a range of r and numerically finding Ĝ −1 O (Ĝ q (α)).
We illustrate this with an example.

Example: Tubulin
We consider a tubulin localization data set. We assume in a typical imaging experiment that the position, curvature and length of the microtubules are random and hence can be modelled as a fibre process (Stoyan et al., 1995, Chapter 9), and that the tubulin locations are realized on the fibres according to a Poisson process. In other words, if S i ⊂ R 2 is a fibre (microtubule) and S = ∪ i S i ⊂ R 2 the fibre process (collection of microtubules), then the tubulin process is Cox with random intensity Λ(s) = ω · 1 S (s), ω > 0. Provided this fibre process is stationary (Stoyan et al., 1995, p. 282), the tubulin point process will be a stationary Cox process (Stoyan et al., 1995, p. 156), and hence has an NND function that can be estimated from a realization. The data set we consider here is taken from the Single Molecule Localization Microscopy Symposium 2013 data challenge (Sage et al., 2015) and consists of 100000 emitters over a 40µm × 40µm region of interest, thus giving an estimated intensity of the object process of λ̂ O = 62.5µm −2 (see Supplementary Figure 6D). The structure is imaged over 2401 frames, giving q = 1/2401 ≈ 4.90 × 10 −4 . With an ARL of 0.360µm (the estimated ARL of Algorithm 2; see Figure 2), the probabilistic resolution is estimated to be p N O ,0.360,4.9e−4 ≈ 0.864 (the value of the curve at q = 1/2401 in Supplementary Figure 6E). This leads to a super-resolution limit of 1.38 × 10 −3 µm, i.e. with just a single image of the entire object pattern one would need an algorithmic resolution limit of 1.38 × 10 −3 µm to achieve the same probabilistic resolution as the SMLM procedure. This calculation is illustrated in Supplementary Figure 6F.

Supplementary Note 9: Estimating the algorithmic resolution limit
We acknowledge that there may be several suitable methods for estimating the ARL as defined in Definition 12. Here, we provide one method and show that under certain attainable conditions it has preferable statistical properties. The method we present makes use of the pair correlation function to form an estimator for the algorithmic correlation radius ρ, defined in Definition 11. From Supplementary Equation 15, the estimator for the ARL is then given asα =ρ/2.
We begin by modelling the pair correlation function of the image process as g I (r) = h(r) for r ≤ ρ and g I (r) = 1 for r > ρ, where there exists an s with 0 ≤ s < ρ such that h(r) is not constantly 1 on (s, ρ). Suppose we construct an unbiased estimator ĝ I (r) of g I (r) (see the Image Analysis section in the Methods for details) and compute it on each element of R = {r 1 , ..., r m }, a finely, regularly spaced collection of proposals for ρ. We assume ρ ∈ R, that the largest element r m is certainly larger than ρ, and that the smallest element r 1 is certainly smaller than ρ. We further assume that ĝ I (r i ) = g I (r i ) + ε i at r i ∈ R, where ε 1 , ..., ε m are zero mean, iid random variables with finite variance σ 2 > 0.
The estimator of ρ we provide is defined as ρ̂ = argmin r i ∈R T (r i ), where T (r i ) = ( Σ m j=i (ĝ I (r j ) − ḡ i ) 2 / ((m − i)(m − i + 1)) ) 1/2 and ḡ i = (m − i + 1) −1 Σ m j=i ĝ I (r j ). To show that this is estimating ρ, as required, we consider E{T (r i )}.
The intuition is that the function T (r i ) is the standard error of the pair correlation estimator across all proposals to the right of r i , including r i itself. Therefore, moving from right to left, while g I remains constant, T will decrease in value. Once g I starts to deviate from the constant value of one, provided it does so by a sufficient amount, T will start to increase in value, leaving the minimum of T at the proposal closest to ρ. In practice, this means the estimator will be conservative, as it will choose the first point from the right that appears to deviate significantly from one.
To formulate this, let R = {r 1 , ..., r K−1 , ρ, r K+1 , ..., r m } be the collection of proposals, indicating that the true, unknown value of ρ is r K . For i < K we let n L,i = K − i, we let n R,i = m − K + 1, and we let n i = n L,i + n R,i . Subscripts L and R indicate a partitioning of the proposals r i , ..., r m into those that fall to the left or right of r K , respectively. With N i = n 2 i − n i , N L,i = n 2 L,i − n L,i and N R,i = n 2 R,i − n R,i , the function T (r i ) can be decomposed following Baker and Nissim (1963). Taking the expected value of this decomposition, we look to find a sufficient condition for argmin r i E{T (r i )} = r K , i.e. for the minimum of the expected value of the function to be at ρ. Given E{T (r i )} < E{T (r j )} for all K ≤ i < j < m, such a condition needs to ensure E{T (r i )} > E{T (r K )} for all 1 ≤ i < K. We begin by considering a necessary and sufficient condition for E{T (r K−1 )} > E{T (r K )}. For i = K − 1, we have n L,i = 1 and N L,i = 0, so that ḡ L,i = ĝ I (r K−1 ), and it follows (through manipulation) that E{T (r K−1 )} > E{T (r K )} holds if and only if g deviates away from 1 at r K−1 by a larger amount than ((n R,K + 1)/n R,K ) −1/2 σ (which is approximately equal to σ for large n R,K ), i.e. there then exists a local minimum at r K . For it to be a global minimum it is sufficient to have E{T (r i )} > E{T (r K )} for all i < K, and further manipulation yields a sufficient condition on σ 2 for this to hold. Suppose we are able, through simulation, to compute M independent and identically distributed estimates of the pair correlation function ĝ I,1 (r i ), ĝ I,2 (r i ), ..., ĝ I,M (r i ), for all r i ∈ R, each with error of variance σ 2 ; then the estimator ĝ I (r i ) = M −1 Σ M j=1 ĝ I,j (r i ) has variance σ 2 /M .
Therefore, provided we choose a sufficiently large M, this sufficient condition will be satisfied.
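The estimator can be sketched as follows, assuming T (r i ) is the standard error of ĝ I (r i ), ..., ĝ I (r m ) as described above; function names are our own.

```python
import numpy as np

def estimate_rho(r, g_hat):
    """rho_hat = argmin_i T(r_i), with T(r_i) the standard error of the
    pair-correlation estimates at proposals r_i, ..., r_m.
    The last proposal is excluded so every tail has at least two points."""
    g = np.asarray(g_hat, dtype=float)
    m = len(g)
    T = np.empty(m - 1)
    for i in range(m - 1):
        tail = g[i:]
        T[i] = tail.std(ddof=1) / np.sqrt(len(tail))
    return r[int(np.argmin(T))]

def estimate_arl(r, g_hat):
    """ARL estimate alpha_hat = rho_hat / 2."""
    return estimate_rho(r, g_hat) / 2.0
```

On a noiseless pair correlation curve that decays to one, the minimum of T sits at the first proposal (from the right) at which the curve has settled, matching the conservative behaviour described above.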

Verifying it is a true change point
We propose a test to verify whether an estimate of a correlation radius, as defined in (36) and (37), is due to a true change point in the pair correlation function. Suppose a genuine radius of correlation ρ > 0 does exist; then, with everything to the right of ρ being constant, we should find the variance of ĝ I (r K ), ..., ĝ I (r m ) to be smaller than the variance of ĝ I (r 1 ), ..., ĝ I (r K−1 ). We propose an F -test to check whether the difference in variance is statistically significant.
Rejecting H 0 in favour of accepting the alternative (s 2 L,K ≠ s 2 R,K ) is equivalent to accepting that there is a genuine change point in the pair correlation function at ρ̂.
Under the null hypothesis, s 2 L,K /s 2 R,K ∼ F n L,K −1,n R,K −1 , where F n L,K −1,n R,K −1 represents the F distribution with n L,K − 1 and n R,K − 1 degrees of freedom.
We therefore reject the null hypothesis at the p significance level when s 2 L,K /s 2 R,K > F p n L,K −1,n R,K −1 , where F p n L,K −1,n R,K −1 denotes the upper p · 100% point of the F n L,K −1,n R,K −1 distribution.
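The test statistic itself is simple to compute; critical values can be taken from standard F tables (or, e.g., `scipy.stats.f.ppf`). A sketch, with a function name of our own:

```python
import numpy as np

def f_statistic(g_left, g_right):
    """s2_L / s2_R: sample-variance ratio of pair-correlation estimates to
    the left and right of the estimated change point. Compare against the
    upper-p quantile of F(n_L - 1, n_R - 1) to test for a genuine change."""
    s2_L = np.var(g_left, ddof=1)
    s2_R = np.var(g_right, ddof=1)
    return s2_L / s2_R
```

A large ratio indicates the left-hand segment is noticeably more variable than the settled right-hand segment, as expected when a true change point exists.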

Bootstrapping intervals of confidence
Consider the case where we have formed ĝ I (r i ), r i ∈ R, by averaging M independent and identically distributed estimators ĝ I,1 (r i ), ..., ĝ I,M (r i ). We can place all estimates in the m × M matrix G with entries G ij = ĝ I,j (r i ). Using the resampling with replacement framework of Efron and Tibshirani (1993), one can create R bootstrapped estimates ĝ *1 I (r i ), ĝ *2 I (r i ), ..., ĝ *R I (r i ) by creating m × M matrices G *1 , G *2 , ..., G *R , where the M columns of each G *k , k = 1, ..., R, are M randomly sampled (with replacement) columns of G.
Bootstrap estimate ĝ *k I (r i ) is then computed by averaging across the columns, ĝ *k I (r i ) = M −1 Σ M j=1 G *k ij . We then implement the estimator of the radius of correlation described above on each of the R pair correlation estimates [ĝ *k I (r 1 ), ..., ĝ *k I (r m )] T , k = 1, ..., R, to produce bootstrapped estimates α̂ *1 , ..., α̂ *R .
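The column-resampling scheme can be sketched as follows; the `estimator` argument stands in for any radius-of-correlation estimator mapping (r, ĝ) to ρ̂, and the function name is our own.

```python
import numpy as np

def bootstrap_alpha(G, r, n_boot, estimator, rng=None):
    """Column-resampling bootstrap for the ARL estimate.
    G is the m x M matrix with G[i, j] = g_hat_{I,j}(r_i). Each replicate
    averages M columns drawn with replacement, re-runs `estimator`
    (a function (r, g) -> rho_hat), and returns alpha_hat = rho_hat / 2."""
    rng = np.random.default_rng(rng)
    m, M = G.shape
    out = np.empty(n_boot)
    for k in range(n_boot):
        cols = rng.integers(0, M, size=M)      # resample columns of G
        g_star = G[:, cols].mean(axis=1)       # bootstrap pair correlation
        out[k] = estimator(r, g_star) / 2.0
    return out
```

The returned array of α̂ * values can then be summarized by percentile confidence intervals or plotted as a histogram, as in the example below.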

Example
We have applied the estimator and bootstrapping procedure presented here to Algorithms 1, 2 and 3; the resulting estimates are summarized in Table 1.
Further insight into issues encountered when estimating α can be found by looking at the histogram of the bootstrapped estimates. Algorithms 1 and 2 have a neat, unimodal distribution for α̂. This can be explained by observing that the pair correlation functions shown in Figure 2d of the main paper have a smooth shape that, moving from the left, cleanly settles on a constant value of one. This leaves little ambiguity as to the value of the radius of correlation, and thus the algorithmic resolution limit. However, with Algorithm 3 we see the bootstrapped estimates of α take a multimodal distribution. This again can be understood from the pair correlation function in Figure 2d of the main paper, in which there appears to be an interval over which the function approximately settles to one, but with ambiguity over when this can be considered to have actually occurred.

Supplementary Note 10: Further analysis
In this section we look at two further properties, molecule density and signal-to-noise ratio (SNR), and analyze the effect they have on the algorithmic resolution limit.

The effect of molecule density
We note that the definitions of the radius of correlation and algorithmic resolution limit, Definitions 11 and 12, respectively, do not depend on the intensity (also referred to as the density) λ O of the object process. Supplementary Figure 8 illustrates that this is what we observe in practice by demonstrating that the pair correlation function for three different algorithms (imaging operators) has a strikingly similar shape across a wide range of intensities. In particular, by eye, the point at which it settles to a constant one appears very similar for molecule densities (intensities) up to and including 1 molecule per µm 2 .
However, for two of the algorithms analyzed (Algorithms 1 and 2), there appears to be a breakdown of this behavior for densities greater than 1 molecule per µm 2 . This is, to some extent, verified by applying the estimator for α detailed in Supplementary Note 9, the results of which are shown on the right-hand side of Supplementary Figure 8. This is not surprising; there will be a point at which the image becomes saturated and the algorithm is simply unable to perform its task. Interestingly, Algorithm 3 appears to have an approximately constant algorithmic resolution limit across all densities studied.

The effect of signal to noise ratio
We here explore the effect of SNR on the algorithmic resolution limit by changing the number of photons emitted from each molecule, which equates to varying the signal strength. Supplementary Figure 9 shows that above 100 photons per molecule the algorithmic resolution limit remains constant for Algorithms 1 and 2. However, for 100 photons and below, there is an increase in the algorithmic resolution limit. This can be seen in both the values of the estimated algorithmic resolution limit and the shape of the pair-correlation function. This degradation at low SNR of the algorithms' ability to distinguish between objects is in keeping with the results of Ram et al. (2006); further discussion is provided in the Results section of the main text. This behavior is not noticeable with Algorithm 3; however, its algorithmic resolution limit is in all cases significantly above Rayleigh's limit, and it is therefore unsurprising that the effect predicted by Ram et al. (2006) is undetectable.

Supplementary Note 11: Analyzing resolution limited point patterns
The pair-correlation function g(r) (see Definition 3) and Ripley's K-function K(r) (see Definition 6) are the two most common functions characterizing point processes that are estimated from a realized point pattern.
The benefit of the pair correlation function is that it gives a measure of correlation at a specific distance r of interest; however, it can be difficult to estimate, particularly from a single realization with a low number of events. The K-function, on the other hand, integrates rg(r) over the range [0, r], and so smooths out the localized information in the pair-correlation function. However, estimating it is straightforward and well-studied (Illian et al., 2008; Ripley, 1981; Cressie, 1993; Lagache et al., 2013).
For analyzing resolution limited point processes we suggest, where possible, the use of the pair-correlation function. As Supplementary Figure 3 shows, the effects of limited resolution on the correlation structure of the process occur over a distance of 2α; therefore g(r) should only be used for analysis at distances greater than 2α. However, because the pair correlation function gives localized information on the process' correlation structure, the resolution effect on 0 < r < 2α does not propagate into g for r ≥ 2α.
In contrast, due to the integration involved, the effects of limited resolution on the interval (0, 2α) propagate into the K-function at all r, as shown in Supplementary Figures 3C and 3D. For this reason we recommend the use of the K 2α -function, which we define as K 2α (r) = K(r) − K(2α), r > 2α. In the situation where N O is an HPP and the image point process N I under analysis has second-order intensity of the form given in Supplementary Equation (9), K 2α (r) = πr 2 − 4πα 2 , and hence if we define L 2α (r) ≡ ((K 2α (r) + 4πα 2 )/π) 1/2 then L 2α (r) = r and L 2α (r) − r = 0 under CSR of N O .
The function K 2α (r) can be estimated as K̂ 2α (r) = K̂(r) − K̂(2α), r > 2α, where K̂ is the traditional K-function estimator of Ripley (1981), K̂(r) = A n −2 Σ i Σ j≠i w ij I(d ij ≤ r), in which A denotes the area in the object space corresponding to the image being analyzed, w ij denotes Ripley's isotropic edge correction weights (Ripley, 1981), n denotes the total number of localizations, and d ij denotes the distance between the i th and j th localizations. The indicator function is defined as I(d ij ≤ r) = 1 if d ij ≤ r and 0 otherwise. Further discussion is provided in the Methods section of the main text.
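A minimal sketch of these estimators, with our own function names and without edge correction (w ij ≡ 1), so it is only illustrative away from the window boundary:

```python
import numpy as np

def k_hat(points, area, r, weights=None):
    """Naive K estimator: area * n^-2 * sum_{i != j} w_ij * 1(d_ij <= r).
    Edge-correction weights default to 1 (no correction)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)          # exclude i == j pairs
    w = np.ones_like(d) if weights is None else weights
    return area * (w * (d <= r)).sum() / n ** 2

def k2a_hat(points, area, r, alpha):
    """K_2alpha(r) estimate: K_hat(r) - K_hat(2 * alpha), valid for r > 2*alpha."""
    return k_hat(points, area, r) - k_hat(points, area, 2 * alpha)
```

Subtracting K̂(2α) discards the pair counts accumulated on (0, 2α], which is where the resolution-limited distances are concentrated.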

Ripley's K-function for SMLM data
When dealing with SMLM data, it is possible to take advantage of the temporal information encoded in the data to mitigate the algorithmic resolution limit. Given realizations φ 1 I , φ 2 I , ..., φ n F I of the point patterns Φ 1 I , Φ 2 I , ..., Φ n F I for each of the n F frames, we know that a pair of localizations appearing in separate frames cannot mutually interfere with each other through resolution issues. Typically, the classical estimator for Ripley's K-function would sequentially move through every event (localization) in the combined realization φ I = ∪ n F i=1 φ i I and count how many of the other events were within a distance r of that event (with edge corrections if needed). The counts are then averaged over the entire dataset to get an estimate for the expected number of events within a distance r of an arbitrary event.
When estimating Ripley's K-function in the SMLM setting, one can use the following strategy: again, move through every event in φ I , however, when counting the number of events within a distance r, do not include events in the same frame and only include events from the other frames. This means no two events which may be mutually interfering with each other through resolution effects will be used together in the estimate, allowing estimation of the K-function for all r ≥ 0.
Suppose the realized localizations in frame i are φ i I = {s i 1 , ..., s i m i }, i = 1, ..., n F ; the estimator then counts, for each localization, only those localizations within distance r that appear in other frames. Under Model 3, each localization is the mean position s̄ i of RSLP S i = {s i,1 , s i,2 , ...}, i = 1, 2, .... While the second order properties of the image process N I for HPP N O under this model appear intractable, we are able to simulate these processes and estimate the pair correlation function and radius of correlation using the method presented in Supplementary Note 9. Supplementary Figure 10 presents the estimated pair correlation function and estimated radius of correlation under this model, demonstrating that the radius of correlation ρ is indeed estimated to be at around 2α, and thus that ρ/2 is an appropriate definition of the algorithmic resolution limit. Details can be found in the figure caption.
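The frame-exclusion idea can be sketched as follows; edge correction and any renormalization for the excluded same-frame pairs are omitted here, and the function name is our own.

```python
import numpy as np

def k_hat_crossframe(frames, area, r):
    """K estimate counting, for each localization, only localizations from
    *other* frames within distance r. `frames` is a list of (m_i, 2) arrays;
    edge correction and renormalization are omitted in this sketch."""
    nonempty = [np.asarray(f, dtype=float) for f in frames if len(f)]
    pts = np.concatenate(nonempty)
    fid = np.concatenate([np.full(len(f), i) for i, f in enumerate(nonempty)])
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    cross = fid[:, None] != fid[None, :]   # keep only cross-frame pairs
    return area * ((d <= r) & cross).sum() / len(pts) ** 2
```

Because same-frame pairs never contribute, mutually interfering localizations cannot bias the count, allowing estimation for all r ≥ 0 as described above.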

The effect of localization error on spatial point processes
We first show that a homogeneous Poisson process remains completely spatially random under the effect of localization errors. To do so, we model the resulting process as a Neyman-Scott process (Cressie, 1993, p. 664) in which each event s i of the original process is a parent producing exactly one offspring u i , the localization, that is placed a random vector ε i away such that u i = s i + ε i .

Supplementary Figure 10. Estimated pair correlation function (blue) and radius of correlation (red) for HPP N O subject to the imaging operator described in Model 3 for (left) α = 0.01 and (right) α = 0.02. For each, 2000 independent realizations of an HPP with intensity λ = 500 per unit area were simulated. Pair correlation functions on each simulation were estimated (see Methods for details) and averaged together to give the plotted pair correlation estimate. The radius of correlation was estimated using the method outlined in Supplementary Note 9, giving α̂ = ρ̂/2 = 0.0105 and 0.0200, respectively.
To demonstrate that clustered processes remain clustered, we use the example of the Thomas cluster process (see Supplementary Note 8), a standard model used in microscopy (Rubin-Delanchy et al., 2015; Griffié et al., 2017). Suppose clusters have covariance σ 2 c I 2 and each event is subject to independent measurement errors of covariance σ 2 I 2 ; then the process remains Thomas, with each cluster now having covariance (σ 2 c + σ 2 )I 2 .
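The covariance-addition statement can be checked with a quick simulation; the parameter values below are our own and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Offspring displacements about their parent (cluster spread sigma_c) plus an
# independent Gaussian localization error (sigma_e): the observed displacement
# is Gaussian with per-coordinate variance sigma_c^2 + sigma_e^2.
sigma_c, sigma_e, n = 0.05, 0.02, 200_000
offspring = rng.normal(0.0, sigma_c, size=(n, 2))
observed = offspring + rng.normal(0.0, sigma_e, size=(n, 2))

empirical = observed.var(axis=0).mean()
theoretical = sigma_c ** 2 + sigma_e ** 2
```

The empirical per-coordinate variance matches σ c 2 + σ 2 to within Monte Carlo error, as the sum of independent Gaussians is Gaussian with summed variances.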

Proof of Proposition
We wish to show that N I (ds) and N I (du) are correlated for s−u ≤ 2α and uncorrelated for s−u > 2α.
To do so, it suffices to show that E{N I (ds)N I (du)} = E{N I (ds)}E{N I (du)} if and only if s − u > 2α. It will become necessary to consider the first-order intensity of N I , which is given as λ I ν(ds) = P(N I (ds) = 1) = P(N I (ds) = 1|N O (ds) = 1)P(N O (ds) = 1) (64), where q = exp(−λ O πα 2 ) is the probability that, given N O (ds) = 1, there is no further event of N O within the ball b(s, α); from this an expression for λ I follows. Let us consider E{N I (ds)N I (du)}.
E{N I (ds)N I (du)} = P(N I (ds) = 1, N I (du) = 1). Through symmetry, this can be reduced to an expression in the joint distribution of N O (b(s, α)) and N O (b(u, α)). Let us now consider the case when s − u > 2α. In this circumstance, b(s, α) ∩ b(u, α) = ∅, which, due to N O being an HPP, implies that N O (b(s, α)) and N O (b(u, α)) are independent, meaning P(N O (b(s, α)) = n, N O (b(u, α)) = m) = P(N O (b(s, α)) = n)P(N O (b(u, α)) = m), n, m ≥ 0. From Supplementary Equation (66) we conclude that s − u > 2α implies E{N I (ds)N I (du)} = E{N I (ds)}E{N I (du)}. To complete the argument we consider the case when s − u ≤ 2α, in which r depends on the area of intersection of two circles of radius α a distance s − u apart. Supplementary Equation (68) therefore becomes E{N I (ds)N I (du)} = λ 2 O (q 2 r + 2pqr(1 − q) + p 2 (1 − qr(2 − q)))ν(ds)ν(du). To show (66) and (68) are not equal we subtract one from the other. The right-hand side of the difference equals zero only when r = 1 or under a further condition on p and q; neither can be satisfied under the assumptions of the model, completing the proof.

Proof for Proposition 4
Proposition 4. Let N O be a spatial point process. Consider a single molecule localization microscopy experiment with assumptions 1-3 as outlined above. As the number of frames n F → ∞ and q → 0 such that qn F = 1, for s ∈ Φ O , P(s ∈ Φ R ) → 1.
Proof. For a given s ∈ Φ O ,

Supplementary Figure 11. Distribution of the ground-truth locations of detected molecules relative to the center of pixels varies between image analysis algorithms. For each of the indicated image analysis algorithms, the offset of the ground-truth location coordinate of each molecule detected by the algorithm from the center of the detector pixel the molecule is located within was calculated. The distributions of these offsets for each image analysis algorithm across the width and height of a pixel are plotted here for the image shown in Figure 2a and analyzed in Figure 2b. Since the object locations in Figure 2a are simulated with a completely spatially random distribution, the distribution of these offsets is expected to be uniform, as shown in a. However, Algorithm 4 (e) and Algorithm 5 (f) detect more molecules that are closer to the center of pixels than to the edge of pixels, unlike the other image analysis algorithms we examined.

Supplementary Figure 12. The resolution limit of the indicated analysis approach is illustrated here using images of deterministic structures similar to Figure 3. Each structure consists of single molecules positioned evenly around the edge of a ring, indicated by crosses. The x and y axes indicate the number of molecules comprising each ring structure and its radius, respectively. Localizations obtained by the analysis approach, indicated by diamonds, corresponding to structures where all constituent molecules were accurately identified and localized to within 10nm of the true position are shown in blue. Localizations corresponding to structures where one or more molecules were either not identified or where the localization deviated by more than 10nm from the true location are shown in red. The solid line indicates the estimated algorithmic resolution limit α̂. The dashed lines show the 80% bootstrapped confidence interval for the estimated α̂.
All structures with single molecules separated by a distance larger than the algorithmic resolution limit, i.e., to the left of the lines corresponding to α̂, are accurately recovered.
Supplementary Figure 13. Results from applying Algorithm 6, which employs a multi-emitter analysis approach. (a) The results shown in Figure 2d augmented with the pair-correlation plot corresponding to Algorithm 6. The pair-correlation results for Algorithm 6 were calculated based on the localizations obtained by analyzing the same images used in Figure 2d. (b) Magnified view of the region in a indicated by the dashed rectangle. The dashed, vertical lines indicate the algorithmic resolution limits. (c) and (d) The results of using Algorithm 6 to analyze the same images of deterministic structures analyzed in Figure 3. Each structure consists of single molecules positioned evenly around the edge of a ring, indicated by crosses. The x and y axes indicate the number of molecules comprising each ring structure and its radius, respectively. Localizations obtained by the analysis approach are indicated by diamonds. The solid line indicates the algorithmic resolution limit α̂ estimated from the pair-correlation results shown in a. The dashed lines show the 80% bootstrapped confidence interval for the estimated α̂ in b. Localizations obtained using Algorithm 6 corresponding to structures where all constituent molecules were accurately identified and localized to within 10nm of the true position without any repeat detection of a particular molecule are shown in blue. Localizations corresponding to structures where one or more molecules were either not identified, where the localization deviated by more than 10nm from the true location, or where the number of molecules in the structure was overestimated by the algorithm are shown in red. In c, the localizations obtained in b are plotted again such that repeat detections of a particular molecule are ignored when determining whether a structure was recovered by the algorithm or not.
Therefore, localizations corresponding to structures where all constituent molecules are identified at least once and localized to within 10nm of the true position are shown in blue, while localizations corresponding to all other structures are shown in red.

Supplementary Figure 14. Results from analyzing both low- and high-density super-resolution images of tubulin acquired from a BS-C-1 cell. (a) A super-resolution reconstruction of the distribution of tubulin in a BS-C-1 cell. The image is reconstructed using the localizations obtained by analyzing low-density super-resolution images using Algorithm 1. The reconstruction was generated by simulating Gaussian profiles centered at each valid localization produced by Algorithm 1 using σ = 17nm for the width of the Gaussian PSF. The value for σ was selected by calculating the limit of the localization accuracy (Ober et al., 2004; Ram et al., 2006) for the corresponding experimental and imaging parameters, using the average number of photons detected from each fluorophore localized by Algorithm 1, and the average noise associated with the signal from each fluorophore. The blue box indicates the region displayed in higher magnification in the adjacent panel. Scalebar (yellow) = 5µm. (b) Magnified view of super-resolution reconstructions of the region marked in a using the localizations produced by the indicated algorithms (Algorithms 1, 2 and 3, at low and high density). For all three algorithms, more localizations are obtained when analyzing the low-density images compared to the high-density images, resulting in better clarity in the reconstructions corresponding to the low-density images. Scalebar (yellow) = 1µm.