Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine-learning-determined threshold. Preliminary results on experimental data show that our algorithm can super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery; simulations reveal that desktop parallelization brings real-time processing within reach. Ours is the first open-source program for generalized 3D recovery using rotating point spread functions.


Simulation details
Simulation data are generated following the 3D convolution process discussed in Fig. 2a-c. First, a number of emitters are randomly placed in a 3D image space (as in Fig. 2b). The x and y dimensions of the sample space are approximately 6 μm (32 image pixels in each dimension) and the z dimension is approximately 1.6 μm (corresponding to 21 layers in the simulated PSF matrix). The number of photons emitted by each emitter follows a Poisson distribution with a mean of 2000 photons, and the noise in each pixel follows a Poisson distribution with a mean of 20 photons. For each emitter density, approximately 1000 emitters in total are simulated and recovered using our algorithm. Mapping the recovered emitters onto the simulated true positions is an assignment problem: the best match minimizes the total Euclidean distance between the recovered emitters and the true emitters1. We set a 1-pixel threshold for detecting incorrect assignments. A recovered emitter having no true emitter within a 1-pixel distance is recorded as a false positive; similarly, a true emitter lacking a recovered emitter within a 1-pixel distance is recorded as a false negative. Only matched emitters are used to calculate the fitting errors, as shown in Fig. 3b and Supplementary Fig. S2b and d. Training data for the machine learning algorithm consisted of 100 simulated images with a particle density of 0.5 μm⁻²; the data shown in Fig. 3 served as the testing data.
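The matching step above can be sketched as follows. This is a hypothetical helper, not the authors' released code: it assumes SciPy is available and uses `linear_sum_assignment` to solve the assignment problem, then applies the 1-pixel cutoff to count false positives and false negatives.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_emitters(true_xyz, found_xyz, threshold=1.0):
    """Assign recovered emitters to true emitters by minimum total
    Euclidean distance; pairs farther apart than `threshold` pixels
    are rejected and counted as false positives / false negatives."""
    cost = cdist(found_xyz, true_xyz)          # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)   # minimum-total-distance assignment
    ok = cost[rows, cols] <= threshold         # enforce the 1-pixel cutoff
    matches = list(zip(rows[ok], cols[ok]))
    false_pos = len(found_xyz) - int(ok.sum()) # recovered with no true partner
    false_neg = len(true_xyz) - int(ok.sum())  # true with no recovered partner
    return matches, false_pos, false_neg

# Minimal synthetic example: two true emitters, one recovered near each,
# plus one spurious detection far from everything.
true_xyz = np.array([[1.0, 1.0, 0.0], [5.0, 5.0, 2.0]])
found_xyz = np.array([[1.2, 0.9, 0.1], [5.1, 5.0, 1.9], [20.0, 20.0, 0.0]])
matches, fp, fn = match_emitters(true_xyz, found_xyz)
```

Only the pairs in `matches` would enter the fitting-error statistics; the spurious detection is counted as a false positive.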

Machine learning thresholding
Overfitting is a severe problem in super-resolution image analysis, especially when PSFs overlap (Fig. 3c and d). Many algorithms therefore discard dim particles, as these are more likely to be false positive identifications. However, such a threshold is ill-defined and can be difficult to set. Some algorithms use 5% of the largest intensity2, but this choice is situation-dependent; a 5% threshold may not be large enough to remove all false positive results.
In this work, we applied machine learning to define a threshold from simulated images rather than establishing a universal, pre-set intensity threshold. The thresholding is a vector operator that acts not only on the signal intensities but also on the emitter position updates during consecutive iterations. Machine learning is powerful in complicated classification problems; in our case, we want to separate the false positive identifications from the true identifications and remove all false positives. We first analyzed 100 simulated images (density approximately 0.5 μm⁻²) and labeled all true identifications and false positives. We applied the least squares fitting algorithm five times to update the position of each identified emitter. The intensity at each step is divided by the average intensity, and the position change at each update is recorded. Labeling identified emitters is similar to particle tracking; as such, we used our previous particle tracking algorithm to identify emitters, allowing a search radius of 1 pixel. Using this labeled training data, we trained a simple logistic regression model, y = 1/(1 + e^(−w·x)), where w is the vector operator, x contains the relative intensity and position updates of each identified emitter, and y indicates whether an emitter is true (y > 0.5) or false (y ≤ 0.5), which is the commonly accepted assignment in logistic regression3. After training, y can be used to indicate whether an identified emitter is true. Even though the training was conducted on only one type of phase mask (phase mask 2 in Supplementary Fig. S1), the machine learning prediction works well on the other phase masks (phase masks 3 and 4; see results in Supplementary Fig. S2a and c) and on experimental data (Fig. 4).
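The thresholding classifier can be sketched as a plain NumPy logistic regression. This is an illustrative stand-in, not the authors' implementation: the features (relative intensity and the magnitude of the position updates) follow the description above, but the feature distributions and training routine here are synthetic assumptions.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=2000):
    """Fit weights w and bias b by gradient descent on cross-entropy."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(1)
n = 200
# Synthetic labeled training data: true emitters tend to be bright and
# positionally stable across iterations; false positives are dim and unstable.
true_feats = np.column_stack([rng.normal(1.0, 0.1, n),    # relative intensity
                              rng.normal(0.05, 0.02, n)])  # mean position shift (px)
false_feats = np.column_stack([rng.normal(0.3, 0.1, n),
                               rng.normal(0.5, 0.15, n)])
X = np.vstack([true_feats, false_feats])
y = np.concatenate([np.ones(n), np.zeros(n)])

w, b = train_logistic(X, y)

# Score two new identifications; emitters with y > 0.5 are kept as true.
new_x = np.array([[0.95, 0.06],   # bright, stable -> likely true
                  [0.25, 0.60]])  # dim, unstable  -> likely false positive
prob = 1.0 / (1.0 + np.exp(-(new_x @ w + b)))
keep = prob > 0.5
```

The learned weight vector w plays the role of the vector operator described above: once trained on simulated images, it can classify identifications from other phase masks or experimental data without retuning an intensity cutoff.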

Preparation and characterization of porous polystyrene film
The porous polystyrene film was prepared via the breath figure method4. A pre-cleaned coverslip was placed in water-vapor-saturated air, and 100 µL of a 1 wt% solution of polystyrene (Sigma-Aldrich, Mw = 36k) in toluene was drop-cast onto the coverslip. As the toluene evaporated, water droplets condensed on the polystyrene surface, forming micron-sized pores. The final porous film was obtained once the toluene had completely evaporated; the entire film formation process took around 5 minutes. The resulting polystyrene films contained pores of various sizes, as demonstrated by dark field microscopy (Fig. 4d in the main text).

Loading 40 nm orange fluorescent beads to the porous film
The carboxylate-modified microspheres (0.04 µm) were purchased from Invitrogen (stock number F-8792); they have a fluorescence emission maximum around 560 nm. The stock solution was diluted 10⁶-fold in HEPES buffer (pH = 7.3, GIBCO). Diluted bead solution (100 µL) was drop-cast onto the porous polystyrene film and dried in air to generate the final sample for fluorescence microscopy measurements. The polystyrene beads adsorbed to the polymer film surface and pore edges due to the intrinsic hydrophobicity of polystyrene. Performing 3D super-localization analysis of the adsorbed beads reveals the underlying 3D surface morphology of the film.

DH PSF 3D microscopy
The 3D detection data in Fig. 4 were collected with a home-built DH PSF 3D microscope. A 532 nm solid-state laser (Coherent, Compass 315M-100SL) excites the fluorescent emitters. The laser beam is focused at the center of a 1.45-numerical-aperture, 100× oil-immersion objective (Carl Zeiss, alpha Plan-Fluar) in epi-fluorescence excitation mode. The fluorescence signal is separated by a dichroic mirror (Chroma, z532/633rpc), relayed after the tube lens into a 4f system built from two 4f lenses (Thorlabs, Matched Achromatic Doublet Pairs), and transmitted through a DH phase mask (Double Helix LLC, Double Helix phase mask). The final signal is recorded with an sCMOS camera (Hamamatsu, ORCA-Flash 4.0).