Fully automatic and robust 3D registration of serial-section microscopic images

Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstructions that yield new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of drosophila brain neural tissues, serial confocal laser scanning microscopic images of a drosophila brain, serial histopathological images of renal cortical tissues and a synthetic test case. The results show that the presented fully automatic method is able to reassemble continuous volumes and minimize artificial deformations for all data sets, and that it outperforms four state-of-the-art 3D registration techniques, consistently producing solid 3D reconstructed anatomies with fewer discontinuities and deformations.

Robust and fully automatic three-dimensional (3D) image registration of serial-section microscopic images is critical for detailed 3D anatomical reconstruction of large biological specimens, such as serial section Transmission Electron Microscopy (ssTEM) of neural tissues 1 , serial confocal laser scanning microscopic images of a brain 2,3 or serial histopathological microscopic images 4,5 . In biomedical applications, a large specimen is generally embedded in a medium block and cut into a series of slices, called sections, which are collected, stained and digitally imaged. The digital microscopic slices are then used to reconstruct the detailed 3D anatomy for further biological and medical investigation. Serial slides can be aligned manually by setting up a number of pairs of corresponding control points at the same (x, y) location for consecutive images z_i and z_{i+1}; the pairs of images and paired sets of control points are then given to semi-automatic software 6 for image alignment. Fully automatic registration of biological images is possible, as demonstrated by the software TrakEM2 1,7-9 and in various studies 3,10-13 .
However, robust and fully automatic 3D registration of serial-section microscopic images is challenging: the disadvantage of serial-section microscopy is that cutting a block of specimen into sections tends to create discontinuities between sections and leads to deformations within individual sections 1 . Therefore, sections need to be aligned in order to remove deformation and discontinuity. Cardona et al. 8 also pointed out that "TrakEM2 acknowledges that any automatic procedure (such as image registration and image segmentation) will eventually fail partially or fully and will require manual correction by a human operator". Moreover, in comparison to laser scanning confocal images as used in previous studies 2,3 , where the serial image data maintain geometrical continuity in 3D space, serial histopathological slides suffer from complex deformation problems, including physical destruction caused by cutting and fixation, staining artifacts and uneven stain variations due to potential discrepancies in the thickness of individual tissue sections. These complex distortion effects make image registration of histopathological data an even harder task.
The main contribution of this study is to present a fully automatic and robust 3D image registration method for reconstruction of detailed 3D anatomy that is able to deal with the complex deformation problems of different types of microscopic images, such as serial ssTEM images, laser scanning confocal images and histopathological images. The proposed 3D image registration system contains a new 3D alignment and validation model utilizing the B-Spline deformation field and builds on our recent efforts on robust 2D image registration 4,5 . Details are described in the methodology section. The experimental results show that the proposed fully automatic method performs 3D registration well for all data and consistently produces solid 3D reconstructed objects with fewer discontinuities and deformations in comparison to the benchmark methods.

Results
Regarding the experimental materials, four serial-section microscopic image data sets are tested: two sets of ssTEM images of the neural tissues of the drosophila brain 14,15 , containing 20 and 30 grayscale images respectively, a set of 18 serial histopathological color images of renal cortical tissues 4 , and a set of 108 serial laser scanning microscopic images of the drosophila brain 2 . In addition to the real-world serial-section microscopic image sets, a synthetic test case is built for quantitative evaluation. Regarding the benchmark approaches, four state-of-the-art 3D registration methods using TrakEM2 1,7-9 are compared with the proposed method, including a method using least squares (linear feature correspondence) 16 , an elastic b-spline model for biological images (UnwarpJ) 17 , an improved bi-directional b-spline model for histopathological section alignment (bUnwarpJ) 18 and an elastic volume reconstruction method 1 . Each of the four benchmark registration methods is tested with four different transformation models, including translation, rigid, similarity and affine. Hence, 16 benchmark approaches are tested in total.
In the evaluation, for every data set and method applied, the registration results are reconstructed as a 3D anatomical object. In order to assess the continuity of the reconstructed 3D objects and evaluate the performance of each registration method, a randomly selected plane is defined for each dataset to extract side views of the 3D objects produced by the individual methods. For example, in Fig. 1, (a) the inputs for 3D registration are the original serial histopathological images. Without registration, (b) the serial images are sequentially placed into a 3D space, and a randomly selected plane can be defined to cut the 3D object into two parts. The side view of the upper part can then be used to assess the continuity of the reconstructed 3D object; without registration, the continuity of the reconstructed object is poor. After registration by the proposed method, (c) the registered images are sequentially placed in a 3D space to produce (d) a reconstructed 3D object. Next, the 3D object is cut into two parts using the randomly selected plane defined previously, and the side view of the upper part is used to assess the continuity of the reconstructed anatomical object and evaluate the performance of the registration approach.

Serial section Transmission Electron Microscopy (ssTEM) of Drosophila brain. Two ssTEM image sets of the drosophila brain 14,15 are tested in the experiments. The first one, released by Gerhard et al. 14 , contains 20 serial sections of the drosophila melanogaster third instar larva ventral nerve cord. Two planes are randomly chosen to extract the side views of the 3D reconstructed objects generated by the individual methods, and the side views show that the proposed method produces solid 3D reconstructed objects with fewer discontinuity and deformation problems in comparison to the benchmark methods. For illustration purposes, red circles are marked to show that the proposed method yields good continuity while some benchmark methods generate discontinuous contours.
The second data set, released by Cardona et al. 9 , contains 30 serial TEM sections of the drosophila first instar larval brain neuropile and one ventral nerve cord segment. Every image has a dimension of 512 by 512 pixels, with a resolution of 4 nm/pixel and a section thickness of 50 nm. One plane, as shown in Fig. 4, is randomly selected to extract the side views of the reconstructed anatomical objects, and the side views of all objects are presented in Fig. 5, showing that the proposed method outperforms the benchmark approaches and yields a solid object with good continuity; a red circle highlights that the presented method aligns images well while the benchmark methods output more discontinuities. In addition, the fourth benchmark approach 1 fails to find corresponding features among sections and is not able to generate registration outputs.
Serial histopathological images of renal cortical tissues. Eighteen serial histopathological images of renal cortical tissues 4 are used as the third test data set. This data set is comparatively more challenging due to large variations in data appearance, image orientation and size among sections, as well as complex deformations and artifacts introduced during the data preparation process, causing existing methods to fail to reconstruct a solid 3D object and to perform poorly, as shown in Fig. 6. In addition, the fourth benchmark approach 1 fails to find corresponding features among sections and is not able to generate registration outputs. In comparison, the proposed method aligns the images well in 3D and is still able to reconstruct a solid object with good continuity; the 3D reconstruction result is presented in Fig. 7.
Laser scanning microscopic images of the drosophila brain. A set of 108 serial-section laser scanning microscopic images of the drosophila brain is adopted as the fourth test set. This dataset was originally used to evaluate a 4D registration technology, BrainAligner, by Peng et al. 2 , which aligns pairs of 3D brain volumes. As shown in Fig. 8, a plane is randomly defined to extract side views of the anatomical objects reconstructed by the individual methods, and the 3D reconstruction results with associated side views are presented in Fig. 9. Some of the benchmark approaches fail to produce solid 3D objects, and the fourth benchmark approach 1 fails to find corresponding features among sections and is not able to generate registration outputs. In comparison, the presented approach aligns images well in 3D and produces a solid object.

Quantitative evaluation on a synthetic image sequence. For quantitative evaluation, a synthetic serial-section test case is built by first creating a series of ten identical images and then adding random deformation effects to the individual images, as shown in Fig. 10. For the synthetic test case, a general registration performance measurement, i.e. the percentage of pixels with similar intensity levels, is adopted to measure the registration accuracy, and an automatic evaluation tool is built to conduct the quantitative evaluation automatically. Given a series of images I_1, I_2, ..., I_n,
the overall registration accuracy, R, is formulated as the mean of the registration accuracies, r_i, of the individual consecutive pairs <I_i, I_{i+1}>, i.e. R = (1/(n-1)) Σ_{i=1}^{n-1} r_i. Here Ω*_i = Ω_i ∩ Ω_{i+1} defines a mask common to the foregrounds Ω_i and Ω_{i+1} of I_i and I_{i+1}, where Ω_i denotes the foreground data of I_i after deformation, excluding the black background as shown in Fig. 10, and r_i is the percentage of pixels within the common area Ω*_i with similar intensity levels (shown as white pixels in Fig. 10).

Figure 7. 3D reconstruction using serial histopathological microscopic images: (a) original serial histopathological images as inputs; (b) the unregistered images placed sequentially in 3D space and cut by a randomly selected plane to expose a side view; (c) the images registered by the proposed method and (d) the resulting reconstructed 3D object.

Figure 8. One plane is randomly selected for the laser scanning microscopic images of the drosophila brain to extract the side views of the 3D reconstructed objects generated by the individual methods.

Table 1 presents the quantitative evaluation results, and a box plot of the quantitative evaluation results is provided in Fig. 11. Figure 12 presents the registration outputs of the individual approaches. The experimental results show that the proposed method achieves higher registration accuracy scores and performs better than the benchmark approaches.
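To make this accuracy measure concrete, the following minimal sketch computes r_i and R for a registered series. It assumes 8-bit grayscale sections stored as int[][] arrays with a pure black (zero) background and uses an illustrative intensity-difference threshold to decide whether two pixels are "similar"; these details are assumptions for illustration, not part of the measure defined above.

```java
// Sketch of the pairwise registration-accuracy measure described above.
// Assumptions: 8-bit grayscale sections as int[][] arrays, background pixels
// are exactly 0, and "similar intensity" means a difference within `threshold`.
public class RegistrationAccuracy {

    /** r_i: fraction of pixels with similar intensities inside the common foreground mask. */
    static double pairAccuracy(int[][] a, int[][] b, int threshold) {
        long common = 0, similar = 0;
        for (int y = 0; y < a.length; y++) {
            for (int x = 0; x < a[y].length; x++) {
                boolean foreground = a[y][x] > 0 && b[y][x] > 0;   // Omega*_i = Omega_i ∩ Omega_{i+1}
                if (!foreground) continue;
                common++;
                if (Math.abs(a[y][x] - b[y][x]) <= threshold) similar++;
            }
        }
        return common == 0 ? 0.0 : (double) similar / common;
    }

    /** R: mean of r_i over all consecutive pairs of the registered series. */
    static double overallAccuracy(int[][][] series, int threshold) {
        double sum = 0.0;
        for (int i = 0; i < series.length - 1; i++) {
            sum += pairAccuracy(series[i], series[i + 1], threshold);
        }
        return sum / (series.length - 1);
    }
}
```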

Discussion
We have presented a robust and fully automatic 3D image registration technique for detailed anatomical reconstruction from serial-section microscopic images. The method is able to reassemble continuous volumes and to deal with complex distortions, staining variations and artifacts. We have demonstrated our method on four different microscopic image sets, including two serial ssTEM image sets, one laser scanning confocal image sequence and one serial histopathological image set. The registration and reconstruction results show that the proposed 3D image registration method is robust and performs consistently well, even for data sets with large morphological distortion problems. The presented 3D image registration technique is not limited to tissue images but can also be applied to other anatomically or histologically defined medical data and will prove to be a substantial advantage for any application that requires 3D image registration. The software implementation of the presented method and the test data used in this study are made publicly available for the scientific community to use (http://www-o.ntust.edu.tw/~cweiwang/3DRegistration/).

Methods
The proposed 3D image registration system combines and extends our recent efforts on robust 2D image registration 4,5 and is devised with a new 3D alignment validation model utilizing B-Spline deformation fields. The flowchart of the proposed method is shown in Fig. 13. Given the specified reference layer I_r (r = ⌈n/2⌉ in this study), the proposed 3D registration conducts forward and backward image registration sequentially and bidirectionally for every pair of neighboring images. The paired image registration consists of four steps: data normalization and feature enhancement, color deconvolution, feature matching and extraction, and image registration using an improved bi-directional elastic b-spline model. After the paired image registration is conducted to obtain a registered image with the associated deformation field, a validation model is applied by evaluating the deformation field. If the result is accepted, the paired image registration output is used as the final 3D registration result; otherwise, the original image is used as the registration output. More details about each step can be found in the following subsections.
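As a rough sketch of this forward/backward flow, the Java skeleton below walks outward from the reference layer and keeps each pairwise result only when the validation model accepts its deformation field. The Image, DeformationField, PairwiseRegistration and Validation types are illustrative placeholders, not the actual interfaces of our implementation.

```java
import java.util.List;

// Illustrative skeleton of the bidirectional registration flow; the interfaces
// below stand in for the real pairwise registration and validation components.
public class SerialRegistration {

    interface Image {}
    interface DeformationField {}

    static class RegistrationResult {
        final Image image;            // registered (warped) moving image
        final DeformationField field; // associated deformation field
        RegistrationResult(Image image, DeformationField field) {
            this.image = image;
            this.field = field;
        }
    }

    interface PairwiseRegistration {
        RegistrationResult register(Image fixed, Image moving);
    }

    interface Validation {
        boolean accept(DeformationField field); // e.g. mean displacement <= alpha
    }

    static void registerSeries(List<Image> sections, PairwiseRegistration reg, Validation check) {
        int n = sections.size();
        int r = (int) Math.ceil(n / 2.0) - 1;   // reference layer index (0-based)

        // Forward pass: align each section to its already-registered predecessor.
        for (int i = r + 1; i < n; i++) registerLayer(sections, i, i - 1, reg, check);
        // Backward pass: align each section to its already-registered successor.
        for (int i = r - 1; i >= 0; i--) registerLayer(sections, i, i + 1, reg, check);
    }

    private static void registerLayer(List<Image> s, int moving, int fixed,
                                      PairwiseRegistration reg, Validation check) {
        RegistrationResult result = reg.register(s.get(fixed), s.get(moving));
        // Accepted outputs replace the section; rejected ones keep the original image.
        if (check.accept(result.field)) s.set(moving, result.image);
    }
}
```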
Data Normalization and Feature Extraction. The data normalization process is applied to reduce variations in image features and to enhance tissue patterns. This greatly benefits the global feature matching and local area-based direct matching processes, as it automatically adjusts the brightness and contrast of the image color channels based on the image histogram distribution. In color images, the value of each pixel is represented by a vector (x_{c_1}, x_{c_2}, x_{c_3}), where c_j ∈ {c_1, c_2, c_3} indexes the color channels. The normalization maps each original pixel value x_{c_j}, lying in the range from x_low^{c_j} to x_high^{c_j}, to a new value x*_{c_j} in the valid intensity scale from x_min^{c_j} to x_max^{c_j}. The data normalization has been shown to reduce stain variation and enhance tissue patterns, thereby improving the accuracy of the subsequent feature extraction model in identifying valid corresponding control points 4 .
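A minimal sketch of this per-channel mapping is given below. It assumes each channel is held as a flat int[] array and, purely for illustration, derives x_low and x_high from percentile cut-offs of the channel histogram; the exact cut-off rule used in our implementation follows our earlier work 4 and is not reproduced here.

```java
// Sketch of per-channel normalization: values in [low, high] are linearly
// mapped to the full valid range [min, max], clamping out-of-range pixels.
public class ChannelNormalization {

    static int[] normalizeChannel(int[] channel, int low, int high, int min, int max) {
        int[] out = new int[channel.length];
        double scale = (double) (max - min) / Math.max(1, high - low);
        for (int i = 0; i < channel.length; i++) {
            int v = Math.min(Math.max(channel[i], low), high);   // clamp to [low, high]
            out[i] = (int) Math.round(min + (v - low) * scale);
        }
        return out;
    }

    /** Illustrative choice of [low, high] from the channel histogram (percentile cut-offs). */
    static int[] percentileBounds(int[] channel, double lowPct, double highPct) {
        int[] sorted = channel.clone();
        java.util.Arrays.sort(sorted);
        int lo = sorted[(int) (lowPct * (sorted.length - 1))];
        int hi = sorted[(int) (highPct * (sorted.length - 1))];
        return new int[] { lo, hi };
    }
}
```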
Our goal is to extract the eosinophilic structures, which are generally composed of intracellular or extracellular protein, as image features for registration, and a color decomposition technique is utilized to extract the independent haematoxylin and eosin stain contributions from the individual histopathological images using an orthonormal transformation of RGB. In the RGB color space, every color is defined as c⃗ ≡ (c_1, c_2, c_3) ≡ (r, g, b), where r, g and b represent the red, green and blue components, and additive color mixing can be seen as the vector addition of the RGB components. To model the colors in an image as the vector addition of desired (D) and undesired (U) components to a background color (P), new unit vectors are defined along the directions from the background color towards the desired and undesired colors.
PU⃗ and PD⃗ are alternative unit vectors based on the undesired and desired colors; below they are denoted u⃗ and d⃗, and n⃗ is a third unit vector completing the basis.
Then, color c⃗ can be transformed to the new basis and written as c⃗ = d·d⃗ + u·u⃗ + n·n⃗ + p⃗,
where p⃗ ≡ OP⃗, O is the origin of the RGB 3D space and OP⃗ is the vector from O to the background color P. By setting u = 0, we remove the undesired component and obtain the new color c⃗′ = d·d⃗ + n·n⃗ + p⃗. In the case of three channels, the color system can be described as a matrix with every row representing a specific stain and every column representing the optical density (OD) detected in the red, green and blue channels for that stain. In this study, a normalized optical density (OD) matrix M̂ describing the haematoxylin and eosin color system is used for the orthonormal transformation.

2D image registration. Our proposed 3D image registration framework is based on our previous work on robust 2D image registration 4,5 . In order to improve robustness, our 2D image registration method incorporates two approaches: area-based and feature-based matching. Initially, sparse approximation is applied for fast and coarse global registration. Given I_1 and I_2 as two images for alignment, T as the set of all possible transformations between I_1 and I_2 and U_t(I) as the function that maps an image I to its transformed image using the transformation t, the goal is to find the optimal transformation t′ 5 . The coarse global image registration problem can then be formulated as finding the optimal relative transformation t″ between the K-sparse approximations of the two images with the smallest approximate transformation-invariant distance d(p, q). The local deformation is subsequently modeled by a B-Spline deformation field of the form g(x, y) = Σ_{k,l} C_{k,l} β³(x/s_x − k) β³(y/s_y − l), where β³ is the B-Spline of the third degree, C_{k,l} are the B-Spline coefficients, and s_x and s_y are scalars that control the degree of detail of the representation of the deformation field.

A validation model is designed to automatically compute the average of the geometrical distances defined by the deformation field. The system rejects the registration output I′_x when the average geometrical distance is too large, which indicates that the transformation is over-deformed; the threshold is a user-defined parameter α that specifies the maximum mean deformation value to be allowed. A higher α value allows larger transformations but also permits a higher accumulated transformation error, and hence over-deformation may occur with a high α value. On the other hand, a low α value may cause the validation model to reject a well-transformed registration output. Therefore, the α value should be low enough to reject over-deformed registration results and high enough to accept well-deformed registrations. Table 2 lists the valid α value range for each dataset (for example, 10-30 for the laser scanning microscopic images of the drosophila brain 2 ). An illustration is given in Fig. 14 using the serial-section laser scanning microscopic images of the drosophila brain. The validation model ensures that accumulated over-deformation errors do not propagate and affect the rest of the registration process. Figure 15 compares the registration results and associated deformation fields with and without the proposed validation model, showing that the registration results obtained without the validation model suffer from over-deformation and accumulate transformation errors; over-transformed registration outputs pass these errors on, causing higher transformation error in the registration of the next layer.
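For readers who wish to experiment with the stain separation step, the sketch below follows the common optical-density formulation (stain contributions obtained by multiplying the OD vector with the inverse of the normalized stain matrix). It assumes 8-bit RGB input and an externally supplied 3x3 inverse matrix; it is a generic illustration of the technique rather than the exact transformation used in our implementation.

```java
// Sketch of optical-density based stain separation.
// Assumptions: 8-bit RGB input and a precomputed inverse of the normalized
// OD matrix M (rows = stains, columns = R, G, B optical densities).
public class StainSeparation {

    /** Converts an 8-bit intensity to optical density: OD = -log10(I / I0). */
    static double opticalDensity(int value) {
        return -Math.log10(Math.max(value, 1) / 255.0);
    }

    /** Per-stain contributions for one pixel: contributions = M^-1 * OD(rgb). */
    static double[] stainContributions(int r, int g, int b, double[][] inverseM) {
        double[] od = { opticalDensity(r), opticalDensity(g), opticalDensity(b) };
        double[] stains = new double[3];
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                stains[row] += inverseM[row][col] * od[col];
            }
        }
        return stains;   // e.g. { haematoxylin, eosin, residual }
    }
}
```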
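Similarly, the validation criterion can be sketched as follows, assuming the deformation field is available as dense per-pixel displacement components dx and dy; comparing the mean displacement magnitude against α is one plausible reading of "the average of geometrical distances" and is meant only as an illustration of the acceptance test described above.

```java
// Sketch of the validation model: the mean magnitude of the displacement
// vectors in the deformation field is compared against the threshold alpha.
public class DeformationValidation {

    /** Mean Euclidean length of the displacement vectors of a dense deformation field. */
    static double meanDisplacement(double[][] dx, double[][] dy) {
        double sum = 0.0;
        long count = 0;
        for (int y = 0; y < dx.length; y++) {
            for (int x = 0; x < dx[y].length; x++) {
                sum += Math.hypot(dx[y][x], dy[y][x]);
                count++;
            }
        }
        return count == 0 ? 0.0 : sum / count;
    }

    /** Accepts the registration output only if the mean deformation stays within alpha. */
    static boolean accept(double[][] dx, double[][] dy, double alpha) {
        return meanDisplacement(dx, dy) <= alpha;
    }
}
```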
Data and Software. The software implementation of the presented method is developed in Java (with jdk 1.7.0.51 installed) and is based on the Fiji framework 16 and the TrakEM2 image registration framework 9 . The software and the data are both made publicly available for the scientific community to use (http://www-o.ntust.edu.tw/~cweiwang/3Dregistration/).