Abstract
Synchrotron X-rays can be used to obtain highly detailed images of parts of the lung. However, micromotion artifacts, induced by sources such as cardiac motion, impede quantitative visualization of the alveoli. This paper proposes a method that applies a neural network to synchrotron X-ray computed tomography (CT) data to reconstruct the high-quality 3D structure of alveoli in intact mouse lungs at expiration, without needing ground-truth data. Our approach reconstructs the spatial sequence of CT images by using a deep image prior with interpolated input latent variables, and in this way significantly enhances images of the alveolar structure compared with the prior art. The approach successfully visualizes 3D alveolar units of intact mouse lungs at expiration and enables us to measure the diameter of the alveoli. We believe that our approach will help to accurately visualize other living organs hampered by micromotion.
Introduction
Pulmonary alveoli are the gas-exchange units of the lungs. Gas is exchanged at the surface membranes of the alveoli by direct diffusion and is therefore affected by their 3D morphology^{1,2}. Pulmonary diseases^{3,4} that destroy alveolar morphology can hamper gas exchange and be life-threatening. However, knowledge about real alveolar morphology in intact lungs is limited^{2,5} owing to the absence of a proper visualization approach.
The thoracic structure obstructs visualization of 3D alveoli in intact lungs, and visualization without opening the thorax remains a challenging task. When the thorax is opened to image alveoli, atmospheric pressure may affect the alveolar morphology, so the natural structure of the alveoli can be altered at the micrometer scale if pressure control is not perfect^{2}. Histological biopsy and scanning electron microscopy require chemical fixation of the sample, which is known to distort the tissue^{6,7}. Although intravital microscopy (IVM)^{8} and optical coherence tomography (OCT)^{9} are direct alveolar imaging methods, both require opening the thorax.
In recent studies, synchrotron X-rays, which provide higher coherence and brightness than conventional X-rays^{10,11} along with significantly higher temporal resolution, have been used for CT visualization of alveoli in intact lungs^{12,13,14}. However, these attempts are limited by manual segmentation^{12}, low resolution^{13,15}, or postmortem imaging^{14}. Besides the thoracic structure, a major problem is motion artifacts induced by the heartbeat. Recently, heartbeat-gated synchrotron X-ray imaging techniques have been applied to alveolar imaging in live lungs to overcome cardiac motion^{16,17}, but they have been limited to low resolution^{16,18} or long breath-hold states of the lung^{17}. Thus, high-quality 3D alveolar imaging in intact lungs remains a challenging problem.
Deep image prior (DIP) is a widely used^{19,20,21} method for solving inverse problems such as denoising, super-resolution, and inpainting in an unsupervised manner. DIP demonstrates that the inductive bias of a convolutional layer is a strong prior for representing image signals, resulting in outstanding image recovery without ground-truth data. Recently, DIP has also shown an impressive ability to reconstruct medical images, notably those obtained using positron emission tomography (PET)^{22}, magnetic resonance imaging (MRI)^{23}, and CT^{24}. The task of 3D lung reconstruction can be addressed using 3D DIP^{22}, but its resolution is restricted by the amount of GPU memory. DIP has also been used for 2D CT reconstruction^{24}, but that method cannot fully leverage 3D information.
For dynamic images, the latent-mapping DIP method^{23} extended the 2D DIP framework to the time domain to generate 2D dynamic MR images. Our method is closely related to^{23} in how it processes a sequence of 2D images. However, our latent mapping extends to the spatial domain, which lets us model the relationship among the 2D slices composing a 3D image, analogously to what the time-domain extension does for temporal context. In this paper, we propose a neural-network algorithm for a synchrotron X-ray CT reconstruction framework capable of high-quality 3D imaging, even in the presence of motion such as breathing (Fig. 1a). Importantly, we target synchrotron X-rays, for which extensive labeled data is difficult, if not impossible, to acquire. Thus, we opt for an unsupervised method based on the deep image prior^{19}. The technique requires only the raw synchrotron X-ray projection data from which we wish to reconstruct the high-quality 3D structures; it does not require a clean image as the ground truth. In this procedure, the slicing dimension (z-axis) is encoded as a change in the latent variables describing the structure of z-slices. Importantly, sticking to 2D neural networks allows our method to run on a conventional GPU and still reconstruct a volume as large as \(540\times 576\times 576\) voxels.
We demonstrate new synchrotron X-ray CT images of intact lungs and the efficacy of our method by applying it to the 3D reconstruction of the alveolar structure of intact mice at expiration, without opening the thorax or requiring a long breath hold. We synchronize a mechanical ventilator with our imaging system to capture only the expiration phase of the lung apex, where unexpected micromotion is minimized because of its distance from the heart, even without a cardiac-gating technique; this allows us to utilize a 2D reconstruction framework. We show that our method significantly improves imaging quality over existing alternatives.
Our results show the potential of our method, which may enable high-quality 3D imaging for various other biological applications, and thereby allow discoveries that were previously hindered by the limitations of using synchrotron X-rays on live subjects, such as unexpected micromotion. We foresee the use of our method to visualize other living organs or specimens that exhibit inevitable micromotion.
Method
This section introduces our approach to acquiring a new synchrotron X-ray CT dataset for live lungs (“Synchrotron X-ray imaging”) and our approach to reconstructing the high-quality 3D shape (“Framework”).
Synchrotron X-ray imaging
Animal preparation
All experimental protocols are approved by the SPring-8 Experimental Animals Care and Use Committee. This study follows the ARRIVE guidelines, and all methods were performed in accordance with the relevant guidelines and regulations. A total of five eight-week-old specific-pathogen-free nude mice (BALB/c-nu, body weight 20–25 g, male) are anesthetized by injection of a mixture of medetomidine (1.2%, 0.3 mg/kg), butorphanol (20%, 5 mg/kg), midazolam (16%, 4 mg/kg), and saline (62.8%); the mice enter deep sleep after about 10 min. For synchrotron X-ray imaging, a tracheostomy is performed. The mouse is mounted for the synchrotron X-ray imaging experiment, and anesthesia is then maintained using 2–3% isoflurane in an air/oxygen mixture delivered through the endotracheal cannula by the mechanical ventilator (Inspira Advanced Safety Ventilator, Pressure-Controlled (ASV-P), Harvard Apparatus, USA). Ventilator parameters are: respiratory rate 100 breaths/min, tidal volume 10 ml/kg body weight, positive end-expiratory pressure 3 \(\text {cmH}_{2}\)O. At the end of the experiments, the mouse is euthanized by cervical dislocation under anesthesia.
Image acquisition
We use synchrotron X-rays to acquire projection images of a live lung by rotating the subject on a motorized stage at expiration (Fig. 1a,b). After passing through the sample, the transmitted synchrotron X-ray beam is converted by a scintillator (LSO:Tb, \(5\,\textrm{mm}\times 5\,\textrm{mm}\), 8 \(\upmu\)m thickness) to visible light that is then reflected by a mirror (Fig. 1a). The image is magnified using an optical lens, then captured by an sCMOS camera (pco.edge 5.5 CLHS, PCO AG, Germany) at an effective pixel size of 0.56 \(\upmu\)m and an exposure time of 5 ms. To obtain the images, the lung is imaged only at the same point of the respiratory cycle (Fig. 1b) for expiratory imaging, so that the lung's position is maintained during tomography regardless of respiration. In detail, a motorized stage, shutter, and camera are synchronized and controlled by the mechanical ventilator using transistor-transistor logic (TTL) signals. As the trigger to take an image in our system, the signals are fired at the start of every inspiration in the respiratory cycle generated by the ventilator. A total of 360 images \((2160\times 2560)\) are captured at a time delay of 570 ms from the start of inspiration in every respiratory cycle during a rotation of 180\(^{\circ }\) for expiratory imaging. Finally, we rescale the images to \((540\times 640)\), giving a pixel size of 2.24 \(\upmu\)m, and crop them to \((540\times 576)\) to adjust for the tilted center of rotation (CoR). The synchrotron X-ray imaging experiments are performed at the RIKEN Coherent X-ray Optics beamline (BL29XU) at SPring-8 (http://www.spring8.or.jp), which provides high spatial and temporal resolution by using a bright monochromatic 15 keV X-ray beam (around \(6\times 10^{13}\) ph/s/mm\(^{2}\)/mrad\(^{2}\)/\(0.1\%\) bw). The X-ray dose is below 14 Gy per tomographic scan, which is below the dose threshold (around 15 Gy^{25}) for radiation-induced lung damage and thus has little effect on alveolar morphology even during 3D imaging.
Framework
We develop a generative network to fit the measurements (the image data); an overview of our framework is shown in Fig. 1c. We leverage the fact that 3D reconstruction can be treated as the reconstruction of stacked 2D slices. We further assume that the relationships across the 2D slices of our data lie on a piecewise-linear manifold within a high-dimensional subspace: i.e., an object's 3D structure consists of a sequence of continuous 2D slices whose contexts typically vary smoothly. Therefore, we treat our input data as coming from a sequence of high-dimensional embeddings. This idea is similar to how^{23} dealt with time-sequence data, with the difference that our capture sequence follows not time but a relatively smoothly varying capture condition (the slicing location along the z-axis).
We denote the set of these high-dimensional mappings (Fig. 1c) as \(\{{{\textbf{z}}}_{k}\}_{k=0}^{K-1}\), where K is the total number of observations (slices along the z-axis), and the corresponding set of measurements as \(\{{{\textbf{y}}}_{k}\}_{k=0}^{K-1}\). The task is to find the optimal set of network parameters \({\varvec{\theta }}^*\) that satisfies:

\({\varvec{\theta }}^* = \mathop {\text {arg}\,\text {min}}\nolimits _{{\varvec{\theta }}} \sum \nolimits _{k=0}^{K-1} \left\| {{\textbf{R}}}_{\varvec{\varphi }}\big (f_{{\varvec{\theta }}}({{\textbf{z}}}_{k})\big ) - {{\textbf{y}}}_{k} \right\| _{2}^{2} + \lambda \, {{\textbf{T}}}{{\textbf{V}}}\big (f_{{\varvec{\theta }}}({{\textbf{z}}}_{k})\big ),\)   (1)

where \({{\textbf{R}}}_{\varvec{\varphi }}\) is the Radon transform^{26}, with projection lines defined by the angles \({\varvec{\varphi }}\in {{\mathbb {R}}}^N\), because the raw X-ray data come in the form of integrated projections; the \({{\textbf{T}}}{{\textbf{V}}}\) term is a total-variation (TV) regularizer^{20} that further encourages the reconstruction to be denoised; and \(\lambda\) is the hyperparameter that controls the influence of the TV regularizer. We then obtain a set of reconstructed images \(\{{{\textbf{x}}}_k\} =f_{{\varvec{\theta }}^*}({{\textbf{z}}}_k)\in {{\mathbb {R}}}^{H\times W}\), where H and W are the height and width of the reconstructed image, corresponding to the measurements \(\{{{\textbf{y}}}_k\}\in {{\mathbb {R}}}^{N\times D}\), where D is the number of valid pixels in a single measurement. In our experiment, we use \(N=360\) angles, going from 0 to 179.5\(^\circ\) with a step size of 0.5\(^\circ\).
Note that optimizing Eq. (1) does not require clean, noise-free data as the ground truth. Instead, we find a neural network that expresses the raw input data faithfully.
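As a sketch, the objective in Eq. (1) can be written down for a single z-slice with a generic linear operator standing in for the Radon transform; the matrix `A`, the toy sizes, and the anisotropic TV below are illustrative assumptions, not the paper's actual operator or implementation.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation of a 2D image: sum of absolute finite differences."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def slice_loss(A, x, y, lam=1e-2):
    """Data fidelity ||A x - y||^2 plus lam * TV(x), for one z-slice.

    A : (M, N) stand-in for the Radon transform acting on the flattened image.
    x : (H, W) reconstructed slice, playing the role of f_theta(z_k).
    y : (M,)   measured sinogram for that slice, flattened.
    """
    residual = A @ x.ravel() - y
    return float(residual @ residual + lam * tv(x))

# toy example: a 4x4 image observed through 8 random projection rows
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))
x = rng.standard_normal((4, 4))
y = A @ x.ravel()            # noise-free measurement of x itself
print(slice_loss(A, x, y))   # data term vanishes; only the TV penalty remains
```

Minimizing the sum of this loss over all slices with respect to the network parameters (rather than directly over `x`) is what distinguishes DIP-style reconstruction from classical regularized least squares.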
Latent space
A naive implementation of this pipeline would consider the latent encoding for each slice \({{\textbf{z}}}_k\in {{\mathbb {R}}}^Z\) independently (we set \(Z=324\)), but because the problem is ill-posed, this approach does not yield good reconstructions in practice. Instead, we follow^{23} and enforce our prior knowledge that the change in structure is continuous and smooth; for this purpose, we apply a piecewise-linear mapping of the latent space (manifold); see Fig. 2. A simple latent design along a single straight line is insufficient to embed the full context of structural information along the z-axis. Therefore, we use piecewise-linear latent variables to exploit the various textures across multiple z-slices:

\({{\textbf{z}}}_{k} = {{\textbf{z}}}^{j}_{s} = (1-\alpha _{s})\,{{\textbf{z}}}^{j} + \alpha _{s}\,{{\textbf{z}}}^{j+1}, \quad k = jS + s,\)

where \(\alpha _{s}=s/S\), \(k=0\cdots K-1\) is the index of a z-slice along the z-axis, \(j=0\cdots J-1\) is the index of a z-slice stack, and \(s=0\cdots S-1\) is the index of a latent variable within a z-slice stack. As a result, we generate J stacks of latent variables, where each stack contains S interpolated latent variables (we set \(J=32\) and \(S=17\)). Furthermore, the last latent variable in each stack continues smoothly into the first one of the following stack. Thus, we construct a set of piecewise-linear variables by stacking such stacks of latent variables.
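The piecewise-linear latent construction can be sketched as follows; sampling the anchor latents from \(\mathcal {U}(0, 0.1)\) follows the implementation details given later, while the exact indexing here is an illustrative assumption.

```python
import numpy as np

def piecewise_linear_latents(J=32, S=17, Z=324, seed=0):
    """Build J stacks of S latent codes each, linearly interpolated between
    J+1 random anchor latents, so consecutive stacks join continuously."""
    rng = np.random.default_rng(seed)
    anchors = rng.uniform(0.0, 0.1, size=(J + 1, Z))   # anchor codes ~ U(0, 0.1)
    zs = []
    for j in range(J):
        for s in range(S):
            alpha = s / S                              # alpha_s = s / S
            zs.append((1.0 - alpha) * anchors[j] + alpha * anchors[j + 1])
    return np.stack(zs)                                # (J * S, Z): one code per z-slice

latents = piecewise_linear_latents()
print(latents.shape)  # (544, 324) -> enough codes for 540 z-slices
```

Because `alpha` resets to 0 exactly at each new anchor, the step between the last code of one stack and the first code of the next equals the within-stack step, which gives the smooth continuity described above.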
Generative network architecture
Our generative network \(f_{\theta }\) consists of convolutional layers, batch-normalization layers, activation functions, and nearest-neighbor upsampling layers (Fig. 3). The convolutional layers extract features while preserving the spatial dimension, whereas each upsampling layer doubles the spatial scale. The structure includes five nearest-neighbor upsampling layers, each followed by the combination of a convolutional layer, a batch-normalization layer, and a ReLU activation function. The first layer is a convolutional layer with a batch-normalization layer and a ReLU activation function, whereas the last layer is a convolutional layer alone.

Radon transform

Let \({\textbf{R}}_{\varphi }\in {\mathbb {R}}^{M \times N}\) be the differentiable linear operator representing the cumulative effect of the 2D Radon transform along the lines at angles \(\varphi\). The linear system for X-ray projection can thus be modeled as:

\({\textbf{y}} = {\textbf{R}}_{\varphi }\,{\textbf{x}},\)

where \({\textbf{x}}\in {\mathbb {R}}^{N}\) is the vectorized image x of finite size \((N_{1} \times N_{2})\), and \({\textbf{y}}\in {\mathbb {R}}^{M}\) is the vectorized measurement y of finite size \((M_{1} \times M_{2})\).
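As a sketch, the generative network \(f_{\theta }\) described under “Generative network architecture” might be implemented as below; the channel width and kernel size are assumptions (the paper does not state them here), and reshaping the 324-dimensional code to an \(18\times 18\) map is an assumption chosen so that five doublings reach \(576\times 576\).

```python
import torch
import torch.nn as nn

def make_generator(ch=16):
    """Sketch of f_theta: a first conv/BN/ReLU block, five nearest-neighbor
    upsampling stages (each followed by conv/BN/ReLU), and a bare final
    convolution. Channel width `ch` is an illustrative choice."""
    layers = [nn.Conv2d(1, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU()]
    for _ in range(5):  # 18 -> 36 -> 72 -> 144 -> 288 -> 576
        layers += [nn.Upsample(scale_factor=2, mode="nearest"),
                   nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU()]
    layers += [nn.Conv2d(ch, 1, 3, padding=1)]  # last layer: convolution only
    return nn.Sequential(*layers)

g = make_generator()
z = torch.rand(1, 324).reshape(1, 1, 18, 18)  # 324-dim latent code as a spatial map
out = g(z)
print(out.shape)  # torch.Size([1, 1, 576, 576])
```

Keeping the network fully 2D in this way is what allows a \(540\times 576\times 576\) volume to be reconstructed slice by slice on a single conventional GPU.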
3D shape reconstruction
Our method takes the piecewise-linear variables as input and generates high-quality reconstructed 2D CT images. Stacking these 2D slices along the z-axis yields the entire 3D CT image. For analysis, we construct the 3D alveolar ducts and alveoli by binarizing the reconstructed 3D CT image using Cellpose^{27}, a deep-learning-based cell segmentation algorithm; see Fig. 5a.
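The slice-stacking step can be sketched as follows; the paper binarizes the volume with Cellpose, for which a simple global threshold stands in here purely for illustration.

```python
import numpy as np

def slices_to_volume(slices, threshold=0.5):
    """Stack reconstructed 2D slices along z into a 3D volume, then binarize.
    The paper uses Cellpose for segmentation; a global threshold stands in here."""
    volume = np.stack(slices, axis=0)        # (K, H, W)
    return volume, (volume > threshold)      # grayscale volume and binary mask

# placeholder slices with the paper's in-plane resolution
slices = [np.random.default_rng(k).random((576, 576)) for k in range(4)]
volume, mask = slices_to_volume(slices)
print(volume.shape, mask.dtype)
```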
Implementation details
We use an image-generation architecture^{28} that converts a 324-dimensional latent code to a \(576\times 576\) image. We sample the latent variables as \(\{{\textbf{z}}^{j}_{s}\}^{j\in [0, \ldots , J-1]}_{s\in [0, \ldots , S-1]} \sim {\mathcal {U}}(0, 0.1)\) with \(J=32\) stacks and \(S=17\) piecewise-linear components. We use the Adam optimizer^{29} with an initial learning rate of \(10^{-3}\) and decay the learning rate by a factor of 0.9 every 2k iterations. We use TorchRadon^{30} for the Radon transform. We use \(\lambda =10^{-2}\) for all experiments unless stated otherwise.
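A minimal version of this optimization loop, under the stated hyperparameters (Adam at \(10^{-3}\), decay 0.9 every 2k iterations, \(\lambda =10^{-2}\)), might look as follows; the tiny fully connected generator and the random matrix standing in for the TorchRadon projector are illustrative assumptions, scaled down so the loop runs in seconds.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# stand-ins: 8-dim latent, 6x6 image, a random linear "projector"
Z, H, W, M = 8, 6, 6, 24
A = torch.randn(M, H * W)
g = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, H * W))

x_true = torch.rand(H * W)
y = A @ x_true                    # simulated measurement
z = torch.rand(Z)                 # fixed latent code for this slice

opt = torch.optim.Adam(g.parameters(), lr=1e-3)                          # lr 1e-3
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2000, gamma=0.9)  # x0.9 / 2k iters
lam = 1e-2                                                               # TV weight

def tv(img):  # anisotropic TV on a 2D image
    return img.diff(dim=0).abs().sum() + img.diff(dim=1).abs().sum()

losses = []
for it in range(500):
    opt.zero_grad()
    x = g(z)
    loss = ((A @ x - y) ** 2).sum() + lam * tv(x.reshape(H, W))
    loss.backward()
    opt.step()
    sched.step()
    losses.append(float(loss))
print(losses[0], losses[-1])
```

The real pipeline iterates the same loop over all z-slices, with the convolutional generator and the differentiable Radon operator in place of the stand-ins.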
When run on an Nvidia Titan RTX (24 GB), our unoptimized implementation takes approximately 30 min to reconstruct one sequence of CT images, or equivalently a single 3D CT image (540 2D CT slices at \(576\times 576\) resolution), from a set of tomographic projections (360 projection images of alveoli at \(540\times 576\) resolution).
Comparative methods
We compare our results with two baseline methods: (1) filtered back projection (FBP), a commonly used algorithm for CT reconstruction^{26}, and (2) conjugate gradient for least squares (CGLS)^{31}, a Krylov-subspace iterative algorithm that converges faster and is less sensitive to noise than FBP^{32}. We use the implementations of FBP in^{30} and of CGLS in^{32,33}.
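For reference, CGLS applies conjugate gradients to the normal equations \(A^{\top }A\,x = A^{\top }y\) without ever forming \(A^{\top }A\); a minimal sketch, with a small dense matrix standing in for the projection operator:

```python
import numpy as np

def cgls(A, y, n_iter=50):
    """Conjugate gradient for least squares: minimizes ||A x - y||_2
    while only applying A and A^T, never forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = y - A @ x            # residual in measurement space
    s = A.T @ r              # gradient direction in image space
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        if gamma < 1e-30:    # already converged
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))        # overdetermined toy system
x_true = rng.standard_normal(10)
x_hat = cgls(A, A @ x_true, n_iter=25)   # consistent data -> exact recovery
```

Early stopping of `n_iter` acts as implicit regularization, which is one reason CGLS degrades more gracefully than FBP on noisy projections.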
Evaluation metrics
For evaluating segmentation performance, we use (1) pixel accuracy, the proportion of correct predictions among all predictions; (2) the Jaccard index, the size of the overlap divided by the size of the union; and (3) the Dice coefficient, which is similar to the Jaccard index but penalizes incorrect predictions less. Moreover, we evaluate reconstruction performance using (4) peak signal-to-noise ratio (PSNR), defined as the ratio of the maximum power of a signal to the power of the noise, and (5) multi-scale structural similarity (MS-SSIM)^{34}, which applies structural similarity (SSIM)^{35} at multiple scales to predict perceived image quality.
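The segmentation and PSNR metrics can be computed as below (MS-SSIM is omitted for brevity); this is a plain sketch of the standard definitions, not the paper's evaluation code.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel accuracy, Jaccard index, and Dice coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # overlap
    union = np.logical_or(pred, gt).sum()
    acc = (pred == gt).mean()
    jaccard = tp / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * tp / total if total else 1.0
    return float(acc), float(jaccard), float(dice)

def psnr(x, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - ref) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
print(seg_metrics(pred, gt))  # -> (0.75, 0.5, 0.666...)
```

Note how Dice (2/3) exceeds Jaccard (1/2) on the same masks, illustrating its milder penalty on incorrect predictions.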
Results
Dataset
Currently, no public dataset is available for synchrotron X-ray CT reconstruction of live animals with sufficient detail of the interior parts. Therefore, (1) we capture a real dataset with synchrotron X-rays (as described in “Synchrotron X-ray imaging”), and (2) we generate a synthetic dataset from an actual alveolar structure to enable quantitative comparison with existing approaches.
Synchrotron X-ray projection dataset
To validate our idea, we follow the capture configuration of tracking X-ray microscopy^{12}. We capture X-ray projection images of size \((540 \times 576)\) with an effective pixel size of 2.24 \(\upmu\)m. A total of 360 projection images of the lung apex in five live mice are taken from 0\(^\circ\) to 179.5\(^\circ\) at expiration. We reduce the X-ray dose and unexpected movement of the lung during imaging by minimizing the number of projection images per tomographic scan.
Synthetic dataset
We create manually labeled binary 3D alveolar images from the noisy CT images reconstructed from the real dataset, and use them to generate synthetic projection images \((350 \times 576)\) with the same angular settings as the real dataset. We add severe Gaussian noise or speckle noise to the projection images to simulate the natural degradation that occurs during image capture. We generate this dataset because the real dataset has no ground-truth image with which to evaluate reconstruction quality.
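The degradation step can be sketched as follows; the noise level and the multiplicative form of the speckle model are assumptions, since the paper does not state them.

```python
import numpy as np

def add_gaussian_noise(proj, sigma=0.1, seed=0):
    """Additive Gaussian noise on projection images."""
    rng = np.random.default_rng(seed)
    return proj + rng.normal(0.0, sigma, proj.shape)

def add_speckle_noise(proj, sigma=0.1, seed=0):
    """Multiplicative (speckle) noise: each pixel scaled by 1 + n, n ~ N(0, sigma)."""
    rng = np.random.default_rng(seed)
    return proj * (1.0 + rng.normal(0.0, sigma, proj.shape))

proj = np.ones((350, 576))   # placeholder projection, synthetic-dataset size
noisy_g = add_gaussian_noise(proj)
noisy_s = add_speckle_noise(proj)
```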
Experiment with the synchrotron X-ray projection dataset
Imaging results
Our method successfully reconstructs cleaner alveolar CT images \((540\times 576\times 576)\) than the baseline methods (Table 1, Fig. 4). The performance difference is most noticeable in the segmentation results (Fig. 4). We compare our segmentation results with masks manually annotated by a skilled person (Table 1). The quantitative results on the segmentation evaluation metrics demonstrate that our method performs better.
Findings
We visualize the 3D morphology of alveolar ducts and alveoli (alveolar entrances on the ducts) in live mice at expiration (Fig. 5a) to understand the lung's functionality, which can be affected by alveolar shape and size^{1}. High-quality 3D imaging enables us to easily distinguish between alveolar ducts and alveoli, even by a manual process. We measure the size of the alveoli by determining their maximum diameters in the 3D data for 51 alveoli from five mice. Alveolar diameters follow a normal distribution, with diameters of \(79\pm 2.6\) \(\upmu\)m (mean ± s.e.m.) at expiration (Fig. 5b), about twice as large as reported in prior work analyzing 2D images of fixed inflated lungs (about 40 \(\upmu\)m)^{36,37}. Furthermore, in this study the maximum alveolar diameter exceeds 120 \(\upmu\)m, three times larger than those reports. Interestingly, recent studies measuring alveolar diameters in intact lungs also report values larger than those for fixed lungs: 58 \(\upmu\)m^{38} and 40 \(\upmu\)m^{39}, even in deflated lungs. Thus, we expect that 3D visualization of intact lungs will help in understanding lung functionality and diseases more accurately.
Experiment with the synthetic dataset
Comparative results
Our approach reconstructs better CT images \((350\times 576\times 576)\) than the previous methods in the presence of Gaussian noise (Table 1, Supplementary Fig. 1) and speckle noise (Table 1, Supplementary Fig. 2). Remarkably, our method generates a clear alveolar structure almost identical to the GT images, whereas the baseline methods retain noise. A comparison of PSNR and MS-SSIM also demonstrates the powerful denoising effect of our work. The TV term in our method contributes a minor increase in sharpness.
Ablation study
We conduct an ablation study on the activation function in our generative network, \(f_{{\varvec{\theta }}}({{\textbf{z}}})\) (Table 2). On the real dataset, Swish^{40} achieves the best segmentation performance in terms of pixel accuracy, Jaccard index, and Dice coefficient. Sine^{41} and ReLU also achieve reliable segmentation, whereas Leaky ReLU^{42} yields notably weaker results than the others. On the synthetic dataset, ReLU reaches the best PSNR and MS-SSIM performance against both Gaussian noise and speckle noise. Whereas Swish also achieves a strong denoising effect, Sine does not work well. As in the real-dataset results, Leaky ReLU yields low performance.
Beyond the quantitative results, ReLU reveals a clear structure with less noise (Supplementary Fig. 3). Sine and Swish also succeed in clarifying the structure, but noise remains. Leaky ReLU yields the most denoised image quality but fails to represent the structure. In the synthetic-data experiments, ReLU outperforms the other activation functions in PSNR and MS-SSIM reconstruction quality. Furthermore, ReLU is quite robust to extreme noise, whereas the other activation functions leave artifacts (Supplementary Fig. 3b,c). Hence, ReLU shows the best performance considering the trade-off between segmentation performance and reconstruction quality: it is robust to the severe noise occurring in the actual experimental process and helps the generative network capture a clear structure.
Conclusion and limitation
We propose a neural-network-based approach for unsupervised 3D reconstruction of alveolar ducts and alveoli in intact mouse lungs at expiration using synchrotron X-rays. Our network successfully represents the 3D CT images of alveolar ducts and alveoli, and it enables the measurement of 3D alveolar size at the micrometer scale. The source code will be released in the public domain. Although we significantly improve the CT images, our approach has some limitations. First, our framework cannot work if irregular movement occurs during projection. Such movement commonly occurs when anesthesia is incomplete, or arises from cardiac motion in live mouse lungs. We prevent incomplete anesthesia by administering 2–3% isoflurane through the mechanical ventilator, and we choose the apex of the lung at expiration to minimize lung movement due to cardiac motion and respiration. Another limitation is the tilted CoR in the projection images. In CT reconstruction, the object's rotation center should coincide with the center of the projection image, or severe artifacts occur in the reconstructed image. Our experiment is performed at the micrometer scale, so CoR errors are unavoidable; therefore, we manually fixed the center of rotation in a preprocessing step. We aim to solve this problem by adding network parameters in future work.
Data availability
The datasets used during the current study are not publicly available but can be provided by the corresponding author upon reasonable request.
References
Knudsen, L. & Ochs, M. The micromechanics of lung alveoli: Structure and function of surfactant and tissue components. Histochem. Cell Biol. 150, 661–676 (2018).
Roan, E. & Waters, C. M. What do we know about mechanical strain in lung alveoli?. Am. J. Physiol. Lung Cell. Mol. Physiol. 301, L625–L635 (2011).
Carrozzi, L. & Viegi, G. Lung cancer and chronic obstructive pulmonary disease: The story goes on. Radiology 261, 688–691 (2011).
Suki, B. et al. Mechanical failure, stress redistribution, elastase activity and binding site availability on elastin during the progression of emphysema. Pulm. Pharmacol. Therap. 25, 268–275 (2012).
Hajari, A. J. et al. Morphometric changes in the human pulmonary acinus during inflation. J. Appl. Physiol. 112, 937–943 (2012).
Jonmarker, S., Valdman, A., Lindberg, A., Hellström, M. & Egevad, L. Tissue shrinkage after fixation with formalin injection of prostatectomy specimens. Virchows Arch. 449, 297–301 (2006).
Tran, T. et al. Correcting the shrinkage effects of formalin fixation and tissue processing for renal tumors: Toward standardization of pathological reporting of tumor size. J. Cancer 6, 759 (2015).
Carney, D., DiRocco, J. & Nieman, G. Dynamic alveolar mechanics and ventilator-induced lung injury. Crit. Care Med. 33, S122–S128 (2005).
Mertens, M. et al. Alveolar dynamics in acute lung injury: Heterogeneous distension rather than cyclic opening and collapse. Crit. Care Med. 37, 2604–2611 (2009).
Ford, N. L., Wheatley, A. R., Holdsworth, D. W. & Drangova, M. Optimization of a retrospective technique for respiratory-gated high-speed micro-CT of free-breathing rodents. Phys. Med. Biol. 52, 5749 (2007).
Drangova, M., Ford, N. L., Detombe, S. A., Wheatley, A. R. & Holdsworth, D. W. Fast retrospectively gated quantitative four-dimensional (4D) cardiac micro-computed tomography imaging of free-breathing mice. Invest. Radiol. 42, 85–94 (2007).
Chang, S. et al. Tracking X-ray microscopy for alveolar dynamics in live intact mice. Sci. Rep. 3, 1–5 (2013).
Sera, T., Yokota, H., Uesugi, K. & Yagi, N. Airway distension during lung inflation in healthy and allergic-sensitised mice in vivo. Respir. Physiol. Neurobiol. 185, 639–646 (2013).
Borisova, E. et al. Micrometer-resolution X-ray tomographic full-volume reconstruction of an intact postmortem juvenile rat lung. Histochem. Cell Biol. 155, 215–226 (2021).
Fardin, L. et al. Imaging atelectrauma in ventilator-induced lung injury using 4D X-ray microscopy. Sci. Rep. 11, 1–12 (2021).
Cercos-Pita, J. L. et al. Lung tissue biomechanics imaged with synchrotron phase contrast microtomography in live rats. Sci. Rep. 12, 1–11 (2022).
Lovric, G. et al. Tomographic in vivo microscopy for the study of lung physiology at the alveolar level. Sci. Rep. 7, 1–10 (2017).
Dubsky, S., Thurgood, J., Fouras, A., Thompson, R. B. & Sheard, G. J. Cardiogenic airflow in the lung revealed using synchrotron-based dynamic lung imaging. Sci. Rep. 8, 1–9 (2018).
Ulyanov, D., Vedaldi, A. & Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9446–9454 (2018).
Liu, J., Sun, Y., Xu, X. & Kamilov, U. S. Image restoration using total variation regularized deep image prior. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 7715–7719 (IEEE, 2019).
Mataev, G., Milanfar, P. & Elad, M. DeepRED: Deep image prior powered by RED. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (2019).
Gong, K., Catana, C., Qi, J. & Li, Q. PET image reconstruction using deep image prior. IEEE Trans. Med. Imaging 38, 1655–1665 (2018).
Yoo, J. et al. Time-dependent deep image prior for dynamic MRI. IEEE Trans. Med. Imaging 20, 20 (2021).
Baguer, D. O., Leuschner, J. & Schmidt, M. Computed tomography reconstruction using deep image prior and learned reconstruction methods. Inverse Prob. 36, 094004 (2020).
Dubsky, S., Hooper, S. B., Siu, K. K. & Fouras, A. Synchrotron-based dynamic computed tomography of tissue motion for regional lung function measurement. J. R. Soc. Interface 9, 2213–2224 (2012).
Kak, A. C. & Slaney, M. Principles of Computerized Tomographic Imaging (Society for Industrial and Applied Mathematics (SIAM), 2001).
Stringer, C., Wang, T., Michaelos, M. & Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106 (2021).
Radford, A., Metz, L. & Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434 (arXiv preprint) (2015).
Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980 (arXiv preprint) (2014).
Ronchetti, M. TorchRadon: Fast differentiable routines for computed tomography. arXiv:2009.14788 (arXiv preprint) (2020).
Yuan, J.Y. & Iusem, A. N. Preconditioned conjugate gradient method for generalized least squares problems. J. Comput. Appl. Math. 71, 287–297 (1996).
Biguri, A., Dosanjh, M., Hancock, S. & Soleimani, M. TIGRE: A MATLAB-GPU toolbox for CBCT image reconstruction. Biomed. Phys. Eng. Express 2, 055010 (2016).
Biguri, A. et al. Arbitrarily large tomography with iterative algorithms on multiple GPUs using the TIGRE toolbox. J. Parallel Distrib. Comput. 146, 52–63 (2020).
Wang, Z., Simoncelli, E. P. & Bovik, A. C. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, 2003, vol. 2, 1398–1402 (IEEE, 2003).
Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).
Faffe, D. S., Rocco, P. R., Negri, E. M. & Zin, W. A. Comparison of rat and mouse pulmonary tissue mechanical properties and histology. J. Appl. Physiol. 92, 230–234 (2002).
Mitzner, W., Loube, J., Venezia, J. & Scott, A. Selforganizing pattern of subpleural alveolar ducts. Sci. Rep. 10, 1–7 (2020).
Chang, S. et al. Synchrotron Xray imaging of pulmonary alveoli in respiration in live intact mice. Sci. Rep. 5, 1–6 (2015).
Lovric, G. et al. Automated computerassisted quantitative analysis of intact murine lungs at the alveolar scale. PLoS One 12, e0183979 (2017).
Ramachandran, P., Zoph, B. & Le, Q. V. Searching for activation functions. arXiv:1710.05941 (arXiv preprint) (2017).
Sitzmann, V., Martel, J., Bergman, A., Lindell, D. & Wetzstein, G. Implicit neural representations with periodic activation functions. Adv. Neural. Inf. Process. Syst. 33, 7462–7473 (2020).
Maas, A. L., Hannun, A. Y. & Ng, A. Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, vol. 30, 3 (Atlanta, Georgia, USA, 2013).
Acknowledgments
This work was supported by an IITP grant funded by the Korea government (MSIT) (No. 2019-0-01906: Artificial Intelligence Graduate School Program (POSTECH) and No. 2021-0-02068: Artificial Intelligence Innovation Hub) and a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant. We thank Dr. Sang-Hyeon Lee, Dr. Namho Kim, Moon-Jung Yong, and Un Yang for helping us conduct the synchrotron X-ray imaging and experiments.
Author information
Contributions
S.S. and M.K. wrote the main manuscript text, figures and tables. K.H.J., K.M.Y., Y.K., T.I., J.H.J., and J.P. reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Shin, S., Kim, M.W., Jin, K.H. et al. Deep 3D reconstruction of synchrotron X-ray computed tomography for intact lungs. Sci Rep 13, 1738 (2023). https://doi.org/10.1038/s41598-023-27627-y