Abstract
Investigation of cell structure is hardly imaginable without brightfield microscopy. Numerous modifications, such as depth-wise scanning or video enhancement, make this method state-of-the-art. This raises the question of what maximal information can be extracted from ordinary (but well-acquired) brightfield images in a model-free way. Here we introduce a method for physically correct extraction of features for each pixel, where these features resemble a transparency spectrum. The method is compatible with existing ordinary brightfield microscopes but requires mathematically sophisticated data processing. Unsupervised clustering of the spectra yields a reasonable semantic segmentation of unstained living cells without any a priori information about their structures. Despite the lack of reference data (to prove strictly that the proposed feature vectors coincide with transparency), we believe that this method is the right approach to intracellular (semi)quantitative and qualitative chemical analysis.
Introduction
Brightfield microscopy in video-enhancement mode has shown unprecedented success as a method for investigating living objects, since it is cheap and non-intrusive in sample preparation and, in its innovative setup^{1}, has an excellent spatial and temporal resolution, which opens many possibilities for automation. Classical image-processing techniques such as feature extraction or convolutional neural networks do not work well here due to the huge variability of microworld data. This calls for image pre-processing techniques that would utilize all available information to supply rich, physically relevant feature vectors to subsequent methods of analysis.
Indeed, classical brightfield microscopy measures properties of the incoming light affected by a sample. If multiphoton processes are negligible (i.e., the intensities are moderate), a linear response model can be used. A medium observed in such a model can then be fully characterized by a transparency spectrum \(T(\vec {r})\) defined for each pixel. Such a spectrum can give ultimate information about the medium and significantly boost subsequent machine learning methods.
The most convenient, classical way of obtaining such a spectrum is to modify the measuring device (microscope). This is mostly done using single scanning interferometers^{2}, matrices of them^{3}, matrices of colour filter arrays^{4}, or other adjustable media^{5,6}. Such technical arrangements can be further successfully coupled with machine learning methods^{7}. Purely instrumental methods are certainly the most correct but require sophisticated equipment and are not fully compatible with typical brightfield techniques such as depth-wise z-scanning. Due to both hardware and algorithms, this makes these methods a separate group rather than a subtype of the brightfield methods.
For classical brightfield microscopy, most approaches rely on trained (or fitted) models based on a set of reference images with known properties^{8}. The most mature methods rely on principal component analysis^{9} or sparse spatial features^{10}. Some of these techniques do not aim at full-spectral reconstruction but rather at more effective colour resolution (which has been very useful in distinguishing fluorescence peaks)^{11}. The main disadvantage of such methods is their global approach, which is feasible only for homogeneous images. Most “local” methods involve various artificial neural networks^{12} and can work well if they are trained on a reference dataset similar to the observed system. Data of this kind almost never occur in microscopy due to the greater variability of objects in the microworld (because, e.g., the known objects are artificial, the investigated system is living, or the in-focus position can be ambiguous). This gives a cutting edge to physically inspired methods, which make no assumption about the type of the observed object and use no special equipment except a classical brightfield microscope.
Theoretical model
For most biologically relevant objects multiphoton interactions can be neglected^{13}. Thus, a linear response model can be used for description of the measurement process. The model consists of four entities (Fig. 1) which are physically characterized as follows:

1.
Light source gives a light spectrum \(S(\lambda )\), which is assumed constant and spatially homogeneous.

2.
Medium is, in each point of the projection onto a camera sensor plane, characterized by an unknown transparency spectrum \(T(x, y, \lambda )\).

3.
Camera filter, where each camera channel c is characterized by a quantum efficiency curve \(F_c(\lambda )\).

4.
Camera sensor is described (by a purely phenomenological approach) by the exposure time \(t_e\) and the energy load curve \(I_c = f(E)\), where \(I_c\) is the pixel sensor output (intensity) and E is the energy absorbed by the pixel sensor during the exposure time. We assume that the image is not saturated and, thus, f(E) can be approximated linearly.
Mathematically, it can be expressed as

$$ I_c = f\left(\int_{0}^{t_e}\int S(\lambda)\, T(x, y, \lambda)\, F_c(\lambda)\, \mathrm{d}\lambda\, \mathrm{d}t\right), \quad (1) $$

where \(I_c\) is the image intensity at a given pixel. All observable, biologically relevant processes are slow compared with the camera exposure time (usually a few ms) and, therefore, the outer integral can be eliminated. More importantly, let the variable f, which reflects the dependence between the spectral energy and the sensor response, be 1. The multiplication inside the internal integral is commutative, which allows us to introduce an effective incoming light \(L_c(\lambda ) = S(\lambda )\cdot F_c(\lambda )\). All these mathematical treatments give the reduced equation for the measurement process as

$$ I_c = t_e \int L_c(\lambda)\, T(x, y, \lambda)\, \mathrm{d}\lambda. \quad (2) $$
Intentionally, this simple model does not include any properties of the optics, sophisticated models of light–matter interactions, or spatial components (focus, sample surface, etc.). The aim of the method is to describe an observed object in the best possible way, with minimal assumptions about its nature or features.
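As an illustration, the reduced measurement model can be evaluated numerically. The sketch below uses entirely made-up spectra (a flat source, a Gaussian transparency dip, a Gaussian green-channel efficiency); only the structure of the forward computation, with the effective incoming light \(L_c = S\cdot F_c\) and a linear sensor response, follows the model above.

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal integral over a wavelength grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

lam = np.linspace(400.0, 700.0, 48)                    # 48 wavelengths [nm]
S = np.ones_like(lam)                                  # light source S(lambda), made up
T = 1.0 - 0.5 * np.exp(-((lam - 550.0) / 40.0) ** 2)   # sample transparency, made up
F_g = np.exp(-((lam - 530.0) / 50.0) ** 2)             # green-channel efficiency, made up
t_e = 0.2936                                           # exposure time [s]

L_g = S * F_g                                          # effective incoming light L_c = S * F_c
I_g = t_e * integrate(L_g * T, lam)                    # reduced model, f(E) assumed linear
```

Since \(T(\lambda) \le 1\) everywhere, the simulated intensity is necessarily below the intensity of the unobstructed light path, which is exactly the assumption discussed later for light-condensing objects.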
Model extension for continuous media
In order to extract a transparency profile from the proposed model, one has to solve an inverse problem for a system of three integral equations (in the case of a 3-channel, RGB, camera). This cannot be solved directly, since the model is heavily underdetermined. (In this text, by the terms “transparency” and “spectrum” we mean “quasi-transparency” and “quasi-spectrum”, since this method determines only those properties of a microscopy image which are similar to the transparency spectra, not the transparency itself.)
Additional information can be squeezed from the physical meaning of the observed image: neighbouring pixels are not fully independent. The observed object usually has no purely vertical parts (which is quite typical for cell-like structures) and other Z-axis-related changes are not fast^{14}. If this holds, the image can be treated as a continuous projection of the object’s surface (in the optical meaning) onto the camera sensor. In this case, neighbouring pixels correspond to neighbouring points in the object.
In addition, let us assume that the object’s volume can be divided into subvolumes in such a way that the transparency spectra inside a subvolume are spatially continuous (in the L2 sense). This assumption is quite weak, because it can always be satisfied in the limiting case where a subvolume shrinks to the size of a single voxel.
For biological samples, which show almost no strong gradients of structural changes, it holds that the pixel grid demarcates the projected image finely enough. Formally, this criterion can be expressed as

$$ \left\Vert T(\vec{r} + \epsilon\vec{u}, \lambda) - T(\vec{r}, \lambda)\right\Vert < q, \quad (3) $$

where \(\vec {u}\) is a random vector and \(q, \epsilon\) are small numbers. This equation closely resembles the Lyapunov stability criterion. The \(\epsilon\) reflects the neighbourhood size and q is related to the degree of discontinuity. The criterion can be violated if \(\vec {u}\) crosses a border between objects, but not inside a single object.
Optimization procedure
For pixel m, the combination of the optimization criteria in Eqs. (2) and (3) gives (in discrete form)

$$ \sum_{c=1}^{C}\left(I_c - t_e \int L_c(\lambda)\, T_m(\lambda)\, \mathrm{d}\lambda \right)^{2} + \frac{1}{N}\sum_{n \in {\mathbb {N}}_m} G_{mn} \sum_{i=1}^{w} \bigl(T_m(\lambda_i) - T_n(\lambda_i)\bigr)^{2} \rightarrow \min, \quad (4) $$

where C is the number of channels, w is the number of discrete wavelengths, and \(G_{mn}\) is a measure of discontinuity between pixels m and n. \({\mathbb {N}}_m\) is the set of points whose Euclidean distance to the pixel m is equal to or less than \({\mathcal {T}}_{ED}\). We used \({\mathcal {T}}_{ED}=1\), but a larger neighbourhood may improve the convergence speed. The integral in the first part of Eq. (4) is solved numerically; we used the Simpson integration method^{15} with 48 discretization points \(\lambda _i\).
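A minimal numpy sketch of this per-pixel loss, under the assumption that Eq. (4) combines the squared residuals of the measurement model with a \(G_{mn}\)-weighted spectral-difference penalty over the neighbourhood \({\mathbb {N}}_m\) (the exact weighting is given in the paper). The Simpson helper assumes a uniform grid with an odd number of points.

```python
import numpy as np

def simpson(y, x):
    """Composite Simpson's rule on a uniform grid with an odd number of points."""
    h = x[1] - x[0]
    return h / 3.0 * (y[0] + y[-1] + 4.0 * np.sum(y[1:-1:2]) + 2.0 * np.sum(y[2:-1:2]))

def pixel_loss(T_m, I_m, L, lam, t_e, T_neigh, G):
    """Sketch of the Eq. (4) loss for one pixel.

    T_m     : (w,) candidate transparency spectrum of pixel m
    I_m     : (C,) measured channel intensities of pixel m
    L       : (C, w) effective incoming light per channel
    T_neigh : (N, w) current spectra of the neighbourhood pixels
    G       : (N,)  discontinuity measures G_mn
    """
    # First part: squared residuals of the reduced measurement model.
    data_term = sum((I_m[c] - t_e * simpson(L[c] * T_m, lam)) ** 2
                    for c in range(L.shape[0]))
    # Second part: mean-field smoothness penalty over the neighbourhood.
    smooth_term = np.mean(G * np.sum((T_neigh - T_m) ** 2, axis=1))
    return data_term + smooth_term
```

With a spectrum that reproduces the measured intensities exactly and zero discontinuity weights, the loss vanishes, which is the target of the per-pixel minimization.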
The trickiest issue in Eq. (4) is the calculation of the discontinuity measure \(G_{mn}\). We defined it as
where \(D_{k}\) is the central gradient in pixel k, \({\mathcal {T}}_b\) is a bias parameter (we used \({\mathcal {T}}_b = 0.9\)), and \({\mathbb {B}}_{mn}\) is the set of points which form the line between pixels m and n. The set of such points is calculated using the Bresenham algorithm^{16}. \({\mathcal {E}}_k\) indicates whether pixel k is classified as an edge. For this we used the Canny edge detection algorithm^{17} applied to a gradient matrix smoothed by a 2D Gaussian filter^{18} with a standard deviation of 0.5.
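The geometric ingredients of this computation can be sketched in plain numpy: the Bresenham routine generates \({\mathbb {B}}_{mn}\), and the central-difference gradient magnitude is the quantity that is smoothed and fed to the edge detector (the Canny step itself is omitted here).

```python
import numpy as np

def bresenham(p0, p1):
    """Integer pixels on the line between p0 and p1 (Bresenham's algorithm)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, pts = dx + dy, []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts

def central_gradient(img):
    """Central-difference gradient magnitude (Euclidean norm of x/y gradients)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

For neighbouring pixels (\({\mathcal {T}}_{ED} = 1\)) the Bresenham line degenerates to the two endpoints themselves; the routine only becomes non-trivial for larger neighbourhoods.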
The gradient calculation differs between the first and subsequent iterations. In the first iteration, there is no valid spectral guess, so the gradients and the edge detection are calculated from the original image. The edge detection algorithm requires a single-channel (grayscale) image, whereas the input image is RGB. We therefore used principal component analysis (PCA)^{19,20} and retained only the first principal component in order to obtain the most informative grayscale representation of the data.
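The PCA-to-grayscale step can be sketched as follows; this is a generic first-principal-component projection, not the exact implementation from our codes.

```python
import numpy as np

def pca_grayscale(rgb):
    """Project an RGB image onto its first principal component.

    rgb : (H, W, 3) array. Returns an (H, W) grayscale image that retains
    the maximal variance of the colour data (used before edge detection).
    """
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    X -= X.mean(axis=0)                       # centre the channels
    # Right singular vector with the largest singular value = first PC.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(h, w)
```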
In subsequent iterations, a spectral guess exists and, instead of the gradient, we used the cross-correlation with zero lag: \(D_{k} = T_{k-1}(\lambda )\star T_{k+1}(\lambda )\). The vertical and horizontal gradients were merged using the Euclidean norm.
For the numerical optimization of Eq. (4), the covariance matrix adaptation evolution strategy (CMA-ES)^{21} proved to be a suitable, robust global optimization method^{22}. Due to the mean-field nature of the second part of Eq. (4), the method is iterative, usually with 20–40 iterations to converge. In each iteration step and for each pixel, the minimization is conducted until a predefined value of the loss function is achieved. Different schedules of tolerance changes can be applied; we used the simplest one, a linear decrease. The algorithm flow chart is presented in Fig. 2.
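The outer mean-field loop with the linear tolerance schedule can be sketched as follows; the `minimise` callable stands in for the per-pixel CMA-ES minimisation of Eq. (4), which is not reproduced here.

```python
import numpy as np

def linear_tolerance(it, it_max, tol_start=1e-1, tol_end=1e-3):
    """Linearly decreasing loss tolerance over the iterations (illustrative bounds)."""
    frac = min(it / max(it_max - 1, 1), 1.0)
    return tol_start + frac * (tol_end - tol_start)

def mean_field_iterations(pixels, minimise, it_max=40, change_tol=0.01):
    """Sketch of the outer loop: re-optimize every pixel's spectrum against the
    current guesses of its neighbours until the guesses stop changing.

    pixels   : (P, w) initial spectral guesses
    minimise : callable(spectra, tol) -> new (P, w) spectra; stands in for the
               per-pixel CMA-ES minimisation, not shown here.
    """
    T = pixels.copy()
    for it in range(it_max):
        tol = linear_tolerance(it, it_max)
        T_new = minimise(T, tol)
        change = np.max(np.abs(T_new - T))
        T = T_new
        if change < change_tol:        # stop once the mean field is stable
            break
    return T
```

The stopping rule (maximal per-spectrum change below a fixed threshold) mirrors the convergence criterion described in the Results section.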
Microscopy system and camera calibration
In order to obtain reasonable local spectra, we must ensure that the camera sensor pixels have homogeneous responses. From the hardware point of view, they are printed as semiconductor structures and cannot be changed. Therefore, we introduced a spectral calibration in the form of a post-processing routine designed to obtain equal responses from all camera pixels.
The first part of the calibration is experimental and is aimed at measuring each pixel’s sensitivity. We took a photograph of the background through a set of gray layers with varying transparency, covering a 2-mm-thick glass (Step ND Filter NDL10S4). After that, we replaced the microscope objective by the fibre of a spectrophotometer (Ocean Optics USB 4000 VIS-NIR-ES) to record the spectra corresponding to each of the filters, see Fig. 3a.
The second part is computational. For each pixel, we constructed a piecewise function S(M), where \(S_i\) is the integral of the spectrum measured by the fibre spectrometer for filter i and \(M_i\) is the corresponding intensity value in the image. Between these points, the function S(M) is linearly interpolated, see Fig. 3d. For the colour camera that we used, the algorithm is slightly different. Most RGB cameras are equipped with a Bayer filter, which effectively discriminates three sorts of pixels. Each ‘sort’ has a different dependence of the quantum efficiency on the wavelength, see Fig. 3b. These dependencies are usually supplied by the camera producer. In this case, the recorded spectrum should be multiplied by the corresponding efficiency curve prior to the integration. The result of the multiplication is shown in Fig. 3c.
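The per-pixel piecewise-linear map S(M) amounts to a one-dimensional interpolation; a sketch with hypothetical calibration points:

```python
import numpy as np

def calibrate_pixel(M, M_points, S_points):
    """Piecewise-linear calibration S(M) for one pixel.

    M_points : raw intensities recorded through the gray filters (ascending)
    S_points : integrals of the corresponding spectrometer spectra (for a colour
               camera, each spectrum is first multiplied by the channel's quantum
               efficiency curve before integration)
    """
    return float(np.interp(M, M_points, S_points))
```

For example, with hypothetical calibration points M = (100, 200, 400) and S = (1.0, 2.5, 7.0), a raw value of 300 maps to 4.75.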
The proposed method of calibration is universal, applicable to any camera producing raw data, and is not based on any assumption about the nature of the image or the underlying acquisition processes. The algorithm itself is a post-processing technique and requires only the calibration images and the data from the spectrometer. All results described below were obtained after this image correction. The calibration and correction routines are implemented as a native application and are freely available.
Results
The method essentially requires only three specific inputs: an image, the incoming light spectrum, and the camera filter profiles. The camera filter profiles are usually supplied with the camera or can be measured directly using an adjustable monochromatic light source. The incoming light spectrum is less straightforward, because the light emitted by the source is altered by the light path. A convenient way is to replace the objective inlet by a cosine corrector with a spectrometer and measure the incident light spectrum. This implies that, in the case of any substantial change in the optical path (e.g., an objective replacement), the incoming light spectrum has to be re-measured. In practice, it is no problem to measure a set of spectra corresponding to different objectives, iris settings, etc.
The proposed method appears to be quite robust to parametrization inaccuracies and errors. We used the quantum efficiency curves supplied by the vendor, measured the spectrum reaching the sample, and obtained practically feasible results. The method can be applied to any brightfield microscope setup. The only condition is access to the camera's primary signal immediately after the analog-to-digital conversion, before any thresholding, white balancing, gamma correction, or other visual improvement is employed.
The sample has to obey three assumptions: localized gradients, reasonable flatness, and linear response. If these assumptions hold, the obtained results will be in agreement with the physical properties of the medium. Most relatively flat biological samples (e.g., a single layer of cells) fulfil all these criteria. In order to show the capacity of the method, we used it for the analysis of images of unstained live L929 mouse fibroblasts recorded using a video-enhanced brightfield wide-field light microscope in time lapse and with through-focusing. For the determination of the best focal position in the z-stack, we used the gray-level local variance^{23}. The effective light spectrum, the result of multiplying the light source spectrum by the camera filter transparency curves, is shown in Fig. 4b. The original raw image is shown in Fig. 4a and looks greenish due to the prevalence of green in the incoming light spectrum.
As clearly seen in Fig. 4d,e, the method shows a non-trivial convergence behaviour of the variation coefficient (with a local maximum at iteration 2 and a local minimum at iteration 4) and of the cost. This behaviour is not related to changes in the schedule of tolerances, which decreases linearly until iteration 10 and is then kept constant; the iteration process is stopped when the change falls below 0.01. We have not investigated the reason for this course deeply, but it is definitely repeatable across all the tested measurements (e.g., Fig. S3b,c). A natural way of visually verifying an image of transparency spectra is artificial illumination. We used the spectrum of a black body at T = 5800 K according to the Planck law (Figs. 4c, S3a). The transformed image is quite similar to the raw data, which supports the validity of the method. To obtain such an image, we multiplied each pixel’s transparency spectrum by the illumination spectrum and the CIE standard matching curves. The integrals of the corresponding curves gave the coordinates in the CIE 1931 colour space.
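The artificial-illumination rendering can be sketched as follows. The Planck law is exact; the single-Gaussian stand-ins for the CIE 1931 matching curves are a crude illustration only (the actual computation uses the tabulated curves).

```python
import numpy as np

H, C0, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam_nm, T_bb=5800.0):
    """Planck spectral radiance (arbitrary scale) at wavelength lam_nm [nm]."""
    lam = lam_nm * 1e-9
    return (2.0 * H * C0**2 / lam**5) / (np.exp(H * C0 / (lam * KB * T_bb)) - 1.0)

def transparency_to_xyz(lam, T_spec, T_bb=5800.0):
    """Artificially illuminate a transparency spectrum and integrate to CIE XYZ.

    The Gaussians below are crude stand-ins for the CIE 1931 matching curves,
    used here only to make the sketch self-contained.
    """
    E = planck(lam, T_bb) * T_spec                 # artificially illuminated spectrum
    xbar = 1.06 * np.exp(-0.5 * ((lam - 599.0) / 37.9) ** 2)
    ybar = 1.01 * np.exp(-0.5 * ((lam - 558.0) / 46.9) ** 2)
    zbar = 1.78 * np.exp(-0.5 * ((lam - 446.0) / 19.5) ** 2)
    dlam = lam[1] - lam[0]
    return np.array([np.sum(E * b) * dlam for b in (xbar, ybar, zbar)])
```

At 5800 K the Planck spectrum peaks near 500 nm (Wien's law), which is why a fully transparent pixel renders as a near-daylight white.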
Evaluating the asset of the proposed quasi-spectral reconstruction (Fig. 5a–e) for clustering against the raw data is quite tricky, because we have no ground truth. Nevertheless, there are numerous methods of quality estimation for unsupervised learning^{24}. Such methods are usually used for determining the optimal number of clusters in datasets. Our aim is slightly different: to compare the accuracy of the clustering for two datasets of different dimensionality. This naturally yields the choice of the cosine metric, since it is normalized and not affected by magnitude to such an extent as the Euclidean metric. Another fact that can be exploited is that each single image provides \(10^5\)–\(10^6\) points. This enables us to use a distribution-based method for the estimation of clustering accuracy. One of the most general methods from this family is the gap statistic^{25}, which is reported to perform well and robustly even on noisy data if a sufficient number of samples is present^{24}. As the clustering method itself, we used k-means with 10 clusters and the cosine metric. Figure 5g shows the gap criteria for time-lapse raw images and their spectral counterparts. The proposed method leads to concurrently better and more stable clustering. We also investigated different dimensionality reduction techniques (namely PCA^{19}, factor analysis^{26}, and NNM^{27}) which can be applied before the clustering, but these techniques did not bring any improvement in cluster quality. Despite that, they can be used, e.g., for digital staining and highlighting details in objects, see Fig. 5f.
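The cosine-metric variant of k-means can be sketched as follows; this is a generic spherical k-means with deterministic farthest-point seeding, not the exact code from our package.

```python
import numpy as np

def cosine_kmeans(X, k, n_iter=50):
    """k-means with the cosine metric: vectors are normalised to unit length and
    assigned by maximal dot product (spherical k-means sketch)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    centres = [Xn[0]]
    for _ in range(1, k):
        sims = np.max(Xn @ np.array(centres).T, axis=1)
        centres.append(Xn[np.argmin(sims)])        # least similar point so far
    centres = np.array(centres)
    for _ in range(n_iter):
        labels = np.argmax(Xn @ centres.T, axis=1)  # cosine similarity
        for j in range(k):
            members = Xn[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centres[j] = c / np.linalg.norm(c)  # re-project onto unit sphere
    return labels, centres
```

Because the vectors are normalised first, spectra of identical shape but different magnitude fall into the same cluster, which is precisely the property motivating the cosine metric here.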
In order to verify the benefits of clustering the obtained spectra by k-means against direct image clustering, simple phantom experiments on microphotographs of oil–air and egg protein–air interfaces, respectively, were conducted. These phantom experiments showed that the spectral clustering resulted in both a higher cluster accuracy and a lower variation. Moreover, in order to prove the capacity of the method, we applied supervised segmentation, namely a classical semantic segmentation network, U-Net^{28}. It is a symmetric encoder-decoder convolutional network with skip connections, designed for pixel-wise segmentation of medical data. One of the strongest advantages of this network is the very low amount of data needed for successful learning (only a few images can be sufficient). We employed 6 images for the network training and 1 image for the method validation. To avoid overfitting in the training phase, aggressive dropout (0.5, after each convolutional layer) and intensive image augmentation (detailed in Suppl. Material 1) were applied. We compared the performance of the U-Net network for the original raw images, contrast-enhanced images, and spectral images (Fig. 4f). The results of segmentation for the spectral images showed a significantly (\(>10\%\)) increased accuracy, an intersection over union (IoU) of 0.9, and a faster convergence (8 epochs vs. 40 epochs for the contrast-enhanced images). The results were stable to changes in the training and test sets (even when using a single validation image or a set of augmented images derived from the validation image as mentioned above).
Discussion
The primary aim of the method is to characterize individual cell parts physically (by a colour spectrum) in the best possible way and, consequently, to identify them as different cell regions. Currently, the standard approach for the recognition of organelles is fluorescent (or other dye) staining. In unstained cells, the identity of an organelle is guessed from its shape and position. Our approach promises the ability to identify the organelles according to their spectra. However, in order to obtain the same spectra for cells of different samples, full reproducibility of the whole experiment (optical properties of the Petri dish, thickness and colour of the cultivation medium, etc.) has to be ensured.
An important issue that we have not investigated yet is the influence of sample thickness. The question remains what the identity of the spectrum is if the sample has a non-zero thickness. In Rychtáriková et al.^{1}, we showed that the position of the effective focus differs even with a fully apochromatic lens. This is the biggest complication in interpreting the spectrum. In the case of a relatively thick and homogeneous organelle, it can be assumed that, in the centre of the focus, the contributions from geometrically different levels are similar. The full answer to this question would be given by a complete 3D analysis, which would have to be based on completely new algorithms and is currently beyond our computing capacity. To this point, however, we dare to claim that the thickness of the sample affects mainly the integrals below the spectra, not the shapes of the spectra themselves. The cosine metric, which in effect measures the angle between vectors and is insensitive to magnitude, helps to mitigate this problem.
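The magnitude insensitivity of the cosine metric is easy to verify numerically: two spectra of identical shape but different “optical thickness” have zero cosine distance. The spectrum below is illustrative only.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cos(angle) between spectra a and b; insensitive to overall magnitude."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

lam = np.linspace(400.0, 700.0, 48)
spectrum = np.exp(-((lam - 550.0) / 60.0) ** 2)     # illustrative spectral shape
thin, thick = 1.0 * spectrum, 0.3 * spectrum        # same shape, scaled by "thickness"
```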
It is worth mentioning that, for some real-life biological samples, the measurement model can be violated. We implicitly assume that the light intensity reaching the camera chip is always lower than the intensity produced by the light source; the transparency coefficient is bounded by the range [0, 1]. This is not always true, because the sample can contain light-condensing objects (most of them bubbles or vacuoles) which act as microlenses. This does not break the method in general but, due to the inability to fulfil Eq. (2), the local optimization gives an abnormally high cost. Such objects should be eliminated from subsequent analysis because their quasi-spectra are unreliable. After excluding these dubious regions (which occupy only a very small part of the image, provided they are present at all), the rest of the image can be analysed in the ordinary way.
The obtained quasi-spectra should not be considered object features but rather imaging-process features. Due to the model-free nature of the method, the obtained classes reflect the observed data, not the internal structures of the objects. We think that the convenient bridge between the observed, phenomenological spectra and the structure is machine learning, since it benefits from enormously good statistics (\(10^5\)–\(10^6\) samples per image) and compensates for the influence of complicated shapes.
Conclusions
This novel method for the extraction of quasi-spectra addresses a very challenging problem, which cannot be solved precisely even in theory: some information is irrecoverably lost. The method arises from very general assumptions about the measurement system. It does not rely on any light–media interaction model or physical properties of the system, which makes it quite universal. The obtained spectra are applicable in practice for visualization and automatic segmentation tasks. We intentionally did not consider the questions of the voxel spectrum, the Z-stack spectral behaviour, and the meaning of compromised focus, in order to keep the method and its application simple. We pose the described method as an ultimate information-squeezing tool, a nearly model-free way to compress the colour and spatial information into a representation of physically relevant features. We believe that, in the future, the method will find its use in robust, mainly qualitative, (bio)chemical analysis.
Microscopy data acquisition
Sample preparation
An L929 (mouse fibroblast, Sigma-Aldrich, cat. no. 85011425) cell line was grown at low optical density overnight at 37 °C, 5% \({\text {CO}}_2\), and 90% RH. The nutrient solution consisted of DMEM (87.7%) with high glucose (\({>}1 \, {\hbox {g L}}^{-1}\)), fetal bovine serum (10%), antibiotics and antimycotics (1%), L-glutamine (1%), and gentamicin (0.3%; all purchased from Biowest, Nuaillé, France).
Cell fixation was conducted in a tissue dish. The nutrient medium was aspirated and the cells were rinsed with PBS. Then, the cells were treated with glutaraldehyde (3%) for 5 min in order to fix them in a gentle mode (without any substantial modification of cell morphology), followed by washing in phosphate buffer (\(0.2 \, {\hbox {mol L}}^{-1}\), pH 7.2) twice, each time for 5 min. The cell fixation was finished by dehydrating the sample in a concentration gradient of ethanol (50%, 60%, and 70%), with each concentration in contact with the sample for 5 min.
The time-lapse part of the experiment was conducted with living cells of the same type.
Brightfield widefield videoenhanced microscopy
The cells were captured using a custom-made inverted high-resolution brightfield wide-field light microscope enabling observation of submicroscopic objects (ICS FFPW, Nové Hrady, Czech Republic)^{1}. The optical path starts with two Luminus CSM-360 light-emitting diodes driven by a current of up to 5000 mA (in the described experiments, the current was 4500 mA; according to the LED producer, the forward voltage was 13.25 V, which gives a power of 59.625 W), which illuminate the sample by a series of light flashes (0.2261 s light, 0.0969 s dark) in a gentle mode and enable the video enhancement^{29}. The microscope optical system was further equipped with infrared 775-nm short-pass and ultraviolet 450-nm long-pass filters (Edmund Optics). After passing through the sample, light reached a Nikon objective (for the live cells, CFI Plan Achromat \(40\times\), N.A. 0.65, W.D. 0.56 mm; for the fixed cells, LWD \(40\times\), Ph1 ADL, \(\infty /1.2\), N.A. 0.55, W.D. 2.1 mm). A Mitutoyo tube lens (\(5\times\)) and a projective lens (\(2\times\)) magnified and projected the image onto a JAI camera with a 12-bpc colour Kodak KAI-16000 digital camera chip of \(4872 \times 3248\) resolution (camera gain 0, offset 300, exposure 293.6 ms). At this total magnification, the size of the object projected onto one camera pixel is 36 nm. The capture of the primary signal was controlled by custom-made control software. The z-scan was performed automatically by programmable mechanics with a step size of 100 nm.
Microscopy image data correction and visualization
The acquired image data were corrected by simultaneous calibration of the microscope optical path and the camera chip as described in Suppl. Material 1. In this way, we obtained the most informative images of the spectral properties of the observed cells.
For visualization, very bright pixels, which correspond to light-focusing structures in the sample (mostly bubbles that act as microlenses) and violate the assumptions of the proposed quasi-spectral model, were detected (as the 99th percentile of intensities) and treated as saturated. After their elimination, the remaining intensities were rescaled to the original range.
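This outlier treatment can be sketched as a generic percentile clip-and-rescale; the threshold and the rescaling to the original range follow the description above.

```python
import numpy as np

def suppress_bright_outliers(img, pct=99.0):
    """Clip pixels above the given intensity percentile (light-focusing bubbles)
    and rescale the remaining intensities back to the original range."""
    lo, hi = float(img.min()), float(np.percentile(img, pct))
    clipped = np.clip(img, lo, hi)
    return (clipped - lo) / (hi - lo) * (img.max() - img.min()) + lo
```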
Data availability
The software for quasi-spectral characterization of images, the relevant Matlab codes, the software for image calibration, the U-Net segmentation package, and the testing images are available in the supplementary materials at the Dryad Data Depository^{30}.
Abbreviations
\({\mathbb {B}}_{mn}\): Set of pixels that form lines between pixels m and n
c: Colour of a camera filter or an image channel; for a colour camera \(c = \{red, green, blue\}\)
C: Number of image channels
\(D_k\): Central intensity gradient in pixel \(k \in {\mathbb {B}}_{mn}\) in the calculation of \(G_{mn}\)
E: Energy absorbed by a camera sensor during the exposure time \(t_e\)
\({\mathcal {E}}_k\): Parameter in the computation of \(G_{mn}\) which indicates whether pixel k is classified as a region edge
f: Variable which reflects the dependence between the spectral energy and the sensor response; \(f = 1\)
\(F_c(\lambda )\): Spectral quantum efficiency of camera filter c
\(F_m\): Spectral quantum efficiency of pixel m
\(G_{mn}\): Measure of discontinuity between pixels m and n
i: Label of a discrete wavelength; \(i = \{1,2, \ldots , w\}\)
iter: Iteration
it_max: Maximal number of iterations (predetermined)
\(I_c\): Pixel intensity at colour channel c
k: Pixel in the set \({\mathbb {B}}_{mn}\)
\(L_c\): Light effectively incoming onto a camera sensor, i.e. onto a camera filter
m, n: Pixel labels
\(M_i\): Intensity value in the image
N: Number of pixels in the set \({\mathbb {N}}_m\)
\({\mathbb {N}}_m\): Set of pixels with the Euclidean distance to pixel m equal to or less than \({\mathcal {T}}_{ED}\)
q: Parameter related to the degree of discontinuity in spectral regions
\(\vec {r}\): Position vector for a pixel at coordinates (x, y)
S: Integral of the spectrum measured by the fibre spectrophotometer at each point \(S_i\)
\(S(\lambda )\): Light spectrum of the light source
\(t_e\): Camera exposure time
T: Thermodynamic temperature; kelvin [K]
\(T_m(\lambda _i)\): Transparency spectrum of pixel m at wavelength \(\lambda _i\)
\(T_n(\lambda _i)\): Transparency spectrum of pixel n at wavelength \(\lambda _i\)
\(T(x, y, \lambda )\): Transparency spectrum of the medium at each pixel in general
\({\mathcal {T}}_b\): Bias parameter in the computation of \(G_{mn}\); \({\mathcal {T}}_b = 0.9\)
\({\mathcal {T}}_{ED}\): Threshold for the selection of the neighbourhood of pixel m, i.e., the Euclidean distance between pixels m and n; \({\mathcal {T}}_{ED} = 1\)
\(\vec {u}\): Change of a pixel position vector
w: Number of discrete wavelengths
x, y: Vertical and horizontal pixel coordinates
\(\epsilon\): Parameter which reflects the studied pixel's neighbourhood size in general
\(\lambda\): Light wavelength; nanometer [nm]
References
Rychtáriková, R. et al. Super-resolved 3D imaging of live cells' organelles from bright-field photon transmission micrographs. Ultramicroscopy 179, 1–14 (2017).
Lindner, M., Shotan, Z. & Garini, Y. Rapid microscopy measurement of very large spectral images. Opt. Express 24, 9511 (2016).
Heist, S. et al. 5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light. Opt. Express 26, 23366 (2018).
Wu, J. et al. Snapshot hyperspectral volumetric microscopy. Sci. Rep. 6, 24624 (2016).
Wachman, E. S. et al. Simultaneous imaging of cellular morphology and multiple biomarkers using an acoustooptic tunable filter–based bright field microscope. J. Biomed. Opt. 19, 056006 (2014).
Dahlberg, P. D. et al. A simple approach to spectrally resolved fluorescence and bright field microscopy over select regions of interest. Rev. Sci. Instrum. 87, 113704 (2016).
Zhu, S., Gao, L., Zhang, Y., Lin, J. & Jin, P. Complete plenoptic imaging using a single detector. Opt. Express 26, 26495 (2018).
Garini, Y., Young, I. T. & McNamara, G. Spectral imaging: principles and applications. Cytometry A 69A, 735–747 (2006).
Maloney, L. T. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Am. A 3, 1673 (1986).
Parmar, M., Lansel, S. & Wandell, B. A. Spatiospectral reconstruction of the multispectral datacube using sparse recovery. In 2008 15th IEEE International Conference on Image Processing (IEEE, 2008).
Wang, Y., Yang, B., Feng, S., Pessino, V. & Huang, B. Multicolor fluorescent imaging by spaceconstrained computational spectral imaging. Opt. Express 27, 5393 (2019).
AlvarezGila, A., van de Weijer, J. & Garrote, E. Adversarial networks for spatial contextaware spectral image reconstruction from RGB. In 2017 IEEE International Conference on Computer Vision Workshops (ICCVW) (IEEE, 2017).
Hoover, E. E. & Squier, J. A. Advances in multiphoton microscopy technology. Nat. Photon. 7, 93–101 (2013).
Lugagne, J.B. et al. Identification of individual cells from zstacks of brightfield microscopy images. Sci. Rep. 8, 11455 (2018).
Velleman, D. J. The generalized Simpson’s rule. Am. Math. Monthly 112, 342 (2005).
Kuzmin, Y. P. Bresenham's line generation algorithm with built-in clipping. Comput. Graph. Forum 14, 275–280 (1995).
Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI–8, 679–698 (1986).
Elboher, E. & Werman, M. Efficient and accurate Gaussian image filtering using running sums. In 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA) (IEEE, 2012).
Pearson, K. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 2, 559–572 (1901).
Hair, J., Black, W., Babin, B. & Anderson, R. Multivariate Data Analysis (Cengage Learning EMEA, 2018).
Hansen, N. & Ostermeier, A. Completely derandomized selfadaptation in evolution strategies. Evol. Comput. 9, 159–195 (2001).
Hansen, N. The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation (eds Lozano, J. et al.) 75–102 (Springer, Berlin, Heidelberg, 2007).
Pech-Pacheco, J., Cristobal, G., Chamorro-Martinez, J. & Fernandez-Valdivia, J. Diatom autofocusing in brightfield microscopy: a comparative study. In Proceedings 15th International Conference on Pattern Recognition, ICPR-2000 (IEEE Comput. Soc., 2002).
Mekaroonkamon, T. & Wongsa, S. A comparative investigation of the robustness of unsupervised clustering techniques for rotating machine fault diagnosis with poorly-separated data. In 2016 Eighth International Conference on Advanced Computational Intelligence (ICACI) (IEEE, 2016).
Tibshirani, R., Walther, G. & Hastie, T. Estimating the number of clusters in a data set via the gap statistic. J. R. Stat. Soc. B 63, 411–423 (2001).
Stephenson, W. Technique of factor analysis. Nature 136, 297 (1935).
Lee, D. D. & Seung, H. S. Learning the parts of objects by non-negative matrix factorization. Nature 401, 788–791 (1999).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (eds Navab, N. et al.) 234–241 (Springer International Publishing, Cham, 2015).
Lichtscheidl, I. K. & Foissner, I. Video microscopy of dynamic plant cell organelles: principles of the technique and practical application. J. Microsc. 181, 117–128 (1998).
Lonhus, K., Rychtáriková, R., Platonova, G. & Štys, D. Supplementary to: Quasi-spectral characterization of intracellular regions in bright-field light microscopy images. Dryad Dataset. https://doi.org/10.5061/dryad.w0vt4b8p6.
Acknowledgements
This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic—project CENAKVA (LM2018099)—and from the European Regional Development Fund in frame of the project Kompetenzzentrum MechanoBiologie (ATCZ133) and Image Headstart (ATCZ215) in the Interreg VA Austria–Czech Republic programme. The work was further financed by the GAJU 013/2019/Z project.
Contributions
K.L. is the main author of the paper and of the novel algorithm. G.P. and R.R. are responsible for sample preparation, microscopy data acquisition, and image calibration. R.R. contributed substantially to the text of the paper. D.Š. is the inventor of the video-enhanced bright-field wide-field microscope. D.Š. and R.R. led the research. All authors read the paper and approved its final version.
Ethics declarations
Competing interests
The authors declare no competing interests.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Lonhus, K., Rychtáriková, R., Platonova, G. et al. Quasi-spectral characterization of intracellular regions in bright-field light microscopy images. Sci. Rep. 10, 18346 (2020). https://doi.org/10.1038/s41598-020-75441-7.