Background & Summary

Multi-Photon Intravital Video Microscopy (MP-IVM), in combination with image-based systems biology1, represents a key methodology for studying the interplay of cells in organs and tissues of living animals2. Indeed, recent analyses of leukocyte migration in MP-IVM data highlighted unprecedented cell-to-cell interaction patterns such as antigen capturing3 and presentation4, host-pathogen interaction5,6, tumor immune surveillance7 and cell activation8, amongst others. The advantage of MP-IVM with respect to other optical methods lies in the usage of multiple infrared photons. The low energy of these photons allows a deep and point-wise excitation of the sample, which reduces light scattering and limits photo-damage. These properties make MP-IVM suitable for capturing 4D data with remarkable resolution and depth over prolonged periods of observation9. The most common acquisition and analysis pipeline for MP-IVM data is depicted in Fig. 1. Initially, an animal with fluorescent cells is anaesthetized and prepared for imaging by immobilization and surgical exposure of the organ of interest (Fig. 1a left). Then, 4D data, composed of parallel image planes at different depths, are acquired at several time instants (Fig. 1a right). After acquisition, data are analysed by detecting cells (Fig. 1b left), tracking their position over time (Fig. 1b right) and finally quantifying cell migration10. The described pipeline was used to generate all the entries proposed in the current work (Fig. 1d).

Figure 1: Data generation workflow.

(a) In vivo imaging acquisition. (left) Surgically exposed tissues from an anaesthetized and immobilized mouse are subjected to MP-IVM. (right) 4D data, composed of z-stacks of parallel image planes, are acquired at different time points for multiple channels. (b) Cell detection and tracking. (left) Cells are detected and their centroid positions annotated in the image series (red dots). Subsequently, (right) centroids are associated over time, producing the cell tracks. Specific problems affecting cell tracking are reported for each phase of the imaging pipeline. (c) Database. Both 4D imaging data and cell tracks are included in LTDB.

Despite the existence of specialized imaging software packages such as Imaris (Bitplane), Volocity (PerkinElmer) and FIJI11, the automatic analysis of immune cell migration10 in MP-IVM data is problematic. Challenges are introduced at each stage of the previously described pipeline and arise both from the complex biomechanical properties of leukocytes and from technical artifacts of in vivo imaging (Fig. 1, Table 1 and Fig. 2a). More specifically, the high plasticity of cell shape, sustained speed and frequent contacts set a limit on the capacity to detect and track cells for long time periods12. Additionally, technical artifacts, such as the variation and non-uniform diffraction of the light emitted by fluorescently-tagged cells or the physiological movement of the sample due to peristalsis, breathing or pulsing of blood vessels, further challenge the automatic analysis. Therefore, additional steps such as image pre-processing, tuning of software parameters and manual curation of tracks are required to improve tracking results. As a consequence, the usability of imaging software is reduced13, bias is introduced and the reproducibility of the results is compromised. An example is provided in Fig. 2b, where the Track Speed Mean, Directionality, Track Length and Track Duration were computed for the entry LTDB017a (Data Citation 1). These values exhibited highly significant differences (p < 0.0001) between automatically-generated and manually-generated tracks.

Table 1 Biomechanical and technical problems.
Figure 2: Tracking challenges.

(a) Example case studies. (i-viii) Representative snapshots of selected MP-IVM micrographs from the problematic cases indicated in the upper part of the picture. (i) Surface reconstruction (SR) (green) of a T cell with uropodia. (ii) SR (red) of a T cell migrating with pseudopodia on a high background. (iii) SR (yellow, red) of two neutrophils forming a brief contact. (iv) Centers of mass (red dots) of a T cell forming a brief contact with a non-motile object. (v) Estimated center of mass (red dot) of a T cell close to a boundary of the field of view. (vi) SR (blue) of a rapidly migrating neutrophil. Arrows indicate cell displacement. (vii) SR (colored objects) of neutrophils forming multiple contacts. (viii) Tracks (colored lines) of two B cells. (b) Effect of tracking errors. Tracking errors limit research reproducibility, significantly (p < 0.0001) affecting the typical readouts from MP-IVM experiments. The figures (top-left and top-right) and the graphs (bottom) compare the manual tracks presented in LTDB017_a vs. the tracks generated automatically by Imaris. Automatic tracks were interrupted when the software could not detect or link cells, yielding an increased number of shorter tracklets.

Providing the scientific community with datasets interpreted by experts is essential to foster the development of data science methods. To this end, international cell tracking challenges on public datasets14,15 made it possible to compare the properties of different algorithms. However, the provided datasets did not include leukocytes observed by intravital imaging. For this reason, it is necessary to develop an extended dataset of MP-IVM videos in which a significant number of leukocytes are tracked. Here we present a leukocyte tracking database, namely "LTDB", which includes MP-IVM videos of immune cells together with the corresponding tracks, manually annotated by experts. Each video contains one or more challenges for the automatic analysis (Table 2 (available online only)) and captures the behaviour of one or more cell populations (Table 3) in response to different stimuli (Table 4).

Table 2 Dataset overview. Overview of the size and complexity of each video-track entry of LTDB.
Table 3 Channel specification. Description of which cell population is expected to be visible in each channel of the provided videos.
Table 4 Experimental settings.

All the videos and tracks are made available as individual files or as a spatio-temporal database (Fig. 3a) which was optimized for faster access to data and metadata (Fig. 3b).

Figure 3: Data organization and formats.

Videos, metadata and the position over time of tracked cells are organized according to the conceptual Entity-Relationship model described in (a), corresponding to the logical database (optimized version) depicted in (b). A Video belongs to a Collection, depicts one or more Problematic entities and includes an Image series. The Image entity is double-framed because it is a weak entity, which depends on the Video entity. A Cell has one type and one unique identifier. One Video tracks one (or more) Cell, every Cell being depicted by the Track association at a given timestamp (t) and in a spatial position (x, y, z) of that Video. The VisibleIn association further describes the channel of the video in which a cell is visible. The logical database is derived from the conceptual model and then optimized for read-access. The cells_positions table stores the instantaneous coordinates of each cell and is pre-exported in one or more CSV text files for each video. Imaging data are logically stored as TIFF image series with a specific filename.

The expected usage of LTDB is to serve as a ground truth for the validation of tracking algorithms (Fig. 4a). Differences with respect to the ground truth can be evaluated using, for instance, a metric that accounts for complete tracking graph comparison16.

Figure 4: Typical usage scenarios.

(a) Evaluation of a tracking algorithm. LTDB videos are provided as input to a tracking algorithm. Computed tracks can be compared with the ground-truth tracks using a methodology of choice, such as the complete graph comparison15,16. In the example, red arrows indicate errors where a cell was not detected sufficiently close to its ground-truth position and where a track was interrupted. (b) Machine learning dataset. LTDB videos and tracks can potentially be used in the context of supervised machine learning as a training and validation dataset. The generated predictive model can be generalized and used to track new videos. (c) Resource for big data analysis. Properties of leukocyte migration in different experimental conditions can potentially be discovered by applying pattern recognition to LTDB metadata and tracks.

LTDB further aims at being a training dataset for supervised machine learning methods. Indeed, in light of the recent application of deep learning to object detection and tracking in highly variable scenarios17–19, LTDB can provide the large number of image-track pairs required for the training of predictive models (Fig. 4b). In this case, the broad range of imaging conditions may support the generalization capabilities of these methods.

Although LTDB is primarily intended to enhance tracking algorithms, the database embeds biomedical knowledge. Hence, data-mining and image-based systems biology methods can be applied to correlate images, tracks and metadata, investigating properties of the immune system in health and disease (Fig. 4c).

Methods

Imaging data generation

Experiments were performed by four research groups using three customized two-photon microscopy platforms (Table 5). Either the splenic or the lymph-node surgical model was used for acquisition (Fig. 1a). Videos were acquired from 26 unique experiments to observe the interplay of neutrophils, B cells, T cells and natural killer cells in innate or adaptive immune responses (Table 4).

Table 5 Microscopy platforms.

Data pre-processing

No image processing was applied to the provided videos, and raw images were also used for manual tracking. Cropping of large 4D volumes in space and/or time was performed for the entries of the case study collection, to focus on the area of interest.

Manual tracking

Centroids of cells were manually annotated and linked over time using the "Spots drawing" tool of Imaris (Bitplane). This process was performed by a group of three operators who tracked all the cells independently and redundantly, in three different locations, without seeing each other's results. In order to maximize track duration, cells were tracked even if only partially visible. Tracks were interrupted only when cells completely disappeared. For specific studies, tracks of partially visible cells migrating close to the boundaries of the field of view can be excluded a posteriori by the user. In the videos with ID LTDB001 to LTDB020, the maximum number of visible cells was tracked. In the videos of the Case Study collection (CS001 to CS018), instead, only selected, challenging cells were tracked.

Consensus tracking ground truth generation

Multiple independent annotations and tracks were merged into the consensus ground truth provided along with the dataset using a majority-voting scheme. This process was performed manually by a fourth expert using the "Unify" functionality of Imaris. The Matlab script LTDBCheck.m was used to facilitate track matching, detecting common errors and highlighting conflicting situations. Two tracks were considered likely to "match" (i.e. to refer to the same cell) if their annotations were closer than 10 µm for at least N time instants, where N was defined as the minimum between the track duration and 10. Conflicting situations were detected as tracks matching for only certain time instants but not for the entire track duration. These include, amongst others, a) tracks with an annotation in a far position by mistake, b) a longer track matching with one or multiple shorter tracks, and c) two tracks matching for N instants but having different initial and/or final positions (i.e. track switches for closely interacting cells). Tracks with a duration shorter than 4 time instants were also inspected manually. Due to the high plasticity of cells, these criteria were used only to facilitate the work of the fourth expert, who had to manually merge multiple tracks as follows. If at least two operators agreed on the direction of a cell (i.e. two matching tracks having the same duration and detected in the same frames), the track was included in the dataset. If two operators tracked a cell but the track durations differed, the points annotated by only one operator were evaluated and then confirmed or discarded by the fourth expert. When two operators could not agree on the direction of a cell, the following method was applied. If the fourth expert or the Matlab script identified an evident tracking error (i.e. cells not annotated by mistake, unrealistic jumps or broken tracks), the error was corrected and the tracks were merged. For real conflicting situations (i.e. track switching for closely interacting cells), the experts were asked to meet and discuss the most appropriate solution. If a majority consensus still could not be reached, and only in this case, the tracks were interrupted. Finally, the position of the cell centroids included in the ground truth was not averaged but selected as the centroid closest to the mean. Although this choice may produce less smooth tracks, it avoids positioning a centroid outside non-convex cells.

These criteria, together with the manual merging of tracks and the re-evaluation of tracking conflicts, made it possible to include the maximum number of tracks for the longest possible period of time.
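For illustration, the matching criterion can be sketched in Matlab as follows. This is a minimal sketch under our own assumptions (tracks stored as matrices with rows [t x y z], coordinates in micrometers); it is not the actual implementation of LTDBCheck.m.

    % Sketch of the track-matching criterion (assumption: each track is an
    % Mx4 matrix with rows [t x y z], coordinates in micrometers).
    function isMatch = tracksMatch(trackA, trackB)
        % Time instants annotated in both tracks
        [tCommon, ia, ib] = intersect(trackA(:,1), trackB(:,1));
        if isempty(tCommon)
            isMatch = false;
            return;
        end
        % Euclidean distance between the two centroids at each common instant
        d = sqrt(sum((trackA(ia,2:4) - trackB(ib,2:4)).^2, 2));
        % N = minimum between the track duration and 10
        N = min([size(trackA,1), size(trackB,1), 10]);
        isMatch = sum(d < 10) >= N;  % closer than 10 um for at least N instants
    end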

Animal models

The mouse strains included in this study are specified in Table 6 (available online only).

Table 6 Mouse strains. Specification of mouse strains used as host and as source of cells for each video.

Prior to imaging, mice were anaesthetized with a cocktail of Ketamine (100 mg/kg) and Xylazine (10 mg/kg) as previously described4. All animals were maintained in specific pathogen-free facilities at the Institute for Research in Biomedicine (Bellinzona, CH), the Theodor Kocher Institute (Bern, CH) and the Massachusetts General Hospital (Boston, MA). All the experiments were performed in accordance with the rules and regulations of the local authorities and approved by the institutional animal committees: the Swiss Federal Veterinary Office, the Research Animal Care of the Massachusetts General Hospital and the MGH Institutional Animal Care and Use Committee (IACUC).

Code availability

To facilitate the usage of LTDB, the following Matlab code is provided, under the GPL v3 Open Source licence, at http://www.ltdb.info/downloads/ or via GitHub at https://github.com/IRB-LTDB/.

LTDBReadTracks.m Reads the tracks contained in a CSV file.

LTDBReadImages.m Reads the 4D images contained in the TIFF files.

LTDBExampleQuery.m Provides an example for querying the locally installed database.

XTLTDBImportTracks.m Imports tracks as spots in Imaris.

LTDB2CTC.m Exports LTDB tracks in the format used for the Cell Tracking Challenge described in14,15, mainly for evaluation with the methods proposed in15.

LTDBCheck.m Checks for common tracking errors (i.e. annotations deleted by mistake, broken tracks and overlapping tracks).

EstimateDSMeasures.m Estimates the measures regarding the dataset complexity reported in Table 2 (available online only).

LTDBCheck.m and EstimateDSMeasures.m make use of the following libraries: ImarisReader (https://github.com/PeterBeemiller/ImarisReader) to read Imaris files and bwdistsc20 to efficiently estimate the distance of each voxel from the closest centroid.

Data Records

Data included in this work (videos and tracks) are available through figshare (Data Citation 1).

Images resulting from MP-IVM are contained in two zip archives: TIFFS_LTDB001_LTDB020.zip for the videos with ID 001 to 020, and TIFFS_CS001_CS018.zip for the videos of the case study collection with ID 001 to 018. In these archives, a folder for each video contains the 4D images as TIFF files.

Tracks resulting from the consensus tracking ground truth generation are contained in the archive GT_TRACKS.zip.

A dump of the SQL database used to organize data and metadata is provided in LTDB.sql.

The following supplementary files are available through figshare (Data Citation 1). For a quicker preview, each video is available as an H.264-encoded MP4 file named <VideoID>.mp4.

Snapshots of all the videos are contained in the archive SNAPSHOTS.zip, respectively named <VideoID>.png.

Individual tracks produced by different operators are provided in the supplementary archive operator_individual_tracks.zip and named <VideoID>_<TrackID>_<OpID>.csv. In this case, <OpID> is the ID of the operator (OP1, OP2 or OP3).

Technical Validation

Imaging data

Imaging data were captured from organs of living animals using either the splenic or the popliteal lymph node surgical model (Fig. 1a and Table 4), which are typical for MP-IVM investigations of the immune system2. Cells involved in both innate and adaptive responses were included in the dataset. Videos 12, 13 and 14 (Data Citation 1) come from recently published MP-IVM studies5,7,21. To represent data generated by multiple laboratories in different experimental settings22, LTDB includes videos with different sizes, resolutions, sampling rates and challenges for the automatic analysis (Table 2 (available online only)), acquired with three different microscopy platforms (Table 5). Moreover, cells were labelled with different fluorescent tags and detected in multiple channels (Table 3).

The following measures were computed to estimate the complexity of each video: signal-to-noise ratio (SNR), minimum distance between two cells (Dist) and number of cells per time instant. Since the proposed dataset is centroid-based rather than segmentation-based, SNR was estimated by adapting the definitions proposed in15 with the following heuristic. Let $c_{i,t}$ be the centroid position of cell $i$ at time $t$. For each voxel $v$ in the current frame, the distance to the closest centroid was computed as $d_v = \min_i \lVert v - c_{i,t} \rVert$. Then, considering a typical cell diameter of 10 µm, each voxel $v$ was defined as foreground (FG, inside a cell) or background (BG, outside a cell) according to Equation (1), and the SNR was computed according to Equation (2). This assumption allowed sampling a sufficient number of points in each video to estimate the aforementioned measures. Table 2 (available online only) summarizes the average values for each video, while the additional script EstimateDSMeasures.m can be used to compute the values for each time instant.

(1) $v \in \mathrm{FG} \iff d_v < 4\,\mu\mathrm{m}; \qquad v \in \mathrm{BG} \iff d_v > 20\,\mu\mathrm{m}$

(2) $\mathrm{SNR} = \dfrac{\lvert \mathrm{avg}(\mathrm{FG}) - \mathrm{avg}(\mathrm{BG}) \rvert}{\mathrm{std}(\mathrm{BG})}$
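The following Matlab sketch illustrates the heuristic of Equations (1) and (2) on a single frame. It is our own minimal illustration (using pdist2 instead of the more efficient bwdistsc20 employed by the provided scripts); EstimateDSMeasures.m remains the reference implementation.

    % Sketch of the centroid-based SNR heuristic (Equations 1 and 2).
    % frame:     3D intensity volume (y, x, z) at one time instant
    % centroids: Kx3 matrix of centroid positions [x y z], in voxels
    % voxelSize: [vx vy vz], voxel size in micrometers
    function snr = estimateSNR(frame, centroids, voxelSize)
        [ny, nx, nz] = size(frame);
        [X, Y, Z] = meshgrid(1:nx, 1:ny, 1:nz);
        V = [X(:)*voxelSize(1), Y(:)*voxelSize(2), Z(:)*voxelSize(3)];
        C = centroids .* voxelSize;            % centroids in micrometers
        dv = min(pdist2(V, C), [], 2);         % d_v, distance to the closest centroid
        FG = double(frame(dv < 4));            % foreground voxels (Equation 1)
        BG = double(frame(dv > 20));           % background voxels (Equation 1)
        snr = abs(mean(FG) - mean(BG)) / std(BG);  % Equation 2
    end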

Tracks

The consensus tracking ground truth provided with LTDB includes 728 unique tracks composed of 44,722 instantaneous annotations. On average, each track is composed of 61 annotations, a number that varies with the track duration. The total observation time included in LTDB amounts to the equivalent of 260 hours for a single cell.

Common tracking errors (i.e. cells not annotated by mistake, broken tracks or jumps in the z-axis), as well as conflicts produced by multiple operators, were detected by executing the Matlab script LTDBCheck.m provided in the Code availability section.

Individual operators produced 1850 tracks (113,807 annotations), which were merged into the 728 tracks of the consensus tracking ground truth. The performance of each operator with respect to the consensus ground truth is reported in Table 7. To this end, the TRA15 measure was computed. This measure includes a complete comparison of the tracks, represented as an acyclic oriented graph16. In order to estimate this measure, the ground truth and the individual tracks were converted into the format described in15 and evaluated using the accompanying TRAMeasure software. However, that software and methodology match a cell in the ground truth with a cell in the track under evaluation when they overlap by more than 50% in space. Since our dataset is centroid-based, a difference of one voxel would make the matching impossible. Hence, considering the typical cell diameter, we approximated a sphere around each centroid. The tolerance radius of the spheres was at most 10 µm and was truncated when two centroids were closer than 10 µm. The script LTDB2CTC.m was used to export the LTDB tracks in the acyclic oriented graph format described in14,15.
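The sphere-based tolerance can be sketched as follows. This is a hypothetical Matlab illustration of the described truncation (here assumed to be half of the inter-centroid distance), not the exact procedure used for Table 7.

    % Sketch: tolerance radius around each centroid for the TRA matching.
    % C: Kx3 matrix of centroid positions [x y z] in micrometers
    function r = toleranceRadii(C)
        D = pdist2(C, C);                 % pairwise inter-centroid distances
        D(1:size(C,1)+1:end) = Inf;       % ignore self-distances on the diagonal
        dNearest = min(D, [], 2);         % distance to the closest other centroid
        r = min(10, dNearest / 2);        % at most 10 um, truncated when closer
    end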

Table 7 Comparison of tracking operators. Differences between the tracks produced by individual operators and the consensus ground truth, for all the videos in LTDB.

Usage Notes

The expected use case scenario of LTDB is the evaluation of the results produced by a cell tracking algorithm (Fig. 4a). Considering a generic cell tracking algorithm as an input-output system that reads an image sequence and outputs the tracks, LTDB can be used both as a source of images and as a ground truth for comparing the output.

To assess the overall performance of a cell tracking algorithm, we direct the user of LTDB towards the entries LTDB001 to LTDB020. To test the behaviour of an algorithm on specific cases, instead, we recommend the videos of the Case Study collection (CS001 to CS018), which facilitate manual investigation and debugging thanks to their reduced number of cells.

4D images are provided as uint16 TIFF files. File names are structured as <VideoID>_Txxx_Cxxx_Zxxx.tiff, where <VideoID> is either LTDB001 to LTDB020 or CS001 to CS018, and the suffix xxx after T, C and Z indicates the time instant, channel number and depth level respectively, spanning from 000 to 999 at maximum. Images with a lower bit depth were stored as uint16 without any scaling. If needed for normalization, the bit depth of each video can be found in Table 2 (available online only).
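As an example, a single z-stack can be assembled with standard Matlab functions following the naming scheme above (a minimal sketch; the number of depth levels is video-specific and the indices are assumed to start from 000 — LTDBReadImages.m is the reference reader):

    % Sketch: load one z-stack (fixed time instant and channel) of a video.
    videoID = 'LTDB001';   % any <VideoID>
    t = 0; c = 0;          % time instant and channel (assumed 0-based)
    nz = 10;               % number of depth levels, video-specific
    stack = [];
    for z = 0:nz-1
        fname = sprintf('%s_T%03d_C%03d_Z%03d.tiff', videoID, t, c, z);
        stack = cat(3, stack, imread(fname));   % stack the uint16 planes
    end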

Tracks are provided in the CSV format described in Table 8 and named <VideoID>_<TrackID>_GT.csv. For videos with only one cell population, <TrackID> is "a", while for videos with two tracked cell populations it is either "a" or "b". These suffixes correspond to the suffixes used in Table 2 (available online only).

Table 8 Structure of the CSV track file.
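As an illustration, a ground-truth track file can be read with standard Matlab I/O. The column names used below (cell_id, t, x, y, z) are assumptions based on the conceptual model of Fig. 3 and must be checked against Table 8 and the actual CSV header; LTDBReadTracks.m is the reference reader.

    % Sketch: read a ground-truth track file and group annotations by cell.
    % Column names are assumptions; verify them against Table 8.
    T = readtable('LTDB001_a_GT.csv');
    cellIDs = unique(T.cell_id);
    for k = 1:numel(cellIDs)
        track = sortrows(T(T.cell_id == cellIDs(k), :), 't');  % ordered in time
        % track.x, track.y, track.z now contain the centroid trajectory
    end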

The synthetic example with ID SQUARE was added to the dataset. This provides a test case for software using different coordinate systems. A parallelepiped of 5×5×10 µm is positioned in the first frame close to the origin used for the LTDB videos, corresponding to the bottom (x=0), left (y=0), deepest (z=0) corner of the 3D volume. This parallelepiped migrates along the y-axis.
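A minimal Matlab sketch of such a test is reported below, assuming the SQUARE tracks follow the same CSV format and naming convention described above (the exact file name and column names are assumptions):

    % Sketch: verify the coordinate convention using the synthetic SQUARE entry.
    T = sortrows(readtable('SQUARE_a_GT.csv'), 't');   % file name assumed
    assert(all(diff(T.y) >= 0), 'The object should migrate along the y-axis');
    % The first annotation should lie close to the (x=0, y=0, z=0) corner.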

In order to evaluate tracking performance, we provide the Matlab script LTDB2CTC.m to export LTDB tracks into the acyclic oriented graph representation16 used in the Cell Tracking Challenge described in14,15. This allows using the accurate methodology and software provided by the aforementioned authors to compare computed tracks vs. the ground truth.

For detecting cell populations visible in more than one channel (Table 2 (available online only) and Table 3), we encourage the use of a co-localization method based on supervised machine learning, such as Ilastik23 or Trainable Weka Segmentation24.

For discriminative machine learning models, it is worth noting that, in the videos LTDB001 to LTDB020, all the cells expected to be visible in the indicated channels were tracked. Other objects such as background, cell debris or additional cell populations were not tracked.

In the context of big-data analysis (Fig. 4c), LTDB represents a resource to compare the biological properties of tracks (e.g. speed, directionality) amongst different experimental conditions. A review of the possible measures that can be computed from the tracks is provided in10.
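As an example, two typical readouts, mean speed and directionality (here computed as the ratio between displacement and path length, one common definition10), can be derived from a single track as in the following sketch, where P is a time-ordered list of centroid positions and dt the sampling interval:

    % Sketch: typical motility readouts from one track.
    % P:  Mx3 matrix of centroid positions [x y z] in micrometers, time-ordered
    % dt: time interval between consecutive annotations, in minutes
    function [meanSpeed, directionality] = trackMeasures(P, dt)
        steps = sqrt(sum(diff(P).^2, 2));        % step lengths (um)
        pathLength = sum(steps);                 % total path length
        displacement = norm(P(end,:) - P(1,:));  % start-to-end distance
        meanSpeed = pathLength / ((size(P,1)-1) * dt);   % um/min
        directionality = displacement / pathLength;      % in [0, 1]
    end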

The SQL database ltdb.sql can optionally be installed, for instance using the MySQL database management system. Queries to retrieve videos of interest (e.g. associated with a specific challenge, type of cell or site of imaging) can be addressed to the locally installed database. Additionally, a web interface was set up to facilitate the search, preview and download of videos; it is accessible at http://www.ltdb.info/.
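As an illustration, a query to the locally installed database could look like the following Matlab sketch (Database Toolbox). The connection parameters and the table and column names are hypothetical, derived from the conceptual model of Fig. 3, and must be checked against LTDB.sql; LTDBExampleQuery.m is the reference example.

    % Hypothetical query sketch: retrieve the videos depicting a given challenge.
    % Table and column names are assumptions based on the ER model of Fig. 3.
    conn = database('ltdb', 'user', 'password');   % local MySQL data source
    sql = ['SELECT v.id FROM video v ', ...
           'JOIN depicts d ON d.video_id = v.id ', ...
           'JOIN problematic p ON p.id = d.problematic_id ', ...
           'WHERE p.name = ''high background'''];
    videos = fetch(conn, sql);
    close(conn);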

Additional information

How to cite this article: Pizzagalli, D. U. et al. Leukocyte Tracking Database, a collection of immune cell tracks from intravital 2-photon microscopy videos. Sci. Data 5:180129 doi: 10.1038/sdata.2018.129 (2018).

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.