Figure 2

From: Automated tracking of label-free cells with enhanced recognition of whole tracks

Workflow of AMIT-v1 (shaded area) and AMIT-v2 (full scheme). Numbers in brackets correspond to the five steps of AMIT-v1. The input image (a) is segmented with a Gaussian mixture model (b) into background (black), static objects (gray) and mobile objects (white). Mobile objects (c) are further segmented by object area (d) into noise (dark gray), single cells (white) and cell clusters (light gray). Single cells (e) are tracked by overlap (f), and cell clusters (g) are split by ellipse fitting and added to the single-cell tracklets (h). All tracklets are combined into the final AMIT-v1 tracks by graph optimization (i). To extract spreading cells, mobile and static objects are combined into one mask (j), noise is removed based on area (k), and grid lines of the imaging dish are removed via ellipse fitting (l). The remaining objects (l) are compared with the combined mask of static and mobile objects from AMIT-v1 (m), and only newly detected objects are kept (n). To disconnect spreading cells from the grid lines, (j) is processed by morphological opening and closing (o), and cells that do not overlap with any object from (m) or (n) are isolated (p). Images (n) and (p) are combined into the final spreading-cell image (q). Spreading cells are tracked (r) and included in the graph optimization step to obtain the final AMIT-v2 tracks (s). Panels (f,h,i,r,s) show a schematic representation of the tracks; all other panels show different processing steps of the real image from (a).
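The caption outlines several algorithmic steps that can be sketched in code. The Python snippet below illustrates three of them in simplified form: intensity-based segmentation with a Gaussian mixture model (b), area-based classification of mobile objects into noise, single cells and clusters (d), and overlap-based linking of objects between consecutive frames (f). The library choices (scikit-learn, scikit-image), the function names, and the thresholds min_area and max_single_area are illustrative assumptions, not the authors' published implementation.

# Minimal sketch of the early AMIT-v1 steps (b, d, f): GMM segmentation,
# area-based object classification, and overlap tracking.
# Library choices and all thresholds are illustrative assumptions,
# not the authors' implementation.
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage.measure import label, regionprops

def segment_gmm(frame, n_classes=3):
    """Cluster pixel intensities into background / static / mobile classes (b)."""
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    labels = gmm.fit_predict(frame.reshape(-1, 1)).reshape(frame.shape)
    # Reorder classes by mean intensity so 0 = background, 1 = static, 2 = mobile
    # (assumes mobile cells form the brightest class, as in panel (b)).
    order = np.argsort(gmm.means_.ravel())
    return np.argsort(order)[labels]

def classify_by_area(mobile_mask, min_area=50, max_single_area=400):
    """Split mobile objects (c) into noise, single cells and clusters (d)."""
    lab = label(mobile_mask)
    singles = np.zeros_like(lab, bool)
    clusters = np.zeros_like(lab, bool)
    for r in regionprops(lab):
        if r.area < min_area:          # noise: discarded
            continue
        target = singles if r.area <= max_single_area else clusters
        target[lab == r.label] = True
    return singles, clusters

def track_by_overlap(prev_labels, curr_labels):
    """Link objects between consecutive frames by mask overlap (f)."""
    links = {}
    for r in regionprops(curr_labels):
        overlap = prev_labels[curr_labels == r.label]
        overlap = overlap[overlap > 0]
        if overlap.size:               # assign to the most-overlapping predecessor
            links[r.label] = int(np.bincount(overlap).argmax())
    return links

The later steps, ellipse fitting of cell clusters, morphological opening and closing, and the graph-based combination of tracklets into whole tracks, are not shown here; they would operate on the label images and track fragments produced by functions like these.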
