Category-specific response patterns result from the comparison of within- and between-category correlations. Crucially, in the cross-modal analysis, within- and between-category correlations are computed on subsets of data drawn from different modalities. In the results shown in (a, b, c-left), tasks were analyzed separately and then averaged before statistical testing. (a) Person/place-specific information is robustly present in both modalities (all p-values < 0.001, Monte Carlo cluster-corrected): names (230 ms to 610 ms) and pictures (100 ms, earliest tested time point, to 750 ms). (b) Quantifying place/person-related information across modalities revealed an early, a middle, and a late cluster (all p-values < 0.005, Monte Carlo cluster-corrected; orange contours: initial threshold p = 0.05; black: p = 0.005). Significant category-sensitive conceptual representations lie off the diagonal for names, with a mean delay of 90 ms relative to pictures. (c) Left: Searchlight MVPA revealed the most informative sensors for each temporal cluster. Right: No significant differences were evident between tasks (deep/shallow) in any of these clusters.
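The cross-modal within- minus between-category logic described above can be sketched as follows. This is a minimal illustration with hypothetical data and function names (not the authors' code), assuming trial-wise sensor patterns for each modality and binary person/place labels; every correlation pairs one trial from each modality, so no within-modality pairs contribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: trial patterns (trials x sensors) for the two modalities.
# Labels: 0 = person, 1 = place (illustrative only).
names_patterns = rng.standard_normal((40, 64))
pics_patterns = rng.standard_normal((40, 64))
labels = np.repeat([0, 1], 20)

def crossmodal_category_index(a, b, labels):
    """Mean within-category minus mean between-category correlation,
    computed only across modalities (each pair takes one trial from a
    and one from b), never within a single modality."""
    within, between = [], []
    for i in range(len(a)):
        for j in range(len(b)):
            r = np.corrcoef(a[i], b[j])[0, 1]
            (within if labels[i] == labels[j] else between).append(r)
    return np.mean(within) - np.mean(between)

idx = crossmodal_category_index(names_patterns, pics_patterns, labels)
# A positive index indicates category-specific information shared
# across modalities; in the paper this quantity is then submitted to
# cluster-corrected permutation (Monte Carlo) statistics over time.
```

In practice this index would be computed at each pair of time points (names time x pictures time), yielding the temporal generalization matrix whose off-diagonal clusters are reported in (b).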