Introduction

One of the most fundamental tasks of the brain is to effectively process multisensory information. All animals, even the simplest ones, are equipped with multiple sensory organs to perceive and interact with their surroundings. To successfully combine signals from different sensory modalities, the brain needs to detect which signals contain related information (that is, solve the correspondence problem), integrate this information, and dynamically adapt to spatial or temporal conflicts across the senses as they arise1,2. Spatiotemporal correlation has often been advocated as the main common factor underlying the sensory signals to be integrated3,4,5,6,7,8: when signals from different modalities originate from the same physical event, and hence contain related information that should be integrated, they usually cross-correlate in time and space. Sensory neuroscience has long acknowledged the fundamental role of correlation detection in multisensory processing5,6, and recent studies have demonstrated that multisensory cue integration is statistically optimal only when signals are temporally correlated3,4, although this effect seems to disappear at high temporal frequencies9. To date, however, no single model provides a unified explanation for the manifold aspects of early multisensory processing: how does the brain process multisensory signals to detect correlation and temporal lags across the senses? How does it solve the multisensory correspondence problem? And how does it eventually achieve optimal cue integration of redundant signals?

Correlation detection is at the core of any computational principle for combining different sensory signals, and it is widely exploited throughout the animal kingdom. This is the case for binaural hearing10, binocular vision11 and visual motion perception12, to name just a few. In motion perception, for example, within the continuous stream of visual inputs, the brain needs to compare the luminance of two neighbouring receptive fields over time to detect the speed and direction of motion. On the basis of the insect optomotor reflex, Hassenstein and Reichardt12 proposed a biologically plausible cross-correlation model for motion perception. This model, known as the Hassenstein–Reichardt detector (or elementary motion detector), posits the existence of two mirror-symmetric subunits. In its simplest version, each subunit multiplies the inputs from two neighbouring visual receptive fields after applying a delay (or low-pass temporal filtering) to one of those signals. The difference between the outputs of these subunits eventually determines the perception of motion and its direction. Over more than five decades, this model has been successfully applied to explain motion perception in vertebrates, including humans13, and neurophysiological studies support the existence of such a mechanism in the insect visual system, notably the fly optic lobe14. To date, the Hassenstein–Reichardt detector is possibly the neural model whose biological substrates are best understood, and whose computational steps have recently been identified even at the level of individual cells15,16.
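
To make this computation concrete, here is a minimal sketch of a Hassenstein–Reichardt-style detector in Python (an illustration of the principle only: the discrete unit-gain exponential low-pass filter, the parameter values and all names are our assumptions, not details taken from refs 12,13,14,15,16):

```python
import numpy as np

def lowpass(signal, tau, dt=0.001):
    """Causal low-pass filter with a unit-gain exponential impulse response."""
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel)[:len(signal)]

def reichardt(left, right, tau=0.05, dt=0.001):
    """Elementary motion detector: two mirror-symmetric subunits.
    Each subunit multiplies one input with a delayed (low-pass filtered)
    copy of the neighbouring input; their difference signals direction."""
    u1 = lowpass(left, tau, dt) * right   # tuned to left-to-right motion
    u2 = left * lowpass(right, tau, dt)   # tuned to right-to-left motion
    return u1 - u2                        # sign encodes direction of motion

# A luminance impulse moving from the left to the right receptive field
left = np.zeros(1000);  left[100] = 1.0
right = np.zeros(1000); right[150] = 1.0  # arrives 50 ms later
print(reichardt(left, right).sum())       # positive: rightward motion
```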

The basic architecture of the Hassenstein–Reichardt detector displays a number of important properties that would also be useful for multisensory processing. For example, if a multisensory processing unit akin to the Hassenstein–Reichardt detector would receive inputs from different modalities, it could compute the cross-correlation across the senses, and hence solve the correspondence problem3,4. Moreover, Hassenstein–Reichardt detectors are naturally suited to detect the relative time of arrival of two separate signals: the very same mechanism that in visual perception detects the direction of motion could also be used crossmodally to detect temporal lags across the senses. Here we show that a neural mechanism similar to an elementary motion detector can concurrently explain several aspects of multisensory processing, including the detection of simultaneity, correlation and lag across the senses, and Bayesian-optimal multisensory integration.

Results

Model and psychophysical experiment

The structure of the multisensory correlation detector (MCD, see Methods) proposed here closely resembles the Hassenstein–Reichardt detector that was originally developed to explain visual motion perception. However, instead of receiving visual information from neighbouring receptive fields, the MCD receives inputs from spatially aligned receptive fields of different senses. In a first processing stage, multisensory input signals undergo separate low-pass temporal filtering (Fig. 1a, magenta and green filters). This accounts for the impulse response characteristics of each individual sense during transduction, transmission and early unisensory processing6. The filtered signals are then fed into two mirror-symmetric subunits, which multiply the signals after introducing a temporal shift to one of them through another low-pass filter (Fig. 1a, centre, black filters). As a consequence of this additional filtering stage, each subunit is selectively tuned to a different temporal order of the signals (that is, vision leading vs. audition leading). The outputs of the two subunits are then combined in different ways to detect the correlation and lag of the multisensory signals, respectively (Fig. 1a, right). Specifically, correlation is calculated by multiplying the outputs of the subunits (Fig. 1a, top-right; equations 4, 5, 6; Supplementary Video 1; and Supplementary Fig. 1A), hence producing an output (MCDCorr) whose magnitude represents the correlation between the signals (Fig. 1b, blue lines). Temporal lag is instead detected by subtracting the outputs of the subunits, as in the classic Hassenstein–Reichardt detector (equations 5, 6, 7, Supplementary Video 1 and Supplementary Fig. 1B). This yields an output (MCDLag) whose sign represents the temporal order of the signals (Fig. 1b, red lines).
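
For concreteness, the processing stages just described can be sketched in a few lines of Python (a minimal illustration, not the Supplementary Software: the discrete unit-gain exponential filters and all variable names are our assumptions; the time constants anticipate the values fitted in Methods, τV=87 ms, τA=68 ms, τAV=786 ms):

```python
import numpy as np

def lowpass(signal, tau, dt=0.001):
    """Causal low-pass filter with a unit-gain exponential impulse response."""
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel)[:len(signal)]

def mcd(sv, sa, tau_v=0.087, tau_a=0.068, tau_av=0.786, dt=0.001):
    """Multisensory correlation detector.
    sv, sa: time-varying visual and auditory signals (same sampling rate).
    Returns the time-varying correlation and lag outputs."""
    fv = lowpass(sv, tau_v, dt)          # unisensory visual filtering
    fa = lowpass(sa, tau_a, dt)          # unisensory auditory filtering
    u1 = lowpass(fv, tau_av, dt) * fa    # subunit 1: visual branch delayed
    u2 = fv * lowpass(fa, tau_av, dt)    # subunit 2: auditory branch delayed
    return u1 * u2, u1 - u2              # MCDCorr(t), MCDLag(t)
```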

Figure 1: MCD model.
figure 1

(a) Schematic representation of the model. The MCD integrates multisensory signals (SV(t), SA(t)) through a set of low-pass temporal filters followed by linear operations. The MCD model yields two outputs, MCDCorr(t) (equation 4) and MCDLag(t) (equation 5), representing, respectively, the temporal correlation and lag across the input signals. (b) Time-averaged impulse response function of the MCD. The y axis represents the response of the model to visual and auditory impulses as a function of the lag across the senses (see inset). The blue line and axis represent the time-averaged response of the correlation detector (MCDCorr, equation 6); the red line and axis represent the time-averaged response of the lag detector (MCDLag, equation 7). Note how the correlation detector output (blue) peaks at low lags, whereas the output of the lag detector (red) changes sign depending on which modality comes first.
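
The shape of these two curves can be reproduced, under the same assumptions, by sweeping the audiovisual lag of a pair of impulses and time-averaging the two outputs (a sketch reusing the mcd function defined above):

```python
# Mirror the logic of Fig. 1b: single audiovisual impulses at varying lags
# (positive lag: audition delayed), time-averaged over the full trace.
import numpy as np

lags = np.arange(-0.5, 0.51, 0.01)
corr_out, lag_out = [], []
for lag in lags:
    sv = np.zeros(3000)                  # 3 s at 1 kHz
    sa = np.zeros(3000)
    sv[1500] = 1.0
    sa[1500 + int(round(lag * 1000))] = 1.0
    c, l = mcd(sv, sa)
    corr_out.append(c.mean())            # peaks at small lags (blue curve)
    lag_out.append(l.mean())             # changes sign with temporal order (red curve)
```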

Without loss of generality, we here focus on the integration of time-varying signals from vision and audition. To probe multisensory correlation detection in humans, and hence to test the MCD model, five human observers performed a psychophysical forced-choice task (Methods). On each trial we presented a complex sequence of five auditory and five visual impulses (that is, sequences of clicks and flashes) with random temporal structures (Fig. 2a,b and Supplementary Video 2). Participants had to report both whether the visual and auditory sequences appeared to share a common cause (causality judgment), and which of the two sequences (vision or audition) came first (temporal order judgment). The temporal structures of the visual and auditory signals were generated independently, and varied randomly across trials (n=1,890). These stimuli were selected because they emphasize the role of cross-correlation in solving the correspondence problem, while the experimental tasks were selected because they directly probe the detection of multisensory correlation (causality judgment) and lag (temporal order judgment). The stochastic nature of the signals implies the lack of a ground truth on which to devise an optimal classifier. Specifically, given that there were five randomly placed impulses per modality on each trial, there was no unambiguous way to decide on the relative temporal order or on the common causal structure of the signals, hence rendering the task fundamentally subjective.
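
For illustration, stimulus sequences with this structure can be generated as follows (a sketch; for simplicity we sample at 1 kHz rather than the 44.1 kHz used in the experiment, and the uniform placement of impulses is our assumption):

```python
import numpy as np

def random_impulse_train(n_impulses=5, duration=1.0, fs=1000, rng=None):
    """Random sequence of impulses, as in the clicks/flashes of the task."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.zeros(int(duration * fs))
    times = rng.choice(len(s), size=n_impulses, replace=False)
    s[times] = 1.0
    return s

rng = np.random.default_rng(0)
sv = random_impulse_train(rng=rng)   # five flashes
sa = random_impulse_train(rng=rng)   # five clicks, independently timed
```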

Figure 2: Stimuli, reverse-correlation analyses and results of the psychophysical experiment.
figure 2

(a) Experimental setup. Participants sat in front of a white fabric disk covering an LED and a speaker. (b) Examples of the stimuli used in the experiment (left side) and their cross-correlations (right). Magenta and green lines represent the visual (SV(t)) and auditory (SA(t)) stimuli, respectively. The cross-correlation of the top stimulus is high at short lags, whereas in the lower two it peaks at negative and positive lags, respectively. (c) Reverse-correlation analyses. Stimuli were classified according to participants’ responses, that is, ‘light first’ vs. ‘sound first’ in the temporal order judgment task (or ‘same’ vs. ‘different causes’ in the causality judgment task, not shown). Classification images were calculated by subtracting the average cross-correlation of trials classified as ‘sound first’ from the average cross-correlation of trials classified as ‘light first’, and smoothing the result with a Gaussian kernel (σ=20 ms, red line; see also f). (d,f) Classification images (solid lines represent the data, dashed lines the model). Positive values on the y axis represent a positive association with ‘same cause’ or ‘sound-first’ responses. Predicted classification images are vertically scaled. (e,g) Model output (equations 6 and 7) plotted against human responses. Each dot corresponds to 315 responses, 63 per participant. See Supplementary Fig. 2 for plots of individual observers’ data. LED, light-emitting diode.

Human responses were analysed using psychophysical reverse-correlation techniques (Methods). By measuring how random variations in the visual and auditory stimuli correlate with participants’ responses, this technique allows us to identify which properties of the audiovisual signals (for example, correlation, lag or temporal filtering) selectively determine human perception (that is, judgments of order or causality). Compared with classical psychophysical techniques, reverse-correlation analyses offer a more stringent test for the MCD model, as they allow us to assess, without explicit experimental manipulations, whether humans and the MCD base their responses on the same stimulus dimensions.

Given that both causality and temporal order judgments rely on the joint temporal properties of the visual and auditory stimuli, reverse-correlation analyses were performed on the cross-correlation profile of the signals. Cross-correlation provides a measure of the similarity of the signals as a function of lag, and it highlights common features across complex signals. The classification image for causality judgments demonstrates, as might be intuitively predicted, that signals with high correlation at short lags are more likely to be perceived as sharing a common cause3,4,17 (Fig. 2d). Notably, though, it displays a negative lobe on the sound-first side, indicating that the brain has a tendency not to integrate audiovisual information when sound arrives first, a tendency that mirrors natural signal statistics given that light travels faster than sound. Conversely, as expected, the responses in the temporal order judgments were driven by the sign of the lag at maximum cross-correlation (that is, light vs. sound lead; Fig. 2f). More generally, these results demonstrate that the underlying neural processes are sensitive to the correlation of the signals; otherwise, no clear pattern would have emerged in the classification images calculated from the cross-correlation.

Having determined the precise shape of the empirical classification images, we can now assess how closely the MCD matches human performance. To do so, we feed the same signals used in the experiment into the model, and perform reverse-correlation analyses on the model responses just as we did with the human responses (see Methods). This way, we can determine the temporal constants of the model’s low-pass filters by fitting them to maximize the similarity between empirical and predicted classification images (three free parameters, see Methods). As can be seen in Fig. 2d,f, this produces an excellent agreement between human data and model responses, which demonstrates that the MCD accurately captures some fundamental aspects of the neural computation underlying human perception. The model can near-perfectly reproduce the shapes of both empirical classification images (Fig. 2d,f): Pearson’s correlation between empirical and predicted classification images is ρ=0.97 for the causality judgment and ρ=0.99 for the temporal order judgment (see Supplementary Fig. 2A,C for individual participants’ results). Because of the different temporal constants of vision and audition, the model also reproduced the negative lobe on the sound-first side of the classification image for the causality judgment. Note that, owing to additional noise in the neural processing, the empirical classification images were shallower than those predicted by the model. This additional noise was taken into account by scaling the classification images produced by the MCD, thereby highlighting the similarity between the simulation and the empirical findings.

To further test the predictive power of the MCD beyond reverse-correlation analyses, we investigated whether, using the same parameters fitted to the classification images, the model (equations 6 and 7) could predict human responses when given the same stimuli as input (Methods). For both tasks we found a strong monotonic mapping between predicted and empirical responses (Fig. 2e,g; Spearman’s rank correlation between model output and human responses: ρ=0.96 for causality judgments and ρ=0.99 for temporal order judgments; see Supplementary Fig. 2B–D for individual participants’ results).

Previous attempts to model multisensory temporal perception lack the flexibility to concurrently deal with the tasks and the complex signal streams used here. For example, both Sternberg and Knoll18 and Cai et al.19 proposed models that describe human performance in temporal order judgment tasks given the delay between two signals as input. However, it has never been explained how such delays are detected in the first place, particularly in the presence of complex time-varying signal streams (as they occur in the natural world).

Burr et al.6 proposed a model for audiovisual duration discrimination. Although this model can also be used to process the current stimuli, it cannot account for the current results (see Supplementary Note 1). Regarding correlation (but not lag) detection, Fujisaki and Nishida5 proposed a rudimentary descriptive scheme that holds some similarities to the MCD model, in the form of a correlator between the visual and auditory signals (see Supplementary Note 2). However, the working principle of this correlation detector has not been formally specified. In contrast, the MCD model makes all such key computational steps explicit, is flexible enough to provide quantitative predictions for both causality and temporal order judgments, and can meaningfully process stimuli of any complexity, a property that is indispensable for handling real-world situations.

Validation of the MCD through simulation of previous results

Given the extensive literature on the multisensory perception of correlation, simultaneity and lag5,17,20, we can validate the MCD and assess its generalizability by comparing its responses against human performance as determined in earlier experiments. To this end, we selected a series of studies that employed parametric manipulations of the temporal structure of the signals, simulated the stimuli, and used the MCD model (with fixed filter parameters) to predict human performance. It is important to stress that for the following predictions we used the temporal constants of the MCD determined in the previous experiment, so that there were no free parameters of the MCD left to fit. The output of the model was related to response probabilities by assuming that the model response is corrupted by late noise, and that perceptual judgments are based on a decision criterion. As is standard practice in psychophysics, we modelled this stage using a generalized linear model (GLM) with two free parameters (that is, noise and criterion). If the MCD is the basic computational unit for multisensory temporal processing, we should be able to reproduce all of the earlier findings on the perception of multisensory temporal attributes with this constrained MCD.
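
In code, this decision stage reduces to a probit mapping of the time-averaged MCD output (a two-parameter sketch under the stated noise-plus-criterion assumption; the parameter names are ours):

```python
from scipy.stats import norm

def response_probability(mcd_output, criterion, noise_sd):
    """Probability of a given response for a time-averaged MCD output,
    assuming additive Gaussian late noise and a fixed decision criterion
    (equivalent to a GLM with a probit link)."""
    return norm.cdf((mcd_output - criterion) / noise_sd)
```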

Audiovisual correspondence detection

To measure the determinants of multisensory correspondence detection, Denison et al.17 presented streams of audiovisual events with random temporal structures that were sometimes correlated, and they systematically varied lag, rhythmicity and rate. In a forced-choice task, participants had to detect audiovisual correspondence (Methods); that is, participants had to report which of two visual stimuli had the same temporal structure as the auditory stimulus. Their results demonstrate that correspondence detection systematically depended on the temporal properties of the signals and their complexity: performance decreased with lag, rhythmicity and rate (Fig. 3a,b, dots; see ref. 17, Experiments 1 and 3). Feeding these stimuli to the model demonstrates that, in line with human behaviour, the time-averaged model responses (MCDCorr, equation 6) also decrease with lag, rhythmicity and rate, effectively replicating all the patterns in the original data set without the need to adjust any of the time constants (Fig. 3a,b, lines; see Methods and Supplementary Table 1 for details).

Figure 3: Comparison of our model to previously published psychophysical results.
figure 3

Dots represent the empirical data, lines the model prediction. (a) Effects of lag and stimulus rate in a correspondence detection task (data from ref. 17, Experiment 1). (b) Effects of lag and stimulus rate on correspondence detection (data from ref. 17, Experiment 3). (c) Effects of temporal frequency (rate) and phase shift of periodic stimuli on synchrony detection (data from ref. 5, Experiment 1). (d) Synchrony detection for a single pulse (data from ref. 5, Experiment 1) and random sequences of pulses (temporal rate 80 Hz, data from ref. 20, Experiment 1). (e) Temporal order judgment task (data from ref. 21, Experiment 2). (f) Synchrony judgment task (data from ref. 23, Experiment 1).

This result may also explain the previously reported null effects of correlation on the multisensory integration of audiovisual stimuli at high temporal rates (between 7 and 15 Hz, see ref. 9): owing to the low temporal resolution imposed by the low-pass filtering, the perceptual system may simply become insensitive to the correlation of the signals as the temporal rate increases.

Synchrony detection

Like correspondence detection, multisensory perception of synchrony is also systematically modulated by the temporal structure of the signals. To investigate the determinants of synchrony detection, Fujisaki and Nishida5,20 presented periodic sequences of visual and auditory stimuli while parametrically manipulating temporal frequency and phase shifts of the signals5 (Experiment 1, Fig. 3c), or their lag and complexity20 (Experiment 1, Fig. 3d). The model output again tightly replicates all empirically determined trends (Fig. 3c,d; see Methods and Supplementary Table 1).

Temporal order judgment

The temporal order judgment is a classic paradigm for studying temporal aspects of multisensory perception. When participants report the temporal order of simple visual and auditory signals, response probabilities as a function of physical lag usually follow a sigmoidal curve (often modelled, for simplicity, as a cumulative Gaussian or logistic function; see, for example, Spence et al.21, Experiment 2). MCDLag (equation 7) accurately reproduces this curve in the commonly measured range around physical synchrony (0 ms lag), and displays the distinctive sigmoidal shape of temporal order judgment responses (Fig. 3e and Supplementary Table 1). However, for lags beyond the integration window of the MCD, the output would drop to 0, as the filtered input signals no longer overlap and no order assignment would be possible. Human performance, in contrast, clearly improves with longer lags. We would argue, though, that responses at longer lags are based on other cognitive detection mechanisms and not on the MCD, which models temporal processing in the perceptual range22.

Synchrony judgment

Another popular procedure for investigating temporal aspects of multisensory perception is the synchrony judgment task. Participants are typically presented with one visual and one auditory stimulus, and have to report whether the signals appear subjectively synchronous or not. When plotted against lag, synchrony judgments usually display asymmetric bell-shaped response distributions, with the perception of synchrony peaking at short lags (for example, see ref. 23). Once again, MCDCorr (equation 6) faithfully replicates human performance23 (Experiment 1; see Fig. 3f and Supplementary Table 1) and, owing to differences in the filtering of the two input signals, displays the characteristic asymmetry often found in synchrony judgments. That is, the MCD model can explain the standard shape of synchrony judgments solely on the basis of the response of a simple correlation detector (for example, without the need to assume multiple decision criteria; see ref. 24).

All in all, these simulations show that the MCD can reproduce human multisensory temporal processing under a broad range of experimental manipulations.

MCD and optimal cue integration

Over the past decade, multisensory integration has been predominantly modelled in terms of Bayesian Decision Theory25. The main finding is that humans integrate redundant multisensory information in a statistically optimal fashion, thereby maximizing accuracy and precision of combined sensory estimates26. Being based on multiplicative interactions, the correlation detector of the MCD is naturally suited to implement Bayes-optimal multisensory integration.

To illustrate this in the case of the spatial localization of visual and auditory stimuli27, we simulated a population of MCDs, each receiving inputs from spatially tuned visual and auditory units (Fig. 4a). Each input unit has a Gaussian receptive field whose width is inversely proportional to the reliability of the input (Supplementary Fig. 6; see refs 26, 28). Thus, the reliability of a signal’s estimate is an emergent property of neuronal tuning to a particular stimulus, which changes with the type of stimuli used, and of neural noise. The output of each MCD unit (MCDCorr, equation 6) is then normalized by dividing it by the sum of the responses of all units. This divisive normalization, which is biologically plausible29,30, eventually provides an estimate of the probability distribution of stimulus location (Fig. 4b).

Figure 4: MCD and optimal cue integration.
figure 4

(a) A population of spatially tuned MCDs. Each unit receives information from a limited region of visual and auditory space. (b) Normalization. The output of each unit (MCDCorr, equation 4) is normalized across units to obtain a probability distribution of the model response over space. (c) MCD and optimal cue integration. Normalized responses of a population of MCDs tuned to a preferred stimulus dimension (for example, space) to visual and auditory stimuli with a spatial offset. The green and magenta lines represent MCD responses to spatially offset unimodal visual and auditory stimuli, respectively. The blue line represents the response of the MCD to the bimodal audiovisual stimuli, while the black dots represent the prediction of Bayes-optimal integration.

To better highlight the weighting behaviour resulting from optimal multisensory integration, we introduced a small offset between the spatial locations of the visual and auditory stimuli, and rendered the auditory input less reliable than vision (wider receptive fields). Stimuli consisted of impulses lasting for one sample (sampling frequency 1 kHz), which were embedded into temporal sequences of samples with constant, near-zero values. The model output was time-averaged over a 2 s window. Figure 4c shows the output of the model for unimodal and bimodal signals (continuous lines), and the prediction of optimal cue integration (that is, the normalized product of the unimodal distributions, represented by the dots25). The combined response predicted by the MCD matches the joint probability distribution on which the optimal percept is based (see Supplementary Note 3, for a description of how this population model can be extended to include weaker forms of coupling across the signals, represent priors, and implement spatial recalibration).
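
The logic of this simulation can be sketched as follows (reusing the mcd function from above; the specific stimulus locations, receptive-field widths and spatial grid are our assumptions, chosen only to illustrate reliability weighting):

```python
import numpy as np

space = np.linspace(-20.0, 20.0, 201)      # preferred spatial locations (deg)
mu_v, sigma_v = -2.0, 3.0                  # visual stimulus: narrow RFs, reliable
mu_a, sigma_a = 2.0, 6.0                   # auditory stimulus: wide RFs, unreliable

gain_v = np.exp(-(space - mu_v) ** 2 / (2 * sigma_v ** 2))
gain_a = np.exp(-(space - mu_a) ** 2 / (2 * sigma_a ** 2))

resp = np.zeros_like(space)
for i in range(len(space)):
    sv = np.zeros(2000)                    # 2 s at 1 kHz
    sa = np.zeros(2000)
    sv[1000] = gain_v[i]                   # synchronous audiovisual impulse,
    sa[1000] = gain_a[i]                   # weighted by each unit's receptive field
    corr, _ = mcd(sv, sa)
    resp[i] = corr.mean()                  # time-averaged MCDCorr (equation 6)

resp /= resp.sum()                         # divisive normalization across units

# Bayes-optimal prediction: reliability-weighted average of the two locations
w_v = (1 / sigma_v ** 2) / (1 / sigma_v ** 2 + 1 / sigma_a ** 2)
print(np.sum(space * resp), w_v * mu_v + (1 - w_v) * mu_a)  # both ~ -1.2
```

In this sketch the population estimate (the mean of the normalized response profile) coincides with the reliability-weighted average of the unimodal locations, the hallmark of optimal cue integration.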

Despite the success of Bayesian models in predicting multisensory integration, it is still unclear how they can account for the breakdown of optimal integration in the presence of temporal conflicts, that is, when the signals are not synchronous. Given that MCDCorr responds maximally to synchronous and correlated stimuli, this population model can naturally account for the temporal constraints of optimal integration: with longer lags the filtered input signals no longer overlap, and thus no integration is possible. Hence, going beyond earlier neural models of multisensory integration31,32, this model directly deals with time-varying signals, and it can jointly account for both optimal cue integration and its breakdown when temporal conflicts occur3.

Discussion

Overall, these results strongly suggest that the MCD may represent the elementary unit for multisensory processing: individual units solve the correspondence problem by detecting correlation, lag and synchrony across the senses, and integrate only those signals that are likely to be causally related; larger populations of MCDs perform optimal multisensory integration. So far, these diverse phenomena have only been partially explained with separate ad hoc models5,24,25,26,28,29,33, or have never been explained at all (such as the detection of temporal order across the senses). The MCD parsimoniously captures all of these perceptual challenges, and provides a common explanation for both the spatial and the temporal aspects of multisensory integration with a simple neural architecture whose biological plausibility is supported by a vast literature on the physiology of motion perception14 and stereoscopic vision11. What is more, the MCD might help bridge the gap between physiology and behaviour, as it can also be used to model the responses of multimodal neurons, just as the Hassenstein–Reichardt detector has been used as a model for motion-sensitive neurons14.

Although correlation detection provides obvious behavioural benefits such as solving the correspondence problem and integrating related signals, it is not equally clear why the nervous system should have a dedicated detector to constantly monitor lags across the senses. Multisensory signals, however, often reach our senses with some relative lags (e.g., due to differences in the generation process of the signals in each modality, the speed of propagation, which is slower for sound, or the neural latencies during transduction and transmission). Hence, the nervous system must quickly detect and actively compensate for such delays (i.e., temporal recalibration, see refs 34, 35). In this context, a lag detector would provide the necessary information to drive temporal recalibration and restore perceptual synchrony. Given the analogies between motion perception and multisensory temporal processing, it would be reasonable to hypothesize that the same mechanisms underlying visual motion adaptation36 might also serve multisensory temporal recalibration19,34 (Supplementary Fig. 8).

Correlation detection has often been advocated as a universal computational mechanism that operates simultaneously in multiple sensory systems (for example, visual motion, binocular disparity and binaural hearing) throughout the animal kingdom37, including mammals, birds and even invertebrates. The present study suggests that the brain parsimoniously implements analogous principles of sensory processing to combine signals across the senses, not just within them. Because of its biologically inspired nature and its effectiveness in predicting psychophysical results, the MCD provides a unified general theory of multisensory processing, one that is capable of generating quantitative predictions at many different levels, from neurons14,38 to behaviour32.

Methods

Model

The MCD model consists of a first filtering stage, where time-varying visual and auditory signals (SV(t), SA(t)) are independently low-pass filtered, and a subsequent integration stage, where the two signals are combined through linear operations (multiplication or subtraction). Low-pass filters (f) were modelled as exponential functions of the form (cf. Burr et al.6):

fmod(t) = exp(−t/τmod), for t ≥ 0    (equation 1)

where τmod is the modality-dependent temporal constant of the filter. On the basis of the empirical results, we estimated these constants to be τV=87 ms and τA=68 ms for the visual and auditory filters, respectively. The temporal constant of the second filter, which for simplicity we assumed to be identical in both subunits of the detector, was estimated as τAV=786 ms (cf. fitting details below).

Each subunit (u1, u2) of the detector independently combines multisensory information by multiplying the filtered visual and auditory signals as follows:

u1(t) = [SV ∗ fV ∗ fAV](t) × [SA ∗ fA](t)    (equation 2)

u2(t) = [SV ∗ fV](t) × [SA ∗ fA ∗ fAV](t)    (equation 3)

To this end, the signals are convolved (∗) with the low-pass temporal filters. The responses of the subunits are eventually multiplied or subtracted to yield the two outputs of the detector:

MCDCorr(t) = u1(t) × u2(t)    (equation 4)

MCDLag(t) = u1(t) − u2(t)    (equation 5)

The resulting time-varying responses represent the local temporal correlation (MCDCorr) and lag (MCDLag) across the signals (Supplementary Fig. 1). To reduce such time-varying responses into a single summary variable representing the amount of evidence from each trial, we simply averaged the output of the detectors over a window of T=3 s (three times the duration of each trial):

⟨MCDCorr⟩ = (1/T) ∫ MCDCorr(t) dt    (equation 6)

⟨MCDLag⟩ = (1/T) ∫ MCDLag(t) dt    (equation 7)

where ⟨·⟩ denotes the time average and the integrals are taken over the analysis window T.

A Matlab implementation of the MCD model is provided in Supplementary Software 1. Given that we were especially interested in how closely the model could reproduce the shape of the empirical classification images, the temporal constants of the model (that is, τV, τA and τAV) were free parameters that we fitted to maximize the correlation between the predicted and empirical classification images of the averaged observer (see reverse-correlation analyses). The fitting was based on an optimization algorithm (fminsearch, Matlab) that maximizes the similarity (Pearson’s correlation) between empirical and predicted classification images. To calculate the classification images from MCD responses, we ranked the responses of the model across trials, and divided them into two classes (one for each response category) with the same relative frequency that we determined empirically. For example, if participants classified 40% of the trials as ‘common cause’, the 40% of trials with the highest outputs were classified as ‘common cause’, and the remaining trials as ‘different causes’.
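
The ranking step that converts continuous model outputs into binary classifications can be written compactly (a sketch; function and variable names are hypothetical):

```python
import numpy as np

def classify_by_rank(model_outputs, p_class1):
    """Split trials into two response classes so that their relative
    frequencies match the empirical ones: the p_class1 fraction of trials
    with the highest model output is assigned to class 1 (e.g., 'common
    cause'), the rest to class 2."""
    model_outputs = np.asarray(model_outputs)
    n1 = int(round(p_class1 * len(model_outputs)))
    order = np.argsort(model_outputs)[::-1]   # highest outputs first
    labels = np.zeros(len(model_outputs), dtype=bool)
    labels[order[:n1]] = True
    return labels
```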

The main difference between the empirical and predicted classification images is that, overall, the predicted ones have higher amplitudes. This is an expected finding, given that in the model we did not consider the detrimental effects of noise, which would naturally arise at any stage of the detector, including late noise occurring at the decision stage, and which would be additive to the detector’s outputs MCDCorr and MCDLag. Since noise is unlikely to be correlated across the senses, it would only reduce the cross-correlation between the streams of sensory information and corrupt perceptual decision-making, hence reducing the overall amplitude of the empirical classification images. Note that the predicted classification images in Fig. 2d–f and Supplementary Fig. 2A–C are vertically scaled for graphical clarity, to better highlight the similarity in shape between the predicted and empirical results.

To demonstrate how the model output can systematically predict human responses, we partitioned all trials into 30 bins (315 responses per bin, 63 per participant) based on the output of the model (MCDCorr for the causality judgment, and MCDLag for the temporal order judgment task). For each bin, we calculated the average response of the model and plotted it against the mean response of our participants on these same trials (Fig. 2e,g and Supplementary Fig. 2b–d), while the monotonicity of the relationship between model and human responses was assessed using Spearman’s rank correlation.

Psychophysical task

The experiment consisted of a forced-choice dual task, whereby on each trial a train of five impulses was presented to each of the visual and auditory modalities. The stimuli consisted of sequences of five visual and five auditory impulses randomly presented over an interval of 1 s (average temporal rate 5 Hz, Fig. 2b). Each impulse consisted of a single sample with a value of 1 in an array of samples with a value of 0 (sampling frequency 44.1 kHz). Participants (four naive observers and C.V.P., age range 22–35 years, one female) observed the stimuli and had to report (1) whether or not the visual and auditory sequences appeared to be causally related and formed a perceptual unity (causality judgment, also known as a ‘relatedness’ task17), and (2) the relative temporal order of the two sequences (temporal order judgment). To instruct participants on what we meant by ‘causally related’, we told them to imagine that the clicks and flashes were all little explosions, and that their task was to tell whether the same underlying sequence of blasts caused both the clicks and the flashes, or whether light and sound were generated by independent processes. Given that we collected a large number of trials (overall n=9,450), and that we performed the analyses both at the group level and at the level of individual participants, a pool of five participants is large enough to provide reliable estimates of the effects under study. To assess whether the dual-task paradigm could qualitatively alter observers’ responses, we also performed an additional control experiment in which the causality judgment and temporal order judgment tasks were performed in separate sessions (see Supplementary Note 4 and Supplementary Fig. 9). Overall, the results from these single-task sessions were in good agreement with those of the dual-task paradigm.

The experiment was performed in a dark anechoic chamber. Visual stimuli consisted of a white disk (radius 6.5°) of sound-transparent fabric backlit by a white light-emitting diode. Auditory stimuli were presented from the same location as the visual stimuli by means of a small loudspeaker hidden behind the sound-transparent screen. Both the light-emitting diode and the loudspeaker were operated via a computer soundcard to ensure near-perfect timing of the multisensory signals. The experiment was controlled by custom-built software based on the Psychtoolbox39.

The experiment was self-paced, and participants pressed a key to initiate each trial. The temporal structures of the signals were identical across participants. After stimulus presentation, participants responded by pressing one of four keys arranged in a 2 × 2 matrix. The vertical axis of this arrangement represented the response to the causality judgment (top=single cause, bottom=different causes), while the horizontal axis represented the response to the temporal order judgment (left=vision first, right=audition first). Participants were instructed to give an answer even if unsure, taking their time and guessing if necessary. The experiment took place in four sessions of 2 h each (including breaks). Every 45 trials there was a break, during which a dim table-light was smoothly turned on. Participants were allowed to rest for as long as they wanted, and to minimize boredom they were entertained with a booklet of short jokes. Participants pressed a key to restart the experiment, after which the table-light smoothly turned off. Throughout the experiment, participants’ heads were stabilized with a chin- and a head-rest. The experiment was conducted in accordance with the Declaration of Helsinki and was approved by the ethics committee of the University of Bielefeld. Participants received 6 Euro per hour and provided written informed consent before participating in the experiment.

Reverse-correlation analyses

To calculate the classification images, we first sorted the stimuli presented in the experiment according to the participants’ (or the model’s, cf. Model) classification responses (single cause vs. multiple causes in the causality judgment; light vs. sound lead in the temporal order judgment, Fig. 2c; see ref. 40). For each response class we calculated the mean cross-correlation of the visual and auditory signals, c(τ) = (SV ⋆ SA)(τ), and combined the class means to obtain the classification images for audiovisual correlation (KCorr) and lag (KLag, Fig. 2c) according to:

KCorr(τ) = ⟨c(τ)⟩same cause − ⟨c(τ)⟩different causes

KLag(τ) = ⟨c(τ)⟩light first − ⟨c(τ)⟩sound first

Classification images (KCorr and KLag) were temporally smoothed by convolution with a low-pass temporal Gaussian filter (σ=20 ms). Reverse-correlation analyses were performed individually for each participant (Supplementary Fig. 2), and on the averaged observer (Fig. 2d,f). Given that the stimuli were identical across participants, the response of the averaged observer on each trial was calculated as the mode of the individual responses (that is, if three out of five participants responded ‘vision first’ on a given trial, then the response of the averaged observer was also ‘vision first’).
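
A compact sketch of this analysis (assuming trial-wise signals sampled at 1 kHz; the array names and the use of scipy are our choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def classification_image(sv_trials, sa_trials, responses, sigma=20):
    """Difference of mean cross-correlations between the two response classes.
    sv_trials, sa_trials: (n_trials, n_samples) arrays at 1 kHz.
    responses: boolean array, True for one class (e.g., 'light first').
    sigma: width of the Gaussian smoothing kernel in samples (= ms at 1 kHz)."""
    xc = np.array([np.correlate(sv, sa, mode='full')
                   for sv, sa in zip(sv_trials, sa_trials)])
    k = xc[responses].mean(axis=0) - xc[~responses].mean(axis=0)
    return gaussian_filter1d(k, sigma)
```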

Simulation of audiovisual correspondence detection

We simulated visual and auditory signals with the same statistical temporal structure as described in Denison et al.17 (Experiments 1 and 3), and fed them into the MCD (equation 4). Both visual and auditory events consisted of 1 ms luminance and loudness impulses.

The MCD response to such signals was transformed into categorical responses in the following way: given that the behavioural task was a two-alternative forced choice (that is, participants had to report which of two visual stimuli temporally matched the auditory stimulus17), we assumed the underlying decision variable to be the ratio between the MCDCorr (equation 6) responses to the two visual stimuli (match/non-match). That is, we divided the model’s response to the matching stimulus by its response to the non-matching stimulus. This variable was then transformed into the proportion of correct responses via a generalized linear model with a probit link function (assuming additive Gaussian noise). Linear coefficients were fitted over the whole data set (proportion of correct responses, which we computed from the original d′ measures) included in Fig. 3a,b.
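
The resulting decision rule can be sketched as follows (beta0 and beta1 stand for the fitted GLM coefficients; the names are hypothetical):

```python
from scipy.stats import norm

def p_correct_2afc(resp_match, resp_nonmatch, beta0, beta1):
    """Proportion correct in the two-alternative forced choice: the decision
    variable is the ratio of the time-averaged MCDCorr responses to the
    matching and non-matching visual stimuli, passed through a probit link."""
    decision_variable = resp_match / resp_nonmatch
    return norm.cdf(beta0 + beta1 * decision_variable)
```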

Given that Denison et al.17 used random temporal structures that changed across trials, we could not faithfully reproduce the exact stimuli used in the original experiment. Therefore, for each data point we simulated 500,000 random temporal structures for the auditory stimuli, another 500,000 for the matching visual stimuli, and the same number for the non-matching visual stimuli. The decision variable was eventually calculated as the ratio of the median MCD responses to the two visual stimuli (that is, the median response to the matching visual stimuli divided by the median response to the non-matching visual stimuli).

Simulation of synchrony detection

The experimental task used by Fujisaki and Nishida5,20 was a two-alternative forced-choice task, whereby participants reported on the perceived synchrony of auditory and visual stimuli. Assuming synchrony detection to rely on the same mechanisms as correspondence detection, we used the MCD to simulate the results of Fujisaki and Nishida5 (Experiment 1, see Fig. 3c) and Fujisaki and Nishida20 (Experiment 1, see Fig. 3d). This was done by generating stimuli as described in the original studies and calculating MCDCorr (equation 6). As in the previous simulation, we carefully replicated the temporal structures of the stimuli; however, for simplicity the exact visual and auditory events were replaced by impulses. We used standard procedures to transform the responses of the model into binary responses, assuming that the response on each trial depended on the ratio of the model’s response to the asynchronous signal to its response to the synchronous signal. This variable was then transformed into the proportion of correct responses via a generalized linear model with a probit link function (assuming additive Gaussian noise). Linear coefficients were fitted over the whole data set included in Fig. 3c,d.

Simulation of temporal order judgment

To investigate whether MCDLag (equation 7) could reproduce the typical shape of the response distribution of audiovisual temporal order judgments, we fed pairs of visual and auditory impulses with the same lags as in Spence et al.21 (Experiment 2) into the model. Again, the MCDLag output was related to response probabilities using a generalized linear model with a probit link function (Fig. 3e). Linear coefficients were fitted over the empirical data from Spence et al.21 (Experiment 2).

Simulation of synchrony judgment

To see whether MCDCorr could replicate the response distribution of synchrony judgment tasks, we generated stimuli as in Slutsky and Recanzone23 and calculated the model responses (Fig. 3f). The exact stimuli were replaced by impulses, but the temporal structures in our simulations were the same as in the original study. The MCDCorr output (equation 6) was related to response probabilities using a generalized linear model with a probit link function. Linear coefficients were fitted over the empirical data in Fig. 3f (ref. 23, Experiment 1, 1 kHz condition).

To assess how well the MCD model could predict previous results, we calculated the Pearson correlation and the coefficient of determination between the empirical and the predicted responses (Supplementary Table 1). To test the statistical significance of a linear regression relationship between empirical and predicted responses, we used the F-test (Supplementary Table 1). The F-test requires the residuals of the linear regression to be normally distributed: the validity of this assumption was corroborated for all tests using a Lilliefors test for normality. Regression analyses were performed separately for each simulated study.

Additional information

How to cite this article: Parise, C. V. et al. Correlation detection as a general mechanism for multisensory integration. Nat. Commun. 7:11543 doi: 10.1038/ncomms11543 (2016).