Fig. 5 | Nature Communications

From: Feature integration within discrete time windows

Computational model. Left: At each retinal location there is a memory box, which is activated when a visual feature is presented at that location. Right: When a visual feature appears at a given location, the memory box opens and processes information about the corresponding visual feature, i.e., a vernier with either a right, a left, or an aligned offset. These feature detectors are modeled as leaky integrators. We represent pro-verniers as +1, anti-verniers as −1, and aligned lines as 0. Once stimulation at this retinal location terminates, the memory box closes, buffering the integrated information (ref. 21). Thus, information about visual features at each location is preserved throughout the discrete integration window. This processing is "unconscious". In this example, there are five memory boxes, and the input to each of them is shown at the bottom. At the end of a discrete time window (denoted T_readout), the contents of the different memory boxes are combined, yielding the output of stage 1. In the present case, the attended stream of elements is perceived as a single moving object, so the outputs of all memory boxes are summed. Stage 2 receives the outputs of stage 1 and drives the decision. The task is to report vernier offset directions, which we implement using a biologically plausible decision-making network proposed by Wong & Wang (ref. 22). Details of the discrete computational model are provided in the Methods.
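The stage-1 mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the Euler step size, the leak time constant `tau`, and the stimulus durations are hypothetical placeholders, and the stage-2 Wong–Wang network is reduced here to a simple sign readout of the summed stage-1 output (the full attractor network is described in the paper's Methods).

```python
def leaky_integrate(inputs, dt=0.01, tau=0.05):
    """One memory box: a leaky integrator dx/dt = (-x + u) / tau,
    driven while the box is open, then buffered (frozen) at its
    final value once stimulation at this location ends."""
    x = 0.0
    for u in inputs:  # u is +1 (pro-vernier), -1 (anti-vernier), or 0 (aligned)
        x += dt * (-x + u) / tau
    return x  # buffered value held until T_readout

def stage1_output(streams, dt=0.01, tau=0.05):
    """At T_readout, combine the buffered contents of all memory boxes.
    Because the attended stream is perceived as one moving object,
    the outputs are simply summed."""
    return sum(leaky_integrate(s, dt, tau) for s in streams)

def decide(streams):
    """Stage 2, crudely sketched as the sign of the stage-1 sum
    (the paper uses a Wong-Wang decision network instead)."""
    total = stage1_output(streams)
    return "right" if total > 0 else "left" if total < 0 else "aligned"
```

For example, a pro-vernier followed by an anti-vernier of equal duration produces buffered values that cancel at readout, so no net offset is reported, illustrating integration within the discrete window.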
