Testing a relational account of search templates in visual foraging

Search templates guide human visual attention toward relevant targets. Templates are often seen as encoding exact target features, but recent studies suggest that templates instead contain “relational properties” (e.g., they facilitate “redder” stimuli instead of specific hues of red). Such relational guidance seems helpful in naturalistic searches where illumination or perspective renders exact feature values unreliable. So far, relational guidance has only been demonstrated in rather artificial single-target search tasks with briefly flashed displays. Here, we investigate whether relational guidance also occurs when humans interact with the search environment for longer durations to collect multiple target elements. In a visual foraging task, participants searched for and collected multiple targets among distractors with different relationships to the target colour. Distractors whose colour differed from the environment in the same direction as the targets reduced foraging efficiency to the same extent as distractors whose colour matched the target colour. Distractors that differed by the same colour distance but in the direction opposite to the target colour did not reduce efficiency. These findings provide evidence that search templates encode relational target features in naturalistic search tasks and suggest that attention guidance based on relational features is a common mode in dynamic, real-world search environments.

Two participants did not perform the task according to the procedure: they clicked on 30 or more non-targets or distractors over the course of 12 and 6 trials, respectively. Making this many mistakes seems unlikely if participants performed the task as instructed and without technical issues. Their entire datasets were excluded from the analyses. One further participant was removed because their inter-target times (ITTs) were implausibly low: 23 ITTs were smaller than 30 ms. This corresponds to clicking more than 33 times per second, whereas people typically click one to three times per second; a rate of 33 clicks per second is implausible. The minimum ITT of this participant was 6 ms, corresponding to more than 166 clicks per second. These values suggest that there was a technical problem or that some fraudulent clicking aid was employed. Visualisations of the outliers can be found at https://osf.io/2nbtf.
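The ITT screening described above can be sketched as a simple threshold filter. The function name, threshold, and example data below are illustrative, not the study's actual implementation or values; a 30 ms threshold corresponds to a click rate above roughly 33 clicks per second.

```python
def screen_itts(itts_ms, threshold_ms=30):
    """Return the inter-target times (in ms) that imply an implausibly
    fast click rate (faster than 1000 / threshold_ms clicks per second).
    Hypothetical sketch of the screening criterion described in the text."""
    return [t for t in itts_ms if t < threshold_ms]

# Illustrative data: mostly ~900 ms ITTs plus two implausible ones.
itts = [880, 910, 6, 950, 25, 870]
too_fast = screen_itts(itts)
print(too_fast)  # [6, 25]
```

A participant whose data contain many such ITTs (23 in the excluded case) would then be flagged for removal.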

Removal of trials with interruptions
Participants had been instructed to take breaks only between the blocks. However, given that they worked remotely on their own computers, interruptions during the trials (e.g., incoming e-mails, phone calls) are not unlikely. To detect such interruptions, we filtered the data according to two criteria: (1) the time until the first selection in a display must not exceed 10 seconds, and (2) none of the intervals between subsequent target selections may exceed 20 seconds. These criteria are quite conservative (the mean time to the first selection was 1.2 seconds and the mean inter-target time was 0.89 seconds). The reason for the more lenient criterion on the inter-target times is that, unlike the first selection, where the display is filled with 15 targets, inter-target times can span several seconds when participants try to find the last (few) targets in a display. Even though this strategy is inefficient (leaving the patch would be the more efficient choice), it is permissible, and hence such trials should not be removed. Given that the described criteria removed only six out of 3,384 trials (0.18 %) and that these came from all conditions, we believe the procedure removes the irregularities without distorting the results. The detailed procedure can be found at https://osf.io/92xkf.
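The two interruption criteria can be expressed as a short predicate over the selection timestamps of a trial. This is a hypothetical sketch (function and parameter names are not from the study's code), assuming timestamps are measured in seconds from display onset.

```python
def trial_interrupted(selection_times_s, first_max=10.0, gap_max=20.0):
    """Flag a trial as interrupted if (1) the first selection took longer
    than `first_max` seconds or (2) any interval between subsequent
    selections exceeded `gap_max` seconds. Illustrative sketch of the
    two filtering criteria described in the text."""
    if not selection_times_s:
        return False
    if selection_times_s[0] > first_max:
        return True
    gaps = [b - a for a, b in zip(selection_times_s, selection_times_s[1:])]
    return any(g > gap_max for g in gaps)

print(trial_interrupted([1.2, 2.0, 2.9]))   # False: all gaps well below 20 s
print(trial_interrupted([1.2, 2.0, 25.0]))  # True: a 23-second gap
```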

Model-based removal of outliers
When calculating the leave-one-out cross-validation score via Pareto-smoothed importance sampling (PSIS) [2] on the dataset (after removal of the three participants and of trials with interruptions, see above), diagnostics revealed that 35 observations (1 % of the trials) had Pareto k values [2] larger than 0.7 and were flagged as "bad" or "very bad" (3,325 [98.4 %] flagged as "good", 18 [0.5 %] as "OK", 27 [0.8 %] as "bad", and 8 [0.2 %] as "very bad"). This means that these observations deviate so much from the other data, and are so influential, that they are not well predicted from the remaining data in the cross-validation, so the model comparison scores might not be reliable. We therefore removed all observations with k > 0.6 (i.e., with a slight margin to 0.7) and reran the PSIS. We repeated this procedure iteratively until (after 6 iterations) only observations remained that were at least "OK" (3,302 [99.6 %] flagged as "good", 13 [0.4 %] as "OK", none as "bad" or "very bad"). The implementation of the algorithm and the log of the procedure can be found at https://osf.io/92xkf. In total, 1.87 % of the trials entering this procedure were removed. As can be seen in Fig. S1, removals occurred in all conditions. The dataset resulting from this procedure was used in all analyses.
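The iterative trimming loop can be sketched as follows. In the sketch, `pareto_k_fn` stands in for refitting the model and computing PSIS-LOO Pareto k diagnostics (e.g., via the `loo` R package or ArviZ); the toy stand-in used in the demonstration is purely illustrative and not the study's actual diagnostic.

```python
def iterative_psis_trim(data, pareto_k_fn, remove_above=0.6, stop_below=0.7):
    """Iteratively drop observations with Pareto k > `remove_above`
    (a slight margin below the 0.7 'bad' threshold) and recompute the
    diagnostics, until all remaining observations are at least 'OK'
    (k < `stop_below`). Sketch of the procedure described in the text."""
    while True:
        ks = pareto_k_fn(data)
        if all(k < stop_below for k in ks):
            return data
        data = [x for x, k in zip(data, ks) if k <= remove_above]

def fake_pareto_k(data):
    # Toy stand-in: pretend k grows with the observation's value.
    return [x / 10 for x in data]

kept = iterative_psis_trim([1, 2, 9], fake_pareto_k)
print(kept)  # [1, 2]: the observation with k = 0.9 was removed
```

Because observations are removed at k > 0.6 but the loop only stops once all k are below 0.7, the margin between the two thresholds guards against observations hovering just under the "bad" boundary after a refit.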
Note that removing influential observations might not be a good idea in general. However, as can be seen in Fig. S1, in the present case the removed observations often do not seem to be meaningful with respect to the general strategy the foragers employed. Many of them might be caused by lapses and misclicks on the "next" button (perhaps while chasing a target that moved close to it). For instance, participants 4, 5, 15, 26, 33, and 42 each clicked the "next" button before collecting all 15 targets only once in the whole experiment (in a trial the procedure ultimately removed), whereas they collected all 15 targets in their remaining 60+ trials. Since the removed trials most likely do not meaningfully represent the patch-leaving strategies, and since they occur only rarely and across all conditions, the removals are unlikely to bias our parameter estimates. However, in future work it might be good to prevent misclicks on the "next" button, and perhaps to augment the model with a parameter for strategy lapses or to make it more robust with less conservative priors.

Model structures and priors
Rate of Return (RoR): The model structure and priors for the RoR models are listed below. All priors were chosen to cover large ranges of plausible values and, importantly, not to favour any condition. The variable y_i,j refers to the RoRs observed for each participant i in each condition j. The common part is used in both parameter estimation and model comparisons. For the parameter estimation, the mean RoRs of all conditions can vary freely and receive the same prior distribution (listed under "Parameter estimation"). For the models used in the model comparison (listed under "Model comparison (Relational/Feature-specific)"), the mean RoRs of conditions with expected low and high RoRs are defined relative (via the Δµ parameters) to the medium condition (or, for Relational, to a virtual midpoint between low and high), to constrain the models to the respective predicted patterns. The implementation of the model can be found at https://osf.io/ek5ya.
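The Δµ construction can be illustrated with a prior draw: the medium condition receives a free mean, and the low and high conditions are offset from it by non-negative Δµ parameters, which enforces the predicted rank order while leaving the size of the differences free. The distribution families and scales below are illustrative assumptions, not the study's actual priors.

```python
import random

def draw_condition_means(rng):
    """Hypothetical sketch of order-constrained condition means:
    low <= medium <= high by construction, via non-negative offsets."""
    mu_medium = rng.gauss(1.0, 0.5)        # free mean of the medium condition
    delta_low = abs(rng.gauss(0.0, 0.3))   # half-normal offset (Δµ_low)
    delta_high = abs(rng.gauss(0.0, 0.3))  # half-normal offset (Δµ_high)
    return (mu_medium - delta_low, mu_medium, mu_medium + delta_high)

rng = random.Random(1)
low, med, high = draw_condition_means(rng)
assert low <= med <= high  # the imposed rank order always holds
```

Note that a draw can produce means slightly below zero, since the offsets are applied to an unbounded normal; as discussed for Figure S3 below, such values are overruled once the posterior is conditioned on the (non-negative) data.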

Targets Left Behind (TLB):
The model structure and priors for the TLB models are listed below. All priors were chosen to cover large ranges of plausible values and, importantly, not to favour any condition. The variable y_i,j refers to the TLBs observed for each participant i in each condition j.
The general rationale of this model follows the beta-binomial model described by Albert and Hu (2019; see [3], pp. 381-385). The different versions for parameter estimation and model comparison follow the same logic as described above for the RoR. The implementation of the model can be found at https://osf.io/43xk6. Figure S2 shows visualisations of the priors on the group level. As can be seen, a wide range of RoRs is covered by the prior distributions, from values close to zero to values larger than two. Since the stimuli were moving, a rate of two items per second already reflects a high rate of return. Wolfe et al. [4], for instance, found click rates between 0.5 and 1 items/s in their foraging tasks with moving stimuli. The posteriors for the RoR obtained in the present study varied across the different conditions between about 1 and 1.2 items/s (see Figure 4 in the main text). Moreover, these parameters concern the central tendency in the group, and individuals could still deviate substantially from these means. Importantly, in the free-parameter model (Fig. S2a), all four conditions receive the same priors, whereas the priors for the relational (Fig. S2b) and feature-specific versions (Fig. S2c) reflect the rank order of the conditions imposed by these models. This results in small differences in the means that reflect the order (see white points inside the violins). Nevertheless, the priors span wide ranges (Supplementary Fig. S3 shows exemplary draws that further visualise the relationships between the conditions). Taken together, this indicates that the prior was sufficiently vague so as not to distort the posteriors.
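The spirit of a beta-binomial model for the TLB variable can be sketched as a two-stage draw: a per-participant proportion comes from a Beta distribution, and the number of targets left behind out of the 15 available is then binomial given that proportion. The Beta parameters below are illustrative placeholders, not the priors used in the study.

```python
import random

def draw_targets_left_behind(rng, n_targets=15, a=1.0, b=1.0):
    """Hypothetical beta-binomial sketch: draw a participant-level
    proportion p ~ Beta(a, b), then the count of targets left behind
    as Binomial(n_targets, p), simulated via Bernoulli draws."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(n_targets))

rng = random.Random(7)
draws = [draw_targets_left_behind(rng) for _ in range(5)]
print(draws)  # five counts, each between 0 and 15
```

The order-constrained model-comparison versions would then impose the predicted ranking on the condition-level proportions, analogous to the Δµ construction described for the RoR.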

Rate of return (RoR): Supplementary Figure S2
The lines in Figure S3 connect the points of values from the same draw. As can be seen, the combinations reside at various y-axis levels, from minimal values up to roughly two items per second. The ordering imposed by these model versions can result in differences of highly variable magnitude (whereas the direction is fixed). For instance, one of the exemplary draws in Fig. S3a (the lower of the two orange lines) results in a difference (between [rel, sim] and [nont, opp]) of more than 0.5 items/s, whereas the draw represented by the olive line (at a similar level as the orange one) is almost entirely flat, with only a very small difference between the conditions. (Note that the prior can include values slightly below zero, as the means are additive combinations of unbounded normal distributions; such values are overruled when the posterior is sampled, as there are no negative data points.) In Fig. S3b, similar examples can be found, but for the pattern associated with the feature-specific model. Given that the differences in the posterior (Figure 4a in the main text) are about 0.15 items/s, the prior (allowing substantially larger and smaller differences) does not restrict the parameters too much and does not distort the outcomes. Supplementary Figures S4 and S5 below show an assessment of the prior distributions for the "targets left behind" variable, performed in the same way as above for the rate of return and with the same implications: distributions and differences are sufficiently vague (e.g., the a priori probabilities are high for proportions anywhere between 0 and 1 in the free-parameter model). Note that the somewhat triangular distribution shapes for the relational and feature-specific model versions result from the order constraints and the fact that the variable is bounded in the 0 to 1 range.
Figure S4