Trichromatic perception of flower colour improves resource detection among New World monkeys

Many plants use colour to attract pollinators, which often possess colour vision systems well-suited for detecting flowers. Yet the role of colour is difficult to isolate, as flowers also produce other cues. The study of florivory by Neotropical primates possessing polymorphic colour vision provides an opportunity to investigate the importance of colour directly. Here we determine whether differences in colour vision within a mixed population of wild dichromatic and trichromatic white-faced capuchins (Cebus capucinus imitator) affect flower foraging behaviours. We collected reflectance data for flower foods and modelled their chromatic properties to capuchin colour vision phenotypes. We collected behavioural data over 22 months spanning four years, determined the colour vision phenotype of each monkey based on amino acid variation of the L/M opsin gene from fecal DNA, and compared foraging behaviours of dichromats and trichromats. Most flowers were more conspicuous to trichromats, and trichromats foraged in small flower patches significantly more often. These data demonstrate a difference in wild primate foraging patterns based on colour vision differences, supporting the hypothesis that trichromacy enhances detection of small, ephemeral resources. This advantage, which may also extend to other foods, likely contributes to the maintenance of colour vision polymorphism in Neotropical monkeys.


Supplemental Data S1: Just Noticeable Difference (JND) modelling
Various "noise" effects influence whether the colour difference between two objects is actually discriminable to an individual. The magnitude of the noise effects on the visual system is determined by both the physical structure of the eye and the physiological limitations of the associated neural pathways. In vertebrate visual systems, the largest contribution to noise arises because the mid- and long-wavelength-sensitive photopigments function as both chromatic and luminance signal receptors, leading to "cross-talk" and signal corruption [1-5].
Estimates of an object's chromaticity to an individual can be achieved by calculating the quantum catch of each photoreceptor present using the following formula 3:

Qi = ∫ R(λ) I(λ) Si(λ) dλ   (integrated over 400-700 nm)

This equation calculates the quantum catch (Q) of an object's reflectance R(λ) for photoreceptor "i" within the range of primate visual sensitivity (400-700 nm), based on the irradiance spectrum I(λ) of the environment and the spectral sensitivity Si(λ) of the photoreceptor being measured. Irradiance data for this study were obtained by ADM in Santa Rosa forest under shaded conditions 6.
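As a sketch, the quantum catch integral above can be approximated numerically from sampled spectra. The array names, 1 nm sampling grid, and flat/Gaussian spectra below are illustrative placeholders, not the study's measurements:

```python
import numpy as np

def quantum_catch(wavelengths, reflectance, irradiance, sensitivity):
    """Approximate Qi = integral of R(l) * I(l) * Si(l) dl over 400-700 nm
    using trapezoidal integration on the sampled wavelength grid."""
    mask = (wavelengths >= 400) & (wavelengths <= 700)
    integrand = reflectance[mask] * irradiance[mask] * sensitivity[mask]
    steps = np.diff(wavelengths[mask])
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2 * steps))

# Illustrative spectra sampled at 1 nm from 400 to 700 nm
wl = np.arange(400, 701)
R = np.full(wl.shape, 0.5)                  # hypothetical 50%-reflectance flower
I = np.ones(wl.shape)                       # flat illuminant (stand-in for irradiance data)
S = np.exp(-0.5 * ((wl - 560) / 40) ** 2)   # Gaussian stand-in for an L-cone sensitivity curve

Q_L = quantum_catch(wl, R, I, S)
```

Because the integrand is linear in reflectance, doubling R(λ) doubles the resulting quantum catch, which provides a quick sanity check on any implementation.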

Just Noticeable Difference (JND) Modelling
To quantify how visible a flower was to each colour vision phenotype, we used Just Noticeable Difference modelling. The minimum chromatic distance at which two objects can be differentiated as discernibly different colours is defined as one just noticeable difference (JND 3 ).
Objects with higher JND scores are presumed to remain visible under less ideal conditions. To approximate a long-distance detection scenario where multiple species would be in the field of view, we calculated a mean chromaticity value for upper and lower leaf surfaces across all species, and then calculated the chromatic distance between a flower part and these mean values. In general, the leaves of all species have similar reflectance properties (i.e. all leaves in this study are green), and our results do not differ if we assess flowers against the leaves of each species separately. A flower part was defined as detectable if it was at least 1 JND more visible than leaves for a modelled phenotype, and one phenotype was considered to have a detection advantage over another for a given flower part if the flower was >1 JND more visible to it.

JND scores are determined by the minimum chromatic distance (ΔS) between a target object (flowers in this instance) and its background (leaves) required for differentiation. With only one chromatic pathway, the minimum chromatic distance for dichromats is calculated using:

ΔS = |ΔfS − ΔfL| / √(ωS² + ωL²)

Trichromats have three different photopigments, therefore their chromatic distance is calculated using:

ΔS = √[ (ωS²(ΔfL − ΔfM)² + ωM²(ΔfL − ΔfS)² + ωL²(ΔfM − ΔfS)²) / ((ωSωM)² + (ωSωL)² + (ωMωL)²) ]

In both equations, ωi represents the Weber fraction (which acts as the "noise" term) associated with the corresponding human photopigment (S: 0.08, M: 0.02, L: 0.02), and Δfi is the difference between the natural logs of the quantum catches for the target object and the background for photopigment i (e.g., ΔfL = ln(QL(FLOWER)) − ln(QL(LEAVES))).
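The dichromat and trichromat chromatic-distance calculations described above can be sketched directly from the stated Weber fractions. This is an illustrative implementation, not the study's code; the function and variable names are ours:

```python
import math

# Weber fractions given in the text (S: 0.08, M: 0.02, L: 0.02)
W = {"S": 0.08, "M": 0.02, "L": 0.02}

def delta_f(q_target, q_background):
    """Difference of natural logs of quantum catches for one photopigment."""
    return math.log(q_target) - math.log(q_background)

def jnd_dichromat(df_s, df_l, w_s=W["S"], w_l=W["L"]):
    """Receptor-noise chromatic distance for a dichromat (S plus one M/L pigment)."""
    return abs(df_s - df_l) / math.sqrt(w_s**2 + w_l**2)

def jnd_trichromat(df_s, df_m, df_l, w_s=W["S"], w_m=W["M"], w_l=W["L"]):
    """Receptor-noise chromatic distance for a trichromat."""
    num = (w_s**2 * (df_l - df_m)**2
           + w_m**2 * (df_l - df_s)**2
           + w_l**2 * (df_m - df_s)**2)
    den = (w_s * w_m)**2 + (w_s * w_l)**2 + (w_m * w_l)**2
    return math.sqrt(num / den)
```

Note that when flower and leaves stimulate every photopigment identically (all Δfi equal), both expressions collapse to ΔS = 0, i.e. the flower is chromatically indistinguishable from its background.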
Supplemental Data S2: Support Vector Machine (SVM) modelling
We used Support Vector Machine (SVM) modelling, a supervised machine learning classification algorithm 7 , to predict for which flower species trichromats may have an advantage over dichromats in long-distance detection against leafy backgrounds. Once the quantum catch of each photoreceptor present in a phenotype was known (obtained during Just Noticeable Difference modelling, see Supplemental Data S1), we calculated how an object stimulated the lightness (luminance) vision pathway (QL+QM), the blue-yellow pathway (QS/(QL+QM)), and, for trichromats, the red-green pathway (QL/(QL+QM)).
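These opponent-channel values can be sketched as a small feature-extraction step. The function below is illustrative (shown for the trichromat case; a dichromat model would supply its single M/L catch and omit the red-green term):

```python
def chromaticity_features(q_s, q_m, q_l, trichromat=True):
    """Opponent-channel values used as classifier inputs:
    luminance (QL+QM), blue-yellow (QS/(QL+QM)) and, for trichromats,
    red-green (QL/(QL+QM)). Names and structure are illustrative."""
    lum = q_l + q_m
    features = {"luminance": lum, "blue_yellow": q_s / lum}
    if trichromat:
        features["red_green"] = q_l / lum
    return features
```

For example, quantum catches (QS, QM, QL) = (1, 2, 3) give a luminance of 5, a blue-yellow value of 0.2, and a red-green value of 0.6.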
In our models the spectral sensitivity functions of the three cones were obtained by normalizing the functions for corneal sensitivity. The quantum catches of the three receptors under a given illumination were all set to 1 for the light emitted from a hypothetical white surface reflecting 100% of the illumination light. This normalization was carried out based on the assumption of colour constancy, under which the monkeys' eyes would adapt to the varying spectral compositions of illumination so that this surface would appear white 8 .
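This white-surface normalization amounts to a von Kries-style scaling of each receptor's catch; a minimal sketch, with illustrative names and values:

```python
def normalize_to_white(q_object, q_white):
    """Scale each receptor's quantum catch by the catch from a hypothetical
    100%-reflectance white surface, so that surface yields a catch of 1 in
    every receptor (colour-constancy assumption)."""
    return {cone: q_object[cone] / q_white[cone] for cone in q_object}
```

By construction, passing the white-surface catches through this function returns 1 for every cone regardless of the illuminant's spectral composition, which is exactly the adaptation behaviour assumed in the text.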
These quantum catch values are then used with the LIBSVM 9 extension of MATLAB (v.2014b). The SVM creates a hyperplane using leave-one-out methodology: the hyperplane maximizes the chromatic distance between leaf and flower data points using the chromaticity values of all samples (leaf and flower parts) except one, which the model then attempts to classify as a flower or a leaf based on the generated hyperplane. We repeated this for every plant part, for every colour vision phenotype, and determined how successful a given phenotype was at classifying flowers correctly. Because dichromats may use achromatic visual pathways to improve foraging efficiency 6 , we ran each model separately with lightness included to determine if this improved dichromat success.
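The study ran this procedure in LIBSVM for MATLAB; as an illustrative analogue, the same leave-one-out classification can be sketched in Python with scikit-learn's SVC. The data here are synthetic two-feature clusters, not the study's chromaticity measurements, and the linear kernel is our choice for the sketch:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Synthetic chromaticity features [blue-yellow, red-green] for leaves and flowers
leaves = rng.normal(loc=[0.3, 0.5], scale=0.02, size=(20, 2))
flowers = rng.normal(loc=[0.5, 0.7], scale=0.02, size=(20, 2))
X = np.vstack([leaves, flowers])
y = np.array([0] * 20 + [1] * 20)  # 0 = leaf, 1 = flower

# Leave-one-out: fit on all samples but one, classify the held-out sample
correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

accuracy = correct / len(y)
```

The per-phenotype success rates reported in S2 Table 1 correspond to this `accuracy` value computed over all flower parts for a given set of input channels.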
All Support Vector Machine (SVM) models using trichromat phenotypes were able to correctly categorize a majority of flowers, whereas most flowers were incorrectly categorized in dichromatic models (S2 Figure 1, S2 Table 1). Adding lightness to the SVM models increased the success of dichromats and slightly decreased trichromat success. The majority of flower parts (12/21) had luminance values that overlapped with sampled leaves (S2 Figures 2 and 3).

S2 Table 1. The success of Support Vector Machine (SVM) analysis at categorizing a given flower part for each colour vision phenotype, when lightness is included (white columns) and excluded (shaded columns). "Middles" is a category used to denote the middle, non-petal structures of a flower, typically the reproductive organs. "Small patch" species are shaded in blue.

[Per-species rows of S2 Table 1 were lost in extraction; only the summary rows survive.]
TOTAL CORRECT: 1, 3, 0, 8, 0, 11, 17, 13, 16, 15, 15, 14
PERCENTAGE CORRECT: 5%, 14%, 0%, 38%, 0%, 52%, 81%, 62%, 76%, 71%, 71%, 67%