Introduction

Recent years have seen a renaissance in machine learning and machine vision, led by neural network algorithms that now achieve impressive performance on a variety of challenging object recognition and image understanding tasks1,2,3. Despite this rapid progress, the performance of machine vision algorithms continues to trail humans in many key domains, and tasks that require operating with limited training data or in highly cluttered scenes are particularly difficult for current algorithms4,5,6,7. Moreover, the patterns of errors made by today’s algorithms differ dramatically from those of humans performing the same tasks8,9, and current algorithms can be “fooled” by subtly altering images in ways that are imperceptible to humans, but which lead to arbitrary misclassifications of objects10,11,12. Thus, even when algorithms do well on a particular task, they do so in a way that differs from how humans do it and that is arguably more brittle.

The human brain is a natural frame of reference for machine learning, because it has evolved to operate with extraordinary efficiency and accuracy in complicated and ambiguous environments. Indeed, today’s best algorithms for learning structure in data are artificial neural networks13,14,15, and strategies for decision making that incorporate cognitive models of Bayesian reasoning16 and exemplar learning17 are prevalent. There is also growing overlap between machine learning and the fields of neuroscience and psychology: In one direction, learning algorithms are used for fMRI decoding18,19,20,21, neural response prediction22,23,24,25,26, and hierarchical modeling27,28,29. Concurrently, machine learning algorithms are also leveraging biological concepts like working memory30, experience replay31, and attention32,33 and are being encouraged to borrow more insights from the inner workings of the human brain34. Here we propose an even more direct connection between these fields: we ask if we can improve machine learning algorithms by explicitly guiding their training with measurements of brain activity, with the goal of making the algorithms more human-like.

Our strategy is to bias the solution of a machine learning algorithm so that it more closely matches the internal representations found in visual cortex. Previous studies have constrained learned models via human behavior8,35, and one work introduced a method to determine a mapping from images to “brain-like” features extracted from EEG recordings36. Furthermore, recent advances in machine learning have focused on improving the feature representations of different kinds of data, often in a biologically-consistent way26. However, no study to date has taken advantage of measurements of brain activity to guide the decision making process of machine learning. While our understanding of human cognition and decision making is still limited, we describe a method with which we can leverage the human brain’s robust representations to guide a machine learning algorithm’s decision boundary. Our approach weights how much an algorithm learns from each training exemplar, roughly based on the “ease” with which the human brain appears to recognize the example as a member of a class (i.e., an image in a given object category). This work builds on previous machine learning approaches that weight training examples8,37, but here we propose to do such weighting using a separate stream of data derived from human brain activity.

Below, we describe our biologically-informed machine learning paradigm in detail, outline an implementation of the technique, and present results that demonstrate its potential to learn more accurate, biologically-consistent decision-making boundaries. We trained supervised classification models for four visual object categories (i.e., humans, animals, buildings, foods), weighting individual training images by values derived from fMRI recordings of human visual cortex made while a subject viewed those same images; once trained, these models classify images without the benefit of neural data.

Our “neurally-weighted” models were trained on two kinds of image features: (1) histogram of oriented gradients (HOG) features38 and (2) convolutional neural network (CNN) features (i.e., 1000-dimensional, pre-softmax activations from AlexNet13 pre-trained on the ImageNet dataset1). HOG features were the standard, off-the-shelf image feature representation before the 2012 advent of powerful CNNs1, while CNNs pre-trained on large datasets like ImageNet are known to be strong, general image features that transfer well to other tasks39,40. While machine vision research has largely focused on improving feature representation in order to make gains on various challenging visual tasks, a complementary approach, under which our paradigm falls, is to improve the decision making process. Thus, we hypothesized that our decision boundary-biasing paradigm would yield larger gains when coupled with the weaker HOG features, thereby making HOG features more competitive with the stronger CNN features.

Finally, these models were evaluated for improvements over baseline performance and analyzed to determine which regions of interest (ROIs) in the brain had the greatest impact on performance.

Results

Visual cortical fMRI data were taken from a previous study conducted by the Gallant lab at Berkeley41. One adult subject viewed 1,386 color 500 × 500 pixel images of natural scenes while being scanned in a 3.0 Tesla (3T) magnetic resonance imaging (MRI) machine. After fMRI data preprocessing, response amplitude values for 67,600 voxels were available for each image. From this set of voxels, 3,569 were labeled as being part of one of thirteen visual ROIs, including those in the early visual cortex. Seven of these regions were associated with higher-level visual processing: the extrastriate body area (EBA), fusiform face area (FFA), lateral occipital cortex (LO), occipital face area (OFA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS). Only these seven higher-level ROIs, comprising 1,427 voxels, were used in the object category classification tasks, which probe the semantic understanding of visual information; earlier regions typically capture low- to mid-level features such as edges in V142,43 and colors and shapes in V444,45.

In machine learning, loss functions are used to assign penalties for misclassifying data; the objective of the algorithm is then to minimize loss. Typically, a hinge loss (HL) function (Eq. 1) is used for classic maximum-margin binary classifiers such as Support Vector Machine (SVM) models46:

$$\varphi_h(z)=\max(0,\,1-z)$$
(1)

where \(z=y\cdot f(x)\), \(y\in \{-1,+1\}\) is the true label, and \(f(x)\in {\mathbb{R}}\) is the predicted output; thus, \(z\) denotes the correctness of a prediction. The HL function assigns a penalty to all misclassified data that is proportional to how erroneous the prediction is.

However, incorporating brain data into a machine learning model relies on the assumption that the intensity of a pattern of activation in a region represents the neuronal response to a visual stimulus. A strong response signals that a stimulus is more associated with a particular visual area, while a weaker response indicates that the stimulus is less associated with it47. Here, the proposed activity weighted loss (AWL) function (Eq. 2) embodies this strategy by proportionally penalizing misclassified training samples based on any inconsistency with the evidence of human decision making found in the fMRI measurements, in addition to using the typical HL penalty:

$$\varphi_\psi(x,z)=\max(0,\,(1-z)\cdot M(x,z))$$
(2)

where

$$M(x,z)=\begin{cases}1+c_x, & \text{if } z<1\\ 1, & \text{otherwise}\end{cases}$$
(3)

and \(c_x \ge 0\) is an activity weight derived from fMRI data corresponding to \(x\).
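As a concrete reading of Eqs 1–3, the following minimal Python sketch (ours, not from the study’s implementation; function and variable names are illustrative) evaluates the hinge loss and the activity weighted loss for a single training sample:

```python
def hinge_loss(z):
    """Standard hinge loss (Eq. 1); z = y * f(x) is the correctness of the prediction."""
    return max(0.0, 1.0 - z)

def activity_weighted_loss(z, c_x):
    """Activity weighted loss (Eqs 2 and 3).

    z   : correctness of the prediction, z = y * f(x)
    c_x : activity weight (c_x >= 0) derived from fMRI data for sample x;
          c_x = 0 recovers the ordinary hinge loss.
    """
    m = 1.0 + c_x if z < 1 else 1.0   # Eq. 3: scale the penalty only for margin violations
    return max(0.0, (1.0 - z) * m)    # Eq. 2
```

For example, a misclassified sample with z = −0.5 and c_x = 0.8 incurs an AWL penalty of 1.5 × 1.8 = 2.7 rather than the HL penalty of 1.5, while a sample with c_x = 0 is penalized exactly as under HL.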

In its general form, with an unspecified method of generating activity weights, AWL penalizes the misclassification of stimuli \(x\) with large activity weights \(c_x\) more aggressively. The proposed paradigm involves training a binary SVM classifier on fMRI voxel activity and using Platt probability scores as activity weights. With this method, a large activity weight \(c_x\) denotes that the fMRI activity corresponding to visual stimulus \(x\) is predicted with high confidence to be a positive example for a given binary classification task. There are several possible explanations of what a large activity weight \(c_x\) connotes about visual stimulus \(x\): (1) the stimulus corresponds well to a canonical neural response pattern for the positive class, and (2) the highly confident prediction suggests that more upstream parts of the visual cortex would recognize the corresponding image with ease. The second explanation is difficult to test without further data on human recognition quality. However, it was qualitatively observed that visual stimuli with large activity weights were clear positive examples (i.e., a single, dominant object of the class of interest in the image, like the one in the bottom right of Panel C in Fig. 1), providing evidence for the first explanation.

Figure 1

Experimental workflow for biologically-informed machine learning using fMRI data. (A) fMRI was used to record BOLD voxel responses of one subject viewing 1,386 color images of natural scenes, providing labelled voxels in several conventional, functional ROIs (i.e., EBA, FFA, LO, OFA, PPA, RSC, and TOS)41. (B) For a given binary object category classification task (e.g., whether a stimulus contains an animal), the visual stimuli and voxel activity data were split into training and test sets (not shown). An SVM classifier was trained and tested on voxel activity alone. (C) To generate activity weights, classification scores, which roughly correspond to the distance of a sample from the decision boundary in (B), were transformed into a probability value via a logistic function48. (D,E) The effects of using activity weights were assessed by training and testing two classification models on image features of the visual stimuli: (D) One SVM classifier used a loss function (i.e., hinge loss [HL]) that equally weights the misclassification of all samples as a function of distance from the SVM’s own decision boundary. (E) Another SVM classifier used a modified loss function (i.e., activity weighted loss [AWL]) that penalizes more aggressively the misclassification of samples with large activity weights. In training, these classifiers in (D) and (E) only had access to activity weights generated in (C); in testing, the classifiers used no neural data and made predictions based on image features alone. (Images used in this figure are from69 and are freely available under the CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/).

With this formulation, not all training samples require an fMRI-generated activity weight: setting \(c_x = 0\) reduces the AWL function to the HL function and can be assigned to samples for which fMRI data are unavailable. AWL is inspired by previous work8, which introduced a loss function that additively scaled the misclassification penalty by information derived from behavioral data. AWL replaces the standard HL function (Eq. 1) in the objective of the SVM algorithm, which, in its original form, has access only to a feature vector and a class label for each training sample.

Experiments were conducted for all 127 ways (i.e., the \(2^7-1\) non-empty combinations) that the seven higher-level visual cortical regions could be combined. In each experiment, for a given combination of ROIs and a given object category, the following two-phase procedure was carried out (Fig. 1):

1. Generate activity weights \(\{c_{x_i}\}\) by calibrating the scores of a Radial Basis Function (RBF) kernel SVM binary classifier, \(f_I:X_I\to[0,1]\), trained on the training voxel data for the combination, into probabilities via a logistic transformation48 (Fig. 1A–C). Here, \(x_{I(i)}\in X_I\) is a vector containing the response amplitudes of all the voxels in a given ROI combination recorded while the subject viewed image \(x_i\), and the resulting probability \(c_{x_i}=f_I(x_{I(i)})\) expresses how likely it is that the voxel activity was recorded while the subject was viewing an image in a given object category, e.g., humans. fMRI-based activity weights were only generated for voxel activity associated with images that were clear positive and negative examples of a given class. For all other examples (e.g., images that contained multiple classes, such as a person with a pet animal), \(c_{x_i}=0\).

2. Create five balanced classification problems (Fig. S1). For each balanced problem and each set of image descriptors, train and test two binary RBF-kernel SVM classification models, \(f_{II}:X_{II}\to\{-1,+1\}\): one that used the HL function and another that used the AWL function conditioned on the activity weights \(\{c_{x_i}\}\) from the first step (Fig. 1D and E). Two image features were considered: HOG is a handcrafted feature that is approximately V1-like38,49, while CNNs are learned feature representations that approximate several additional layers of the ventral stream14,22. Here, \(x_{II(i)}\in X_{II}\) is a vector containing either the HOG or the CNN features for image \(x_i\). (A compact code sketch of this two-phase procedure is given below.)
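The following sketch illustrates the two-phase procedure on synthetic stand-in data, using scikit-learn’s RBF SVM in place of LibSVM. All array names and shapes are illustrative, and per-sample weights of 1 + c_x are used as a convenient stand-in for AWL, since for z < 1 the AWL penalty equals (1 + c_x) times the ordinary hinge penalty (the study itself used a modified loss implementation; see Methods):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for the real data (shapes and names are illustrative only).
rng = np.random.default_rng(0)
n_train, n_test = 200, 50
voxels_train = rng.normal(size=(n_train, 1427))   # ROI voxel amplitudes for the training stimuli
X_train = rng.normal(size=(n_train, 1000))        # HOG or CNN features, training stimuli
X_test = rng.normal(size=(n_test, 1000))          # image features, held-out test stimuli
y_train = rng.choice([-1, 1], size=n_train)       # binary object-category labels
y_test = rng.choice([-1, 1], size=n_test)

# Phase 1: train an RBF SVM on voxel activity and convert its scores into Platt
# probabilities, which serve as activity weights c_x in [0, 1]. (The study generated
# weights fold-wise and only for "clear" samples; all samples are weighted here for brevity.)
voxel_clf = SVC(kernel="rbf", probability=True).fit(voxels_train, y_train)
c = voxel_clf.predict_proba(voxels_train)[:, 1]

# Phase 2: train image classifiers with and without activity weights
# (C = 1 and gamma = 1 / n_features, as in the Methods).
hl_clf = SVC(kernel="rbf", C=1.0, gamma="auto").fit(X_train, y_train)
awl_clf = SVC(kernel="rbf", C=1.0, gamma="auto").fit(
    X_train, y_train, sample_weight=1.0 + c)      # AWL via per-sample hinge penalties

# At test time, neither classifier sees any neural data.
print("HL accuracy :", hl_clf.score(X_test, y_test))
print("AWL accuracy:", awl_clf.score(X_test, y_test))
```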

Experiments were performed for four object categories: humans, animals, buildings, and foods; see Methods for more details.

We demonstrate that using activity weights derived from all of the higher visual cortical regions significantly improves classification accuracy across all four object categories (paired, one-tailed t-tests; Fig. 2A and B). A substantial amount of fMRI decoding literature focuses on three ROIs: EBA, FFA, and PPA50,51,52. This is in part because these three regions are thought to respond to visual cues of interest for the study of object recognition: body parts, faces, and places, respectively. Given the overlap between these visual cues and the four object categories used, we hypothesized that activity weights derived from brain activity in these three regions would significantly improve classification accuracy for the humans, animals, and buildings categories only in instances where a response would be expected. For example, PPA was expected to improve the buildings category but to have little, if any, effect on the humans category (Fig. 2E). A comparison of models that used activity weights based on brain activity from these three regions and models that used no activity weights aligns well with the neuroimaging literature (Fig. 2C–E). Classification accuracy significantly improved not only when activity weights were derived from voxels in all seven ROIs or from voxels in the individual EBA, FFA, and PPA regions, but also when activity weights were derived from voxels in most of the 127 ROI combinations (Fig. S2). We observed that adding fMRI-derived activity weights provided a larger improvement to models using HOG features than to those using CNN features (Fig. 2A and B). These results suggest that improvements in decision making (e.g., the use of salient activity weights based on brain activity) may be able to compensate for poor feature representation (e.g., HOG features). They also imply that some of the information carried by activity weights may already be latently captured in CNN features. Despite their relatively smaller performance gains, activity weighted classifiers for CNN features still demonstrate that the state-of-the-art representation, which is often praised as being inspired by the mammalian ventral stream, does not fully capture all the salient information embedded in internal representations of objects in the human brain.

Figure 2

Results showing the effect of conditioning classification models for four visual classes on fMRI data during supervised training (Fig. 1). (A,B) Side-by-side comparisons of the mean classification accuracy between models that were trained using either (A) HOG features or (B) CNN features and either a hinge loss (HL) or activity weighted loss (AWL) function. These graphs show results of experiments that generated activity weights by using voxels from all seven higher-level visual ROIs (i.e., EBA, FFA, LO, OFA, PPA, RSC, and TOS). For each object category and choice of image features, the models trained using AWL were significantly better (p < 0.01 via paired, one-tailed t-testing). While using AWL reduces misclassification error for both feature types, it particularly improves the performance of handcrafted HOG features. (C–E) Mean error reductions gained by switching from HL to AWL when conditioning classifiers on brain activity from individual ROIs (i.e., EBA, FFA, or PPA) show that certain areas produce significantly better results for the specific categories they are selective for. Error bars are standard error over 20 trials in all cases.

Additionally, statistical analysis by permutation was carried out to test whether the above-average mean accuracy rates observed in classification experiments for the humans and animals categories that included EBA, as well as in the experiments for the buildings and foods categories that included PPA, were statistically significant or products of random chance. For each object category and set of image features, a null distribution with 1,000,000 samples was generated. Each sample in the null distribution reflects the percentage of a random set of 64 ROI combinations whose mean classification accuracy (i.e., averaged over 20 trials, 20 = 4 partitions × 5 balanced problems) is greater than the grand mean of all 127 mean classification accuracies. The aim is to test the significance of individual ROIs in generating salient activity weights that yield above-average classification accuracy rates; the samples therefore simulate randomly assigning ROI labels to the 127 combinations. If individual ROIs did not significantly contribute to the above-average mean accuracy rates observed, then the percentage of above-average combinations containing any specific ROI should fall near the mean of the null distribution. To generate each of the 1,000,000 samples, 64 of the 127 ROI combinations were randomly selected. A count was then taken of how many of those 64 randomly selected combinations have a mean classification accuracy greater than the average over all 127 sets of experiments corresponding to the 127 total ROI combinations, and the sample was normalized to a percentage by dividing this count by 64. Finally, for a given ROI (e.g., EBA), the actual percentage of the 64 combinations containing that ROI whose mean classification accuracy exceeds the overall mean for all 127 combinations was compared against the null distribution.
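A minimal sketch of this null-distribution construction, assuming the 127 per-combination mean accuracies are available as an array (names are ours, and the default number of null samples is reduced for speed; the study used 1,000,000):

```python
import numpy as np

def roi_permutation_test(acc, include_mask, n_samples=100_000, seed=0):
    """Permutation test for the influence of one ROI.

    acc          : (127,) mean classification accuracy of each ROI combination
    include_mask : (127,) boolean, True for the 64 combinations containing the ROI of interest
    Returns the observed statistic and the null distribution of that statistic.
    """
    rng = np.random.default_rng(seed)
    above_avg = acc > acc.mean()                   # combinations beating the grand mean
    observed = above_avg[include_mask].mean()      # fraction of the ROI's combinations above average

    null = np.empty(n_samples)
    for i in range(n_samples):
        random_subset = rng.choice(acc.size, size=include_mask.sum(), replace=False)
        null[i] = above_avg[random_subset].mean()  # same statistic for a random set of 64 combinations
    return observed, null
```

A one-tailed p-value for an ROI can then be read off as the fraction of null samples at least as large as the observed percentage.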

When using HOG features to train activity weighted loss SVMs to classify humans, 98.44% of the 64 combinations that include EBA yielded above-average accuracies, far exceeding the null distribution of percentages expected if EBA had no significant effect on classification accuracy. Figures S4 and S5 and the accompanying supplementary text further detail how a null sample is generated. Figures 3 and S3 show which ROIs significantly differed from the respective null distributions for each object category. This analysis more rigorously confirms the significance of the EBA region in improving the classification accuracy of the humans and animals categories and of the PPA region in improving the accuracy of the buildings and foods categories. Most notably, the EBA region dramatically exceeds the significance thresholds of the null distributions for humans and animals.

Figure 3

Statistical influence of each ROI in binary object classification models using fMRI activity weighted loss (AWL) and HOG features. In each graph, the percentage of the 64 ROI combinations containing a specific ROI that had a mean classification accuracy greater than that of all 127 sets of experiments is plotted. The threshold for the 95% confidence interval (p < 0.0004) is overlaid, showing which ROIs significantly differed from the respective null distribution for each object category. Permutation tests and Bonferroni correction (m = 127) were used. See Figure S3 for a similar plot for CNN features and Figures S4 and S5 for an explanation of how the null distribution was sampled.

Discussion

Our results provide strong evidence that information measured directly from the human brain can help a machine learning algorithm make better, more human-like decisions. As such, this work adds to a growing body of literature suggesting that it is possible to leverage additional “side-stream” data sources to improve machine learning algorithms8,53. However, while measures of human behavior have been used extensively to guide machine learning via active learning54,55,56,57, structured domain knowledge58, and discriminative feature identification59, this study suggests that one can also harness measures of the internal representations employed by the brain. We argue that this approach opens a new wealth of opportunities for fine-grained interaction between machine learning and neuroscience.

While this work focused on visual object recognition and fMRI data, the framework described here need not be specific to any one sensory modality, neuroimaging technique, or supervised machine learning algorithm. Indeed, the approach can be applied generally to any sensory modality, and could even potentially be used to study multisensory integration, with appropriate data collection. Similarly, while fMRI has the advantage of measuring patterns of activity over large regions of the brain, one could also imagine applying our paradigm to neural data collected using other imaging methods in animals, including techniques that allow single-cell resolution over cortical populations, such as two-photon imaging60. Such approaches may allow more fine-grained constraints to be placed on machine learning, albeit at the expense of integrating data from smaller fractions of the brain.

There are several limitations to this first instantiation of the paradigm. First, we derived a scalar activity weight from high-dimensional fMRI voxel activity. This simple method yielded impressive performance gains and corresponded well to the notion of ease of recognition; however, a great deal of meaningful information captured by the human brain is inevitably being ignored. Future biologically-informed machine learning research should focus on the development and infusion of low-dimensional (rather than scalar) activity weights, which may not only preserve more useful data but also reveal other dimensions that are important for various tasks, yet are not learned by machine learning algorithms or captured in traditional datasets.

Another constraint on our specific experimental set-up was the limited amount and distribution of our data (N = 1,260 images), which restricted us to considering broad object categories instead of fine-grained ones. It remains to be seen whether our paradigm would similarly bolster machine learning algorithms tasked to discriminate among fine-grained classes that are less clearly distinguished in the visual cortex (e.g., furniture, tools, sports equipment). Given how robustly humans can distinguish among numerous fine-grained categories that do not necessarily have dedicated visual processing regions, such as EBA for human body parts, we hypothesize that using brain activity from all higher-level ROIs (Fig. 2A and B) would yield similar improvements in performance for fine-grained classification tasks. However, we suspect that activity from a single higher-level ROI, such as EBA, FFA, or PPA, will not confer significant improvements and that no single higher-level ROI will be substantially influential (Fig. 2C–E); rather, the aggregate semantic information encoded and distributed throughout all higher-level ROIs will be responsible for any observed benefits of biologically-informed training on fine-grained classification tasks.

Furthermore, while we demonstrated our biologically-informed paradigm using support vector machines, there is also flexibility in the choice of the learning algorithm itself. Our method can be applied to any learning algorithm with a loss formulation as well as extended to other tasks in regression and Bayesian inference. An analysis of different algorithms and their baseline and activity weighted performance could elucidate which algorithms are relatively better at capturing salient information encoded in the internal representations of the human brain24.

Our paradigm currently requires access to biological data during training time that corresponds to the input data for a given task. For instance, in this work, we used fMRI recordings of human subjects viewing images to guide learning of object categories. Extending this work to new problem domains will require specific data from those problem domains, and this will in turn require either increased sharing of raw neuroimaging data, or close collaboration between neuroscientists and machine learning researchers. While this investigation used preexisting data to inform a decision boundary, one could imagine even more targeted collaborations between neuroscientists and machine learning researchers that tailor data collection to the needs of the machine learning algorithm. We argue that approaches such as ours provide a framework of common ground for such collaborations, from which both fields stand to benefit.

Methods

fMRI Data Acquisition

The fMRI data used for the machine learning experiments presented in this paper are a subset of the overall data from a published study on scene categorization41. All fMRI data were collected using a 3T Siemens Tim Trio MR scanner. For the functional data collection for one subject, a gradient-echo planar imaging sequence, combined with a custom fat saturation RF pulse, was used. Twenty-five axial slices covered occipital, occipitoparietal, and occipitotemporal cortex. Each slice had a 234 × 234 mm2 field of view, 2.60 mm slice thickness, and 0.39 mm slice gap (matrix size = 104 × 104; TR = 2,009.9 ms; TE = 35 ms; flip angle = 74°; voxel size = 2.25 × 2.25 × 2.99 mm3). While the original study41 recorded fMRI activity for four subjects, in this work we use brain activity from only one of them. The experimental protocol was approved by the UC Berkeley Committee for the Protection of Human Subjects. All methods were performed in accordance with the relevant guidelines and regulations.

Data Set

The data set consisted of 1,386 color images of natural scenes (500 × 500 pixels) from previous studies41,61, which the subject viewed while his brain activity was being recorded (see41 for more details on the dataset). These images were used both as stimuli for the fMRI data collection and as training data for the machine learning experiments. Within this collection, the training set consisted of 1,260 images and the testing set of 126 images. Per-pixel object labels in the form of object outlines and semantically meaningful tags were available for each image. A subset of these labels was mapped to one of five object categories: humans, animals, buildings, foods, and vehicles. For each image and for each of the five object categories, if at least 20% of an image’s original pixel labels were part of a given object category, that image was tentatively labeled as a positive sample for that category. We sampled 646 images that were labelled with a single object category: 219 humans images, 180 animals images, 151 buildings images, 59 foods images, and 37 vehicles images (a category that, due to its small size, only contributed negative examples).

fMRI Data Preprocessing

To perform motion correction, coregistration, and reslicing of functional images, the SPM8 package62 was used. Custom MATLAB63 software was used to perform all other preprocessing of functional data. To constrain the dimensionality of the fMRI data, the time series recordings for each voxel were reduced to a single response amplitude per voxel, per image, by deconvolving each time course from the stimulus design matrix using a hemodynamic response function64. See Stansbury et al.41 for additional details about the fMRI data acquisition and preparation that are not directly related to the machine learning experiments we describe in this work.

fMRI Activity Weight Calculation

All of the fMRI data were scaled to bring the value of each dimension within the range [0, 1] for RBF SVM training. For each voxel, we calculated the minimum and maximum response amplitude across all 1,260 original training samples. All voxels for the 646 images used in our experiments were then scaled using Equation 4, where \(x_{ij}\) is the j-th sample’s response amplitude for voxel i, \({\overrightarrow{x}}_{i}\) is a 646-dimensional vector with the response amplitudes of all samples for voxel i, and \(x'_{ij}\) is the j-th sample’s rescaled amplitude for voxel i.

$$x'_{ij}=\frac{x_{ij}-\min({\overrightarrow{x}}_{i})}{\max({\overrightarrow{x}}_{i})-\min({\overrightarrow{x}}_{i})}$$
(4)

The main challenge of generating weights from brain activity (i.e., activity weights) lies in reducing high-dimensional, nonlinear data to a salient, lower-dimensional signal of “learnability”. The supervised machine learning formulation used in this work requires a single real-valued weight per training sample for the loss function (described below). Activity weights were computed by using a logistic transformation48 to calibrate the scores from SVMs with RBF kernels trained on brain activity. For each object category and for all voxels from a given combination of ROIs, we made use of all the positive samples for that object category as well as all the samples that are negative for all object categories; together, these constitute the aforementioned 646 samples (i.e., the clear sample set). Activity weights were generated only for this subset of a partition’s training set, as opposed to for all 1,386 stimuli. This constraint maximized the signal-to-noise ratio in the activity weights and, by weighting only clear positive and negative samples, improved the saliency of the activity weights for a specific object category.

Activity weights for training were generated using a modification of the k-fold cross validation technique. For a given training set (a subset of the full set of 1,386 stimuli), the collection of voxel data for the training set’s images in the 646-stimuli clear sample set was randomly split into five folds. For each of these folds, we held out the current fold as test data and combined the other four folds as training data. With this newly formed training set, a grid search was performed to tune the soft margin penalty parameter C and RBF parameter γ for an RBF SVM classifier using the LibSVM package65. Finally, activity weights were generated by testing the classifier on the held-out fold to produce Platt probability scores48 of class inclusion. This process was repeated for all five folds to generate activity weights for the collection of stimuli in the training set that are part of the clear sample set.
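A sketch of this weight-generation step, combining the per-voxel rescaling of Equation 4 with the fold-wise Platt probability estimates, is shown below using scikit-learn as a stand-in for LibSVM; the parameter grid is illustrative rather than the one used in the study:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

def scale_voxels(V):
    """Per-voxel min-max rescaling to [0, 1] (Eq. 4); V has shape (n_samples, n_voxels)."""
    vmin, vmax = V.min(axis=0), V.max(axis=0)
    return (V - vmin) / (vmax - vmin)

def activity_weights(V, y, n_folds=5, seed=0):
    """Held-out Platt probabilities of class inclusion, used as activity weights.

    V : (n_samples, n_voxels) scaled voxel responses for the clear sample set
    y : (n_samples,) binary labels (+1 for the object category, -1 otherwise)
    """
    weights = np.zeros(len(y))
    grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}   # illustrative grid
    for train_idx, test_idx in KFold(n_folds, shuffle=True, random_state=seed).split(V):
        # Tune C and gamma on the four training folds, as described above
        search = GridSearchCV(SVC(kernel="rbf", probability=True), grid, cv=3)
        search.fit(V[train_idx], y[train_idx])
        # Platt probability scores for the held-out fold become the activity weights
        weights[test_idx] = search.predict_proba(V[test_idx])[:, 1]
    return weights
```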

Experimental Design

Each of the original 500 × 500 color images was downsampled to a 250 × 250 grayscale image with pixel values in the interval [0, 1]. A layer of Gaussian noise with a mean of 0 and variance of 0.01 was added to each of these images. For each image, two feature descriptor types were independently generated. Histogram of Oriented Gradients (HOG) descriptors with a cell size of 32 were generated using the VLFeat library’s vl_hog function49, which computes UoCTTI HOG features66. Convolutional neural network (CNN) features were generated using the Caffe library’s BVLC Reference CaffeNet model14, which is AlexNet trained on ILSVRC 20121, with minor differences from the version described by Krizhevsky et al.13. 1000-dimensional pre-softmax activations from CaffeNet were used as the CNN image features. Four partitions of training and test data were created. In each partition, 80% of the data was randomly designated as training data and the remaining 20% was designated as test data.
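A sketch of this preprocessing and HOG extraction, using scikit-image’s HOG implementation as a stand-in for VLFeat’s vl_hog (the two compute somewhat different descriptors), could look as follows; CNN features would come from a separately loaded, pre-trained AlexNet:

```python
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize
from skimage.util import random_noise

def preprocess(image_rgb):
    """Downsample a 500 x 500 color image to 250 x 250 grayscale in [0, 1] and add
    Gaussian noise with mean 0 and variance 0.01."""
    gray = resize(rgb2gray(image_rgb), (250, 250), anti_aliasing=True)
    return random_noise(gray, mode="gaussian", mean=0.0, var=0.01)

def hog_features(image):
    """HOG descriptor with 32 x 32-pixel cells (scikit-image's HOG as a stand-in
    for VLFeat's UoCTTI HOG)."""
    return hog(image, orientations=9, pixels_per_cell=(32, 32), cells_per_block=(1, 1))

# CNN features: 1000-dimensional pre-softmax activations from a pre-trained AlexNet
# (e.g., torchvision.models.alexnet(weights="IMAGENET1K_V1")) would be extracted analogously.
```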

For each partition, experiments were conducted for the 127 ways that the seven higher-level visual cortical regions (i.e., EBA, FFA, LO, OFA, PPA, RSC, and TOS) could be combined. In each experiment, for a given combination of higher-level visual cortical regions and for a given object category, two training steps were followed:

1. Activity weights were generated for the subset of training stimuli that are part of the 646-stimuli clear sample set, using an RBF-kernel SVM classifier trained on the training voxel data for that combination, following the fMRI activity weight calculation procedure described above.

2. Five balanced classification problems were created from the given partition’s training data. For each balanced classification problem and each set of image descriptors (HOG and CNN features), two SVM classifiers were trained and tested: one that uses a standard hinge loss (HL) function67 and another that uses the activity weighted loss (AWL) function described by Equations 2 and 3. Both classifiers used an RBF kernel.

The SVM objective with the hinge loss function in Equation 1 is solved via Sequential Minimal Optimization68. It is not necessary to assign an activity weight \(c_x \in C\) derived from fMRI data to every training sample; \(c_x\) can be 0 to preserve the output of the original hinge loss function. In our experiments, \(c_x \in [0, 1]\), where \(c_x\) corresponds to the probability that \(x\) is in the object category in question; this results in penalizing the misclassification of strong positive samples more aggressively. The LibSVM package was used to train and test SVM classifiers using a hinge loss function65. To train classifiers using the activity weighted loss function, we modified publicly available code for an alternative additive loss formulation8.

For each object category, combination of higher visual cortical regions, and set of image descriptors, we created five balanced classification problems. For each problem, we created a balanced training set with an equal number of positive and negative examples. Because every object category had more negative than positive samples, all positive samples were used in each balanced problem and an equal number of negative samples was randomly selected for each balanced problem. The balanced problems only balanced the training data; each balanced problem used the same test set, namely the partition’s held-out test set.
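A minimal sketch of this balancing step (variable names are ours), returning index sets for the five balanced training problems:

```python
import numpy as np

def balanced_problems(y_train, n_problems=5, seed=0):
    """Keep all positive samples and randomly subsample an equal number of negatives
    for each balanced training problem; the test set is left untouched."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y_train == 1)
    neg = np.flatnonzero(y_train == -1)
    problems = []
    for _ in range(n_problems):
        sampled_neg = rng.choice(neg, size=len(pos), replace=False)
        problems.append(np.concatenate([pos, sampled_neg]))   # indices of one balanced training set
    return problems
```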

For both loss functions, binary SVM classifiers with RBF kernels were trained without any parameter tuning, using parameters C = 1 and γ = 1/(number of features). The activity weighted loss function incorporates the calibrated probability scores from the first-stage voxel classifiers as activity weights. We assigned these activity weights to the training samples that are members of the 646-stimuli clear sample set; for samples without fMRI-derived activity weights, activity weights of 0.0 were used. Finally, classifiers were tested on the partition’s test set. In experiments using CNN features, RBF-kernel SVM classifiers converged during training, even though the feature vectors were high-dimensional.

Statistics for ROI Analysis

Because our analysis of the influence of specific ROIs involves comparing 127 quantities, Bonferroni correction was applied to adjust all confidence intervals in order to account for multiple comparisons and control the family-wise error rate. To create m individual confidence intervals with a collective confidence level of 1 − α, each adjusted confidence level was calculated as \(1-\frac{\alpha }{m}\). With these adjusted confidence intervals (m = 127, α = 0.05 and α = 0.01), we compared, for each ROI, the outputs of the empirical CDF \(F_X(x)\) of the null distribution X corresponding to each object category and set of image features.
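In practice, the Bonferroni-adjusted significance threshold for each null distribution can be obtained as its \(1-\frac{\alpha}{m}\) quantile; a small sketch (names are ours):

```python
import numpy as np

def significance_threshold(null_samples, alpha=0.05, m=127):
    """Bonferroni-adjusted threshold: the (1 - alpha/m) quantile of a permutation null distribution."""
    return np.quantile(null_samples, 1.0 - alpha / m)

# An ROI is significant if its observed percentage of above-average combinations exceeds
# this threshold; with alpha = 0.05 and m = 127 this corresponds to p < 0.05/127 ≈ 0.0004.
```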

Data availability

The fMRI and image data that support the findings of this study are from Stansbury et al.41 and are available from them on reasonable request.