Abstract
As we make progress towards gender parity in many spheres of life, an important question is whether people place as much value on women’s opinions as they do on men’s opinions, especially when making buying decisions. Using online product opinions (reviews)—an increasingly important source of information in buying decisions—as our context, we investigate whether women’s product opinions are as valuable as those of their male peers. Across three studies (one experimental and two using field data from online review platforms in the United States), we report evidence of implicit gender bias in evaluating online product opinions. In the experimental study, 216 participants (108 men, 108 women, mean age 40.6) evaluated reviews written by men and women across different product types in an online study. We find that, compared to men’s, women’s product opinions were rated as less helpful and were less likely to influence people’s buying decisions. For gender-typed products, that is, products highly associated with a specific gender group, men’s product opinions were rated higher than women’s in helpfulness and likelihood to influence buying decisions for male gender-typed products. However, there was no significant difference between men’s and women’s product opinions for female gender-typed products, indicating that women’s product opinions are not perceived as more valuable than men’s opinions even for products typically associated with women. In the field data studies, we relied on the internet public’s helpfulness and usefulness votes on reviews contributed by both men and women, across both search and experience goods, to confirm the findings of the experimental study. We discuss some of the potential reasons for and implications of our findings.
Introduction
In certain regions and environments, women are still seen rather than heard. Gender inequality in speaking and opinion sharing persists. For instance, women are challenged and interrupted more often than their male peers while presenting their arguments (Butler and Geis, 1990; A. Feldman and Gill, 2019; Jacobi and Schweers, 2017), have more of their talk-time taken by the audience during academic job talks (Blair-Loy et al., 2017), and make significantly fewer speeches than their male peers in parliament (Bäck and Debus, 2019; Bäck et al., 2014). Concomitantly, women report that their opinions are often diminished and do not matter as much (Miller, 2018). This raises the question of whether women’s opinions are viewed less favorably than those of their male peers, especially in decision-making. This study explores whether there is a difference in how people evaluate women’s opinions—views and experiences shared about goods and services—relative to those of their male peers when making buying decisions.
Other people’s opinions have long been valuable to individuals making buying decisions (Chakravarty et al., 2010; Chatterjee, 2001). This is especially true in the active-evaluation phase of buying decisions (Court et al., 2009). Wives have relied on peers’ opinions when buying household goods and food products (Arndt, 1967; Katz and Lazarsfeld, 1966), patients have relied on other people’s opinions when choosing physicians for medical care (S. P. Feldman and Spencer, 1965; Pechmann et al., 1989), and moviegoers rely on the opinions of movie critics and friends (Chakravarty et al., 2010). Yet, despite this long reliance on others’ opinions, and with the steady progress towards gender parity in many areas over the years, it is unclear whether gender still plays a significant role in whose opinions are valued.
Individuals have long used gender as a judgment-making heuristic (Fiske, 1998; Kunda and Spencer, 2003; Wheeler and Petty, 2001), and we have reason to believe that gender may still play a role in the evaluation of opinions. Previous research hints that gender-based differences may persist in the evaluation of opinions. For instance, writings by male authors were rated more highly than writings by female authors (Goldberg, 1968; Levenson et al., 1975), identical entrepreneurial pitches were more likely to receive investments when pitched by men than by women (Brooks et al., 2014), identical lectures were rated more highly when attributed to male professors than to female professors (Abel and Meltzer, 2007), and male-voiced computer speech and tutorials were rated more highly, considered more credible, and exerted more influence on decision-making than female-voiced versions (E.-J. Lee, 2003, 2008; E. J. Lee et al., 2000; Morishima et al., 2001; Nass et al., 1997).
In the work reported here, over a series of three studies on online consumer-generated opinions on products (goods and services) in the United States (US), we test whether gender-based differences exist in the evaluation of product opinions. Particularly, we test (1) whether people are less likely to value a woman’s opinion on a product relative to their male peers; and (2) whether the evaluations are likely to favor a specific gender group if the opinions are for products typically associated with that gender. We investigate the aforementioned in both the search and experience goods contexts, and also check whether any observed differences in the evaluation of the opinions may be driven by in-group bias. If any differences exist, we are not suggesting that they are intentional or stem from a conscious effort to undermine product opinions from any gender, but that it may be due to implicit or unintended biases.
Online consumer-generated product opinions (hereinafter reviews) are well adapted for our research. First, about 97 percent of consumers regularly or occasionally consult reviews, and about 85 percent trust them as much as personal recommendations (Brightlocal, 2017), making it an important source of information in buying decisions. Second, a significant percentage of retail shopping is now done online, with the US sales market share of online shopping now higher than that of general merchandise stores (Ouellette, 2020; Rooney, 2019), making the online shopping environment an ideal context for investigating gender-based differences in the evaluation of opinions.
In the rest of the paper, we first present the three studies and their results, and then conclude with a discussion. Study 1 uses an experimental design, while studies 2 and 3 use field data retrieved from online review platforms—Yelp.com and Amazon.com, respectively.
Study 1
The goal of this study is to experimentally test whether there are any gender-based differences in how individuals evaluate reviews contributed by women relative to those contributed by men, and to determine whether the evaluations are likely to favor a specific gender if the reviews are for products typically associated with that gender. The study was conducted on Amazon Mechanical Turk (MTurk) and had Institutional Review Board (IRB) approval.
Stimulus materials
Preparation of the stimuli involved creating product reviews (see Fig. 1 for examples) similar to those found on typical e-commerce sites. However, there were slight modifications to suit the nature of the study. First, each review had either a female-looking or male-looking avatar placeholder (as can be seen in Fig. 1) to aid participants in inferring the gender of the review contributor. The female-looking and male-looking avatar placeholders in the stimuli were matched with a corresponding gendered name (e.g., Mary and Grace for the female-looking avatars, Richard and William for the male-looking avatars), which was placed at the top of the review. Although contributors can have their full names, aliases, or just first names appear in a typical product review, we used only first names in the reviews to reduce ambiguity and facilitate easy gender inference. Second, we placed an image of the reviewed product in the stimuli, as shown in Fig. 1, to help participants identify the reviewed product. Typical reviews on sites like Amazon.com may not have the product image placed right by the review text as in the stimuli. The products used in the stimuli were a mix of gender-typed (products traditionally associated with a specific gender) and non-gender-typed (products not associated with any specific gender) products. To arrive at this selection of products for the stimuli, 12 participants were asked to rate a list of 12 products on a 7-point scale indicating whether they traditionally associated the product with men or women (1 = extremely associated with men, 4 = gender neutral, 7 = extremely associated with women). The initial list of 12 products and the process of identification were informed by the literature (Fugate and Phillips, 2010; Morrison and Shaffer, 2003).
Products with ratings of 2 or lower were considered to be associated with men, those with ratings of 6 or higher were considered to be associated with women, and those with ratings between 3.5 and 4.5 were considered gender neutral. Three of the products were selected and used in the stimuli: toothbrush (gender neutral), baby care kit (associated with women), and tool kit (associated with men). The variations for each product followed a 2 (positive review vs. negative review) × 2 (women contributed vs. men contributed) design. Although we did not intend to check for the effect of review valence in this study, we included the positive and negative reviews to rule out any potential effect driven by review valence. We pretested the stimuli to check whether participants could infer the gender of the review contributors. In the pretest, 16 participants were asked to identify the gender of the review contributor. There was perfect agreement (Fleiss Kappa = 1.00) on the gender of the review contributors in the stimuli.
We created two filler reviews with gender-neutral products—orange juice and flip-flops. The filler reviews had contributor names that were less gendered (e.g. Sam, Remy) and were sandwiched between the treatment reviews in the experiment.
Participants
We recruited 216 adult participants (Meanage = 40.6, SD = 10.8, 50 percent female) on Amazon MTurk to participate in the study for a payment of $2.80. We obtained informed consent from all participants. All study participants were located in the United States, had some experience with online shopping (Meanonline-shopping = 12.6 years, SD = 3.8), and all indicated that they use reviews when making buying decisions. About 75 percent of them had some form of college education. We set the selection criteria on Amazon MTurk to randomly assign participants into the treatment groups such that we had a balanced sample of male and female participants in each treatment group. We calculated our sample size based on an estimated effect size of d = 0.2, which required ~160 participants for a study powered at 90 percent; 216 participants ultimately completed the study task.
Procedure
The participants (n = 216) were each presented with a total of five reviews, one at a time, one for each of the five products, all from the same treatment group. In essence, a participant saw all treated and filler product reviews; however, the reviews they saw were from one of the following four treatment groups: positive and written by women, positive and written by men, negative and written by women, and negative and written by men. Reviews in positions 2 and 4 were the filler reviews. For each set of five reviews a participant saw, we changed the names on the reviews to avoid all five appearing to be written by the same individual. Thus, a participant might see all five reviews written by women, but each under a different name. Participants were asked to read and evaluate each of the reviews. After reading each review, participants indicated their perception of (a) the helpfulness of the review and (b) the likelihood that the review would influence their purchase decision (both measured on 9-point Likert scales). The survey items for review helpfulness were adapted from Yin et al. (2014) and can be seen in the Supplementary information. The Cronbach alpha for review helpfulness was 0.96, indicating that the three survey items are strongly correlated. After the main task, participants completed an attention check.
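As a side note on the reliability measure: Cronbach’s alpha can be computed directly from the item-response matrix. Below is a minimal sketch using hypothetical responses to three 9-point items (the data and the resulting value are illustrative, not the study’s):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five participants to three 9-point helpfulness items
responses = np.array([
    [8, 9, 8],
    [5, 5, 6],
    [2, 3, 2],
    [7, 7, 8],
    [4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))  # → 0.98 (items move together)
```

With highly correlated items, as here, alpha approaches 1; the study’s 0.96 similarly indicates that the three helpfulness items measure one construct.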
Analysis and results
We employed repeated-measures ANOVA in our analyses. The results, as shown in Fig. 2, revealed that reviews contributed by women were rated significantly lower in helpfulness than reviews contributed by men [meanwomen = 5.53, SDwomen = 2.25 vs. meanmen = 6.06, SDmen = 2.10, F(1,214) = 5.14, p < 0.05, \(\eta _{\rm {p}}^2\) = 0.02 (Huynh–Feldt-corrected for nonsphericity)]. Similarly, the likelihood of a review influencing the purchase decision was also significantly lower when contributed by women than when contributed by men [meanwomen = 5.29, SDwomen = 2.63 vs. meanmen = 5.89, SDmen = 2.38, F(1,214) = 4.23, p < 0.05, \(\eta _{\rm {p}}^2\) = 0.01 (Huynh–Feldt-corrected for nonsphericity)]. We also examined whether the results hold for gender-typed products. That is, we tested whether reviews for products typically associated with specific gender groups are perceived to be more valuable when individuals from the corresponding gender group contributed them. For the product associated with men (tool kit), the results reveal that reviews contributed by men were perceived as significantly more helpful than those contributed by women [meanmen = 6.98, SDmen = 1.89 vs. meanwomen = 6.39, SDwomen = 1.71, F(1,214) = 5.78, p < 0.05, \(\eta _{\rm {p}}^2\) = 0.03]. However, the difference in the likelihood to influence the purchase decision was only marginally significant [meanmen = 6.58, SDmen = 2.31 vs. meanwomen = 6.04, SDwomen = 2.42, F(1,214) = 2.84, p = 0.09, \(\eta _{\rm {p}}^2\) = 0.01]. For the product associated with women (baby care kit), the results reveal no significant difference in the perceived helpfulness rating between reviews contributed by men and those contributed by women [meanmen = 5.40, SDmen = 2.35 vs. meanwomen = 4.88, SDwomen = 2.32, F(1,214) = 2.72, p > 0.1, \(\eta _{\rm {p}}^2\) = 0.01].
The difference in the likelihood to influence the purchase decision between reviews contributed by men and those contributed by women was marginally significant [meanmen = 5.52, SDmen = 2.39 vs. meanwomen = 4.93, SDwomen = 2.65, F(1,214) = 3.01, p = 0.08, \(\eta _{\rm {p}}^2\) = 0.01].
To rule out the possibility that the observed gender-based differences are driven by in-group bias (where men rate reviews contributed by men higher and women rate reviews contributed by women higher), we split the data by participant gender. The results, shown in Fig. 3, reveal that for male participants, there were no significant differences in how they rated reviews contributed by women and men in terms of helpfulness [meanwomen = 5.38, SDwomen = 2.17 vs. meanmen = 5.77, SDmen = 2.06, F(1,106) = 1.46, p = 0.23, \(\eta _{\rm {p}}^2\) = 0.01 (Huynh–Feldt-corrected for nonsphericity)] and likelihood to influence the purchase decision [meanwomen = 5.31, SDwomen = 2.56 vs. meanmen = 5.51, SDmen = 2.36, F(1,106) = 0.24, p = 0.62, \(\eta _{\rm {p}}^2\) = 0.00 (Huynh–Feldt-corrected for nonsphericity)]. For the female participants, however, there were significant differences, with reviews contributed by women rated lower than those contributed by men in helpfulness [meanwomen = 5.68, SDwomen = 2.31 vs. meanmen = 6.36, SDmen = 2.10, F(1,106) = 4.09, p < 0.05, \(\eta _{\rm {p}}^2\) = 0.03 (Huynh–Feldt-corrected for nonsphericity)] and likelihood to influence the purchase decision [meanwomen = 5.27, SDwomen = 2.70 vs. meanmen = 6.28, SDmen = 2.35, F(1,106) = 5.86, p < 0.05, \(\eta _p^2\) = 0.04 (Huynh–Feldt-corrected for nonsphericity)]. Although the results show no in-group bias, the gender-based differences appear to manifest more among the women participants.
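As an illustration of the contributor-gender contrast, collapsing each participant’s ratings across products reduces the between-subjects comparison to a one-way ANOVA on two groups of 108. The sketch below simulates per-participant mean ratings using the reported helpfulness means and SDs; it is a simplified stand-in, not the study’s actual repeated-measures analysis with Huynh–Feldt correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated mean helpfulness ratings (1-9 scale), one value per participant,
# averaged across the reviews each participant rated. Group sizes mirror the
# study: 108 participants saw men's reviews, 108 saw women's reviews.
men_contributed = rng.normal(6.06, 2.10, 108).clip(1, 9)
women_contributed = rng.normal(5.53, 2.25, 108).clip(1, 9)

# One-way ANOVA on the two groups; df = (1, 214) as in the reported F tests
f_stat, p_value = stats.f_oneway(men_contributed, women_contributed)
df_error = len(men_contributed) + len(women_contributed) - 2
print(f"F(1, {df_error}) = {f_stat:.2f}, p = {p_value:.3f}")
```

With two groups, this F test is equivalent to a two-sample t test (F = t²); the full design additionally models the repeated product factor.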
Study 2
This study aims to examine whether the gender-based difference observed in how individuals evaluate reviews extends to experience goods and services and to provide some external validity to the experimental study. To do this, we collected and analyzed review data from Yelp.com, a website that provides user ratings and textual reviews for businesses in the service industry, including restaurants, auto services, and home services among others.
Data collection
We collected reviews posted between January 2015 and May 2015 in the nightlife category of a major city in the Southeastern United States. In total, we extracted 7626 reviews contributed by 3854 unique individuals. For each review, we collected the contributor’s name, number of “useful” votes, and review rating. We also collected other contributor-specific and review-specific information, including the contributor’s status, number of friends, and length of the review, among others. Table 1 provides the list of all the variables and their description, while Table 2 shows the summary statistics and Pearson correlation.
Inferring review contributor’s gender
To infer the review writer’s gender, we applied machine-learning techniques. We used the machine-learning toolkit “genderizeR” (Wais, 2016) and inferred the gender of the review contributor from their first name. Using an individual’s name to infer their gender is an established approach in extant studies (Atir and Ferguson, 2018; Ruzycki et al., 2019). In our case, the process involved matching a contributor’s first name in our sample to that in an existing names database (Wais, 2016) and extracting the contributor’s name gender-probability estimate. A name’s gender-probability estimate is the probability that the name belongs to a man or a woman; a name with an 86 percent man gender-probability estimate implies that there is an 86 percent chance that it belongs to a man. After extracting the name gender-probability estimates for all the reviews, we dropped from our dataset all reviews whose contributor’s name gender-probability estimate was less than 99 percent. About 31.5 percent (n = 2399, nman = 902, nwoman = 1497) of the reviews were retained after this process. We validated the gender labels resulting from the machine-learning technique by manually checking the labels on a randomly selected subsample. We coded the review contributor’s gender as a dummy variable, “Women”, with value one if a woman and zero if a man.
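The filtering step described above can be sketched as follows. The name-probability estimates here are hypothetical placeholders for the output of a service such as genderizeR:

```python
# Hypothetical name -> (gender, probability) estimates, standing in for the
# output of a name database such as the one genderizeR queries.
estimates = {
    "Mary":    ("woman", 1.00),
    "Richard": ("man",   0.99),
    "Sam":     ("man",   0.61),   # ambiguous name: would be dropped
    "Remy":    ("woman", 0.55),   # ambiguous name: would be dropped
    "Grace":   ("woman", 0.99),
}

reviews = [
    {"id": 1, "first_name": "Mary"},
    {"id": 2, "first_name": "Sam"},
    {"id": 3, "first_name": "Richard"},
    {"id": 4, "first_name": "Remy"},
    {"id": 5, "first_name": "Grace"},
]

THRESHOLD = 0.99  # keep only near-unambiguous names, as in the study

retained = []
for review in reviews:
    gender, prob = estimates[review["first_name"]]
    if prob >= THRESHOLD:
        # dummy coding: 1 if the contributor is a woman, 0 if a man
        retained.append({**review, "women": 1 if gender == "woman" else 0})

print(retained)  # Mary, Richard, and Grace survive the 99 percent cutoff
```

The 99 percent cutoff trades sample size for label accuracy, which is why only about a third of the reviews were retained.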
Data analysis and results
Given the count nature of our dependent variable and its over-dispersion, we fit a negative binomial regression model in our analyses to determine the effect of gender on the number of “useful” votes received by a review. To account for review heterogeneity among women and men review contributors, we created a matched sample that paired each review written by a woman in our sample to a similar review written by a man and reran our analyses. The results are presented in Table 3. To check for the robustness of the results, we also fit additional models whose results can be found in Table S1 of the Supplementary information.
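A matched sample of the kind described above can be built in several ways; the sketch below uses simple greedy 1:1 nearest-neighbor matching on a single covariate (review length), which is an illustrative simplification rather than the study’s exact procedure:

```python
# Greedy 1:1 nearest-neighbor matching on review length (a stand-in for the
# covariates one might match on); data and values are illustrative only.
women_reviews = [{"id": "w1", "length": 120}, {"id": "w2", "length": 45},
                 {"id": "w3", "length": 300}]
men_reviews   = [{"id": "m1", "length": 110}, {"id": "m2", "length": 50},
                 {"id": "m3", "length": 500}, {"id": "m4", "length": 290}]

pairs, used = [], set()
for w in women_reviews:
    # pick the closest not-yet-matched man-authored review by length
    candidates = [m for m in men_reviews if m["id"] not in used]
    best = min(candidates, key=lambda m: abs(m["length"] - w["length"]))
    used.add(best["id"])
    pairs.append((w["id"], best["id"]))

print(pairs)  # → [('w1', 'm1'), ('w2', 'm2'), ('w3', 'm4')]
```

In practice one would match on several covariates at once (e.g., via propensity scores), but the principle of pairing each woman-authored review with its most similar man-authored counterpart is the same.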
Looking at the results, we observe an indication of gender differences in the evaluation of reviews. From the coefficient of Women (β = −0.2297, p < 0.001) in column 1 of Table 3, we observe a significant and negative effect of online reviews written by women on the number of useful votes received in the absence of controls. This result is robust to the use of the matched sample, as seen in column 2 (β = −0.1805, p < 0.05). With the inclusion of controls in column 3 (for the full sample) and column 4 (for the matched sample), the results (β = −0.2040, p < 0.001 and β = −0.1506, p < 0.01, respectively) remained significant and directionally consistent. The estimated coefficient of the Women variable implies that online reviews written by women received, on average, 0.79 fewer useful votes than online reviews written by men. This result supports the finding in study 1, albeit in the context of service or experience products.
Study 3
In study 3, we test for the presence of gender-based differences in the evaluation of reviews, as in study 1. This time, however, we use field data from an e-commerce website (Amazon.com) to further investigate whether the evaluations are likely to favor a specific gender if the reviews are for products typically associated with that gender. This study further lends external validity to the results reported in study 1 with respect to reviews on gender-typed products.
Data collection
Data for study 3 were obtained from Amazon.com, a popular e-commerce website that allows customers to post and rate product reviews. Amazon.com allows individuals to vote on reviews contributed on the platform by customers using the question, “Was this review helpful to you (yes/no)?” We collected data for all reviews posted in the beauty and home-improvement categories between January 1, 2014, and February 28, 2014. We chose these two categories because most products belonging to the categories are gender-typed (beauty for women and home-improvement for men). For each review, we recorded the name of the review contributor, review rating, number of helpfulness votes, total votes, and other review-specific information. Table 4 provides the list of all the variables and their description; Table 5 shows the summary statistics, including the breakdown by gender; and Table 6 shows the Pearson correlation.
Inferring review contributor’s gender and data processing
As in study 2, we employed machine-learning techniques to determine the writer’s gender and dropped all reviews whose contributor’s name gender-probability estimate was less than 99 percent. From the 15948 reviews in the sample (nbeauty = 8458, nhome-improvement = 7490), we further removed all reviews that received zero votes, as has been done in extant studies (Mudambi and Schuff, 2010; Salehan and Kim, 2016), leaving us with 3262 reviews (20.5 percent of the sample). The split across the women’s (beauty) and men’s (home-improvement) categories was 1759 and 1503 reviews, respectively.
The gender splits across categories were beauty [women (1280 reviews), men (479 reviews)] and home-improvement [women (359 reviews), men (1144 reviews)], as can be seen in Table 5. In total, there were 1639 reviews contributed by women and 1623 reviews contributed by men. We coded the review contributor’s gender using a dummy variable, “Women”, which takes the value one if the review contributor is a woman and zero if the contributor is a man. The dependent variable, helpfulness, was measured as the proportion of helpful votes out of the total votes received for each review. Thus, a review that received 70 ‘helpful’ votes out of a total of 100 votes would have a helpfulness value of 0.7.
Data analysis and results
Given that the dependent variable is a proportion between 0 and 1, we estimated a binomial regression with a logit transformation in our analyses of the 3262 reviews in our sample. The results are presented in Table 7. To check the robustness of the results, we also fit additional models whose results can be found in Table S2 of the Supplementary information.
The results in Table 7 suggest the presence of gender-based differences in the evaluation of product reviews. We observe that the coefficient of Women in column 1 is significant and negative (β = −0.3483, p < 0.001), implying that reviews contributed by women were rated lower than those of men in the absence of controls. The coefficient (β = −0.3414, p < 0.001) remained directionally consistent and significant with the inclusion of controls, as seen in column 2. Splitting the data along gender-typed product categories, the result in column 3 (βwomen = −0.4127, p < 0.01) suggests that reviews contributed by women were rated as less helpful than those contributed by men in the gender-typed product category associated with men, while the result in column 4 (βwomen = −0.1586, p > 0.05) suggests that there was no significant difference between reviews contributed by women and those contributed by men in the gender-typed product category associated with women. Again, these results support our findings in study 1 and provide some external validity, particularly for the evaluation of reviews on gender-typed products.
Discussion and conclusion
How people’s opinions influence us and what we do with them is often contingent upon our receptivity to their opinions (Wilson and Peterson, 1989). Across three studies using different research methods, we find evidence of gender-based differences in how individuals evaluate and use product opinions provided by men and women. In study 1, we experimentally test whether gender-based differences exist in how individuals evaluate reviews contributed by women relative to those contributed by men, and whether the evaluations are likely to favor a specific gender if the opinions are for products typically associated with that gender. Participants in the study rated reviews from men as more helpful than those from women. They also indicated that reviews from men were more likely to influence their decision to purchase the product than reviews from women. This result reasserts the notion that people are less likely to believe statements made by women compared to men (Miller, 2018; Solnit, 2008), even when it is their opinion about products that they may have purchased, used, or experienced. To the extent that online reviews written by women reflect their experiences with a product, this result aligns with research highlighting that women’s experiences are discounted or even considered exaggerated relative to men’s (Hoffmann and Tarzian, 2001; Zhang et al., 2021). Interestingly, we observe that the gender-based differences in the evaluations of reviews are driven more by the female participants than by the male participants. Given that the observed difference is less favorable to women, one would have expected it to be driven more by the male participants.
Further, the result of the study indicates that while participants rated men’s reviews as more helpful and more likely to influence their purchase decision for products traditionally associated with men than women’s reviews, there was no difference in the helpfulness and likelihood to influence purchase decision ratings between men’s and women’s reviews for products traditionally associated with women. This suggests that people do not ascribe greater weight to the opinions provided by women about their experiences, even for products traditionally associated with them, when compared to the opinions of men about the same product. Women’s opinions have just as much value as men’s for products traditionally associated with women.
The analyses of archival data in study 2 and study 3 provide external validity to the results obtained in study 1. Study 2 investigates gender-based differences in the evaluation of reviews on a services review platform. Although the study looks at reviews in the nightlife category (which may be considered a gender-neutral product), the result is consistent with study 1. Study 3 investigates the same questions as study 1, albeit on an e-commerce website, and reinforces the gender-based differences observed in study 1. Apart from confirming that reviews written by women are considered less valuable than those written by men, study 3 also affirms that reviews written by men are considered just as valuable as those written by women, even for products traditionally associated with women. Taken together, the results suggest that gender still predicts the way people evaluate others’ opinions, such that women’s opinions are less valued than men’s opinions in buying decisions.
What might explain the gender bias in the evaluation and use of opinions? First, the stereotyping of men as more analytical and brilliant, and of women as less competent, might still persist (Heilman and Eagly, 2008; Moss-Racusin et al., 2012). As such, people may be exhibiting gender-stereotypic responses toward presumed competency levels, potentially a result of the long history of questioning women’s reasoning capacity (Parks, 2000), including viewing their opinions as “unreflective or immature” (Miles and August, 1990). Second, there may be a preexisting subtle bias against women; prior studies have highlighted some of its undermining effects on evaluations of women (Goldberg, 1968; Moss-Racusin et al., 2012; Régner et al., 2019). For instance, Abdul-Ghani et al. (2022) document that participants in their study viewed women’s opinions as highly emotional relative to men’s opinions and therefore discounted them. Third, people may place less value on women’s opinions because women are underrepresented and less visible in expert opinion panels and media (Beaulieu et al., 2016); women’s opinions may, therefore, be more easily discounted.
The evidence reported from the three studies documents a form of implicit gender bias in the evaluation of opinions in buying decisions that might have significant implications. Women may continue to gain less visibility, eminence, and benefits on shopping platforms and environments where the value and quality of one’s opinion matter. For instance, many review service platforms, merchants, businesses, and organizations use the value consumers and users assign to reviews to reward review contributors. Given the above, we should continue to pursue endeavors and interventions that help break gender stereotypes to reduce these forms of bias.
Data availability
The data for studies 2 and 3 are available at https://doi.org/10.6084/m9.figshare.12834617, while the data for study 1 are unavailable due to participant data privacy and confidentiality.
Code availability
The code used to perform the analyses is available from the corresponding author upon request.
References
Abdul-Ghani E, Kim J, Kwon J, Hyde KF, Cui YG (2022) Love or like: gender effects in emotional expression in online reviews. Eur J Mark 56(12):3592–3616
Abel MH, Meltzer AL (2007) Student ratings of a male and female professors’ lecture on sex discrimination in the workforce. Sex Roles 57(3–4):173–180
Arndt J (1967) Role of product-related conversations in the diffusion of a new product. J Mark Res 4(3):291–295
Atir S, Ferguson MJ (2018) How gender determines the way we speak about professionals. Proc Natl Acad Sci USA 115(28):7278–7283
Bäck H, Debus M (2019) When do women speak? A comparative analysis of the role of gender in legislative debates. Political Stud 67(3):576–596
Bäck H, Debus M, Müller J (2014) Who takes the parliamentary floor? The role of gender in speech-making in the Swedish Riksdag. Political Res Q 67(3):504–518
Beaulieu E, Boydstun A, Brown N, Dionne KY, Gillespie A, Klar S et al. (2016) Experts weigh in: women also know stuff. https://www.huffpost.com/entry/experts-weigh-in-womenal_b_9404388
Blair-Loy M, Rogers LE, Glaser D, Wong Y, Abraham D, Cosman PC (2017) Gender in engineering departments: are there gender differences in interruptions of academic job talks? Soc Sci 6(1):29
Brightlocal (2017) Local consumer review survey. https://www.brightlocal.com/learn/local-consumer-review-survey/. Accessed 12 Jun 2018
Brooks AW, Huang L, Kearney SW, Murray FE (2014) Investors prefer entrepreneurial ventures pitched by attractive men. Proc Natl Acad Sci USA 111(12):4427–4431
Butler D, Geis FL (1990) Nonverbal affect responses to male and female leaders: Implications for leadership evaluations. J Pers Soc Psychol 58(1):48
Chakravarty A, Liu Y, Mazumdar T (2010) The differential effects of online word-of-mouth and critics’ reviews on pre-release movie evaluation. J Interact Mark 24(3):185–197
Chatterjee P (2001) Online reviews: do consumers use them? In: Gilly MC, Meyers-Levy J (eds) NA—advances in consumer research, vol 28. Association for Consumer Research, pp. 129–133
Court D, Elzinga D, Mulder S, Vetvik OJ (2009) The consumer decision journey. McKinsey & Company. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/theconsumer-decision-journey. Accessed 20 Mar 2020
Feldman A, Gill RD (2019) Power dynamics in Supreme Court Oral Arguments: the relationship between gender and justice-to-justice interruptions. Justice Syst J 40(3):173–195
Feldman SP, Spencer MC (1965) The effect of personal influence on the selection of consumer services. Center for Regional Studies.
Fiske ST (1998) Stereotyping, prejudice, and discrimination. In: Gilbert DT, Fiske ST, Lindzey G (eds) The handbook of social psychology. McGraw-Hill, pp. 357–411
Fugate DL, Phillips J (2010) Product gender perceptions and antecedents of product gender congruence. J Consumer Market 27(3):251–261
Goldberg P (1968) Are women prejudiced against women? Transaction 5(5):28–30
Heilman ME, Eagly AH (2008) Gender stereotypes are alive, well, and busy producing workplace discrimination. Ind Organ Psychol 1(4):393–398
Hoffmann DE, Tarzian AJ (2001) The girl who cried pain: a bias against women in the treatment of pain. J Law Med Eth 29(1):13–27
Jacobi T, Schweers D (2017) Justice, interrupted: the effect of gender, ideology, and seniority at Supreme Court oral arguments. Va Law Rev 103:1379
Katz E, Lazarsfeld PF (1966) Personal influence, the part played by people in the flow of mass communications. Transaction Publishers
Kunda Z, Spencer SJ (2003) When do stereotypes come to mind and when do they color judgment? A goal-based theoretical framework for stereotype activation and application. Psychol Bull 129(4):522
Lee E-J (2003) Effects of “gender” of the computer on informational social influence: the moderating role of task type. Int J Hum–Comput Stud 58(4):347–362
Lee E-J (2008) Flattery may get computers somewhere, sometimes: the moderating role of output modality, computer gender, and user gender. Int J Hum–Comput Stud 66(11):789–800
Lee EJ, Nass C, Brave S (2000) Can computer-generated speech have gender? An experimental test of gender stereotype. In: CHI '00 extended abstracts on human factors in computing systems
Levenson H, Burford B, Bonno B, Davis L (1975) Are women still prejudiced against women? A replication and extension of Goldberg’s study. J Psychol 89(1):67–71
Miles SH, August A (1990) Courts, gender and “the right to die”. Law Med Healthc 18(1-2):85–95
Miller J (2018) US organizations need to prove they value women. https://www.gallup.com/workplace/232961/organizations-need-prove-value-women.aspx. Accessed 20 Feb 2019
Morishima Y, Nass C, Bennett C (2001) Effects of "gender" of computer-generated speech on credibility perception. Technical Report of IEICE TL2001-16
Morrison MM, Shaffer DR (2003) Gender-role congruence and self-referencing as determinants of advertising effectiveness. Sex Roles 49(5-6):265–275
Moss-Racusin CA, Dovidio JF, Brescoll VL, Graham MJ, Handelsman J (2012) Science faculty’s subtle gender biases favor male students. Proc Natl Acad Sci USA 109(41):16474–16479
Mudambi SM, Schuff D (2010) What makes a helpful review? A study of customer reviews on Amazon.com. MIS Q 34(1):185–200
Nass C, Moon Y, Green N (1997) Are machines gender neutral? Gender‐stereotypic responses to computers with voices. J Appl Soc Psychol 27(10):864–876
Ouellette C (2020) Online shopping statistics you need to know in 2020. https://optinmonster.com/online-shopping-statistics/
Parks JA (2000) Why gender matters to the euthanasia debate: on decisional capacity and the rejection of women’s death requests. Hastings Center Rep 30(1):30–36
Pechmann C, Stewart D, Hickson G, Koslow S, Altemeier WA (1989) Information search and decision making in the selection of family health care. J Health Care Mark 9(2):29–39
Régner I, Thinus-Blanc C, Netter A, Schmader T, Huguet P (2019) Committees with implicit biases promote fewer women when they do not believe gender bias exists. Nat Hum Behav 3(11):1171–1179
Rooney K (2019) Online shopping overtakes a major part of retail for the first time ever. https://www.cnbc.com/2019/04/02/online-shopping-officially-overtakes-brick-and-mortar-retail-for-the-first-time-ever.html
Ruzycki SM, Fletcher S, Earp M, Bharwani A, Lithgow KC (2019) Trends in the proportion of female speakers at medical conferences in the United States and in Canada 2007 to 2017. JAMA Netw Open 2(4):e192103–e192103
Salehan M, Kim DJ (2016) Predicting the performance of online consumer reviews: a sentiment mining approach to big data analytics. Decis Support Syst 81:30–40
Solnit R (2008) Men explain things to me; facts didn't get in their way. https://tomdispatch.com/rebecca-solnit-the-archipelago-of-arrogance/. Accessed 25 Oct 2022
Wais K (2016) Gender prediction methods based on first names with genderizeR. R J 8(1):17–37
Wheeler SC, Petty RE (2001) The effects of stereotype activation on behavior: a review of possible mechanisms. Psychol Bull 127(6):797
Wilson WR, Peterson RA (1989) Some limits on the potency of word-of-mouth information. In: Srull TK (ed) NA—advances in consumer research, vol 16. Association for Consumer Research, pp. 23–29
Yin D, Bond S, Zhang H (2014) Anxious or angry? Effects of discrete emotions on the perceived helpfulness of online reviews. MIS Q 38(2):539–560
Zhang L, Losin EAR, Ashar YK, Koban L, Wager TD (2021) Gender biases in estimation of others’ pain. J Pain 22(9):1048–1059
Acknowledgements
This research was supported by the College of Business & Economics (CoBE), University of Wisconsin, Whitewater under the CoBE 2019 Summer Research Grant.
Ethics declarations
Competing interests
The author declares no competing interests.
Ethical approval
Approval was obtained from the Institutional Review Board (IRB) of the University of Wisconsin, Whitewater. All ethical and IRB guidelines were followed.
Informed consent
Informed consent was obtained from all participants in the study.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Fan-Osuala, O. Women’s online opinions are still not as influential as those of their male peers in buying decisions. Humanit Soc Sci Commun 10, 40 (2023). https://doi.org/10.1057/s41599-023-01504-5