Experimental psychologists might point to two methodological flaws in Robert Ewers’ (admittedly, tongue-in-cheek) analysis of boring scientific talks (Nature 561, 464; 2018): experimenter bias and item-specific effects.
Experimenter bias arises from the easily bored listener, who might be more likely than less-impatient audience members to be stimulated by rapid delivery of key information (Nature 529, 146–148; 2016). To avoid such bias, several independent judges should rate each talk using objective boredom-measurement scales, so that correlated ratings can be identified.
Item-specific effects result from uncontrolled sequential dependencies. If highlights are delivered early in a talk, boredom is likely to rise midway through; presenters perceiving this boredom might adjust their delivery. In timed sessions, over-length talks at the start mean that subsequent presenters need to shorten their talks — artificially leading to denser, and perhaps more interesting, presentations. And the second of two talks with similar content that are presented equally well will inevitably seem less interesting than the first.
Counterbalancing the order of items and de-correlating form and content by combining relevant features (such as voice, rate, number and order of propositions, visualization quality) would allow experimenters to identify the techniques that generate the greatest interest. Endorsing these could make meetings more stimulating.
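A minimal way to counterbalance order is a Latin-square rotation, which assigns each talk to each serial position equally often across sessions, so that no talk is systematically advantaged by being heard first. The sketch below (in Python, with purely illustrative talk labels not drawn from the letter) shows one such rotation:

```python
# Hypothetical sketch of counterbalancing talk order with a Latin square.
# Talk labels are illustrative; in practice these would be the scheduled talks.

def latin_square(n):
    """Return an n x n Latin square: row i gives the item order for session i.

    Each item index appears exactly once in every row and every column,
    so across n sessions each talk occupies each serial position once.
    """
    return [[(i + j) % n for j in range(n)] for i in range(n)]

talks = ["A", "B", "C", "D"]
orders = [[talks[k] for k in row] for row in latin_square(len(talks))]
for session, order in enumerate(orders, start=1):
    print(f"Session {session}: {order}")
```

With ratings collected under such a design, position effects (such as the first of two similar talks seeming more interesting) can be averaged out before comparing presentation techniques.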
Nature 565, 294 (2019)