Although the physician survey has become an important tool for oncology-focused health services research, such surveys often achieve low response rates. This mini-review reports the results of a structured review of the literature on increasing response rates for physician surveys, as well as our own experience from a survey of United States physicians about their referral practices for suspected haematologic malignancy. PubMed and PsycINFO databases were used to identify methodological articles assessing factors that influence response rates for physician surveys; the results were tabulated and reviewed for trends. We also analysed the impact of a follow-up telephone call by a physician investigator to initial non-responders in our own mailed physician survey, comparing the characteristics of those who responded before vs after the call. The structured review suggested that monetary incentives and paper (vs web or email) surveys increase response rates. In our own survey, follow-up telephone calls increased the response rate from 43.7% to 70.5%, with little discernible difference in the characteristics of early vs late responders. We conclude that in addition to monetary incentives and paper surveys, physician-to-physician follow-up telephone calls are an effective method to increase response rates in oncology-focused physician surveys.
Although surveys of practising physicians are a valuable source of data to help guide health care policy, they typically have poor response rates (Cummings et al, 2001; Kellerman and Herold, 2001; VanGeest et al, 2007). Specifically, the average response rate of physicians to mailed surveys has traditionally been demonstrated to be only 54% to 58% (Martin, 1974; Asch et al, 1997; Cook et al, 2009) – 14% lower than that of non-physicians – and appears to be getting worse in the era of modern multimedia communications (Cull et al, 2005; Cook et al, 2009). In addition to their potential impact on population-based health care decisions, physician surveys often form the cornerstone of quality improvement efforts, as such efforts cannot take place without a reliable assessment of providers’ current practices and attitudes. If physician non-response leads to survey bias, resulting policy and practice decisions may not accurately represent the views and practices of the target population being sampled.
In oncology, as in other areas of medicine, physician surveys are increasingly being undertaken to assess patterns and quality of cancer care. Recent oncology-related physician surveys published in high-profile journals include an assessment of attitudes of American vs Canadian oncologists as to the cost-effectiveness of new cancer drugs (response rate 59% (Berry et al, 2010)), primary care physicians’ (PCPs’) (Del Giudice et al, 2009) and oncologists’ (Greenfield et al, 2009) views of appropriate follow-up care for cancer survivors (response rate 52% for the former and 36% for the latter), an assessment of surgeons’, medical oncologists’ and radiation oncologists’ involvement in clinical trials (response rate 61% (Klabunde et al, 2011)) and a survey of oncologists’ views regarding communicating the costs of chemotherapy to patients (response rate 31.5% (Schrag and Hanger, 2007)). As can be seen, physician response rates in these studies vary widely; clearly, those with higher response rates have the potential to be much more influential in informing these diverse areas of cancer-related health care policy.
In their comprehensive literature review, Edwards et al (2009) found that successful methods for increasing response rates to postal surveys include monetary incentives, use of shorter questionnaires, follow-up contact, and reply envelopes that contain a stamp rather than metered postage. Responses to electronic surveys were found to be increased with non-monetary incentives, shorter surveys, a lottery with instant notification of results, and exclusion of the word ‘survey’ in the email invitation subject line. Although the analysis by Edwards et al (2009) is informative, <10% of the 513 survey studies reviewed included physician respondents. Indeed, relatively few studies have specifically examined strategies to increase physicians’ responses to surveys (Field et al, 2002), and the last major review of the literature was published in 2007 (VanGeest et al, 2007).
Our goal was to review the current literature relating to obtaining high physician-survey response rates, with an eye towards improving such efforts in oncology-related research. We then aimed to present our own experience increasing the response rate of a survey targeting primary care physician behaviours with respect to referral for suspected haematologic cancers. We specifically desired to present our data to an audience of clinicians and investigators likely to undertake such surveys as part of their clinical quality improvement efforts or cancer-related health services research.
Structured literature review
English language experimental studies and literature reviews of methods to improve physician response rates were identified through searches of PubMed and PsycINFO databases, focusing on the years 2000 to 2010. We chose this time frame because we felt that prior reviews had sufficiently examined older work, and because we wanted to focus our analysis on more recent studies that would be most likely to assess both postal and electronic approaches.
Keywords used in binary combinations included: ‘physician survey’, ‘response rate’, ‘improved’, ‘questionnaire’, ‘incentives’, ‘Internet’, ‘web’, ‘mail’, and ‘postal’. Eight prior review articles regarding physician surveys (Cummings et al, 2001; Kellerman and Herold, 2001; McColl et al, 2001; Field et al, 2002; Braithwaite et al, 2003; Cull et al, 2005; VanGeest et al, 2007; Cook et al, 2009) were also assessed to identify additional primary papers. A review of relevant abstracts of primary and secondary searches revealed 38 that focused on experimental studies specifically examining factors that affect physician response rates; the full texts of these were obtained for further review (Table 1). Despite the fact that many surveys of physicians both within and outside of cancer medicine have achieved excellent response rates, we rejected articles that did not specifically compare methods for improving response rates using the same survey and physician sample. We did so because we did not feel we could rigorously compare the methods used across different analyses, given the many disparate topics and samples studied.
Physician survey case study
From April to August 2010, we surveyed PCPs in Massachusetts regarding their practice patterns with respect to the diagnosis and referral of patients with suspected haematologic malignancy. Our survey was designed to determine the approximate number of patients seen in the last year whom PCPs suspected might have haematologic malignancy, the frequency of formal specialty referral for those patients, and the frequency of informal curbside consultation. PCPs were also queried about the factors that influence their choice of specialist, and about the information exchange with the specialist.
The names of 6836 Massachusetts physicians were obtained from the American Medical Association; 375 of these were randomly selected for inclusion in the survey. We then searched the Massachusetts Board of Registration in Medicine online directory to verify that the physicians met the study's eligibility criteria: (1) currently in practice in Massachusetts; (2) graduated from medical school in 2005 or earlier; (3) listed specialty or board-certified in internal medicine, general medicine, family medicine or geriatrics; and (4) no non-primary care subspecialty listed. Investigating each name on the initial list took approximately 3.6 min, for a total of 22.5 h spent on cleaning procedures. The final pre-contact eligible sample consisted of 250 physicians. Of these, 60 reported upon contact that they did not engage in primary care and were reclassified as ineligible. The final eligible sample included 190 PCPs.
Each physician received a package delivered to her/his office using FedEx courier services, identifying the study physician-investigator (GAA) as the sender. The package included a letter inviting the physician to participate, a printed survey, opt-out card, and a pre-paid, self-addressed return envelope. The letter directed participants to either fill out and return the paper survey, or log-on to a secure website to complete the survey over the Internet. The opt-out card allowed physicians to report that they either declined to participate or that they were ineligible because they did not engage in primary care. Reminder postcards were sent to those physicians who had not yet completed the survey 2 weeks later. Three weeks after that, a second package containing the same materials and instructions was sent to all physicians who had not responded. Physicians who responded after these first three solicitations were termed ‘early responders’.
Seven weeks after the initial package was sent, the study's principal investigator (GAA), a medical oncologist, telephoned each physician who had not yet responded. If a potential physician respondent was not available, the study physician either left a message asking for his call to be returned or, if directed to a voicemail system, left a more detailed message about the survey itself. Potential physician respondents who were not reached during the first round of telephone calls were called again approximately 2–3 weeks after the initial call. Physicians who responded after the telephone calls were termed ‘late responders’. Regardless of recruitment methodology, those who completed the survey received a $100 VISA gift card by mail.
After recruitment was complete, we assessed the overall response rate, as well as response rates before and after the follow-up telephone calls. Next, using χ2 analysis or Fisher's exact test, depending upon how many subjects were available for each category, we analysed whether there were differences in self-reported characteristics among early vs late responders (gender, age, race, ethnicity, years post residency and practice type) and whether there were differences in characteristics obtained from the Massachusetts Board of Registration in Medicine website (gender, practice type, medical school location and years since graduation) among responders vs non-responders.
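The choice between the χ2 and Fisher's exact test described above can be sketched directly. The sketch below uses invented counts, not data from our survey; it computes expected cell counts for a 2×2 table and falls back to Fisher's exact test when any expected count is below the conventional threshold of 5 (a real analysis would use a statistics package that also supplies P-values).

```python
# Minimal sketch of the test-selection rule (hypothetical counts, not study data).
def expected_counts(a, b, c, d):
    """Expected cell counts for a 2x2 table [[a, b], [c, d]] under independence."""
    n = a + b + c + d
    return [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
            (c + d) * (a + c) / n, (c + d) * (b + d) / n]

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# e.g. early vs late responders (rows) by gender (columns) -- invented numbers
table = (30, 53, 15, 36)
if min(expected_counts(*table)) < 5:
    print("small expected counts: use Fisher's exact test")
else:
    print(round(chi2_2x2(*table), 2))  # → 0.64
```

The threshold-of-5 rule is the common textbook heuristic; with the small subgroup counts typical of physician surveys, the Fisher branch is often the one that applies.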
Structured literature review
We found that studies of physician response rates generally have tested the effects of the mode of survey, type of incentive, or other interventions (Table 1). The interventions examined varied greatly, but monetary incentives were generally effective (9/11 positive studies), and paper surveys engendered more responses than surveys delivered in other formats such as email (7/8 positive studies). Interestingly, one study demonstrated that response rates were even better with a mailed survey that had an option to respond by email, a so-called ‘mixed methods’ approach (Seguin et al, 2004).
When using an incentive, the studies suggested that it is better to ‘pre-pay’ by sending the incentive with the survey itself vs ‘post-pay’ after completion (Leung et al, 2002), and that cash is preferable to a gift (such as a pen (Clark et al, 2001b)). In addition, a personalised cover letter stressing the importance of that individual physician's reply was shown to result in a better response rate (Leece et al, 2006). Data on the use of enrollment in a lottery as an incentive were more mixed. One study suggested that enrollment in a lottery in exchange for completing a survey ($500 Canadian) was better than nothing at all (Baron et al, 2001), but another found that even a small incentive given to all ($2 US upfront) was better than the chance of enrollment in a lottery for a bigger prize ($250; Tamayo-Sarver and Baker, 2004).
Interestingly, some factors that one might assume would lead to a better response rate did not always help and could even be detrimental; for example, one study demonstrated that the addition of a letter featuring the endorsement of the survey by an expert led to significantly lower primary response rates (Bhandari et al, 2003). Other factors that may have a positive effect included shorter survey word length (Jepson et al, 2005) and inclusion of a stamped return envelope vs a business return envelope (Streiff et al, 2001). This final analysis was the only one to present methodological data from a study of haematologists or oncologists (a mailed survey of 3000 members of the American Society of Hematology to assess their approach to diagnosis and treatment of polycythemia vera; response rate was 38% with the stamped envelope).
Physician survey case study
In our own survey, follow-up telephone calls from the physician investigator increased physician response rates from 43.7% to 70.5%. In total, these phone calls took approximately 20 h, for an average of 23.5 min of physician effort required to recruit each additional participant. Early and late responders did not differ in age, race, ethnicity, years since residency or practice type (Table 2; all P>0.05). In contrast, female physicians were more likely to be early responders (P<0.01, Table 2).
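The per-recruit effort figure above can be reproduced from the numbers already given (190 eligible PCPs, response rates of 43.7% before and 70.5% after the calls, and roughly 20 h of calling):

```python
# Reproducing the effort-per-recruit arithmetic from the counts in the text.
eligible = 190
early = round(0.437 * eligible)         # responders before the phone calls (83)
late = round(0.705 * eligible)          # responders after the phone calls (134)
gained = late - early                   # participants added by the calls (51)
minutes_per_recruit = 20 * 60 / gained  # ~20 h of calling, in minutes
print(gained, round(minutes_per_recruit, 1))  # → 51 23.5
```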
Comparing responders to non-responders, we found similar proportions trained in foreign medical schools (24% for responders vs 27% for non-responders; χ2=0.17, ns) and a similar distribution among the two groups of family medicine, general practice, and internal medicine practices (χ2=4.45, ns). The proportions of males and females were reversed between responders and non-responders (χ2=6.62, P<0.05) such that 40% of responders were female and 60% were male, whereas non-responders were 61% female and 39% male. Finally, those who had graduated from medical school within the past 10 years were significantly more likely to respond (91%) than those who graduated more than 11 years before (65% to 67% for those 11 to 20 years, 21 to 30 years, or 31+ years post graduation; χ2=8.02, P<0.05).
Our literature review revealed that several factors have the potential to increase response rates to physician surveys, such as the inclusion of monetary incentives and the use of paper vs web or email formats. Several other items – from shorter survey word length to the use of a personalised cover letter – were also demonstrated to help. In addition, our case study suggested that telephone calls made by a physician investigator to potential physician respondents may greatly increase response rates among initial non-responders.
We found little difference between early and late responders with respect to most socio-demographic dimensions in our survey. This finding is reassuring, as it suggests that medical peer follow-up calls may not greatly change the characteristics of those who ultimately respond. On the other hand, although effective, personal calls by physician researchers are costly, and unlikely to be feasible for the large samples that are sometimes encountered in oncology-related health services research. Unfortunately, whether or not a follow-up telephone call by a research assistant or other non-peer clinician can capture some of that benefit with respect to increasing response rates remains unclear.
Two older studies (before 2000 so not included in our literature review above) assessed the effect of direct follow-up contact from a medical peer on physician survey response rates. The first found that follow-up telephone calls by investigating physicians to PCPs improved response rates from 62% to 92% (Bostick et al, 1992). In the other – a study of PCPs regarding their oncology consultation practices – response rates were increased from 44% to 78% after follow-up telephone calls from a medical peer to initial non-responders (Heywood et al, 1995). Our case study demonstrated a slightly smaller increase in response rate (27% vs 30% and 34% in the earlier studies); however, we may conclude that despite the modern milieu of email, text messaging, and social media, a follow-up peer-to-peer telephone call still has an important role in terms of assuring high physician response rates. Our results also correspond to the broader survey literature that suggests follow-up contact is essential (Edwards et al, 2009).
We found that overall, respondents (early and late) were more likely to be recent graduates and also to be male. Although the former finding is consistent with prior studies – perhaps because, as more recent licensees, younger graduates' specialty and contact information obtained from public sources is more likely to be accurate (Kellerman and Herold, 2001; Barclay et al, 2002; Cull et al, 2005) – these same studies have shown that female physicians are generally more likely to respond. On the other hand, our own gender results are consistent with another large survey that used the American Medical Association physician file (McFarlane et al, 2007), which suggests that our source of respondents may have had a role.
Other than one analysis (Streiff et al, 2001), we found no other examples of methodological studies specifically assessing how to increase response rates for surveys of oncology specialty physicians. Although oncology-related surveys of PCPs can make use of the general literature on surveying physicians (as we ourselves did in our case example), additional strategies may be important to increase response rates from oncology specialists. With respect to the latter, empirical research is clearly needed (e.g., focus groups, key informant interviews or even surveys of oncologists). Possible strategies that may emerge include having the survey endorsed by an oncology specialty society (ours was not) or administered at a national oncology meeting (ours was not). Still, it may be that a ‘one size fits all’ strategy will not be the answer in oncology, and that tailoring the approach to the specific target physician population and investigative aims will dictate the best method.
Our own survey experience illustrates the importance of using a ‘clean’ sample, where attempts at verification of eligibility are made before contacting potential respondents. Indeed, despite our extensive efforts, we still contacted some physicians who were ultimately ineligible. Although such sample cleaning is time consuming and expensive, it is necessary to ensure that a pool of respondents representative of the target population is obtained, irrespective of the sample size. In addition, such cleaning can help the ultimate response rate: when an ineligible physician does not respond and that status is never confirmed, he or she must be included in the response rate denominator, which lowers the reported rate. Our study also speaks to the utility of the mixed-methods approach (both postal and electronic options for reply), which may be the best way to obtain a high response rate from physicians (Beebe et al, 2007; Sprague et al, 2009) especially as Internet-only (Leece et al, 2004) and email-only (McMahon et al, 2003) approaches have been suffering from lower response rates compared with mailed surveys.
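The denominator effect just described can be made concrete with the counts from our own sample (250 physicians contacted, 60 confirmed ineligible, and roughly 134 completed surveys; the responder count is inferred here from the reported 70.5% of the 190 eligible PCPs):

```python
# Response rate with and without confirmed ineligibles in the denominator,
# using the case study's counts (134 responders inferred from 70.5% of 190).
contacted = 250     # pre-contact sample
ineligible = 60     # confirmed ineligible upon contact
responders = 134    # completed surveys

naive = responders / contacted                    # ineligibles left in
adjusted = responders / (contacted - ineligible)  # ineligibles removed
print(round(naive * 100, 1), round(adjusted * 100, 1))  # → 53.6 70.5
```

Confirming ineligibility thus raised the reportable response rate by roughly 17 percentage points in our case, quite apart from its role in keeping the sample representative.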
We recognise limitations to our work. First, it is conceivable that some analyses of factors that impact physician survey response rates may have been missed in our structured literature review. Indeed, our search terms were broad, and deciding which studies to include as primarily ‘methodological’ was necessarily subjective. Second, in our case study, the principal investigator was an oncologist telephoning PCPs, and it may be that increases in response rates would have differed if he were also a PCP (possibly better) or if he were telephoning fellow oncologists (possibly better). Third, it was not possible to determine whether the difference in response rates found between early and late responders in our case study was statistically significant (although the magnitude of the difference suggests it was), because the latter group included the former, and thus the two were not independent groups for which there is an appropriate statistical test. Finally, Massachusetts is a state with universal health care and a dense network of hospitals and physicians. Certainly, follow-up calls from a study physician may have different effects on physician response rates in states or countries with a different health care environment.
In summary, as the landscape of clinical practice, health insurance and health care policy evolves, it is likely that physicians will be solicited more often to complete surveys. The use of survey methods that include physicians will also likely increase in cancer medicine, a field with many health services issues ripe for study using such methods. Our work yields several recommendations for the oncology-focused physician survey. First, using a mailed survey (usually sent by a courier company such as FedEx) makes sense, with an option to complete it via email or the Internet. Second, we recommend a personalised letter including an upfront monetary incentive if possible. Third, paying attention to details such as shorter survey word length and stamped return postage vs a business reply envelope may be important. Finally, follow-up contact should proceed on a regular schedule, and a follow-up call by a peer physician-investigator, when feasible, may be a particularly effective tool.
Asch DA, Jedrziewski MK, Christakis NA ( 1997 ) Response rates to mail surveys published in medical journals . J Clin Epidemiol 50 (10) : 1129 – 1136
Barclay S, Todd C, Finlay I, Grande G, Wyatt P ( 2002 ) Not another questionnaire! maximizing the response rate, predicting non-response and assessing non-response bias in postal questionnaire studies of GPs . Fam Pract 19 (1) : 105 – 111
Baron G, De Wals P, Milord F ( 2001 ) Cost-effectiveness of a lottery for increasing physicians’ responses to a mail survey . Eval Health Prof 24 (1) : 47 – 52
Beebe TJ, Locke III GR, Barnes SA, Davern ME, Anderson KJ ( 2007 ) Mixing web and mail methods in a survey of physicians . Health Serv Res 42 (3 Part 1) : 1219 – 1234
Bergk V, Gasse C, Schnell R, Haefeli WE ( 2005 ) Mail surveys: obsolescent model or valuable instrument in general practice research? Swiss Med Wkly 135 (13-14) : 189 – 191
Berry SR, Bell CM, Ubel PA, Evans WK, Nadler E, Strevel EL, Neumann PJ ( 2010 ) Continental divide? the attitudes of US and Canadian oncologists on the costs, cost-effectiveness, and health policies associated with new cancer drugs . J Clin Oncol 28 (27) : 4149 – 4153
Bhandari M, Devereaux PJ, Swiontkowski MF, Schemitsch EH, Shankardass K, Sprague S, Guyatt GH ( 2003 ) A randomized trial of opinion leader endorsement in a survey of orthopaedic surgeons: effect on primary response rates . Int J Epidemiol 32 (4) : 634 – 636
Bostick RM, Pirie PH, Luepker RV, Kofron PM ( 1992 ) Using physician caller follow-ups to improve the response rate to a physician telephone survey . Eval Health Prof 15 : 420 – 433
Braithwaite D, Emery J, De Lusignan S, Sutton S ( 2003 ) Using the internet to conduct surveys of health professionals: a valid alternative? Fam Pract 20 (5) : 545 – 551
Brehaut JC, Graham ID, Visentin L, Stiell IG ( 2006 ) Print format and sender recognition were related to survey completion rate . J Clin Epidemiol 59 (6) : 635 – 641
Burt CW, Woodwell D ( 2003 ) Tests of methods to improve response to physician surveys . Paper presented at the 2005 Federal Committee on Statistical Methodology, Arlington, VA (available at http://www.fcsm.gov/05papers/Burt_Woodwell_VIIB.pdf )
Clark TJ, Khan KS, Gupta JK ( 2001a ) Effect of paper quality on the response rate to a postal survey: a randomised controlled trial. ISRCTN32032031 . BMC Med Res Methodol 1 : 12
Clark TJ, Khan KS, Gupta JK ( 2001b ) Provision of pen along with questionnaire does not increase the response rate to a postal survey: a randomised controlled trial . J Epidemiol Community Health 55 (8) : 595 – 596
Cook JV, Dickinson HO, Eccles MP ( 2009 ) Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study . BMC Health Serv Res 9 : 160
Cull WL, O'Connor KG, Sharp S, Tang SF ( 2005 ) Response rates and response bias for 50 surveys of pediatricians . Health Serv Res 40 (1) : 213 – 226
Cummings SM, Savitz LA, Konrad TR ( 2001 ) Reported response rates to mailed physician questionnaires . Health Serv Res 35 (6) : 1347 – 1355
Del Giudice ME, Grunfeld E, Harvey BJ, Piliotis E, Verma S ( 2009 ) Primary care physicians’ views of routine follow-up care of cancer survivors . J Clin Oncol 27 (20) : 3338 – 3345
Delnevo CD, Abatemarco DJ, Steinberg MB ( 2004 ) Physician response rates to a mail survey by specialty and timing of incentive . Am J Prev Med 26 (3) : 234 – 236
Drummond FJ, Sharp L, Carsin AE, Kelleher T, Comber H ( 2008 ) Questionnaire order significantly increased response to a postal survey sent to primary care physicians . J Clin Epidemiol 61 (2) : 177 – 185
Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S ( 2009 ) Methods to increase response to postal and electronic questionnaires . Cochrane Database Syst Rev (3) : MR000008
Field TS, Cadoret CA, Brown ML, Ford M, Greene SM, Hill D, Hornbrook MC, Meenan RT, White MJ, Zapka JM ( 2002 ) Surveying physicians: do components of the ‘total design approach’ to optimizing survey response rates apply to physicians? Med Care 40 (7) : 596 – 605
Gattellari M, Ward JE ( 2001 ) Will donations to their learned college increase surgeons’ participation in surveys? a randomized trial . J Clin Epidemiol 54 (6) : 645 – 649
Grava-Gubins I, Scott S ( 2008 ) Effects of various methodologic strategies: survey response rates among Canadian physicians and physicians-in-training . Can Fam Physician 54 (10) : 1424 – 1430
Greenfield DM, Absolom K, Eiser C, Walters SJ, Michel G, Hancock BW, Snowden JA, Coleman RE ( 2009 ) Follow-up care for cancer survivors: the views of clinicians . Br J Cancer 101 (4) : 568 – 574
Halpern SD, Ubel PA, Berlin JA, Asch DA ( 2002 ) Randomized trial of 5 dollars versus 10 dollars monetary incentives, envelope size, and candy to increase physician response rates to mailed questionnaires . Med Care 40 (9) : 834 – 839
Heywood A, Mudge P, Ring I, Sanson-Fisher R ( 1995 ) Reducing systematic bias in studies of general practitioners: the use of a medical peer in the recruitment of general practitioners in research . Fam Pract 12 (2) : 227 – 231
Hocking JS, Lim MS, Read T, Hellard M ( 2006 ) Postal surveys of physicians gave superior response rates over telephone interviews in a randomized trial . J Clin Epidemiol 59 (5) : 521 – 524
Jepson C, Asch DA, Hershey JC, Ubel PA ( 2005 ) In a mailed physician survey, questionnaire length had a threshold effect on response rate . J Clin Epidemiol 58 (1) : 103 – 105
Jiwa M, Coker AE, Bagley J, Freeman J, Coleman M ( 2004 ) Surveying general practitioners: a new avenue . Curr Med Res Opin 20 (3) : 319 – 324
Keating NL, Zaslavsky AM, Goldstein J, West DW, Ayanian JZ ( 2008 ) Randomized trial of $20 versus $50 incentives to increase physician survey response rates . Med Care 46 (8) : 878 – 881
Kellerman SE, Herold J ( 2001 ) Physician response to surveys. a review of the literature . Am J Prev Med 20 (1) : 61 – 67
Klabunde CN, Keating NL, Potosky AL, Ambs A, He Y, Hornbrook MC, Ganz PA ( 2011 ) A population-based assessment of specialty physician involvement in cancer clinical trials . J Natl Cancer Inst 103 (5) : 384 – 397
Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P ( 2006 ) Does flattery work? a comparison of 2 different cover letters for an international survey of orthopedic surgeons . Can J Surg 49 (2) : 90 – 95
Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P, Devereaux PJ, Guyatt GH ( 2004 ) Internet versus mailed questionnaires: a randomized comparison (2) . J Med Internet Res 6 (3) : E30
Lensing SY, Gillaspy SR, Simpson PM, Jones SM, James JM ( 2000 ) Encouraging physicians to respond to surveys through the use of fax technology . Eval Health Prof 23 (3) : 349 – 360
Leung GM, Ho LM, Chan MF, Johnston JM, Wong FK ( 2002 ) The effects of cash and lottery incentives on mailed surveys to physicians: a randomized trial . J Clin Epidemiol 55 (8) : 801 – 807
Leung GM, Johnston JM, Saing H, Tin KY, Wong IO, Ho LM ( 2004 ) Prepayment was superior to postpayment cash incentives in a randomized postal survey among physicians . J Clin Epidemiol 57 (8) : 777 – 784
Martin BC ( 1974 ) Don’t Survey Physicians! Center For Health Services Research And Development, American Medical Association: Chicago, IL
McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J ( 2001 ) Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients . Health Technol Assess 5 (31) : 1 – 256
McDermott MM, Greenland P, Hahn EA, Brogan D, Cella D, Ockene J, Pearce WH, Criqui MH, Hirsch A, Lipsky M, Odom L, Hanley K, Khan S ( 2003 ) The effects of continuing medical education credits on physician response rates to a mailed questionnaire . Health Mark Q 20 (4) : 27 – 42
McFarlane E, Olmsted MG, Murphy J, Hill CA ( 2007 ) Nonresponse bias in a mail survey of physicians . Eval Health Prof 30 (2) : 170 – 185
McKenzie-McHarg K, Tully L, Gates S, Ayers S, Brocklehurst P ( 2005 ) Effect on survey response rate of hand written versus printed signature on a covering letter: randomised controlled trial [ISRCTN67566265] . BMC Health Serv Res 5 : 52
McMahon SR, Iwamoto M, Massoudi MS, Yusuf HR, Stevenson JM, David F, Chu SY, Pickering LK ( 2003 ) Comparison of e-mail, fax, and postal surveys of pediatricians . Pediatrics 111 (4 Part 1) : E299 – E303
Moses SH, Clark TJ ( 2004 ) Effect of prize draw incentive on the response rate to a postal survey of obstetricians and gynaecologists: a randomised controlled trial [ISRCTN32823119] . BMC Health Serv Res 4 (1) : 14
Puleo E, Zapka J, White MJ, Mouchawar J, Somkin C, Taplin S ( 2002 ) Caffeine, cajoling, and other strategies to maximize clinician survey response rates . Eval Health Prof 25 (2) : 169 – 184
Raziano DB, Jayadevappa R, Valenzula D, Weiner M, Lavizzo-Mourey R ( 2001 ) E-mail versus conventional postal mail survey of geriatric chiefs . Gerontologist 41 (6) : 799 – 804
Recklitis CJ, Campbell EG, Kutner JS, Bober SL ( 2009 ) Money talks: non-monetary incentive and internet administration fail to increase response rates to a physician survey . J Clin Epidemiol 62 (2) : 224 – 226
Robertson J, Walkom EJ, Mcgettigan P ( 2005 ) Response rates and representativeness: a lottery incentive improves physician survey return rates . Pharmacoepidemiol Drug Saf 14 (8) : 571 – 577
Schrag D, Hanger M ( 2007 ) Medical oncologists’ views on communicating with patients about chemotherapy costs: a pilot survey . J Clin Oncol 25 (2) : 233 – 237
Seguin R, Godwin M, Macdonald S, Mccall M ( 2004 ) E-mail or snail mail? randomized controlled trial on which works better for surveys . Can Fam Physician 50 : 414 – 419
Sprague S, Quigley L, Bhandari M ( 2009 ) Survey design in orthopaedic surgery: getting surgeons to respond . J Bone Joint Surg Am 91 (Suppl 3) : 27 – 34
Streiff MB, Dundes L, Spivak JL ( 2001 ) A mail survey of United States hematologists and oncologists: a comparison of business reply versus stamped return envelopes . J Clin Epidemiol 54 (4) : 430 – 432
Tamayo-Sarver JH, Baker DW ( 2004 ) Comparison of responses to a US 2 dollar bill versus a chance to win 250 US dollars in a mail survey of emergency physicians . Acad Emerg Med 11 (8) : 888 – 891
Thomson CE, Paterson-Brown S, Russell D, Mccaldin D, Russell IT ( 2004 ) Short report: encouraging GPs to complete postal questionnaires--one big prize or many small prizes? A randomized controlled trial . Fam Pract 21 (6) : 697 – 698
Thorpe C, Ryan B, Mclean SL, Burt A, Stewart M, Brown JB, Reid GJ, Harris S ( 2009 ) How to obtain excellent response rates when surveying physicians . Fam Pract 26 (1) : 65 – 68
VanDenKerkhof EG, Parlow JL, Goldstein DH, Milne B ( 2004 ) In Canada, anesthesiologists are less likely to respond to an electronic, compared to a paper questionnaire . Can J Anaesth 51 (5) : 449 – 454
VanGeest JB, Johnson TP, Welch VL ( 2007 ) Methodologies for improving response rates in surveys of physicians: a systematic review . Eval Health Prof 30 (4) : 303 – 321
VanGeest JB, Wynia MK, Cummins DS, Wilson IB ( 2001 ) Effects of different monetary incentives on the return rate of a national mail survey of physicians . Med Care 39 (2) : 197 – 201
This work was partially supported by a contract from the Division of Cancer Prevention and Control of the Centers for Disease Control and Prevention (CDC) in Atlanta GA. The findings and conclusions do not necessarily represent the CDC's views.
Martins, Y., Lederman, R., Lowenstein, C. et al. Increasing response rates from physicians in oncology research: a structured literature review and data from a recent physician survey. Br J Cancer 106, 1021–1026 (2012). https://doi.org/10.1038/bjc.2012.28