Main

The scientific abstract program of the United States and Canadian Academy of Pathology (USCAP) annual meeting is an important forum for sharing recent advances in pathology, publishing more than 1500 abstracts each year. Abstracts are a valuable source of information; however, articles in peer-reviewed journals undergo more scrutiny and are easier to access through PubMed and other citation indices. Numerous studies presented as abstracts at scientific meetings are never published in peer-reviewed journals.1 There are many drawbacks associated with this phenomenon. Abstracts that are not subsequently published as articles may not reach their maximal potential readership, limiting their benefit to the medical community and potentially leading to duplicated studies. Abstracts may also lack detail in the description of materials and methods, making it difficult for other researchers to reproduce and validate the results. Knowing the final outcome of meeting abstracts is therefore of interest not only to attendees but also to meeting organizers, as it could serve as a quality assurance measure for their abstract selection process.

We retrospectively analyzed the outcome of abstracts presented at recent USCAP annual meetings. The rate of publication in peer-reviewed journals was determined. We also analyzed several factors that may correlate with the outcome, including subspecialty, presentation format, country of origin, and the use of statistical methods.

Materials and methods

All USCAP abstracts presented at the 2005 (1576), 2006 (1588), and 2007 (1660) annual meetings were retrieved from the corresponding Modern Pathology supplemental issues at the journal's website. These abstracts were compiled in PDF files by subspecialty. We developed a Perl program that automatically parsed these PDF files and extracted each abstract into the following fields: identifier, title, authors, institutions, and main text. These fields were written to temporary text files and manually checked to ensure the program was running correctly.
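A minimal Perl sketch of this parsing step is shown below. It assumes the PDF has already been converted to plain text (for example, with pdftotext) and that each abstract begins with a numeric identifier followed by one line each for the title, authors, and institutions; the actual layout of the supplement differs in detail, so the record format here is illustrative only.

```perl
#!/usr/bin/perl
# Illustrative parser: split plain text extracted from a supplement PDF
# into abstracts and capture the fields named above. The record layout
# (numeric identifier and title on the first line, then one line each
# for authors and institutions) is an assumption for illustration.
use strict;
use warnings;

local $/;                      # slurp mode: read the whole file at once
my $text = <>;                 # e.g. output of `pdftotext supplement.pdf -`

# Assume each abstract starts with a line like "123 Title of the Abstract".
my @records = split /(?=^\d+\s+\S)/m, $text;

for my $rec (@records) {
    my ($id, $title, $authors, $institutions, $body) =
        $rec =~ /\A(\d+)\s+(.+?)\n(.+?)\n(.+?)\n(.*)\z/s or next;
    $body =~ s/\s+/ /g;        # collapse line breaks within the main text
    print join("\t", $id, $title, $authors, $institutions, $body), "\n";
}
```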

For each abstract, a custom Perl program searched PubMed by the names of the first and last authors, using the NCBI (National Center for Biotechnology Information) Entrez Programming Utilities. We limited the search to a 3-year time frame starting from the October before the USCAP meeting (approximately the abstract submission deadline). The search results, if any, were then retrieved by their unique PMIDs in XML format.
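The sketch below illustrates this search step using the E-utilities esearch and efetch services; the author names and the date window are placeholders, and the XML handling is deliberately crude.

```perl
#!/usr/bin/perl
# Illustrative PubMed query through the NCBI E-utilities. The author
# names and the date window are placeholders (hypothetical values).
use strict;
use warnings;
use LWP::Simple qw(get);
use URI::Escape qw(uri_escape);

my $base = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils';

# First/last author search; for a 2005 abstract the window would run
# from October 2004 to October 2007, as described above.
my $term  = uri_escape('Smith J[1au] AND Jones K[lastau]');
my $query = "$base/esearch.fcgi?db=pubmed&term=$term"
          . '&datetype=pdat&mindate=2004/10/01&maxdate=2007/10/01&retmax=100';

my $xml = get($query) or die "esearch failed\n";
my @pmids = $xml =~ m{<Id>(\d+)</Id>}g;     # crude but adequate XML parsing

if (@pmids) {
    # Retrieve the full candidate records by PMID, in XML format.
    print get("$base/efetch.fcgi?db=pubmed&id="
              . join(',', @pmids) . '&retmode=xml');
}
sleep 1;    # stay within NCBI's request-rate guidelines
```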

All PubMed search results were manually reviewed to determine whether an article was the true match of its query USCAP abstract. Only articles that described the same study, with consistent results and conclusions, were accepted as true matches. Unrelated articles and articles inconsistent with their query abstracts were rejected. This review process was greatly facilitated by a custom computer program that highlighted, in color, all keywords matched between the articles and the query abstract. The program also sorted the articles by their probability of being true matches, calculated from the number and frequency of unique matched keywords.
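The following sketch illustrates one way such a matching score can be computed; the stop list and the scoring formula are illustrative assumptions, not the exact ones used by our program.

```perl
#!/usr/bin/perl
# Illustrative matching heuristic: score each candidate article by the
# number and frequency of keywords it shares with the query abstract.
use strict;
use warnings;

# A tiny stop list of words too common to be informative (illustrative).
my %stop = map { $_ => 1 } qw(the a an of and in with for was were is are);

sub keywords {                         # word-frequency profile of a text
    my ($text) = @_;
    my %freq;
    $freq{ lc $_ }++ for grep { length($_) > 3 && !$stop{ lc $_ } }
                        $text =~ /([A-Za-z][A-Za-z-]+)/g;
    return \%freq;
}

sub match_score {                      # shared keywords, weighted by frequency
    my ($abstract, $article) = @_;
    my ($q, $c) = (keywords($abstract), keywords($article));
    my $score = 0;
    $score += $q->{$_} * $c->{$_} for grep { $c->{$_} } keys %$q;
    return $score;
}

# Candidates would then be ranked by descending score, e.g.:
# my @ranked = sort { match_score($query, $b) <=> match_score($query, $a) }
#              @candidates;
```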

Statistical analysis of potentially relevant variables, including the format of presentation, country of origin, and the use of statistical methods, was conducted with Fisher's exact tests. The format of presentation (platform or poster) was manually retrieved from the USCAP website (http://www.uscap.org/). An author's country of origin was determined by his or her affiliated institution; US institutions were identified by the names or abbreviations of the United States and individual states. To determine whether an abstract had used statistical methods, the abstract text was searched for the keywords ‘statistic*,’ ‘P-value,’ ‘P=,’ ‘P<,’ and ‘P>,’ and abstracts containing these keywords were manually checked for the use of statistical methods.
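A minimal sketch of this keyword screen is shown below; because flagged abstracts were verified manually, the pattern only needs to be sensitive, not perfectly specific.

```perl
#!/usr/bin/perl
# Illustrative keyword screen for statistical methods; matches were
# still checked by hand, so the pattern favors sensitivity.
use strict;
use warnings;

sub mentions_statistics {
    my ($text) = @_;
    return $text =~ /statistic\w*/i     # 'statistic*'
        || $text =~ /\bP-?value/i       # 'P-value'
        || $text =~ /\bP\s*[=<>]/;      # 'P=', 'P<', 'P>'
}

print mentions_statistics('Survival differed between groups (P<0.05).')
    ? "uses statistical methods\n"
    : "no statistics keywords\n";
```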

Results

Overall, 36% (1725/4824) of USCAP abstracts from 2005 to 2007 resulted in publications in peer-reviewed journals indexed by PubMed. The publication rate varied greatly among different subspecialties, ranging from 10 to 62% on a yearly basis (Table 1). The format of presentation seemed to be a significant predictor of the outcome. Abstracts categorized as platform (oral) presentations had higher publication rates than did abstracts presented as posters in all 3 years (Table 2).

Table 1 Publication rates of 2005–2007 USCAP abstracts by subspecialty
Table 2 The format of presentation is a significant predictor of abstract outcome

The other two variables did not seem to affect the outcome significantly. In terms of the country of origin, abstracts with at least one author affiliated with US institutions had a publication rate of 35.4% (1386/3916), which was similar to that of abstracts without US authors (37.3% or 339/908, P=0.28). Abstracts using statistical methods had a 36.8% (637/1733) overall publication rate, whereas the others had a similar rate of 35.2% (1088/3091, P=0.27).
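For illustration, the country-of-origin comparison can be rechecked with a short Perl implementation of the two-sided Fisher's exact test, shown below; it should give a p-value close to the reported P=0.28, although exact agreement depends on the two-sided convention used by the original software.

```perl
#!/usr/bin/perl
# Illustrative check: two-sided Fisher's exact test on the 2x2 table
# (US vs non-US origin, published vs not published) reported above.
use strict;
use warnings;

my ($us_pub,  $us_not)  = (1386, 3916 - 1386);   # US abstracts
my ($non_pub, $non_not) = (339,  908  - 339);    # non-US abstracts
my $n   = $us_pub + $us_not + $non_pub + $non_not;
my $row = $us_pub + $us_not;                     # all US abstracts
my $col = $us_pub + $non_pub;                    # all published abstracts

# Log-factorials up to n keep the hypergeometric terms numerically stable.
my @lnfact = (0);
$lnfact[$_] = $lnfact[ $_ - 1 ] + log($_) for 1 .. $n;

sub log_table {      # log P(k published US abstracts | fixed margins)
    my ($k) = @_;
    return $lnfact[$row] + $lnfact[ $n - $row ]
         + $lnfact[$col] + $lnfact[ $n - $col ] - $lnfact[$n]
         - $lnfact[$k]   - $lnfact[ $row - $k ]
         - $lnfact[ $col - $k ] - $lnfact[ $n - $row - $col + $k ];
}

# Two-sided test: sum the probabilities of all tables that are no more
# likely than the observed one.
my $obs = log_table($us_pub);
my $lo  = $row + $col > $n ? $row + $col - $n : 0;
my $hi  = $row < $col ? $row : $col;
my $p   = 0;
for my $k ($lo .. $hi) {
    my $lp = log_table($k);
    $p += exp($lp) if $lp <= $obs + 1e-7;
}
printf "two-sided Fisher's exact P = %.2f\n", $p;
```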

The 1725 published articles appeared in 255 pathology, clinical science, and basic science journals. Journals with >1% of total publications are listed in Table 3. These 19 journals together accounted for 67% of all published articles, and the top 7 of them accounted for over half. The average time interval between abstract submission and article publication was 18 months, with 25% of articles published within the first 12 months and 75% within the first 24 months.

Table 3 Journals with the largest numbers of published articles (≥1% of total)

Discussion

Overall, 36% of abstracts presented at the 2005–2007 USCAP annual meetings resulted in peer-reviewed publications during a 3-year follow-up. This rate is comparable with rates reported for other medical societies, which typically range from 30 to 50%.1, 2, 3, 4, 5, 6 The variation in reported rates may be partly due to different follow-up times, because some studies included publications as late as 5 years after abstract presentation. In a large-scale study, von Elm et al1 analyzed 19 123 abstracts presented at 234 biomedical meetings from 1957 to 1998, and found the overall publication rates after 1, 2, 3, 4, and 6 years to be 12, 27, 37, 41, and 44%, respectively. They also found that abstracts from smaller meetings or US-held meetings, and those involving basic science, reporting positive outcomes, or presented orally, were more likely to be published.1 Similar studies specific to pathology meetings are rare. Ciesla and Wojcik2 analyzed cytopathology abstracts presented in 1998 and found a 33% publication rate. Our study included all USCAP abstracts regardless of subspecialty or category, and we chose a follow-up time of 3 years. On the basis of our data, more than half of USCAP abstracts may never lead to publication in peer-reviewed journals.

Abstract publication rates varied considerably by subspecialty, ranging from 10 to 62% on a yearly basis (Table 1). This variation may suggest non-uniformity in the abstract selection process. Presentation format (platform vs poster) was the only other factor that seemed to predict the outcome. Although the difference was statistically significant, platform presentations showed only a modestly (10–15%) higher publication rate than posters. In terms of country of origin, abstracts with authors affiliated with US institutions had essentially the same publication rate as those without US authors. This is in contrast to some other studies that showed a more favorable outcome for abstracts of US origin.1, 6 Pathology studies involving diagnostic and prognostic markers frequently require statistical analysis, yet our data showed that abstracts explicitly mentioning statistical methods had approximately the same publication rate as those that did not. Other variables described in previous studies, such as the type of institution (university vs community hospital setting), positive vs negative outcome, prospective vs retrospective design, and the domain of study (such as etiology, diagnosis, prognosis, or treatment), were not investigated in our study.

One question raised by our study is why the majority of abstracts never resulted in publication. Previous studies addressing this issue have identified several reasons, such as lack of time or other resources, change in priority, rejection by journals, and incomplete results.7 The source of funding could also be a factor. It has been shown that a majority of research published by pathologists is not funded by external agencies, such as the NIH.8, 9 This unfunded research has a pivotal role in advancing the field of pathology; however, it is not known whether the lack of external funding has any impact on the ultimate publication of abstracts. It is also important to point out that the purpose of meeting abstracts is not only to share findings but also to stimulate discussion and an exchange of ideas, comments, and even criticisms. It is possible that after presenting abstracts, some authors decided not to pursue their projects further. Although many abstracts did not result in publication, it has been shown that abstracts presented at meetings had a much better chance of being published than those rejected by meetings.1 Finally, the percentage of USCAP abstracts with pathologists in training as first authors for 2005–2007 ranged from 49 to 53% (Fred Silva, personal communication). It is unclear how this figure compares with other meetings, and its impact on the publication rate was beyond the scope of this study; however, it emphasizes the important role of USCAP meetings as an educational platform.

Our data showed that the average time between a USCAP abstract and its publication was 18 months, which is again comparable with the intervals reported for other medical societies.2, 6 It is not uncommon to have a delay of several months between the acceptance and the publication of an article, although the use of electronic publication systems has ameliorated this problem to some extent.

There are some limitations and potential biases in our study. The 3-year follow-up time is arbitrary, and a longer time range would result in a somewhat higher publication rate. However, our data showed that publications peaked between the 14th and 20th months, and only a very small number occurred after the 30th month. The initial PubMed search was carried out using two authors (first and last) as the query; therefore, publications with substantial changes in authorship were likely missed. For quality assurance, we reviewed a random sample of abstracts that failed the initial PubMed search and found that <2% were false negatives. Therefore, our data constitute a reasonably accurate representation of the outcome of USCAP abstracts.

During this study, we developed a software system composed of several Perl programs that automatically parse abstracts and perform PubMed searches. The system sorts the returned articles by their probability of being a true match, calculated from the number and frequency of matched keywords, and presents the abstract and the returned articles on a formatted webpage with all matched keywords highlighted in various colors. We found this tool extremely useful in assisting human reviewers; it made it possible to screen thousands of abstracts in a relatively short period of time. There is another potential use for this software system. At present, efficient retrieval and use of meeting abstracts is hindered because abstracts are not archived in citation indices, such as PubMed, and they lack references to provide background information. Our software system could greatly enhance the utilization of abstracts by building a user-friendly, searchable database, in which abstracts would be searchable by various types of keywords and related articles could be pulled automatically from PubMed searches. A test database was built and can be accessed through an intranet or the Internet. Source code is available on request.
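A simplified sketch of the highlighting step is shown below; the color palette, the keyword list, and the page structure are illustrative assumptions rather than the exact output of our system.

```perl
#!/usr/bin/perl
# Illustrative review-page generator: wrap each matched keyword in a
# colored <span>, cycling through a small palette so different keywords
# stand out on the page.
use strict;
use warnings;

my @palette = ('#ffff99', '#99ccff', '#99ff99', '#ffcc99');

sub highlight {
    my ($text, @keywords) = @_;
    my $i = 0;
    for my $kw (@keywords) {
        my $color = $palette[ $i++ % @palette ];
        $text =~ s{\b(\Q$kw\E)\b}
                  {<span style="background:$color">$1</span>}gi;
    }
    return $text;
}

print '<html><body><p>',
      highlight('The p53 marker predicted outcome in breast carcinoma.',
                'p53', 'carcinoma'),
      "</p></body></html>\n";
```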

In summary, we have determined that 36% of the USCAP abstracts from 2005 to 2007 were published in peer-reviewed journals, and that the outcome was affected by subspecialty and presentation format.