INTRODUCTION

At the 2006 ACNP Annual Meeting, the ethics plenary focused on helping scientists to ‘work ethically with industry’. In 2007, the spotlight was more on industry and how schools, scientists, and journals could be protected from perceived conflicts of interest (COIs). The sessions engendered vigorous discussion and contentious debate. One strong current was whether the mere fact of a corporate relationship leads to irremediable bias. Obviously, the broad COI issue is crucially important, as it may corrupt government, regulatory processes, industries, political activities, and more. Further, it can erode social solidarity by promoting public distrust. It is unlikely that a single approach can remedy all contexts. Greater transparency has been recommended, although the focus and degree of transparency are often obscure (DeAngelis, 2006).

The public judges credibility on the basis of socially approved and warranted authority. If a properly credentialed authority states that so and so is true, then it's usually accepted. Clinical scientists have been considered disinterested, objective, factual reporters, in contrast to the advertising media whose manifest goal is to competitively display a product's virtues. Sadly, some scientists have served as the hired hands of marketing, eroding belief in scientific objectivity. This is particularly problematic regarding pharmaceutical products. Further, this issue plays out amid concerns about high prices and profiteering that are not germane to the issue of scientific COI, but arouse a general distrust of industry practices.

In this context, scientific editors are concerned that peer-reviewed journals' trustworthiness has been impugned. What are adequate remedies? Many scientific journals now require disclosures from authors concerning industry support for years past and put severe restrictions on an editor's financial interests. The current authorship disclosure form for Neuropsychopharmacology states that:

‘following the financial disclosure statement to this work, I have listed separately in the ‘disclosure/conflict of interest’ section of the manuscript … all organizations, institutions, companies and individuals from whom I have received compensation for professional services in any of the previous three years, from whom I anticipate receiving such compensation in the near future, whether or not these affiliations appear to have any relevance to the work covered in the submission.' (ACNP)

The hope is that this will ‘ensure the integrity of medical science’ and ‘convince readers about the integrity of the data and analyses presented’ (DeAngelis et al., 2001). Given its positive intent, the question remains whether this works or backfires.

The editors of JAMA have been outspoken leaders who were early to recognize the hazards of science–industry relationships. The remedy they advanced has been widely accepted. A recent editorial persuasively argues their case (DeAngelis and Fontanarosa, 2008).

‘As another mechanism to help assure complete reporting of study outcomes, the editors may request and review the original study protocol for any research investigation. These approaches should help convince readers about the integrity of the data and analyses presented, and should help eliminate uncertainty that some readers might have because of the sponsor's involvement in the research.

In an ideal world, physicians, patients, and the public would not have to be concerned about conflicts of interest related to medical research or have questions about the role of sponsors in industry-funded research. However, to respond to these current real-world concerns, THE JOURNAL will require clear reporting of authors’ financial conflicts of interest and clear description of the involvement of sponsors in medical research. Even though we recognize that these efforts are not fail-safe, we hope that such reporting will help to ensure the integrity of medical science, enable readers to interpret the results of scientific studies appropriately, and maintain public confidence in biomedical research.'

Unfortunately, this practice results in long lists of industrial relationships and investments appended to articles. This is not surprising, as the lack of federal support for clinical pharmacological research drives many leading scientists to further their work by seeking industrial support. Further, by necessity, academic psychopharmacologists have always worked with industry to develop medicines for psychiatric disorders. That it is actually National Institutes of Health (NIH) policy to foster investigator relations with industry, because of limited federal research funds, is ignored. This mutually helpful collaboration is desirable, but appears shady to those looking for misfeasance.

As an unintended by-product, such lists do not raise public trust, but rather support the conviction that scientists are hired hands, and that ‘he who pays the piper calls the tune’. The reader is left with the burden of judging a report's objectivity without any improved factual basis. This journal method, an effectively ad hominem approach, has boomeranged, in our view. Further, this tactic does not address the uproar stirred by the revelation of concealed negative studies.

The assortment of attempts at ‘transparency’ refers solely to declarations of financial interests demanded by journals, professional organizations, and federal and state research institutes. As discussed below, we believe that these demands for financial transparency do not do the job. Further, journal access to submitted studies should go beyond optional requests for original protocols.

CONFLICT OF INTEREST: A MISLEADING TERM

In the clinical psychopharmacology context, the salient public health issue is whether there has been product misrepresentation, in terms of efficacy or safety, for financial gain. Unfortunately, there have been several well-publicized instances, involving both the pharmaceutical industry and individual scientists, where misrepresentations have occurred. Some are simple deceptions, such as withholding information on a toxic side effect. A more complex process is one where corporate interests influence study design, data analysis, and interpretation. Product misrepresentation is greatly enhanced when fallacious claims derive from supposedly objective scientific methods, validated by the revered process of independent peer review in prestigious journals. The media suggest a tremendous fall in public trust in scientific effort, although factual evidence does not support this (National Science Board, 2008). However, scientists must demand a trustworthy process for scientific communication. The entire nexus of concern has acquired the unfortunate, glib label of ‘COI’, deflecting concern from the verifiable issues of factual accuracy and justifiable interpretation into the foggier realm of discerning self-serving motivation by specifying any potential for financial gain. This is a simple ad hominem aspersion, casting doubt on honest professional behavior on no other grounds than a potential for income earned through deception. This misleading term has led to a miscarried repair: the exhaustive listing by journal authors or public speakers of any remunerative link to the pharmaceutical industry, for example consultantships, research support, speakers' bureau membership, stock ownership by family members, and so on.

HAS THIS TACTIC RESTORED PUBLIC FAITH IN OUR SCIENCE?

The tactic of listing multiple income sources from industry does not restore confidence. The journals and scientific organizations, by demanding this sort of ‘transparency’, actually affirm that public suspicion of misrepresentation is warranted.

Perversely, it leads to ridiculous, counterproductive demands that only people with no industry contact should conduct studies, serve as peer reviewers, review grants, be on the DSM-V Board, and so on. This plays directly into the hands of the antipsychiatry cult, who are given the opportunity to continually shift the goalposts for moral purity. For instance, the issue is now raised that journal editors have no mechanism for assuring that authors' declarations of financial involvements are accurate. Therefore, no matter how stringent the editorial requirements for financial transparency, as there is no independent mechanism ascertaining completeness or accuracy, the suspicious reader remains suspicious. Further, as journal income derives largely from pharmaceutical advertising, the argument has been made that the editorial staff themselves cannot be trusted.

There is no evidence whatsoever that this attempt at financial transparency has had any beneficial effects in terms of restoration of public trust. A declaration of possible remunerative gain has limited utility, chiefly in presentations affirming the efficacy or safety of a specific product. The important issue, however, is whether the presentation truthfully reflects the known scientific facts and draws justifiable conclusions (Stossel, 2005; Davis et al., 2008). If that is assured, any financial benefit is of no public health importance. The focus should not be the restoration of public trust, but rather making product misrepresentation either impossible or quickly detected. Addressing the cause of mistrust, rather than highlighting assumed motivations, gets directly to the issue.

MANAGEMENT OF MISREPRESENTATION

There are guidelines to improve the design, conduct, and reporting of trials, and experimental design can be succinctly reported. However, a central problem remains: journals do not receive all the material required to verify a submitted article's conclusions. There is little editorial emphasis on ensuring the complete data access that allows independent review and acts as a strong deterrent against biased reports.

The recent requirement by a group of leading journals that clinical trials be publicly registered before the trial begins, with details of subjects, design, primary outcome measures, and so on, as a precondition for publication is a positive step. Even more powerful is a legislative mandate. The recent FDA act (January 2007, ‘FDAAA’) includes, under Title 8, Section 801, ‘The Expanded Clinical Trials Registry Data Bank’, a detailed description of a mandated, publicly available clinical trial registry. Further, not later than 18 months after the date of enactment, the Director of NIH shall ensure that the public may search the entries of the registry data bank (of all) ‘clinical trials primary or secondary outcomes’ … ‘as the Director deems necessary on an ongoing basis’.

Further, ‘for those clinical trials that form the primary basis of an efficacy claim’ there must be included ‘the primary and secondary measures and the tables of values for each of the primary and secondary outcome measures for each arm of the clinical trial, including the results of the appropriate tests of the statistical significance.’
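
Purely to make concrete the kind of structured record these provisions imply, the following is a hypothetical sketch, in Python, of what a registry entry with results might contain. The field names and values are our own illustrative assumptions, not the statutory schema or that of ClinicalTrials.gov.

```python
# Hypothetical registry entry with results; field names and values are
# illustrative only, not the statutory or ClinicalTrials.gov schema.
registry_entry = {
    "registry_id": "NCT00000000",               # placeholder identifier
    "sponsor": "Example Pharma, Inc.",          # hypothetical sponsor
    "registered_before_first_enrollment": True,
    "design": {
        "allocation": "randomized",
        "masking": "double-blind",
        "arms": ["drug 20 mg/day", "placebo"],
        "planned_enrollment": 300,
    },
    "primary_outcome": {
        "measure": "change in HAM-D total score at week 8",
        # 'Table of values' per arm given as summary statistics only, which is
        # exactly the level-of-detail question raised below (raw per-subject
        # values versus group summaries).
        "results_by_arm": {
            "drug 20 mg/day": {"n": 148, "mean_change": -10.2, "sd": 7.9},
            "placebo":        {"n": 151, "mean_change": -7.8,  "sd": 8.1},
        },
        "statistical_test": {"name": "ANCOVA", "p_value": 0.01},
    },
}
```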

This act is clearly a major step forward. However, the level of detail is unclear and, as usual, the devil is in the details. Does the centrally important Table of Values refer to the anonymized raw values for every individual who enters the study, or just to the summary statistics of each experimental group? It is standard practice in peer-reviewed scientific journals to publish only summary descriptive statistics or, even worse, only inferential statistics. However, there are well-documented cases where initial, apparently substantial analyses were later contradicted by more powerful analytic procedures made possible only by raw data access (Klein and Ross, 1993).
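
To make the distinction concrete, the sketch below uses entirely synthetic data to show how a re-analysis possible only with per-subject values (here, a baseline-adjusted model) can reach a different conclusion from the unadjusted comparison that a published summary table supports. All numbers, variable names, and the choice of analysis are illustrative assumptions, not a reconstruction of any actual trial or of the Klein and Ross (1993) example.

```python
# Synthetic illustration: summary statistics support only the unadjusted
# comparison; the baseline-adjusted re-analysis needs the raw per-subject data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 60
baseline = rng.normal(25, 5, 2 * n)                  # symptom score at entry
group = np.repeat(["placebo", "drug"], n)
endpoint = (0.8 * baseline                           # outcome tracks baseline
            + np.where(group == "drug", -2.0, 0.0)   # modest true drug effect
            + rng.normal(0, 4, 2 * n))               # measurement noise

df = pd.DataFrame({"group": group, "baseline": baseline, "endpoint": endpoint})

# Analysis 1: unadjusted endpoint comparison (all that a summary table allows)
_, p_unadjusted = stats.ttest_ind(df.endpoint[df.group == "drug"],
                                  df.endpoint[df.group == "placebo"])
print(f"unadjusted endpoint t-test: p = {p_unadjusted:.3f}")

# Analysis 2: baseline-adjusted model, possible only with per-subject values
fit = smf.ols("endpoint ~ baseline + C(group)", data=df).fit()
print(fit.summary().tables[1])
```

The point is not this particular model, but that without the anonymized per-subject values neither reviewers nor readers can run any alternative analysis at all.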

Further, the requirement for a ‘scientifically appropriate test of statistical significance’ does not recognize the broad range of such tests and the controversies regarding their appropriateness for various data sets. Moreover, the analyses should follow clear preexisting hypotheses. Nor are there any requirements about clinically significant differences from placebo, which are central to effectiveness claims relevant to public health. Power assessments with regard to infrequent but serious side effects are also needed.
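
As a back-of-the-envelope illustration of the power problem for rare harms (the event rate and sample sizes below are hypothetical), consider how often a serious adverse event with a background rate of 1 in 1,000 would even be observed once in trials of typical size:

```python
# Back-of-the-envelope power illustration for a rare serious adverse event.
# The 1-in-1,000 event rate and the sample sizes are hypothetical.
def prob_at_least_one(event_rate: float, n_subjects: int) -> float:
    """Probability that >= 1 event is observed among n independent subjects."""
    return 1.0 - (1.0 - event_rate) ** n_subjects

for n in (300, 1_000, 3_000, 10_000):
    print(f"n = {n:>6}: P(observe >= 1 event at rate 1/1000) = "
          f"{prob_at_least_one(0.001, n):.2f}")
```

Even a several-thousand-subject program may simply never see such an event, let alone detect a difference between arms; this is the kind of power assessment the registry requirements do not address.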

Printed journals are paper-bound. It is impractical to publish reams of raw data, so journals are forced to use means, standard deviations, and so on. However, this assumes that reported means, standard deviations, and so on are a sufficient shorthand for the raw data. It is well known that ‘if the data are tortured long enough, they will confess’. Unfortunately, data manipulation and abuse of statistical significance can yield interpretations having little to do with actual clinical or physiological relevance, and there is currently no way for the peer reviewer to know whether this is the case. Peer review certainly advances the level of scientific exposition, but because reviewers are limited to the paper provided, it is foolish to blame them or the editors for not being clairvoyant. However, the Internet has freed journals from the paper problem, so that the actual anonymized raw data for each subject can be provided for alternative analyses. Further, public access to such data becomes possible and may even be legally mandated.
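
A small simulation (entirely synthetic, with parameters chosen only for illustration) shows one familiar form of such torture: test an ineffective drug on enough outcomes or subgroups and a ‘significant’ p-value will usually turn up somewhere, invisible to a reviewer who sees only the reported comparison.

```python
# Synthetic simulation of multiplicity: a drug with no effect, tested on many
# independent outcomes, usually yields at least one 'significant' p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_outcomes, n_per_arm = 1_000, 20, 50

hits = 0
for _ in range(n_trials):
    # drug and placebo arms are drawn from the same distribution: pure noise
    pvals = [stats.ttest_ind(rng.normal(size=n_per_arm),
                             rng.normal(size=n_per_arm)).pvalue
             for _ in range(n_outcomes)]
    hits += min(pvals) < 0.05

print(f"simulated trials with at least one p < 0.05: {hits / n_trials:.0%}")
```

With the raw data in hand, a reviewer or reader can at least check how many comparisons were actually run and whether the reported one was prespecified.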

Can journals' peer review rise to the challenge of adequate statistical expertise? Many current articles far exceed the statistical capabilities of so-called peer reviewers (and of readers). Some journals ask reviewers to state whether a statistical review is necessary, but usually this only means a transfer to another ‘no-pay’ volunteer. Scientific peer review must keep pace with the complexities of science. How to meet the expenses of engaging expert statistical and scientific reviewers is certainly a problem, but honesty, and public trust, demand it.

A quite serious problem that broadened data access generates is parasitic data analysis. ‘Currently, data developers are entitled to continue analyses, prepare papers, write relevant grants, and so on, justifying academic and career advancement, rather than making a free gift of their hard work to competitors’ (Klein, 2002). If a published article must be accompanied by a broadly accessible public database, that database may be plumbed for competitive purposes on new topics. This amounts to parasitism. It should be met by a publisher's embargo for a fixed period, say 3 years, on analyses addressing new areas. However, reanalyses that directly address the original conclusions should be welcome. Raw data might also be mined for variables or analyses that favor a particular drug or embarrass competing compounds, in the hope of marketing advantage. It seems likely this will occur; however, the NIH release of raw data from the diabetes and hypertension trials resulted in close critical assessment rather than a partisan brawl.

The apparently rigorous FDA requirements leave it to the NIH Director's discretion just what outcome data are provided. For example, the National Institute of Mental Health is committed to making all the baseline and outcome data from the CATIE comparative trials of antipsychotic effectiveness available shortly after 1 January 2008. The current NIMH view is that all qualified investigators may obtain a CD-ROM copy of this data set. This has been delayed, but the reason has not been publicized.

Others are concerned that because the FDA promotes safety policies, such as ‘black box’ warnings, on the basis of ‘signals’ (that is, scientifically questionable trends that nonetheless drive regulatory policy), the dredging of publicly accessible data for competitive purposes may well result in a flood of safety ‘signals’. The FDA would then be in the unhappy position of either promoting a wave of poorly justified restrictive precautions or being forced to defend inaction against irate commercial, public, and media pressures. Still another concern is whether plaintiff lawyers might sift the data set to support product liability claims. Again, the precedent of the public release of raw data by the NIH decreases apprehension. Controversy based on available scientific facts is positively desirable, especially when compared with controversy stirred by nonfactual ideology.

SUMMARY

In summary, the journal editors' effort to restore faith in science by financial disclosure has been inadequate to the task. The editors could improve matters by demanding access to the raw data supporting claims for product safety and effectiveness. The recent emphasis on a detailed clinical trials registry established before the trial begins is clearly a breakthrough, and mandated clinical trial registries that include outcome data are even better. How well this works in terms of detailed public knowledge remains to be seen.

We reiterate that the complex COI issue cannot be dealt with by an editorial fix. The basic informational problem is transparent access to the relevant data. This issue is closely related to academic–industry collaboration, where the distinction lies between being a hired hand and being a colleague. Hired hands follow orders or quit. Colleagues require independent participation in, or at least full information about, each stage of protocol development, realization, analysis, write-up, and dissemination; otherwise, they too should quit. Our hope is that as academic–industry–federal relationships evolve, scientific collaborations will become more transparent with regard to the primary data and therefore more ethical.