The rigorous evaluation of research projects and programmes is increasingly in demand across the world. Attempts have been made to implement it in Europe, Japan and the United States — but until the calibre of these efforts improves, scientists will continue, justifiably, to view them with suspicion.

Policy-makers have talked for years about the need to rigorously evaluate research programmes that consume billions of dollars of taxpayers' money. Researchers — especially those doing basic research that can't be readily tied to concrete outcomes — have tended to be sceptical. Nonetheless, evaluation is now under way on a significant scale in every major economy.

Yet nowhere is the circle between research programmes, evaluation and funding decisions quite complete. A process for measuring ‘performance’ is firmly in place at many agencies, but few research managers genuinely believe that the outcomes of these assessments drive funding decisions.

Take the United States, where the Government Performance and Results Act demands significant quantitative assessment of all federal programmes. The Bush administration has also pursued research evaluation vigorously through the all-powerful White House Office of Management and Budget (OMB), which sets the president's annual budget proposal. A great deal of evaluation is now taking place. And to its credit, the OMB seems to have focused much of its attention on programmes that are not properly peer reviewed.

But in the end, what is the evidence that anyone in the government is listening? Where are the examples of programmes that the administration doesn't like being revived because they perform well, or of ones that it intuitively favours being cut back because they perform poorly?

Evaluation that doesn't work is, according to a discussion at the annual meeting of the American Association for the Advancement of Science (AAAS) last month, worse than none at all. It costs a substantial amount of money — anything from 0.25% to 2% of the cost of the programme under scrutiny — and it exhausts and sometimes demoralizes the researchers obliged to participate. Some argue that this process is of inherent value in lending direction to projects and programmes, but that is a minority view.

Even so, demands for accountability will not go away. The systems in place, flawed as they may be, are unlikely to be dismantled. In Japan, hard-done-by researchers are in rebellious mood (see Nature 438, 1051–1052; 2005; doi:10.1038/4381051b). And in the United States, the process contains little of the transparency Americans expect of their government.

The OMB is almost as secretive as it is powerful — but researchers need to be convinced that all of its evaluation is leading somewhere. Congress should therefore ask the Government Accountability Office to report on the OMB evaluation process as it relates to science and technology. Such a study, from a watchdog of established integrity, might reassure research managers that they are not being sent each year on a wild-goose chase.