In the next few weeks, the government of Japan will announce its budget for the fiscal year starting in April 2006. The slow economy and tight overall budget situation may finally have caught up with research, and this year, for the first time in fifteen years, science spending could be cut.

Economies and budgets wax and wane, and scientists cannot expect increased funds as a birthright. But they do have a right to expect fair and transparent evaluation as a guide to good budget management. Japan's national system is letting them down. For decades after the Second World War, spending on science was distributed evenly among about a hundred national universities. But since the mid-1990s, Japan has taken a more selective approach, as befits one of the world's leading scientific powers.

The Council for Science and Technology Policy was established in 2001 to advise the prime minister. Chaired by the prime minister, its 15 members include five other ministers of state, industry representatives and a few scientists. The council carries out an annual evaluation of every science project funded by government agencies, using subcommittees to assign priorities according to four grades — S (for superior), A, B and C — on the basis of scientific innovation, international competitiveness and degree of social contribution.

Increasingly, and this year in particular by all accounts, the system bears little resemblance to an objective, independent assessment. This can be a serious problem for major initiatives involving numerous laboratories and hundreds of millions of yen. One problem is a quota system for grades that can be arbitrary and unfair. Such grade quotas need not be a problem if they are applied on a sliding scale that takes into account objective, well-based judgements of achievement across disciplines. But that is not what happens. Too often, judgements based on a single day's visit to a project's group leader do little more than scratch the surface of a project's significance.

Another problem is that the committee is entirely Japanese. There is of course a limit to how much international experts can be involved. But an international perspective would seem obligatory, particularly when assessing large projects, some of which depend on international collaboration and represent a world-class effort costing many billions of yen.

But the worst failing of the system is a progressive distortion of supposedly objective assessment by the priorities and preferences of the committee and government. After discussions in closed rooms, ratings emerge that in many cases bear no relationship to scientific achievement or potential, and seem to defy explanation. A major project may be graded ‘S’ for two years in a row and then be graded ‘A’ despite maintaining its performance. Even worse, some cutting-edge projects, after many years of top-level grades, have this year been graded ‘C’ for no conceivable scientific reason.

Some might argue that scientific spending, like other funding, must follow government priorities and so be subject to abrupt changes. No one would suggest that national priorities should remain fixed. But for the process to be nationally and internationally credible, and for top-notch scientists to believe that Japan is a good place to spend their best years, the system of evaluation must be revised. Many researchers see it as opaque and apparently arbitrary. Japan may not be unique — other leading countries also lack a clear evaluation process — but this does not make it acceptable.

Scientific assessment should be objective, well considered and transparent to those being assessed. It should be kept distinct from the process of priority setting, which should itself be open, and should involve greater participation of researchers before final decisions are reached.