Britain last carried out an underground nuclear test ten years ago. The 1998 Strategic Defence Review confirmed the need for nuclear weapons until security can be assured without them. Britain signed the Comprehensive Nuclear Test Ban Treaty in 1996, ratified it in 1998, and is planning to support its nuclear weapon stockpile without further underground nuclear tests (Box 1). In the past, such testing has been fundamental to the process used for assuring warhead designs.

A new scientific methodology is now being developed, without further nuclear tests, to underwrite the safety and performance of the ageing Trident stockpile with continued high confidence. The approach builds on previous nuclear test experience and seeks to replace the requirement for further empirical test data with a deeper theoretical and experimental understanding of the relevant fundamental science. This understanding must then be drawn together and applied to the nuclear warhead system through intensive numerical modelling.

This new approach will continue to demand high-calibre scientists and engineers, supported by modern experimental techniques and diagnostics and underpinned by state-of-the-art supercomputing and visualization facilities. This article describes the challenge and Britain's response to it.

Britain and nuclear weapons

Research into Britain's nuclear warheads and their design has been conducted principally at the Atomic Weapons Establishment (AWE) at Aldermaston. Since the establishment of AWE in April 1950 (ref. 1), Britain has had a series of warheads in service. These were designed and proved on the basis of a relatively small number of nuclear tests — 45 tests have been conducted, only 19 of which were fired in the past 35 years. The latter were conducted underground in the Nevada desert, under a collaborative agreement signed in July 1958 between Britain and the United States.

That such a comparatively small number of tests has been so effective is attributed to the scientific process and design methods adopted, and to the exchange relationship with the United States which brought the benefits of access to their much greater experience and larger nuclear test database.

Today the sole weapon system in Britain's deterrent arsenal is the Royal Navy Trident submarine-launched ballistic missile system, equipped with British warheads, which went into service in 1994. The credibility of the national deterrent depends on this one system, still relatively young and underwritten for performance and safety by relevant underground test data. Because it will remain Britain's only strategic defence system for the foreseeable future, the ability to predict change to the system through normal ageing processes becomes crucial. So too do the abilities to underwrite reliability and safety of this changing stockpile through its service life.

The nuclear warhead

Although details of specific warhead designs remain classified to prevent proliferation, the broad principles are widely understood and recorded (ref. 2). A modern thermonuclear warhead comprises two main elements, conventionally referred to as the primary and secondary stages. The conditions generated in a nuclear warhead are indicated in Fig. 1.

Figure 1: Conditions generated in a nuclear warhead.

The green and blue areas indicate the temperatures and densities reached during the phases of operation of a nuclear warhead. Temperatures are conventionally expressed in electron volts, where 1 eV corresponds to 1.160 × 10⁴ K. On this scale, room temperature corresponds to about 3 × 10⁻² eV and 1 keV to approximately 10⁷ K, a temperature found in the central region of the Sun. Also indicated are the mid-points of the regions that can be accessed currently by explosive experiments and AWE's HELEN 1-TW neodymium–glass laser. The 600-TW US National Ignition Facility (NIF) will enable much higher temperatures to be accessed. Short-pulse lasers in the petawatt power range may offer a practical means in future of generating plasma over a wide range of temperatures.
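
As a guide to reading the temperature axis of Fig. 1, the sketch below converts between electron volts and kelvin using the Boltzmann-constant factor quoted in the caption (1 eV corresponds to 1.160 × 10⁴ K); the example temperatures are those mentioned above.

```python
# Temperature-scale sketch for Fig. 1: convert kT in electron volts to kelvin
# using the factor 1 eV = 1.160e4 K quoted in the caption.

K_PER_EV = 1.160e4               # kelvin per electron volt

def ev_to_kelvin(t_ev: float) -> float:
    """Plasma temperature in kelvin for a temperature expressed in eV."""
    return t_ev * K_PER_EV

print(ev_to_kelvin(3e-2))        # ~3.5e2 K: roughly room temperature
print(ev_to_kelvin(1e3))         # ~1.2e7 K: the Sun's central region
```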

In the primary stage, chemical high explosive is used to compress a core containing plutonium-239 into a state of nuclear supercriticality. The subsequent escalating fission process results in temperatures and pressures that allow the energy generation, or yield, to be augmented by the fusion of a deuterium–tritium mixture — a process known as 'boosting'. The exploding primary stage releases copious X-rays, which can then be used to implode a secondary stage with immense force. It is from the fissionable and fusionable materials which constitute the secondary that the bulk of the overall warhead yield is derived.

Nuclear warheads are made from materials chosen for their special properties. They are often complex and their fundamental properties and ageing characteristics can be difficult to understand. The various components are integrated into a system, which brings into play concerns about compatibility and corrosion. The whole must remain safe and serviceable within its operational environment, potentially for decades.

The scope of the necessary scientific investigation is immense. An example is provided in Box 2, which focuses on the unique metallurgical and ageing characteristics exhibited by plutonium. The ultimate questions concerning warhead safety and reliability must now be answered without the benefit of direct evidence from nuclear tests.

Confidence and uncertainty

The overall process by which confidence in the safety and performance of the warhead stockpile is to be assured without underground nuclear tests is shown diagrammatically in Fig. 2.

Figure 2: Confidence in the safety and performance of the nuclear stockpile.

Confidence is based ultimately on predictions from high-fidelity numerical models, with experimental data on the performance of materials and components also being used for model validation. Historical nuclear test data and information from the examination of surveillance rounds withdrawn from the stockpile provide further input to the process.

It is an iterative process, the central and pivotal feature of which is a suite of high-fidelity numerical models run on supercomputers. A series of hydrodynamic experiments probes the phenomenology of the primary stage, and experiments performed at very high energy densities are essential to studies of both stages. Lasers and pulsed-power machines can achieve the relevant densities and temperatures, and provide the only source of data on X-radiation flows. The experimental data are used to improve both basic theory and the algorithms used in the computational models. The improved models are in turn validated by experiment. Finally, the predictions of warhead performance from these models are compared with the historical archive of nuclear test data, and variations are used for further refinement of new models. Data from a surveillance programme, in which warheads are withdrawn from the stockpile and subjected to forensic examination, are similarly fed into the prediction processes.

The design of nuclear weapons has always been first and foremost a theoretical undertaking, with nuclear testing used to validate and refine the models used. As designs became more sophisticated, and the mathematical models more complex, the interdependence of design, underground nuclear testing and model development became firmly established. Once nuclear testing was no longer possible, Britain was left with a suite of multidimensional computer codes incorporating a wide range of physics models and supporting material properties databases, which on their own were not fully reliable as a predictive tool.

The scientific challenge is therefore to develop a suite of enhanced numerical models of the warhead based on a more comprehensive understanding of the processes taking place within it. The models must be based on a further understanding of the properties of warhead materials such as high explosive and plutonium, under very wide ranges of physical conditions, and on knowledge of how these properties change with age.

The development and refinement of science-based models to match the demands of a steadily ageing stockpile will be undertaken over many years. However, the broad programme and requirements for facilities are now established. Of particular importance is that, in future years, the work must be done without the support and knowledge of the staff who actually designed, tested and put into service the British Trident warhead.

Of course, the same challenges face other nuclear states. The United States, for example, has developed a science-based stockpile stewardship programme (ref. 3), which includes the provision of major facilities such as the National Ignition Facility (NIF), a 600-TW laser currently being built at the Lawrence Livermore National Laboratory (LLNL) in California, and the Dual-Axis Radiographic Hydrodynamic Test facility at the Los Alamos National Laboratory in New Mexico. A significant investment in supercomputing is in place in the Accelerated Strategic Computing Initiative programme, in which supercomputers are being developed for the US weapons laboratories. The US approach differs to some degree from Britain's; this illustrates one advantage of the collaborative relationship agreed in 1958, which enables independent peer processes to consolidate confidence in the respective scientific methodologies.

The warhead science programme

The British science programme includes elements that map directly onto the methodology for assuring stockpile confidence (Fig. 2). Assurance of nuclear safety and performance will rely fundamentally on the computer models and it is essential that they are validated against previous test data, and linked to modern laboratory experiments on hydrodynamics and high-energy-density physics.

Computational modelling The approach taken to achieve high-fidelity simulation is to develop improved models of the basic physics and materials properties, coupled, where necessary, with higher-accuracy algorithms. Accurate simulation will require much greater three-dimensional engineering detail than is currently achieved, and the physics of turbulent mixing, particle transport and material properties must be treated at a fundamental level.

These developments will drive requirements for increased computational power. At present, the only architecture that offers the orders of magnitude increase in computing power required is the massively parallel processor approach. Many of the models of interest will continue to be limited by the available computational power.
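
The pull towards massively parallel architectures can be illustrated with Amdahl's law: whatever fraction of a simulation cannot be parallelized caps the achievable speed-up, however many processors are added. A minimal sketch, using an illustrative serial fraction rather than any measured AWE figure:

```python
# Amdahl's-law sketch: speed-up of a code on p processors when a fraction f
# of its work is inherently serial. The 1% figure below is illustrative only.

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Ideal speed-up for a code with the given non-parallelizable fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (64, 1024, 8192):
    print(f"{p:5d} processors: {amdahl_speedup(0.01, p):6.1f}x")
# Even with only 1% serial work, the speed-up saturates near 100x,
# so the serial fraction must shrink as machines grow.
```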

Verification and validation of the new codes is an essential element in providing the required confidence. Verification is the process of confirming that the codes are indeed performing the intended tasks without error and that the various mathematical equations are being solved to sufficient accuracy. Validation is the process of confirming that the physics models and material properties are indeed sufficient to answer the stockpile questions, with suitable laboratory experiments to test specific modelling aspects. Comparison of code predictions with integrated trials gives a more quantitative measure of capability and a clear indication of areas for further improvement.
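
Verification of numerical accuracy is often quantified by measuring a code's observed order of convergence against a problem with a known solution. Below is a minimal sketch of the idea, using a hypothetical first-order difference scheme rather than any AWE code:

```python
import numpy as np

# Verification sketch: estimate the observed order of convergence from errors
# on successively refined grids. The "solver" here is a first-order one-sided
# difference for d/dx sin(x), chosen because the exact answer, cos(x), is known.

def solver_error(n: int) -> float:
    """Maximum error of the difference scheme on a grid of n cells."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    approx = (np.sin(x[1:]) - np.sin(x[:-1])) / h
    return float(np.max(np.abs(approx - np.cos(x[:-1]))))

errors = {n: solver_error(n) for n in (32, 64, 128)}
for n1, n2 in ((32, 64), (64, 128)):
    p = np.log2(errors[n1] / errors[n2])   # e(h) ~ C h^p on a refinement pair
    print(f"{n1} -> {n2} cells: observed order = {p:.2f}")  # ~1, as designed
```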

Hydrodynamics Britain's approach has always been to emphasize laboratory experimentation to help underpin theory and computational modelling. This will continue, but with yet higher demand on the fidelity of diagnostics. From the earliest days, it has been possible to study the physics of primary operation, using simulant materials, up to the point where a real weapon would become nuclear critical. Experiments focus specifically on how materials behave at high strain rates and how compression and shock waves develop inside components.

This field is conventionally termed 'hydrodynamics', because even solid materials exhibit fluid properties when subjected to explosively driven shocks. Most experiments use non-fissile materials such as tantalum, lead or depleted uranium to simulate plutonium, but a small number of experiments have necessarily used plutonium itself. In these cases, the amounts of fissile material involved were far below anything that could produce nuclear yield.

AWE has a number of facilities to contain explosive experiments. They have internal volumes of the order of 1,000 m³, with armour-plated walls and ceilings that are constructed of reinforced concrete some 0.6 m thick and that can accommodate repeated firings of high explosive without incurring structural damage. Three of the chambers are specially constructed to allow the conduct of experiments involving toxic materials. On the occasions when fissile material is used, the experiments are additionally contained within leak-tight spherical vessels, about 1 m in diameter, made of thick submarine steel. These massively robust vessels completely contain the products of the test assembly following the explosion (Fig. 3). In addition to future tests planned at AWE, complementary experiments are being carried out in collaboration with the US weapons laboratories, including some at their U1A facility in Nevada.

Figure 3: Diagrammatic representation of an AWE firing chamber and special containment vessel.

In the firing chamber (lower figure), two X-ray generators (shown in green) produce the sharply focused X-ray beams that converge on the experiment (not visible at this scale) within the chamber. Above this is a cut-away view of one of the massively robust, leak-tight vessels used to contain experiments with fissile materials.

A number of diagnostic techniques are available (Fig. 4). The oldest and simplest uses arrays of fine pin probes to reveal details of the early motion, but new diagnostics combining fibre optics, lasers and streak cameras are being developed to measure velocities and accelerations to better than 1 per cent (ref. 4).

Figure 4: Diagnostics for hydrodynamic experiments.

A pin-probe assembly is used to reveal details of motion by detecting the time of arrival of a metal surface in an experimental assembly after detonation of an explosive. The electrically charged metal pins are discharged by the arriving surface. Fabry–Perot techniques are used to measure the surface velocity. Four probes, indicated by arrows, are connected by fibre optics to an external Fabry–Perot interferometer, the output of which is captured by a streak camera. Velocities can be derived from analysis of the resulting fringe pattern.
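
As a rough illustration of how a velocity is extracted from such a record: a surface moving at speed v Doppler-shifts the probe laser by Δν = 2v/λ, and the Fabry–Perot etalon converts that shift into a fringe displacement of one fringe per free spectral range c/2L. The sketch below works through this with invented etalon and wavelength values; it is not a description of AWE's actual instrument settings.

```python
# Fabry–Perot velocimetry sketch: convert a fringe shift on the streak record
# into a surface velocity. Wavelength, etalon gap and fringe count are all
# assumed example values, not instrument parameters from the text.

C = 2.998e8                      # speed of light, m/s
wavelength = 532e-9              # probe-laser wavelength, m (assumed)
etalon_gap = 0.05                # Fabry–Perot mirror spacing L, m (assumed)

# One fringe corresponds to a Doppler shift of one free spectral range:
# 2*v/lambda = c/(2L)  =>  velocity per fringe = c*lambda/(4L).
velocity_per_fringe = C * wavelength / (4.0 * etalon_gap)

fringe_shift = 2.3               # fringes read off the record (example)
print(f"{fringe_shift * velocity_per_fringe:.0f} m/s")   # ~1.8 km/s
```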

Surface break-up of a material (refs 5, 6), following the passage of a shock wave, can be studied using piezoelectric crystal probes. The technique developed at AWE measures the momentum of material ejected or spalled from the surface, and masses of a fraction of a microgram moving with a velocity of about 1 km s⁻¹ can be determined.
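
The arithmetic behind that sensitivity claim is simple: the pin records an impulse, and dividing by an ejecta velocity obtained from another diagnostic gives the mass. A toy example with invented numbers:

```python
# Ejecta-mass sketch: a piezoelectric pin measures the impulse p delivered by
# material thrown from a shocked surface; with the ejecta velocity v known,
# the mass follows from m = p / v. Both input numbers are invented examples.

momentum = 2.0e-7        # measured impulse, kg m/s (example)
velocity = 1.0e3         # ejecta velocity, m/s (~1 km/s, as in the text)

mass_kg = momentum / velocity
print(f"{mass_kg * 1e9:.1f} micrograms")   # 0.2 ug: a fraction of a microgram
```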

These techniques can only provide motion or surface information. AWE has been active since the early 1960s (ref. 7) in pioneering tools to investigate material compression and the transmission of shock waves through materials, using short pulses from high-energy X-ray machines. The largest radiographic machine generates a 10-MV electron beam of 30 kA with a duration of less than 100 ns, which is focused into a 5-mm tantalum target to produce the X-ray source necessary for flash radiography (ref. 8).

Uniquely at AWE, the radiographic machines are used in pairs (Fig. 3). Simultaneous images of an experiment from two different directions provide scope for three-dimensional resolution, while two radiographs taken at different times enable the development of shock waves or compression fields to be followed. Various analytical techniques are used to interpret the radiographic evidence that in its raw state is blurred and degraded by scattered X-rays, the finite size of the X-ray source, and quantum effects in the recording films.

Although the current facilities are powerful, they are not capable of providing data of an accuracy sufficient to meet future programme needs. Additional X-ray views are required to adequately capture three-dimensional phenomena for validation of the computer models now being created. A new hydrodynamics research facility is therefore being planned. It will be able to contain experiments with both non-fissile and fissile material and will have advanced radiographic capabilities giving improved image resolution and multiple views. Computer tomography will unfold shock waves and compression fields to give direct comparisons with computer predictions.
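
The tomographic unfolding mentioned above can be illustrated with the simplest axisymmetric case: 'onion peeling', in which the object is modelled as concentric shells and a single radiographic projection is inverted shell by shell. This is a generic textbook reconstruction, offered only as a sketch of the principle, not a description of AWE's analysis chain:

```python
import numpy as np

# Onion-peeling Abel inversion sketch: recover a radial profile f(r) from one
# line-of-sight projection P(y) of an axisymmetric object. Purely illustrative.

def onion_peel(projection: np.ndarray, dr: float) -> np.ndarray:
    """Invert a projection sampled at impact parameters y_i = i*dr."""
    n = len(projection)
    edges = np.arange(n + 1) * dr        # shell boundaries r_0 .. r_n
    y = np.arange(n) * dr                # ray impact parameters
    L = np.zeros((n, n))                 # path length of ray i through shell j
    for i in range(n):
        for j in range(i, n):
            L[i, j] = 2.0 * (np.sqrt(edges[j + 1]**2 - y[i]**2)
                             - np.sqrt(max(edges[j]**2 - y[i]**2, 0.0)))
    return np.linalg.solve(L, projection)  # upper-triangular system

# Synthetic check: a uniform disc of unit density and radius R projects to
# P(y) = 2*sqrt(R^2 - y^2); the inversion should return ~1 in every shell.
n, dr = 50, 0.1
R = n * dr
y = np.arange(n) * dr
f = onion_peel(2.0 * np.sqrt(R**2 - y**2), dr)
print(f[:5])                             # ~[1. 1. 1. 1. 1.]
```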

Interfaces between components inside a functioning warhead warrant special attention. Unstable conditions can exist where small perturbations grow rapidly and can cause mixing between materials (ref. 9). The break-up of a material interface could have a profound effect on warhead performance and it is essential that we develop an improved understanding of possible instabilities. Work on fundamental theory and modelling algorithms (ref. 10) is under way in parallel with experimentation.
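
A sense of why such interfaces matter comes from the classical linear theory: a perturbation of wavenumber k on an interface accelerated at g grows exponentially at the Rayleigh–Taylor rate σ = √(Akg), where A is the Atwood number of the density jump. The sketch below evaluates this with generic textbook numbers, not warhead data:

```python
import numpy as np

# Linear Rayleigh–Taylor growth sketch: small interface perturbations grow as
# exp(sigma * t) with sigma = sqrt(A * k * g). All values are generic examples.

rho_heavy, rho_light = 8.9e3, 1.0e3   # densities either side, kg/m^3
g = 1.0e10                            # interface acceleration, m/s^2
wavelength = 1.0e-3                   # perturbation wavelength, m

A = (rho_heavy - rho_light) / (rho_heavy + rho_light)   # Atwood number
k = 2.0 * np.pi / wavelength                            # wavenumber, 1/m
sigma = np.sqrt(A * k * g)                              # growth rate, 1/s

print(f"A = {A:.2f}, sigma = {sigma:.2e} /s, "
      f"e-folding time = {1e9 / sigma:.0f} ns")
```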

High-energy-density physics The key to a material model is the equation of state (EOS) — the relationship between the density, internal energy, temperature and pressure. Much data have been acquired using the techniques described above, and the 1-TW HELEN laser at AWE has also been used successfully for acquiring EOS data (ref. 11; and Fig. 5).

Figure 5: Determination of material equation of state using the HELEN laser.

a, A hohlraum (1 × 1-mm laser-heated cavity) generates multi-Mbar pressure shocks by X-ray ablation of thin foils. The shocks travel through the foil to the steps, where the shock transit times are measured using optical streak cameras. Data points at pressures of up to ~20 Mbar have been obtained on HELEN. b, A streak record of shock emission from an aluminium and copper step target. Time runs from left to right; the aluminium step is below, the aluminium base is in the middle and the copper step is at the top. Comparison of transit times for steps of pairs of materials, the properties of one of which are known, enables data for the other material to be derived.
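
To make the step-target logic concrete: the transit time across a step of known height gives the shock speed, and for a material with a known linear Hugoniot u_s = c₀ + s·u_p the pressure follows from the momentum jump condition P = ρ₀·u_s·u_p. The sketch below uses published constants for aluminium but an invented step height and transit time:

```python
# EOS point from a step target: shock speed from transit time, then pressure
# via the Rankine–Hugoniot momentum jump P = rho0 * u_s * u_p. The Hugoniot
# constants are standard published aluminium values; the step height and
# transit time are invented for illustration.

rho0 = 2.70e3            # aluminium initial density, kg/m^3
c0, s = 5.35e3, 1.34     # aluminium linear Hugoniot: u_s = c0 + s*u_p (m/s)

step_height = 50e-6      # step thickness, m (assumed)
transit_time = 3.0e-9    # measured shock transit time, s (assumed)

u_s = step_height / transit_time      # shock speed from the streak record
u_p = (u_s - c0) / s                  # particle speed from the Hugoniot
P = rho0 * u_s * u_p                  # pressure behind the shock, Pa

print(f"u_s = {u_s/1e3:.1f} km/s, P = {P/1e11:.1f} Mbar")   # ~3.8 Mbar
```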

In the very hot matter of a nuclear warhead, thermal radiation is particularly important. The crucial parameter is the radiative opacity, which quantifies how thermal radiation interacts with matter by absorption, emission and scattering. It is sensitive to the composition, temperature and density of the material and expresses the degree to which a material impedes radiation flow. In common with the other material properties, it must be known accurately at the very high temperatures and pressures typical of a functioning warhead.

Because of the inherent difficulty of carrying out systematic measurements of the opacity of hot plasmas, there is a heavy reliance on modelling and calculation (ref. 12). The quantum mechanical processes are well understood in principle, but applying this powerful theory to the behaviour of, say, 10²⁴ atoms in a hot plasma is complex, to say the least.

All opacity computer codes necessarily contain significant approximations, making it essential to validate the accuracy of their predictions. Comparisons with data produced by codes developed at other laboratories can provide much useful information, but ultimately comparisons must be made with experimental measurements. Over the past 15 years, AWE has been engaged in an active experimental programme to measure opacities (ref. 13). Powerful lasers such as HELEN at AWE and Nova (ref. 14) at LLNL have been used to create plasmas at temperatures of approximately 10⁶ K, and quantitative techniques to measure the transmission of radiation have been developed. Figure 6 describes the techniques used and shows a comparison of an aluminium opacity experiment with the corresponding calculations.

Figure 6: Opacity measurement and calculations.

Laboratory measurements of plasma opacity can be made using high-power lasers such as HELEN. The subject material is heated indirectly using a foil radiator or hohlraum, and allowed to expand against a plastic tamper. In this way, uniform plasmas can be created. A laser-irradiated fibre behind the target acts as a point source of X-rays, which is viewed both directly and through the target with an X-ray spectrometer, allowing the absorption spectrum to be inferred. The figure shows a comparison between measured (ref. 16) and calculated (ref. 12) K-shell transmission values for an aluminium plasma at a temperature and density of 40 eV (about 5 × 10⁵ K) and 0.014 g ml⁻¹, respectively. The good agreement provides a strong quantitative check on the calculated opacities.
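
The step from a measured transmission to an opacity is a direct application of the Beer–Lambert law, T(ν) = exp(−κ(ν)ρL), for a uniform plasma of areal density ρL. A minimal sketch, using the Fig. 6 density but an assumed thickness and transmission value:

```python
import numpy as np

# Transmission-to-opacity sketch: for a uniform plasma the Beer–Lambert law
# gives T = exp(-kappa * rho * L), so kappa = -ln(T) / (rho * L). The density
# matches Fig. 6; the thickness and transmission are assumed example values.

rho = 0.014e3            # plasma density, kg/m^3 (0.014 g/ml, as in Fig. 6)
L = 50e-6                # plasma thickness along the line of sight, m (assumed)
T = 0.35                 # measured transmission at one photon energy (assumed)

kappa = -np.log(T) / (rho * L)          # opacity, m^2/kg
print(f"kappa = {kappa:.0f} m^2/kg = {kappa * 10:.0f} cm^2/g")
```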

More experiments are planned for the future to validate opacity predictions at temperatures and densities not accessible with current experimental facilities. The use of short-pulse lasers and shock-compressed targets offers the possibility of achieving both higher temperatures and higher densities than have hitherto been possible (ref. 15; and Fig. 1). Pulsed-power machines such as Sandia National Laboratories' 'Z' machine will facilitate measurements at much lower densities, and NIF will make still deeper inroads into warhead physics.

As well as opacity and radiation flow, laser experiments can be designed to test theoretical models of complex radiation-hydrodynamic phenomena (Fig. 7). Numerical methods have advanced significantly with supercomputers, and their predictive capabilities are impressive. But it is experimentation that can reveal fallibilities and indicate areas for further research and development.

Figure 7: Experimental validation of models of complex phenomena.

Here a laser is used to heat a 1.6 × 1.2-mm hohlraum, which in turn heats a piece of aluminium (shown in blue). The resulting jet of aluminium penetrates a piece of polystyrene, which is radiographed by an X-ray backlighter also driven by the laser. The results from two numerical codes are shown together with the X-ray record from the experiment. Both codes reproduce the main features of the flow but show different development of the jet tip. Analysis of the detail will indicate where the theory and algorithms must be improved (ref. 17).

AWE plans a continuing experimental programme using the facilities at Aldermaston, as well as those available in the United States as part of the collaborative arrangement. In addition, British investment in the US NIF programme has ensured that experimental time will be available when the facility becomes operational.

The way ahead

The British science programme aims to integrate all elements of warhead science into a coherent, long-term plan that matches stockpile management requirements. It builds on existing foundations and on the expertise of staff who retain knowledge of the nuclear testing regime.

The new approach demands state-of-the-art supercomputing facilities. A 3-teraflop machine is to be installed at AWE in 2002. Physics modelling and three-dimensional dynamics and transport codes will be advanced, and improved numerical algorithms will be introduced to take advantage of modern supercomputer architectures. These theoretical and computational developments must be validated by high-fidelity laboratory experimentation. A new hydrodynamics research facility with unique multi-axis, high-power radiographic diagnostics is planned to provide the necessary data on pre-nuclear aspects of warhead behaviour. British lasers, together with access to US facilities, will enable laboratory examination and validation of very-high-temperature phenomena.

The strategy for safety and performance assurance recognizes that these scientific developments are unlikely to replace totally the ultimate proof afforded in the past by nuclear tests. However, by continuing to recruit and retain staff of the highest intellectual calibre, working ever more closely with British academic and industrial communities, and benefiting mutually through international collaboration, the programme should achieve the necessary levels of confidence in the continuing reliability and safety of Britain's independent nuclear warhead.