Ann Finkbeiner examines two books on the cold war's ethical and material legacies.
A decommissioned missile in a silo at the Titan Missile Museum in Green Valley, Arizona.
War is good for science. Countries require their defence industries to invent military technologies, which are often rooted in science, and so money flows to researchers. How does this intersection affect the course of research? Two books discuss the extent to which scientists change — or must change — what they do in response to national emergencies.
The cold war is an excellent case study. It saw the continuation of the extraordinary development of nuclear weapons, ballistic missiles and radar begun during the Second World War. Science and Technology in the Global Cold War, an essay collection edited by science historians Naomi Oreskes and John Krige, addresses the question: were scientists guided by curiosity, or did national funding redirect them towards military technological applications? Its answer: although redirection is inevitable and powerful, so is curiosity.
The balance differed from field to field and place to place. China and the Soviet Union erased the distinction between pure and applied science and directed their researchers towards national priorities — an isolated self-reliance in China, and big industry in the Soviet Union. In China, horse breeding was given the status of scientific experimentation; in the Soviet Union the Sputnik satellite, launched in 1957, was deemed “Soviet science”. But the balance never stayed put. Chinese researchers and students looked towards international science: between around 1980 and 2000, at least 10,000 went abroad to work and study. Soviet nuclear scientists outmanoeuvred the state and, by relabelling pure science as applied, succeeded in creating a reactor design based more on technical feasibility than on cheapness.
The West experienced a similar shifting balance. The US response to Sputnik led to NASA's Apollo human-spaceflight programme, but as that programme slowly wound down, NASA adapted its technologies for the Mission to Planet Earth observation system, which gathered data for climate scientists. Radar, developed by Britain and the United States to track aircraft and missiles, was used in the 1960s by US physicist Irwin Shapiro to test Einstein's general theory of relativity. A cold war US surveillance system that used underwater sound recordings to trace the movements of submarines was recycled around the year 2000 by scientists at the Scripps Institution of Oceanography in San Diego, California, to map ocean temperatures and global warming. If national priorities bend science towards application, scientists bend it back towards pure research.
Written mostly by historians of science, Science and Technology in the Global Cold War is an academic conversation with no grand conclusions. But one commonality that emerges, writes Krige, is that “he who paid the piper didn't so much call the tune as provide the instruments, the hardware, and the logistical support”. Changing the metaphor, national attempts to direct science look like a magnetic field aligning iron filings — until the filings go off on their own in all directions.
Unmaking the Bomb presents a more complex relationship between scientists and war, arguing that researchers tasked with creating extraordinarily lethal applications have a responsibility to control them. Specifically, the authors — physicists and nuclear-policy experts Harold Feiveson, Alexander Glaser, Zia Mian and Frank von Hippel — present the case for controlling the materials that make a nuclear bomb nuclear.
National mandates drove the nuclear bomb's development during the 1940s. By the peak of the cold war, 10 countries — including the United States, the Soviet Union, Britain and China — had built 65,000 nuclear warheads. But even before the first bomb was built, nuclear scientists were lobbying politicians to change the mandate from building nuclear weapons to controlling them. The lobbying, partly through avenues such as the Pugwash Conferences on Science and World Affairs, was fairly successful: the number of nuclear weapons in those 10 countries has fallen to around 17,000.
But the fuel — fissionable plutonium or uranium enriched in its rare fissile isotope — is still with us. Neither occurs naturally, so bomb-builders manufactured them. At the end of the Second World War, 100 kilograms of weapons-grade material had been made; now there are 1,900 tonnes, enough for 100,000 bombs. As the authors show, material from dismantled bombs can be downblended to a less fissionable form and stored or used in power plants, but it cannot be destroyed, and it remains available for nuclear weapons or for low-tech radiological weapons. In 1945, only the United States could build a nuclear warhead; now, 35–40 countries can, and the margin of security is “too slim for comfort”, says a former director-general of the International Atomic Energy Agency.
Feiveson, Glaser, Mian and von Hippel convincingly argue that this problem demands a real and immediate solution. Along with the history of nuclear weapons, they cover attempts to control the weapons' spread, including the 1970 Treaty on the Non-Proliferation of Nuclear Weapons; the physics and technology of producing, downblending and storing fuel; and the complexities of convincing nations to agree to be supervised and controlled by an international agency.
The authors' suggested long-term policy is to reduce the amount of fissionable material in military and civilian stockpiles, and to regulate it “as if the world is preparing for complete nuclear disarmament”. Countries should stop hiding the sizes of their stockpiles, the authors write, and stop manufacturing weapons-grade uranium and plutonium; they should also downgrade or bury all fissionable material, even if they must give up nuclear energy. Finally, they should agree to international verification of declarations about weapons production — even if that means relying on nuclear scientists rather than politicians to tell the truth.