Almost every technological process depends in some way on temperature measurement and control — think intercontinental flights, fresh bread, or reliable electricity. All of these rely on a sophisticated measurement infrastructure that allows temperature readings to be traced back to the SI unit of temperature, the kelvin, via the International Temperature Scale of 1990 (ITS-90)1.

The direct measurement of temperature requires a primary thermometer based on a well-understood physical system, the temperature of which can be derived from measurements of other quantities. But primary thermometry methods2 are complicated and time-consuming — in other words, not very practical.

A practical thermometer can be based on anything that exhibits a temperature-dependent property, such as resistivity or the thermoelectric effect. Such thermometers exhibit rich physics — so rich, in fact, that deriving their temperature dependence from first principles is intractable: a calibration is required, which in turn necessitates what is known as a temperature scale.

Temperature scales have two components: defined temperature values, determined by primary thermometry and associated with a set of highly reproducible states of matter such as phase transitions (freezing, melting or triple points of pure substances); and specified interpolating instruments with defined interpolating or extrapolating equations3.
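To make the idea of an interpolating equation concrete, the sketch below uses not the ITS-90 reference functions themselves but the simpler Callendar–Van Dusen interpolation of the industrial standard IEC 60751 to convert the resistance of a Pt100 platinum resistance thermometer into a temperature; the function name and example reading are illustrative.

```python
import math

# Callendar-Van Dusen coefficients for industrial platinum resistance
# thermometers (IEC 60751), valid from 0 degC to 850 degC.
A = 3.9083e-3    # degC^-1
B = -5.775e-7    # degC^-2
R0 = 100.0       # ohm: nominal Pt100 resistance at 0 degC

def pt100_temperature(resistance_ohm: float) -> float:
    """Invert R(t) = R0 * (1 + A*t + B*t**2) for the temperature t in degC."""
    # Quadratic formula; take the root that falls in the 0-850 degC range.
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - resistance_ohm / R0))) / (2.0 * B)

# A Pt100 reading 138.51 ohm corresponds to roughly 100 degC.
print(f"{pt100_temperature(138.51):.2f} degC")
```

In the ITS-90 itself, the interpolating instrument over much of the range is likewise a platinum resistance thermometer, but one calibrated at specified fixed points and described by a reference function with deviation terms rather than by fixed universal coefficients.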

One of the first temperature scales can be traced back to Santorio in 1612, who used the expansion of a gas for an arbitrary, but reproducible, scale. In the mid-seventeenth century, Hooke and Newton developed practical liquid-in-glass thermometers and the concept of a scale based on reproducible states of matter. In 1702, Rømer suggested using the freezing and boiling points of water as fixed points for a scale. This ultimately led to the centigrade (later Celsius) scale in 1742.

The nineteenth century saw the development of more reliable thermometers and a better understanding of thermodynamics. In 1887, the first modern temperature scale, the normal hydrogen scale, was introduced, enabling practical use of the fledgling gas thermometer to assign temperatures; it was closely followed by a nitrogen scale, which extended the range up to the boiling point of sulfur at about 440 °C. These two scales were no longer merely practical and arbitrary: they were linked to the emerging concept of absolute temperature. In combination with better thermometers, such as the platinum resistance thermometer of Griffiths and Callendar and the platinum/rhodium thermocouple of Le Châtelier, these scales led to a flurry of measurements of the freezing and boiling points of different materials4. Thanks to Callendar's persistence, these endeavours culminated in the International Temperature Scale of 1927 (ITS-27). Although the details of the ITS-27 were revised and extended in subsequent standardized scales in 1948, 1968 and 1990, conceptually little has changed since.

Practical thermometry is still evolving. Above the freezing temperature of silver (961.78 °C), the ITS-90 relies on radiation thermometry, which makes use of Planck's law and the ratio of the measured spectral radiance of an unknown source at the temperature of interest to that of a fixed-point blackbody having a defined temperature1. The uncertainty achievable with respect to thermodynamic temperature is severely limited by the uncertainty of the temperatures assigned to the fixed points, which also lie at inconveniently low temperatures. So why not just use higher-temperature fixed points?
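In symbols, the ITS-90 defines the temperature T90 in this range through a ratio of Planck radiances of the form1

$$
\frac{L_\lambda(T_{90})}{L_\lambda\!\left[T_{90}(\mathrm{X})\right]}
  = \frac{\exp\!\left(\dfrac{c_2}{\lambda\,T_{90}(\mathrm{X})}\right) - 1}
         {\exp\!\left(\dfrac{c_2}{\lambda\,T_{90}}\right) - 1},
$$

where T90(X) is the freezing temperature of silver, gold or copper, λ is the wavelength in vacuum and c2 ≈ 0.014388 m K is the second radiation constant; a radiance ratio measured at a known wavelength can then be inverted directly for T90.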

The answer lies in metallurgy: the fixed points mentioned so far necessarily take the form of a pure metal ingot encased in graphite (the picture shows a cleaved fixed-point cell of this type). But, at temperatures above the freezing point of copper, suitable metals dissolve carbon from the graphite crucible, forming a metal–carbon alloy. The phase diagram of such alloys is characterized by a eutectic point: the composition and temperature corresponding to the lowest melting point of a mixture of components. The eutectic temperature is much lower than the freezing temperature of the pure metal. For a long time it was thought that this depression of the melting temperature prohibited the use of fixed points above the copper point.

This unfortunate situation persisted until 1999, when Yamada5 realized that metal–carbon alloys could themselves be used as fixed points: the ingot takes up exactly the amount of carbon from the graphite crucible needed to form the eutectic composition, so the composition, and hence the temperature, at the eutectic point is invariant. In a case of history repeating itself, a concerted effort followed to characterize the melting temperatures of a wide range of metal–carbon alloys up to almost 3,200 °C (ref. 6). These high-temperature fixed points have proven to be extremely reproducible and now play a major role in the ongoing reduction of uncertainty in temperature measurement above the silver point; end users are already seeing direct benefits in the form of lower thermocouple and radiation thermometer measurement uncertainties.