Richard Feynman once made a statement to the effect that the history of mathematics is largely the history of improvements in notation — the progressive invention of ever more efficient means for describing logical relationships and making them easier to grasp and manipulate. The Romans were stymied in their efforts to advance mathematics by the clumsiness of Roman numerals for arithmetic calculations. After Euclid, geometry stagnated for nearly 2,000 years until Descartes invented a new notation with his coordinates, which made it easy to represent points and lines in space algebraically.
Feynman himself, of course, introduced into physics a profound change in notation with his space–time diagrams for quantum field theory. Previously, writing out the terms in an infinite series for a probability amplitude involved a laborious algebraic procedure, which Feynman replaced with simple pictures and explicit rules to translate them into mathematical expressions. This was an advance in housekeeping, if you will, but also among the most important advances in twentieth-century mathematical physics.
However, one of the most important and elegant advances in mathematical notation has perhaps not yet achieved the wide recognition it deserves. In 1873, the English mathematician and philosopher William Clifford invented a deceptively simple algebraic system unifying Cartesian coordinates with complex numbers, and offering a compact representation of lines, areas and volumes, as well as rotations, in 3-space. In more advanced physics, Clifford's algebra — he called it 'geometric algebra' — is now well recognized as the natural algebra for describing physics in 3-space, but it hasn't yet caught on in engineering, or even in standard treatments of electricity and magnetism or fluid dynamics, where vector analysis with its ugly cross product still holds sway.
Clifford's geometric algebra begins with the three coordinate vectors e_{1}, e_{2}, e_{3} inherited from Descartes for the three independent directions in space. These satisfy the usual rules of orthonormality, e_{i} • e_{j} = δ_{ij}; they are mutually perpendicular and of unit length. Clifford then introduced another kind of multiplication between vectors, denoted as e_{i}e_{j}. His key point was to assume that this kind of multiplication would be anticommutative for i not equal to j, that is, e_{i}e_{j} = −e_{j}e_{i}. Another way to put it is that multiplication between parallel vectors is commutative, whereas it is anticommutative for orthogonal vectors.
These rules are enough to define the algebra, and it's then easy to work out various implications. For example, (e_{1}e_{2})^{2} = (e_{2}e_{3})^{2} = (e_{3}e_{1})^{2} = −1. Something like e_{1}e_{2} is called a bivector, but isn't a vector at all; rather it is a novel thing in its own right. Similarly, e_{1}e_{2}e_{3} also isn't a vector, or a bivector, but a trivector, another totally new thing, the square of which also comes to −1. Within this algebra, the most general object is a multivector — the sum of a scalar, vector, bivector and trivector. In a sense, this is an advance over Descartes in that it provides a way to combine lines, areas and volumes within one formalism.
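These defining rules can be checked concretely. One standard device — an assumption of this sketch, not something the column itself invokes — is to represent the basis vectors e_{1}, e_{2}, e_{3} by the 2×2 Pauli matrices, under which the geometric product becomes ordinary matrix multiplication:

```python
import numpy as np

# Pauli-matrix representation of Clifford's basis vectors (a standard
# isomorphism for the algebra of 3-space, assumed here for illustration).
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Orthonormality: the symmetric part of e_i e_j is the dot product delta_ij.
for i, a in enumerate((e1, e2, e3)):
    for j, b in enumerate((e1, e2, e3)):
        sym = (a @ b + b @ a) / 2
        assert np.allclose(sym, (1 if i == j else 0) * I2)

# Anticommutativity for orthogonal vectors: e1 e2 = -e2 e1.
assert np.allclose(e1 @ e2, -(e2 @ e1))

# Each bivector squares to -1 ...
for B in (e1 @ e2, e2 @ e3, e3 @ e1):
    assert np.allclose(B @ B, -I2)

# ... and so does the trivector e1 e2 e3.
t = e1 @ e2 @ e3
assert np.allclose(t @ t, -I2)
```

A general multivector is then just a complex linear combination of these matrices together with the identity; sums and products of multivectors never leave the algebra.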
The resulting algebra has remarkable richness within it. The bivectors e_{1}e_{2}, e_{2}e_{3}, e_{3}e_{1}, for example, can be thought of as oriented areas. They are linked to rotations respectively about the e_{3}, e_{1}, e_{2} axes, and act identically to the basis elements of William Rowan Hamilton's quaternions, which he introduced in 1843 in an attempt to generalize complex numbers to three dimensions. The trivector — for simplicity, we can denote it as ĭ — acts analogously to i = √−1; its square is −1 and it commutes with all the basis vectors. Using this shorthand, the bivectors and trivector together satisfy the Pauli algebra e_{i}e_{j} = ĭɛ_{ijk}e_{k} for i ≠ j, central to the description of rotations in three dimensions (here k is summed over, and ɛ_{123} = 1; ɛ changes sign under the interchange of any two indices and vanishes if any two are equal).
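Both claims — that the trivector commutes with every basis vector and that the Pauli relation holds — can be verified in the same illustrative matrix representation used above (again an assumption of this sketch):

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
t = e1 @ e2 @ e3   # the trivector, playing the role of i = sqrt(-1)

# The trivector commutes with every basis vector.
for e in (e1, e2, e3):
    assert np.allclose(t @ e, e @ t)

# Pauli relation for i != j: e_i e_j = t * eps_ijk e_k.
assert np.allclose(e1 @ e2, t @ e3)     # eps_123 = +1
assert np.allclose(e2 @ e3, t @ e1)     # eps_231 = +1
assert np.allclose(e3 @ e1, t @ e2)     # eps_312 = +1
assert np.allclose(e2 @ e1, -(t @ e3))  # eps_213 = -1
```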
For example, the rotation of any vector about the e_{3} axis is generated by multiplying the vector from the left by e_{2}e_{1}; this bivector is a 'rotor' that acts as an operator generating a rotation through the arc swept out as e_{1} rotates into e_{2}. For any two unit vectors e_{a} and e_{b}, e_{b}e_{a} generates a similar rotation in the plane defined by the two vectors. Of course, these rotations satisfy a non-commutative algebra, as must be true if they are to represent the consequences of rotations in 3-space faithfully.
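In the matrix sketch, left multiplication by e_{2}e_{1} indeed turns e_{1} into e_{2}. For completeness the example below also shows the standard two-sided rotor form R v R̃ with R = cos(θ/2) − e_{1}e_{2} sin(θ/2), which is textbook geometric algebra rather than anything stated in the column, and which rotates arbitrary vectors through arbitrary angles about e_{3}:

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Left multiplication by the bivector e2 e1 gives a quarter turn in the
# e1-e2 plane: e1 -> e2 and e2 -> -e1.
rot = e2 @ e1
assert np.allclose(rot @ e1, e2)
assert np.allclose(rot @ e2, -e1)

# General rotation about e3 by angle theta: the two-sided rotor form
# (standard geometric algebra, included here as background).
theta = np.pi / 2
R = np.cos(theta / 2) * I2 - np.sin(theta / 2) * (e1 @ e2)
R_rev = np.cos(theta / 2) * I2 + np.sin(theta / 2) * (e1 @ e2)  # reverse of R

assert np.allclose(R @ e1 @ R_rev, e2)  # e1 rotates into e2
assert np.allclose(R @ e3 @ R_rev, e3)  # the rotation axis is untouched
```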
It is also completely natural not only to add or subtract multivectors, but to multiply or divide them — something not possible with ordinary vectors. The result is always another multivector. In the particular case of a multivector that is an ordinary vector V, the inverse turns out to be V/v^{2}, where v^{2} is the squared magnitude of V. It's a vector in the same direction but of reciprocal magnitude.
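The inverse formula is easy to check: since orthogonal basis vectors anticommute, the cross terms in VV cancel and VV = v² (a scalar), so V/v² is indeed the inverse. A small numerical check, with hypothetical component values, in the same matrix sketch:

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2)

# A sample vector V = 3 e1 + 4 e2 (example values chosen for illustration).
V = 3 * e1 + 4 * e2
v2 = 3**2 + 4**2            # squared magnitude of V, here 25

V_inv = V / v2              # the claimed inverse V / v^2
assert np.allclose(V @ V_inv, I2)   # V times its inverse is the scalar 1
assert np.allclose(V_inv @ V, I2)   # and it works from either side
```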
Write out the components for the product of two vectors U and V, and you find the result UV = U•V + ĭU × V, with • and × being the usual dot and cross product of vector analysis. Hence, geometric algebra blends both operations in a natural way.
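This decomposition of the geometric product into dot and cross parts can also be confirmed numerically in the illustrative matrix representation, using arbitrary sample components:

```python
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = (e1, e2, e3)
t = e1 @ e2 @ e3   # the trivector, standing in for i

def vec(c):
    """Embed an ordinary 3-vector with components c into the algebra."""
    return sum(ci * ei for ci, ei in zip(c, basis))

# Two sample vectors (values chosen only for illustration).
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

lhs = vec(u) @ vec(v)                                       # geometric product UV
rhs = np.dot(u, v) * np.eye(2) + t @ vec(np.cross(u, v))    # U.V + i (U x V)
assert np.allclose(lhs, rhs)
```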
For nearly 40 years, physicist David Hestenes of Arizona State University has waged a one-man crusade to advertise Clifford's geometric algebra and to lift it up to what he sees as its rightful place in physics. It hasn't worked yet. The standard techniques of vector analysis as originally introduced by Gibbs remain dominant instead, which is too bad.
Maxwell's equations in vector notation are often cited as a prime example of the beauty of physics, but the elegance is only enhanced in geometric algebra. It's natural to combine the electric and magnetic fields into one field quantity: F = E + ĭcB. The full equations then take the simple form ∇F = J, where ∇ is the four-gradient (1/c)∂_{t} + ∂_{r} and J the four-current ρ/ε_{0} − cμ_{0}J, with ρ the charge density and J the current density. By combining the dot and cross products, Maxwell's four equations collapse into one (of course, this can also be achieved in tensor notation).
The improvement is even more startling for the Dirac equation, which actually takes the form of a simple generalization of Maxwell's equations in which the field F becomes a full multivector. This and other examples are explored in more detail in a short review of geometric algebra (J. M. Chappell et al., http://arxiv.org/abs/1101.3619; 2011), and Hestenes has created a wide variety of introductory materials (http://geocalc.clas.asu.edu).
One day, perhaps, Clifford's geometric algebra will be taught routinely to students in place of vector analysis. It would probably eliminate a great deal of confusion, and improve the geometric intuition of many practising scientists.
Change history
28 October 2011
In the version of this Thesis originally published, the final equation quoted in the article was incorrect. The text has been rectified for the HTML and PDF versions.
Buchanan, M. Geometric intuition. Nature Phys 7, 442 (2011). https://doi.org/10.1038/nphys2011