Kevin Kelly argues compellingly that technology is taking on a life of its own, finds Zaheer Baber.
In What Technology Wants, writer Kevin Kelly radically rethinks the relationship between humans and technology. Scientific inventions have become so complex and interwoven with our lives, he says, that humans have less and less sway over how mechanical systems evolve. Nor can we stop the spread of technologies. Consequently, when assessing future risks, we should adopt a proactive approach of trial and error and revision, rather than strict precaution.
To make his point, Kelly introduces the concept of the 'technium' to embody the vast techno-social system. Distinct from individual innovations such as radar or plastic polymer, the technium includes all the machines, processes, society, culture and philosophies associated with technologies. The sheer complexity of interactions between the various layers and loops of the technium gives it a degree of autonomy. As it evolves, it develops its own dynamics.
[Figure: Technologies such as this drone are becoming increasingly independent of humans.]
According to Kelly, an autonomous system displays traits of self-repair, self-defence, self-maintenance, self-control and self-improvement. No current system has all these properties, he admits, but many technologies exhibit some of them. Aeroplane drones can self-steer and stay aloft for hours, for instance, but cannot repair themselves. Communication networks can repair themselves but cannot self-reproduce. Computer viruses can self-reproduce but cannot improve themselves. As technologies multiply and become more adaptive, the technium is becoming increasingly autonomous.
For example, the vast global communications network incorporates 170 quadrillion computer chips (a quadrillion is 10¹⁵) wired up into one giant computing platform, with a density of links approaching that of synapses in the human brain. Scientists can trace the majority of traffic flowing through the networks, but occasional bits are lost or transformed during transmission. Most of these mutations of information are attributable to causes such as hacking and machine error, but a few per cent are not: these changes originate not from humans, but from vagaries in the system itself. The flow of bits through the telephone network has, in the past decade, become statistically similar to the fractal pattern found in self-organized systems. This suggests that it is developing behaviour of its own.
Although the technium has neither an idea of self nor conscious desires, it develops mechanical tendencies, or 'wants', through its complex behaviour. Its millions of amplifying relationships and circuits of influence push the technium in certain directions. For example, some personal robots can navigate obstacles to seek out power outlets and plug themselves in to be recharged. For Kelly, these robots are like bacteria drifting towards nutrients with no conscious awareness of that goal. As frontier technologies increase in sophistication, these 'wants' gain in both complexity and force. Moreover, the tendencies become increasingly independent of human designers and users.
As Kelly points out, technophobes and technophiles alike agree that the technium is spinning beyond human control. They disagree only on what should be done about it: whether the technium should be stopped, modified or embraced. Kelly respects all sides in the polarized debates about technology. He accepts the unease that the technium can unleash, devoting chapters to the anti-technology manifesto of the Unabomber, Ted Kaczynski, and the selective uptake of innovations by Amish people. He recognizes those who have positions in between, including the proponents of indigenous knowledge and inventors themselves.
Apprehension about the technium assuming a life of its own continues to grow with the rise of genomics, robotics, informatics and nanotechnology. Cautious states and publics often turn to the precautionary principle, which holds that any technology must be shown to do no harm before it is embraced. Kelly argues that this approach is impractical and ultimately unattainable: every technology produces degrees of good, harm and risk, and the evolution of each is uncertain, so none can ever be declared decisively safe.
As an alternative, Kelly draws on philosopher Max More's 'proactionary principle', which states that the only way to evaluate new technologies is to try them out as prototypes and then refine them. To evaluate risk we must continually assess new technologies in the context of use. Kelly pares More's principle down to five elements: anticipation; continual assessment; prioritization of risks; rapid correction of harm; and redirection.
Owing to the autonomy of the technium, Kelly contends, it is pointless to ban risky technologies. Attempts to put a moratorium on them will only ensure that the emergent ones will be even more impervious to human control — exhibiting a form of natural selection. Instead, we should strive to produce technology that is 'more convivial' — that is, more compatible with life. Kelly believes that every technology can be channelled towards uses that promote greater transparency and more collaboration, flexibility and openness across society.
He draws extensively on other studies, particularly Langdon Winner's groundbreaking book Autonomous Technology (MIT Press, 1977). Winner famously discussed uncontrollable “technological drift” as one of the most disturbing features of modern life. He also extensively used the phrase 'socio-technical system' rather than 'social system' to capture the seamless amalgamation of humans and technology. But Kelly's concept of the technium and his description of how it attains autonomy are original and timely.