A year ago this month, Nature Electronics launched with an aim to connect the work of scientists, engineers and industry. Since then we have published research from across electronics, including the building of field-effect transistors from new two-dimensional materials, the creation of artificial neurons and neural networks from memristors, and the testing of ingestible sensors that can provide gas profiles of the gut. We have featured Comment, Perspective and Review articles on topics that range from sweat sensors and metrology methods to research policy in the semiconductor industry and the importance of frugal labware. We also launched our Reverse Engineering series, dedicated to the history of influential technologies. And to mark our first anniversary, we have put together an interactive timeline to explore these articles, which includes pieces from the inventors of the microprocessor, dynamic random access memory (DRAM) and Ethernet.

Looking forward, this month we also announce our technology of the year. Intended as an annual feature of our January issue, and representing the thoughts of the editorial team, it aims to highlight an emerging area that we believe has achieved a key breakthrough or reached an important moment of development. This could relate to advances that are fundamental or applied, and that have occurred in academia or industry. It reflects, in part, what has happened in the previous year and, in part, what we expect to see in the coming year. For 2019, we have chosen edge computing.

Edge computing — in which data is processed on distributed nodes or devices, near to where it is being generated, that is, at the edge of the network — is not a new idea. As Mahadev Satyanarayanan of Carnegie Mellon University explains in our Reverse Engineering column in this issue, its development can be traced back to a paper from 2009 (ref. 1). (Though, like many technologies, its origin story has many branches, which can be followed even further back; see ref. 2.) The writing of the paper, which emerged from a meeting in October 2008 between Satyanarayanan and other researchers in mobile computing, was driven by a shared concern about the potential limitations of centralized cloud computing in handling resource-intensive applications that might emerge in the future. The paper thus proposed a dispersed computing infrastructure that the researchers termed cloudlets, which are, in essence, ‘data centres in a box’.

Cloud computing currently underpins numerous everyday applications and services. But for those that consume a lot of bandwidth, such as image processing, or where fast response times are a critical factor, such as for self-driving cars, the round trip to the cloud can be problematic. Edge computing offers potential solutions. The ability to process large amounts of data near its source also means that edge computing can play a valuable role in the Internet of Things (IoT). And this role, which appears poised to become increasingly important in the continuing development of IoT applications, is a key reason why we highlight the field now.
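To make the round-trip argument concrete, a rough back-of-the-envelope comparison is sketched below. It is purely illustrative: the payload size, bandwidths, latencies and compute times are assumed values chosen for the example, not measurements from any particular system.

# Illustrative comparison of processing a camera frame in the cloud versus on a
# nearby edge node. All numbers are assumptions chosen for illustration only.

def response_time_ms(payload_mb, uplink_mbps, network_rtt_ms, compute_ms):
    """Total time to upload a payload, process it remotely and return a result."""
    transfer_ms = (payload_mb * 8.0 / uplink_mbps) * 1000.0  # upload time in ms
    return transfer_ms + network_rtt_ms + compute_ms

frame_mb = 2.0  # a single ~2 MB image from, say, a vehicle camera

cloud = response_time_ms(frame_mb, uplink_mbps=20.0, network_rtt_ms=80.0, compute_ms=15.0)
edge = response_time_ms(frame_mb, uplink_mbps=1000.0, network_rtt_ms=2.0, compute_ms=40.0)

print(f"cloud round trip: ~{cloud:.0f} ms")  # dominated by upload and network latency
print(f"edge round trip:  ~{edge:.0f} ms")   # slower compute, far less data movement

Even with slower compute at the edge node, the saving in data movement dominates in this toy example, which is the intuition behind pushing processing towards the edge for bandwidth-heavy or latency-critical applications.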

Edge computing has already generated considerable interest and investment from both start-ups and established companies (ref. 3). In an interview in this issue, for example, Victor Bahl of Microsoft Research explains that Microsoft has a big bet on the technology, highlighting a recent US$5 billion investment in IoT and edge computing (ref. 4). In the interview, which focuses on the future of the field, Bahl also suggests that the technology will be indispensable in a range of industries, including telecommunications, manufacturing, agriculture, transportation and healthcare. In a Comment article elsewhere in this issue, Yang Yang of ShanghaiTech University also considers the future direction of edge computing. Here, though, he tackles the question of how potential intelligent IoT applications can be delivered, suggesting that a multi-tier approach, which integrates technologies from the cloud to the edge, will be required.

Our choice of edge computing for technology of the year also reflects what is happening within the race to create chips and devices specifically designed for machine learning and artificial intelligence. Implementing such methods on mobile devices and embedded platforms, such as smart sensors or wearable devices, presents a particular challenge because area and power resources are limited. (Precise definitions of edge computing vary; here, at least, we include data processing on such systems within our definition.) However, across the spectrum from commercial devices to fundamental research, capabilities are building. Apple’s latest iPhones, for example, come equipped with AI-specific hardware (ref. 5), and the distributive potential of AI chips has also awakened interest in chip start-ups (ref. 6). On the fundamental side, novel approaches to analogue computing using memristors have, for instance, been demonstrated that could lead to more energy-efficient forms of edge computing (ref. 7).
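One common way of coping with those area and power limits is to reduce the numerical precision of a model before it is deployed. The sketch below is a minimal, hypothetical illustration of post-training 8-bit weight quantization using NumPy; the random matrix simply stands in for a trained layer, and the technique is not specific to any of the chips mentioned above.

import numpy as np

# Minimal sketch of symmetric post-training weight quantization, one standard
# way to shrink a model's memory footprint for resource-limited edge hardware.
# The weight matrix is random and stands in for a real trained layer.

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512)).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # one scale factor for the tensor
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(f"float32 storage: {weights.nbytes // 1024} KiB")    # 1024 KiB
print(f"int8 storage:    {quantized.nbytes // 1024} KiB")  # 256 KiB
print(f"mean absolute quantization error: {np.abs(weights - dequantized).mean():.4f}")

A four-fold reduction in weight storage, together with the option of cheaper integer arithmetic, is a large part of why low-precision, dedicated hardware is attractive at the edge.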

The computational demands of state-of-the-art machine learning methods are, though, considerable and, as was recently highlighted in Nature Electronics (refs 8,9), architecture, circuit and device innovations will be required in order to meet their increasing needs. We are thus at a critical moment for the field, which creates exciting challenges and opportunities for researchers across academia and industry.

As such technologies develop, it is also important to consider their broader context. In particular, as the data they generate grows, data protection becomes an increasingly critical factor. In a Comment article in this issue, Sandra Wachter of the University of Oxford explains how data protection laws in Europe need to evolve in order to guard against the predictions about personal behaviours and private lives that the technologies could generate. And as she explains, “the future of edge computing requires a dialogue between developers and society that does not only focus on what is technically possible, but also on what is reasonable.”