Artificial intelligence (AI) is transforming various fields, such as clinical diagnosis, autonomous driving and speech translation. However, the quickly increasing volume of data in modern society poses great challenges for the electronic computing hardware used in AI, in terms of both computing speed and power consumption. Such issues have become a major bottleneck for AI. Writing in Nature, Xu et al.1 and Feldmann et al.2 report photonic processors that accelerate AI processing by harnessing the distinctive properties of light. These demonstrations could inspire a renaissance of optical computing.
With the rise of AI, conventional electronic computing approaches are gradually reaching their performance limits and lagging behind the rapid growth of data available for processing. Among the various types of AI, artificial neural networks are widely used for AI tasks because of their excellent performance. These networks perform complex mathematical operations using many layers of interconnected artificial neurons3. The fundamental operation that uses most of the computational resources is called matrix–vector multiplication.
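To make the cost of this operation concrete, here is a minimal sketch (not drawn from either paper) of how a single layer of artificial neurons reduces to a matrix–vector multiplication; the sizes and values are arbitrary illustrations.

```python
import numpy as np

# A layer of m output neurons, each computing a weighted sum of n inputs,
# is exactly a matrix-vector multiplication: m * n multiply-accumulate
# operations, which dominate the computational cost of a neural network.
n_inputs, n_outputs = 4, 3
weights = np.arange(n_outputs * n_inputs, dtype=float).reshape(n_outputs, n_inputs)
x = np.ones(n_inputs)

y = weights @ x  # the fundamental operation accelerated by the new processors

# Equivalent explicit loop, to make the operation count visible:
y_loop = np.array([sum(weights[i, j] * x[j] for j in range(n_inputs))
                   for i in range(n_outputs)])
assert np.allclose(y, y_loop)
```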
Various efforts have been made to design and implement specific electronic computing systems to accelerate processing in artificial neural networks. In particular, considerable success has been achieved using custom chips known as application-specific integrated circuits4, brain-inspired computing5 and in-memory computing6, whereby processing is performed in situ with an array of memory devices called memristors.
Electrons are the carriers of information in electronic computing, but photons have long been considered an alternative option. Because the spectrum of light covers a wide range of wavelengths, photons of many different wavelengths can be multiplexed (transmitted in parallel) and modulated (altered in such a way that they can carry information) simultaneously without the optical signals interfering with each other. This propagation of information at the speed of light results in minimal time delays. Moreover, passive transmission (in which no input power is required) aids ultralow power consumption7, and phase modulation (whereby the quantum-mechanical phases of light waves are varied) enables light to be easily modulated and detected at frequencies greater than 40 gigahertz (ref. 8).
In the past few decades, great success has been attained in optical-fibre communication. However, it remains challenging to use photons for computing, especially at a scale and performance level comparable to those of state-of-the-art electronic processors. This difficulty arises from a lack of suitable parallel-computing mechanisms, materials that permit high-speed nonlinear (complex) responses of artificial neurons and scalable photonic devices for integration into computing hardware.
Fortunately, developments over the past few years in devices called optical frequency combs9 have brought new opportunities for integrated photonic processors. Optical frequency combs are sets of light sources with emission spectra that consist of thousands or millions of sharp spectral lines that are uniformly and closely spaced in frequency. These devices have achieved substantial success in various fields, such as spectroscopy, optical-clock metrology and telecommunication, and were recognized with the 2005 Nobel Prize in Physics. Optical frequency combs can be integrated into a computer chip9 and used as power-efficient energy sources for optical computing. Such a system is well suited to data parallelization through wavelength multiplexing.
Xu and colleagues used such a set-up to produce a versatile integrated photonic processor. This device performs a type of matrix–vector multiplication known as a convolution for image-processing applications. The authors implemented an ingenious method to carry out the convolution. They first used chromatic dispersion — whereby the speed of transmitted light depends on its wavelength — to produce different time delays for wavelength-multiplexed optical signals. They then combined these signals along the dimension associated with the wavelength of the light.
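A minimal numerical sketch of this scheme (an illustration of the principle, not the authors' optical implementation) is as follows: each wavelength channel carries a copy of the input signal scaled by one kernel weight, chromatic dispersion delays channel k by k time steps, and summing the channels at the detector yields the convolution. The signal and kernel values below are hypothetical.

```python
import numpy as np

# Illustrative sketch: convolution from wavelength-multiplexed, time-delayed
# channels. Channel k carries the input weighted by kernel tap k; dispersion
# delays it by k time steps; the photodetector sums all channels.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical input samples
kernel = np.array([0.5, 1.0, -0.5])           # hypothetical kernel taps

n_out = len(signal) - len(kernel) + 1
channels = []
for k, w in enumerate(kernel):
    delayed = signal[k:k + n_out]  # dispersion shifts channel k in time
    channels.append(w * delayed)   # modulation applies the weight

output = np.sum(channels, axis=0)  # summation at the photodetector

# The result matches a standard sliding-window convolution:
assert np.allclose(output, np.convolve(signal, kernel[::-1], mode="valid"))
```

The key point of the scheme is that the sliding-window arithmetic is done for free by the physics: the delays come from dispersion and the summation from photodetection, so no digital multiply–accumulate loop is needed.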
By fully exploiting the wide range of photon wavelengths, Xu et al. achieved intrinsically parallel computing for different convolution operations. The optical-computing speed exceeded ten trillion operations per second using a single processing core and was limited only by the data throughput. Another welcome feature of this work is that the authors identify a practical entry point for their photonic convolution processor. In particular, they suggest that the processor could be used in a hybrid optical–electronic framework, such as for in situ computations during optical-fibre communications.
Feldmann and colleagues independently made an integrated photonic processor that performs a convolution involving optical signals that span two dimensions. The device uses optical frequency combs in an ‘in-memory’ computing architecture that is based on a phase-change material (a material that can switch between an amorphous phase and a crystalline phase). The authors fully parallelized the input data through wavelength multiplexing and conducted analogue matrix–vector multiplication using an array of integrated cells of the phase-change material.
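The in-memory principle can be sketched numerically as follows (illustrative values, not the device's parameters): each phase-change cell is modelled as a fixed optical transmission between 0 (opaque) and 1 (transparent) that stores one matrix weight in place, and each input value rides on its own wavelength, so every row of the matrix is applied in parallel as the light passes through the array.

```python
import numpy as np

# Illustrative model of analogue in-memory matrix-vector multiplication:
# the matrix weights are stored as the transmission states of an array of
# phase-change cells, so the multiplication happens where the data sit.
rng = np.random.default_rng(0)
transmissions = rng.uniform(0.0, 1.0, size=(3, 4))  # stored weight matrix

# Wavelength multiplexing lets all inputs enter the array simultaneously.
inputs = rng.uniform(0.0, 1.0, size=4)

# Detected power per output waveguide is the weighted sum of the inputs.
outputs = transmissions @ inputs

# Same result as an explicit per-cell accumulation:
expected = np.array([sum(transmissions[i, j] * inputs[j] for j in range(4))
                     for i in range(3)])
assert np.allclose(outputs, expected)
```

Because the weights are read out passively by the transmitted light, no data need to be shuttled between separate memory and processing units, which is the bottleneck this architecture avoids.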
Such a highly parallelized framework can potentially process an entire image in a single step and at high speed. Moreover, in principle, the system can be substantially scaled up using commercial manufacturing procedures and aid in situ machine learning in the near future. Because the convolution process involves passive transmission, the calculations of the photonic processing core can, in theory, be performed at the speed of light and with low power consumption. This ability would be extremely valuable for energy-intensive applications, such as cloud computing.
Given the challenges facing conventional electronic computing approaches, it is exciting to see the emergence of integrated photonics as a potential successor that could achieve unprecedented performance for future computing architectures. However, building a practical optical computer will require extensive interdisciplinary efforts and collaborations between researchers in materials science, photonics, electronics and other fields. Although the reported photonic processors have high computing power per unit area and potential scalability, the all-optical computing scale (the number of optical artificial neurons) remains small. Moreover, the energy efficiency is limited by the presence of computing elements that inherently absorb light and by the need to frequently interconvert electrical and optical signals.
Another avenue of research is the development of advanced nonlinear integrated photonic computing architectures, rather than one- or two-dimensional linear convolutions. By integrating electronic circuits and thousands or millions of photonic processors into a suitable architecture, a hybrid optical–electronic framework that takes advantage of both photonic and electronic processors could revolutionize AI hardware in the near future. Such hardware would have important applications in areas such as communication, data-centre operation and cloud computing.
Nature 589, 25-26 (2021)