Main

Over the past decade, billions of sensors in connected devices have been used to translate physical signals and information into the digital world. Due to their limited computing power, sensors integrated into remote embedded devices often transmit raw, unprocessed data to their hosts. However, the high energy cost of wireless data transmission1 limits device autonomy and data transmission bandwidth. Improving the energy efficiency of these devices could open up a new range of applications and reduce their environmental footprint. Furthermore, if data processing moves from remote hosts to local sensor nodes, data transmission can be limited to structured and valuable data, which benefits both autonomy and bandwidth.

The von Neumann architecture—in which memory and logic units are separate—is seen as the critical factor limiting the efficiency of computing systems in general, and of edge devices in particular. The separation between processing and memory imposed by the von Neumann architecture requires data to be sent back and forth between the two during data and signal processing or inference in neural networks. This data communication between memory and processing units already accounts for one-third of the energy spent in scientific computing2.

To overcome the von Neumann communication bottleneck3,4, in-memory computing architectures—in which memory, logic and processing operations are collocated—are being explored. Processing-in-memory devices are especially suitable for performing vector–matrix multiplication, which is a key operation for data processing and the most intensive calculation in machine-learning algorithms. By taking advantage of the memory’s physical layer to perform the multiply–accumulate (MAC) operation, this architecture overcomes the von Neumann communication bottleneck. So far, this processing strategy has been used in applications such as solving linear5,6 and differential equations7, signal and image processing8 and artificial neural network accelerators9,10,11,12. However, the search for the best materials and devices for this type of processor is still ongoing.
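As an illustration of how the memory's physical layer performs the MAC operation, the minimal Python sketch below (with arbitrary conductance and voltage values, not data from this work) shows the underlying correspondence: each cell contributes a current given by Ohm's law, and Kirchhoff's current law sums these contributions along each output line, yielding a vector–matrix product.

```python
# Minimal sketch of an analogue multiply-accumulate (MAC) in a memory array.
# The conductance matrix G and input voltages v are illustrative values only.
import numpy as np

G = np.array([[2.0e-6, 0.5e-6, 1.2e-6],    # cell conductances (siemens)
              [0.1e-6, 3.1e-6, 0.8e-6]])
v = np.array([0.10, 0.05, 0.08])           # input voltages (volts)

# Ohm's law gives the per-cell current G[i, j] * v[j]; Kirchhoff's current
# law sums the currents on each output line, so the read-out currents are
# exactly the vector-matrix product G @ v.
i_out = G @ v
print(i_out)                               # output currents (amperes)
```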

Several devices have been studied for in-memory computing, including standard flash memories, emerging resistive random-access memories and ferroelectric memories3,13,14,15,16,17,18. More recently, two-dimensional (2D) materials have shown promise in the field of beyond-complementary metal–oxide–semiconductor (CMOS) devices19,20,21,22,23,24, as well as in-memory and in-sensor computing25,26,27,28. Due to their atomic-scale thickness, floating-gate field-effect transistors (FGFETs) based on monolayer molybdenum disulfide (MoS2) offer high sensitivity to charge variations in the floating gate and reduced cell-to-cell interference. Such devices could be scaled down to sub-100 nm lengths without loss of performance27,29,30. Moreover, the van der Waals nature of MoS2 allows devices based on these materials to be integrated into the back-end-of-line31. This would allow processors to be fabricated with multiple levels of memory cores directly integrated with the required interfaces, creating dense in-memory networks.

FGFETs based on MoS2 have been used in logic-in-memory32 and in-memory computing, as well as serving as the main building blocks of perceptron layers27,33, where they are projected to offer more than an order of magnitude improvement in power efficiency compared with CMOS-based circuits30. These demonstrations have highlighted the promise of 2D materials for in-memory computing, but further progress and practical applications require wafer-scale fabrication and large-scale or very-large-scale system integration. So far, demonstrations of the wafer-scale and large-scale integration of circuits based on 2D semiconducting materials have been limited to photodetectors34,35,36,37 or traditional analogue and digital integrated circuits38,39,40,41,42; hardware implementations43 with full-wafer and large-scale system integration involving 2D-materials-based non-volatile memories that can perform computation are missing.

In this Article, we report a chip containing a 32 × 32 FGFET matrix with 1,024 memory devices and an 83.1% yield. The working devices show similar IDS versus VG characteristics and hysteresis. We use wafer-scale metal–organic chemical-vapour-deposited (MOCVD) monolayer MoS2 as the channel material, and the entire fabrication process is carried out in a cleanroom on a 4-inch wafer line. We also demonstrate multibit data storage in each device with a single programming pulse. Finally, we show that our devices can be used for in-memory computing by performing discrete signal processing with different kernels in a highly parallelized manner.

Memory matrix

We approach in-memory computing by exploiting charge-based memories using monolayer MoS2 as the channel material. Specifically, we fabricated FGFETs to take advantage of the electrostatic sensitivity of 2D semiconductors19. To enable the realization of larger arrays, we organized our FGFETs in a matrix in which individual memory elements can be addressed by selecting the corresponding row and column. Figure 1a,b shows a three-dimensional rendering of the memory matrix and the detailed structure of each FGFET, respectively. The matrix configuration allows a denser topology and corresponds directly to performing vector–matrix multiplications. Our memories are controlled by local 2 nm/40 nm Cr/Pt gates fabricated in a gate-first approach. This allows us to improve the growth of the dielectric by atomic layer deposition38 and to minimize the number of processing steps that the 2D channel is exposed to, resulting in an improved yield. The floating gate is a 5 nm Pt layer sandwiched between 30 nm HfO2 (blocking oxide) and 7 nm HfO2 (tunnel oxide). Next, we etch vias in the HfO2 to electrically connect the bottom metal (M1) and top metal (M2) layers. This is required for routing the source and drain signals without overlap. Wafer-scale MOCVD-grown MoS2 is transferred on top of the gate stack and etched to form the transistor channels. Supplementary Figs. 1 and 2 provide details about the material quality and characterization. Finally, 2 nm/60 nm Ti/Au is patterned and evaporated on top, forming the drain–source contacts as well as the second metal layer. Methods provides further details about the fabrication and Supplementary Figs. 3–8 show the characterization details. Figure 1c shows an optical image of the fabricated chip containing 32 rows and 32 columns for a total of 1,024 memories. In the image, the source lines are accessed from the bottom; the drain lines, from the right; and the gate lines, from the left.

Fig. 1: Device and matrix description and characterization.

a, Three-dimensional rendering of the FGFETs connected into a matrix array. Both gate and drain contacts are organized in rows, and the source signal is applied to the columns. The gate signals are applied on the left side and the drain signals, on the right. The drain–source current is read from the column. The inset shows the correspondence between signals and vector–matrix multiplication. b, Three-dimensional rendering of the FGFET cross section, showing the different device parts. c, Optical image of the memory matrix configuration. Scale bar, 500 µm. d, IDS versus VG hysteresis curves of the 851 working devices; the red curve highlights the behaviour of one of the 851 memory devices, and the grey curves correspond to the remaining devices. e, Three-dimensional plot showing the mapping of the ON and OFF currents on the 32 × 32 chip. Devices in orange are disconnected.

Our memories operate like standard flash memories. The memory mechanism relies on shifting the neutral threshold voltage (VTH0) by changing the amount of charge (ΔQ) in the trapping layer, that is, the platinum floating gate in our case. When a high positive/negative bias is applied to the gate, the band alignment starts favouring the tunnelling of electrons into/out of the floating gate from the semiconductor, changing the carrier concentration in the trapping layer. We define the memory window (ΔVTH) as the difference between the threshold voltages extracted from the forward and reverse sweeps at a constant current level. Our previous work verified the programming mechanism by fitting our experimental curves with a device simulation model27,29. Since the memory effect relies entirely on a charge-based process, flash memories tend to have better reliability and reproducibility than material-dependent emerging memories such as resistive random-access memories and phase-change memories3. We designed and manufactured a custom device interface board to facilitate the characterization of the memory array (Supplementary Figs. 9 and 10 provide a detailed description). Figure 1d shows the IDS versus VG sweeps performed for each device. The fabrication yield is 83.1% and the devices are statistically similar (Supplementary Section 4). The relatively high OFF-state current is due to the limited resolution of the analogue-to-digital converters used in the setup; high-resolution single-device measurements confirm typical OFF-state currents on the order of picoamperes. Figure 1e shows the ON and OFF current distribution over the memory matrix. Both ON and OFF currents are taken at VDS = 100 mV, forming two distinct planes, and both are distributed uniformly over the entire matrix. Supplementary Figs. 13 and 14 show further detailed single-device characterization, confirming that the devices perform as memories with good retention and endurance. The devices have a statistically similar memory window of ΔVTH = 4.30 ± 0.25 V. This value is smaller than the one extracted from single-device measurements because of the higher slew rate (5 V s–1) required for the time-effective characterization of the 1,024 devices in the matrix.
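To illustrate the extraction, the minimal Python sketch below implements a constant-current threshold-voltage read-out of the kind described above; the sigmoidal transfer curves and the 1 nA criterion are toy assumptions rather than measured data.

```python
# Minimal sketch of a constant-current threshold-voltage extraction from a
# forward/reverse gate sweep, used to compute the memory window dVTH.
# The transfer curves and the current criterion are toy assumptions.
import numpy as np

def vth_constant_current(vg, ids, i_crit=1e-9):
    """Gate voltage at which IDS crosses the criterion current."""
    order = np.argsort(ids)            # np.interp needs increasing x values
    return np.interp(i_crit, ids[order], vg[order])

vg = np.linspace(-10, 10, 401)
ids_fwd = 1e-6 / (1 + np.exp(-(vg - 3.3)))   # toy forward branch (erased)
ids_rev = 1e-6 / (1 + np.exp(-(vg + 1.0)))   # toy reverse branch (programmed)

dvth = vth_constant_current(vg, ids_fwd) - vth_constant_current(vg, ids_rev)
print(f"memory window dVTH = {dvth:.2f} V")  # ~4.3 V for these toy curves
```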

Open-loop programming

The similarity of the devices motivates us to pursue a statistical study of the memories’ programming behaviour. In the context of in-memory computing, an open-loop programming analysis is fundamental. Standard write–verify approaches may be too time-consuming when programming a large flash memory array. A statistical understanding of memory states in an open loop is essential to improve the performance and speed.

We perform the experiment such that each device is independently excited by selecting the corresponding row (i) and column (j). Analogue switches in the device interface board keep a low-impedance path in the selected row (i)/column (j) and high impedance in the remaining rows and columns. This ensures that a potential difference is applied only to the desired device, avoiding unwanted programming. For the same reason, we divide the device programming and reading into two independent stages. During the programming phase, the corresponding gate line (row) and source line (column) are selected, and programming pulses with parameters TPULSE and VPULSE are applied to the gate. Due to the tunnelling nature of the device, only two terminals are required to generate the band bending needed for charge injection into the floating gate. After the pulse, the gate voltage is changed to VREAD, which is low enough to prevent reprogramming of the memory state. In the reading phase, the drain line is also connected, and the conductance value is probed by applying a voltage VDS to the drain. This two-stage procedure is required because we are using a three-terminal device: the gate and drain share the same row, and consequently, the entire row is biased when the gate and drain lines are engaged. If high gate voltages were applied while the drain line is connected, the whole row would be reprogrammed, causing a loss of the information stored in the memories. Figure 2a illustrates this two-stage programming procedure, which is summarized in the sketch below.
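The following minimal Python sketch summarizes the two-stage sequence; the interface-board driver functions (select_row, set_drain and so on) are hypothetical placeholders for illustration, not the actual API of our setup.

```python
# Minimal sketch of the two-stage, open-loop program/read sequence.
# All board.* methods are hypothetical placeholders for the switch-matrix
# driver of the custom interface board; they are not a real API.

V_READ = -3.0    # gate bias during read, low enough not to reprogram (V)
V_DS = 1.0       # drain-source read voltage (V)
T_PULSE = 0.1    # programming pulse width (s)

def program_and_read(board, i, j, v_pulse):
    # Programming phase: only the gate line (row i) and source line
    # (column j) are set to low impedance; the drain line stays at high
    # impedance so the rest of the row is not disturbed.
    board.select_row(i)
    board.select_column(j)
    board.set_drain(high_z=True)
    board.apply_gate_pulse(v_pulse, T_PULSE)
    board.set_gate(V_READ)        # drop the gate below the programming threshold

    # Reading phase: the drain line is engaged only after the gate is at
    # V_READ, so the high programming voltage never reaches the full row.
    board.set_drain(high_z=False)
    current = board.read_current(V_DS)
    return current / V_DS         # programmed conductance state (S)
```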

Fig. 2: Open-loop programming.

a, Schematic of the two-stage operation of the open-loop programming scheme. In the programming phase, the interface board is used to set the gate and source lines to the low-impedance state and the drain line to the high-impedance state, whereas in the reading phase, all three lines are set to the low-impedance state. b, Distribution of output states (wOUT) on the linear scale. The data are fitted with a gamma distribution. c, Distribution of output states (wOUT) on the log10 scale. The distributions are fitted with a Gaussian distribution. d, Three-dimensional map of the log10 value of wOUT as a function of device position for different programming voltages. e, Empirical cumulative distribution function (ECDF) as a function of the programmed states on the log10 scale.

For the subsequent measurements, we used VREAD = −3 V, VDS = 1 V and TPULSE = 100 ms. Before each measurement, we reset the memories by applying a positive 10 V pulse, which puts the devices into a low-conductance state. Due to parasitic resistances in the matrix, a linear compensation of the digital gains is applied (Supplementary Figs. 17 and 18 provide further details). The compensation method improves the programming reliability of the devices by an order of magnitude. We estimate an error rate of 500 errors per million when programming one bit and one error per million when programming the erase state. Figure 2b,c shows the distribution of memory states after pulses of different intensities, namely, VPULSE = +10 V, −4 V, −6 V, −8 V and −10 V, in both linear and logarithmic representations. We observe that on a linear scale, an increase in the pulse amplitude is accompanied by a higher memory state value and a larger spread. On the other hand, by analysing the logarithm of the state value, we can see that the memory has well-defined storage states. This leads us to conclude that this memory has the potential for multivalued storage without write–verify algorithms, especially when used on a logarithmic scale.
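The statistical treatment can be sketched as follows in Python; the synthetic state values are illustrative stand-ins for one programmed level, not our measured distributions, and scipy is assumed to be available.

```python
# Minimal sketch of the state-distribution analysis of Fig. 2b,c: a gamma
# fit on the linear scale and a Gaussian fit on the log10 scale. The
# states are synthetic lognormal samples, not measured data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
w_out = rng.lognormal(mean=np.log(1e-6), sigma=0.3, size=1024)  # toy states (S)

# Linear scale: gamma distribution (location fixed at zero)
shape, loc, scale = stats.gamma.fit(w_out, floc=0)

# Log10 scale: Gaussian, whose mean and spread define the storage level
mu, sigma = stats.norm.fit(np.log10(w_out))
print(f"log10 state level: {mu:.2f} +/- {sigma:.2f}")
```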

Figure 2d shows the spatial distribution of the states over the entire chip. We observe that the memory states form a near-constant plane for each programming voltage, VPULSE. Finally, Fig. 2e shows the empirical cumulative distribution function (ECDF) of the logarithmic representation. These results support the possibility of multivalued programming, as discussed previously, and indicate that the memory elements can be used for storing analogue weights for in-memory computing.

States and vector–matrix multiplications

With the open-loop analysis completed, we plot the memory states (<w>) as a function of the programming voltage (VPROG) (Fig. 3a). We define four equally distributed states (two-bit resolution) to be programmed as discrete weights in the matrix for the vector–matrix multiplication (Supplementary Fig. 20). To analyse the effectiveness of the processor for performing vector–matrix operations, we compare (Fig. 3b) the normalized theoretical (yTHEORY) value with the normalized experimental (yEXP) value obtained over several dot-product operations. A linear regression of the experimental points yields yEXP = a × yTHEORY + b with a = 0.988 ± 0.008 and b = −0.129 ± 0.003, where the shaded area corresponds to the 95% confidence interval. An ideal processor would give a = 1 and b = 0 with a confidence interval that collapses onto the fitted line. In our case, the processor behaves linearly and approaches the ideal case, albeit with some spread and a slight nonlinearity in the experimental values. We attribute this behaviour to the non-idealities of the memories and the quantization error due to the limited resolution of the states. The shift in parameter b can be explained by the intrinsic offset of the transimpedance amplifier together with memory leakage, visible at yTHEORY = 0; it does not affect the observed linear trend. We conclude that we can perform MAC operations with reasonable accuracy. This operation is needed for diverse types of algorithms, such as signal processing and inference in artificial neural networks.
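This linearity check can be reproduced in a few lines of Python; the placeholder data below stand in for the measured dot products and only illustrate the regression itself.

```python
# Minimal sketch of the linearity analysis of Fig. 3b: regress the measured
# outputs against the theoretical ones and report slope/intercept errors.
# The data are synthetic placeholders, not the experimental dot products.
import numpy as np
from scipy import stats

y_theory = np.linspace(0, 1, 200)            # normalized expected outputs
noise = np.random.default_rng(1).normal(0, 0.02, 200)
y_exp = 0.99 * y_theory - 0.13 + noise       # toy measured outputs

res = stats.linregress(y_theory, y_exp)
print(f"a = {res.slope:.3f} +/- {res.stderr:.3f}")
print(f"b = {res.intercept:.3f} +/- {res.intercept_stderr:.3f}")
# An ideal processor converges to a = 1 and b = 0 with vanishing errors.
```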

Fig. 3: MAC operations.

a, Output memory states with programming error (<w>) as a function of programming voltage (VPROG). To define the state positions, we perform a fit and select the corresponding state branches for a two-bit open-loop operation. b, Normalized yEXP versus yTHEORY plot, comparing the experimental and theoretical results of the MAC operation. The curve is fitted with a linear function with parameters a = 0.988 ± 0.008 and b = −0.129 ± 0.003. The shaded area corresponds to the 95% confidence interval of the linear fitting.

Signal processing

Next, we configure this accelerator to perform signal processing to demonstrate a real-world application. For signal processing, the input signal (x) is convolved with a kernel (h), resulting in the processed signal (y). Depending on the nature of the kernel elements, different types of processing can be achieved. Here we limit ourselves to three different kernels that perform low-pass filtering, high-pass filtering and feedthrough. All the kernels run in parallel within a single processing cycle, demonstrating the efficiency of this processor for data-centric problems through parallelized processing. More kernels could be added in parallel, limited only by the size of the matrix. Figure 4a shows the convolution operation and the different kernels used for processing the input signal. To encode negative kernel values into the conductance values of the memories, we split the kernel (h) into one containing only the positive values (h+) and one containing the absolute values of the negative entries (h−), so that only positive numbers need to be mapped directly onto conductance values (G). After processing, the output of the negative kernel (y−) is subtracted from that of the positive kernel (y+), resulting in the final signal (y = y+ − y−). This splitting scheme is sketched in code below.
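The following minimal Python sketch shows the kernel-splitting scheme; the kernel taps, unit conductance and input signal are illustrative assumptions.

```python
# Minimal sketch of differential kernel encoding: split a signed kernel h
# into h+ and h-, map both to (positive) conductances, and subtract the
# two column outputs. All numerical values are illustrative only.
import numpy as np

h = np.array([-0.25, 0.5, 1.0, 0.5, -0.25])   # signed kernel taps
h_pos = np.clip(h, 0, None)                   # h+: positive taps only
h_neg = np.clip(-h, 0, None)                  # h-: magnitudes of negative taps

g_unit = 1e-6                                 # conductance of a unit weight (S)
G_pos = h_pos / np.abs(h).max() * g_unit      # proportional weight transfer
G_neg = h_neg / np.abs(h).max() * g_unit

x = np.random.default_rng(2).uniform(-0.1, 0.1, 256)   # input signal (V)
y_pos = np.convolve(x, G_pos, mode="same")    # output of the positive kernel
y_neg = np.convolve(x, G_neg, mode="same")    # output of the negative kernel
y = y_pos - y_neg                             # final signal, y = y+ - y-
```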

Fig. 4: Signal processing based on in-memory processing.

a, Description of convolution-based signal processing for different filters (low-/high-pass filters and identity). y, processed signal; x, input signal; h, filter kernel. The kernel is split between its positive and negative components; these values are proportionally transferred to the memory weights. The input signal is simultaneously applied to all the memories and the difference between the output of two columns is the result of the processed signal for a given kernel. b, Comparison of the theoretical kernel weight mapping and the experimental weight transfer into the conductance of the memories. c, Comparison of the fast Fourier transform (FFT) of the simulated and experimental output signals after each kernel.

Figure 4b shows a comparison between the original weights and the weights transferred into the memory matrix using the previously described open-loop programming scheme. To simplify the transfer, we normalize the weight values of each kernel by its maximum value. We observe good agreement between the original and experimental values. Next, to verify the effectiveness of the processing, we construct our input signal (x) as a sum of sinusoidal waves with different frequencies. In this way, we can easily probe the behaviour of the filters at different frequencies without creating an overly complex signal. Since the signal has positive and negative values, its amplitude must fall within the linear region of device operation; we therefore restrict the signal range to between −100 and 100 mV around VREAD = 0. Figure 4c shows the fast Fourier transform of the simulated processed signals (left) and the experimental signals (right). The grey line in both panels is the fast Fourier transform of each kernel, serving as a guideline for the predicted behaviour of each operation. We highlight that the experimental output of all three filters matches the simulated values as well as the prototype filter response fairly well. Altogether, large-scale arrays of FGFETs based on 2D materials could be used for other applications such as image processing and inference with artificial neural networks. The verification procedure is sketched below.
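The verification can be sketched in Python as follows; the tone frequencies, sample rate and moving-average kernel are illustrative assumptions rather than the filters used in the experiment.

```python
# Minimal sketch of the spectral verification of Fig. 4c: build a multi-tone
# input, apply a kernel, and compare the spectra of input and output.
# Frequencies, sample rate and the kernel are illustrative assumptions.
import numpy as np

fs = 1000.0                                   # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
tones = [10, 50, 200]                         # probe frequencies (Hz)
x = 0.1 * sum(np.sin(2 * np.pi * f * t) for f in tones) / len(tones)

h_lp = np.ones(8) / 8                         # toy low-pass (moving-average) kernel
y = np.convolve(x, h_lp, mode="same")

freqs = np.fft.rfftfreq(t.size, 1 / fs)
X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
for f in tones:
    k = np.argmin(np.abs(freqs - f))          # FFT bin closest to each tone
    print(f"{f} Hz: |X| = {X[k]:.1f}, |Y| = {Y[k]:.1f}")  # high tones attenuated
```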

Conclusions

We have reported the large-scale integration of 2D materials as the semiconducting channel in an in-memory processor. We demonstrated the reliability and reproducibility of our devices, both in terms of their electrical characteristics and the statistical similarity of the programming states under open-loop programming. The processor carries out vector–matrix multiplications, and we illustrated its functionality by performing discrete signal processing. Our approach could allow in-memory processors to reap the benefits of 2D materials and bring new functionality to edge devices for the Internet of Things.

Methods

Wafer-scale memory fabrication

The fabrication starts with a p-doped silicon substrate with a 270-nm-thick SiO2 insulating layer. The first metal layer and FGFET gates were fabricated by photolithography using an MLA150 advanced maskless aligner with a bilayer resist (0.4-µm-thick LOR 5A/1.1-µm-thick AZ 1512). The 2 nm/40 nm Cr/Pt gate metals were evaporated using an electron-beam evaporator under a high vacuum. After resist removal in dimethyl sulfoxide, deionized water and O2 plasma were used to further clean and activate the surface for HfO2 deposition. The 30-nm-thick HfO2 blocking oxide was deposited by thermal atomic layer deposition using TEMAH and water as precursors with the deposition chamber set at 200 °C. The 5 nm Pt floating gates were patterned by photolithography and deposited using the same process as described previously. With the same atomic layer deposition system, we deposited the 7-nm-thick HfO2 tunnel oxide layer using the same process as before. Next, vias were exposed using a single-layer 1.5-µm-thick ECI 3007 photoresist and etched by reactive ion etching with Cl2/BCl3 chemistry. After the transfer of MoS2 onto the substrate, the film was patterned by photolithography using a 2-µm-thick nLOF resist and etched by O2 plasma. Drain–source electrodes were patterned by photolithography, and 2 nm/60 nm Ti/Au was deposited by electron-beam evaporation. To increase the adhesion of the contacts and MoS2 to the substrate, a 200 °C annealing step was performed in a high vacuum. The devices have a width/length ratio of 49.5 μm/3.1 μm.

Device passivation

The fabricated device is first wire-bonded onto a 145-pin pin-grid-array chip carrier. The device is heated inside an Ar glovebox at 135 °C for 12 h, which removes the adsorbed water from the device surface. After in situ annealing in the glovebox, a lid is glued onto the chip carrier using a high-vacuum epoxy and cured in an Ar atmosphere. This protects the device from oxygen and water.

Transfer procedure

The MOCVD-grown material is first spin coated with PMMA A2 at 1,500 r.p.m. for 60 s and baked at 180 °C for 5 min. Next, we attach thermal release tape (release temperature, 135 °C) onto the MoS2 sample and detach the film from the sapphire in deionized water. After this, we dry the film, transfer it onto the patterned substrate and bake the stack at 55 °C for 1 h. We remove the thermal release tape by heating the sample on a hot plate at 130 °C. Next, we immerse the sample in an acetone bath to remove the tape's polymer residues. Finally, we transfer the wafer to an isopropanol bath and dry it in air.

MOCVD growth

Monolayer MoS2 was grown using the MOCVD method, with Mo(CO)6, Na2MoO4 and diethyl sulfide used as precursors and NaCl spin coated as a catalyst. A pre-annealed three-inch c-plane sapphire wafer with a small off-cut angle (<0.2°) was used as the growth substrate (UniversityWafer). The chemical vapour deposition reaction was performed in a home-built furnace system with a four-inch quartz-tube reactor and mass flow controllers connected to Ar, H2, O2 and the metal–organic precursors (Mo(CO)6 and diethyl sulfide). For the MoS2 crystal growth, the reactor was heated to 870 °C at ambient pressure for 20 min.

Electrical measurements

The electrical measurements were performed using a custom device interface board connected to a CompactRIO (cRIO-9056) running a real-time LabVIEW 2020 server. We installed the NI-9264 (16-channel analogue output), NI-9205 (32-channel analogue inputs) and NI-9403 (digital input/output) modules.