Towards altering sound frequency at will by a linear meta-layer with time-varying and quantized properties

Wave frequency is a critical parameter for applications ranging from human hearing and medical imaging to acoustic non-reciprocity and the quantum of energy in matter. Frequency alteration holds the promise of breaking limits imposed by physical laws such as Rayleigh's criterion and the Planck-Einstein relation. We introduce a linear mechanism that converts the wave frequency to any value at will by creating a digitally pre-defined, time-varying material property. The device is based on an electromagnetic diaphragm with a MOSFET-controlled shunt circuit. The measured acoustic impedance modulation ratio reaches 45, much higher than that of nonlinearity-based techniques. A significant portion of the incoming energy at the source frequency is scattered to sidebands. We demonstrate the conversion of audible sound to infrasound and to ultrasound, respectively, and of a monochromatic tone to white noise by a randomized MOSFET time sequence, raising the prospect of applications such as super-resolution imaging, deep sub-wavelength energy flow control, and encrypted underwater communication.

Temporal modulation materials have been attracting attention thanks to their ability to break wave reciprocity, i.e., the principle that waves traveling from point A to B are identical to waves propagating from B to A. The authors present a linear temporal modulation device with a giant acoustic impedance modulation ratio and the ability to linearly change wave frequency, operating in two quantized states as a shunt circuit is connected and disconnected by a MOSFET.

Frequency, the reciprocal of the oscillation period, is one of the fundamental parameters governing wave behavior and wave-matter interaction. Most laws of wave propagation and wave-matter interaction, such as Rayleigh's criterion, the Planck-Einstein energy-frequency relation, the diffraction law, and the mass-density law, are frequency-dependent. Altering frequency can break the limits currently imposed by the laws of physics for a given frequency condition, such as imaging resolution limits, energy capacity, and the propagation behavior of a wave. Conventional materials, including spatial modulation metamaterials such as photonic and sonic crystals [1][2][3][4] , have time-invariant physical properties and may be called static materials. They shape the wave front and change wavenumbers, but leave the wave frequency unaffected. Time-variant materials, or dynamic materials, can instead control waves in the time base, which offers more possibilities for wave manipulation. The pursuit of dynamic materials 5 normally begins with material nonlinearity, such as nonlinear optical crystals 6 and ultrasound contrast agents 7 . Nonlinearity produces higher harmonics of signals and offers super-resolution in optical 8,9 and ultrasonic 7,10,11 imaging, rectification and reciprocity breaking [12][13][14][15][16] , and directional radiation [16][17][18] . However, such desirable nonlinear effects require a level of excitation not readily available in daily life. Alternatively, active metamaterials with sensors offer further possibilities, such as tuning the effective material parameters 19 and shaping the frequency spectra of sound waves 20 . Recently, active nonlinear metamaterials have greatly lowered the excitation requirement 21 . However, these extra benefits of active control often come at the extra cost of a sensor or the extra complication of dealing with system instability.
On the other hand, nonlinearity converts waves only between discrete harmonic frequencies, and the time-variant properties of nonlinear materials are amplitude-dependent, limiting their scope of application. A linear mechanism of time-base conversion that is independent of the local wave amplitude and requires no sensor would offer far more freedom in wave manipulation, which is the motivation behind the current study.
Temporal modulation materials have been attracting researchers' attention and achieving great success in breaking reciprocity [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36] in a linear manner. Existing temporal modulation of materials involves two mechanisms: the moving medium (biasing) and wave modulation 34,35 . Biasing is similar to the Doppler effect 24,25 , which requires sufficient speed or momentum to obtain a significant frequency shift. Wave modulation, on the other hand, changes the local medium properties, such as the impedance, in a space-time sequence similar to a traveling wave [28][29][30][31][32][33] . Total frequency shift, and hence mode transition in a waveguide, is theoretically achieved over about 10 wavelengths of the incident wave 22,36 . These temporal materials share their mechanism with amplitude modulation (AM), which can be traced to Bell's work on the multiplex telephony system 37,38 . AM uses a time-varying circuit to mount the signal onto a carrier to produce a long-distance transmittable wave; the transmitted signal contains the difference and sum frequencies of the signal and the carrier. This formed the foundation of the telephone network and long-distance wireless radio, heralding a new era of communication. Purely temporal modulation materials 35 are expected to alter wave frequencies in a linear manner; however, such potential has not been fully materialized.
In this work, a linear temporal modulation device called the acoustic meta-layer (AML) is introduced, which converts the color, or rather the frequency in acoustics, of a monochromatic sound wave to another with two orders of magnitude higher efficiency than existing nonlinearity-based devices 7 . Being a programmable material, it allows us to change wave frequencies at will. Two special cases are demonstrated. The first is the conversion of audible sound to infrasound in experiment; the second is the shifting of audible sound to ultrasound in calculation, as the corresponding experiment is beyond the current reach of our lab. In perception, both infrasound and ultrasound are silent to humans. The former has a wavelength longer than 17 m and decays little in transmission; the latter has a wavelength shorter than 17 mm and hardly travels far. In addition, we demonstrate a randomized AML capable of dispersing a monochromatic sound into white noise of any prescribed frequency band. The dynamic response of the randomized AML is random in time and broadband in frequency content, contrasting with all conventional static and dynamic materials, and it has many potential applications. First, random modulation dilutes the time-signature of the incident sound, rendering its echo undetectable. Second, the broadband modulation widens the frequency scope of energy pumping and can be a building block for a parametric gain medium 39-41 for sound, offering better ultrasound and underwater imaging. Third, a random material may become an alternative tool in environmental noise control. For instance, tonal noise found in our living, working, and hospital environments is a nuisance at best and can lead to serious health issues 42 . By contrast, clinical trials show that random noise (white or pink) promotes sleep for neonates 43 and for patients in intensive care 44 and coronary care 45 units.

Results and discussion
Acoustic meta-layer architecture. The proposed AML is shown in Fig. 1a. It consists of the suspended diaphragm of a moving-coil loudspeaker, shunted by an analogue circuit. The coil is immersed in a permanent magnetic field and provides electromechanical coupling between the series shunt circuit and the diaphragm. Our previous work 46,47 demonstrates that a passive shunt circuit can significantly alter the system dynamic properties, including mass, stiffness, and damping. When the shunt circuit is connected, the extra acoustic impedance induced by the electromagnetic force is 46

ΔZ = (Bl)² / (A Z_e(ω)),  with  Z_e(ω) = R + iωL + 1/(iωC),

where Z_e(ω) is the electrical impedance of the circuit with net resistance R, inductance L, and capacitance C, Bl is the force factor, A is the cross-section area, and ω is the angular frequency. In this work, a metal-oxide-semiconductor field-effect transistor (MOSFET) is introduced to connect or disconnect the shunt circuit following a pre-defined time sequence, as shown in Fig. 1b. When the G terminal of the MOSFET, cf. Fig. 1a, is supplied with a bias voltage exceeding a threshold, V_g > V_0, the resistance between terminals D and S is 4 mΩ and the shunt acoustic impedance ΔZ is loaded onto the diaphragm. Otherwise, the circuit resistance is very high, R_off = 4400 Ω, and ΔZ is unloaded. The MOSFET state is described by the function g(t) = H(V_g(t) − V_0), where H is the Heaviside step function. Figure 1b illustrates g(t) when a periodic and a random gating voltage V_g(t) are applied, respectively. Figure 1c, d illustrates the working of the AML when V_g(t) follows harmonic and random patterns, respectively. As shown in Fig. 1c, when the state of the MOSFET is set by a periodic voltage of frequency f_m, the energy of the incident sound wave at the source frequency f_s is dispersed to the difference and sum frequencies, f_− = f_s − f_m and f_+ = f_s + f_m, with weaker sidebands at f_s ± (2n − 1)f_m. When the gating voltage sequence is band-limited random with f_m ∈ [f_1, f_2], we expect the harmonic source to be scattered to a random wave whose side frequencies cover the same linear bandwidth f_2 − f_1, as illustrated in Fig. 1d. The time sequence of the MOSFET is pre-defined and unrelated to the incident or transmitted waves, in contrast with traditional active control, which is derived from a sensor signal and is thus subject to stability constraints. In this sense, the AML is a robust frequency scatterer which does not radiate sound on its own. A tiny amount of voltage leakage from the MOSFET is always present, and its sound-radiation potential will be discussed with the experimental results.
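As a quick signal-level sketch (not a model of the device: a pure multiplication by the zero-mean switching waveform stands in for the impedance modulation, and the sampling rate is an arbitrary assumption), the sideband structure for the paper's 160 Hz / 141 Hz pair can be reproduced in a few lines:

```python
import numpy as np

# Signal-level sketch only (not the device model): gating a 160 Hz tone with a
# zero-mean 141 Hz square wave moves energy to the sidebands 19 Hz and 301 Hz.
rate = 8000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 4.0, 1.0 / rate)
f_s, f_m = 160.0, 141.0                       # source and modulation frequencies
source = np.sin(2 * np.pi * f_s * t)
g = (np.sin(2 * np.pi * f_m * t) > 0).astype(float)   # MOSFET state g(t)
out = source * (2 * g - 1)                    # modulation by the zero-bias square wave

spec = np.abs(np.fft.rfft(out)) / len(t)      # magnitude, A/2 per exact-bin tone
freqs = np.fft.rfftfreq(len(t), 1.0 / rate)

def peak_near(f0, half_width=5.0):
    """Largest spectral magnitude within half_width of f0."""
    band = (freqs > f0 - half_width) & (freqs < f0 + half_width)
    return spec[band].max()
```

Here `peak_near(19.0)` and `peak_near(301.0)` come out near 0.32 (the first-order sidebands carry amplitude 2/π each), while almost nothing remains at 160 Hz because the zero-mean square wave has no DC component; the physical AML retains a residual at f_s since its impedance modulation is not a pure multiplication.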
Theoretical considerations. The lumped-parameter governing equation for the motion of the diaphragm is

M dv/dt + (D + 2ρ₀c₀A) v + Kη = 2A p_I(t) − Bl I,   (1)

where v = dη/dt is the vibration velocity, η is the displacement, I is the electric current in the shunt circuit, M, D, and K are, respectively, the dynamic mass, damping, and stiffness of the diaphragm, A is the cross-section of the waveguide, p_I(t) is the incident wave pressure felt at the diaphragm surface, and ρ₀ and c₀ are the air density and the speed of sound, respectively. The term 2ρ₀c₀A accounts for the fluid loading on the downstream and upstream sides of the diaphragm when the waveguide is terminated by anechoic ends 48 . The governing equation for the shunt circuit is

L dI/dt + R(t) I + q_e/C = Bl v,  I = dq_e/dt,  R(t) = g(t) R_on + [1 − g(t)] R_off,   (2)

where q_e is the electric charge, R(t) is the instantaneous electric resistance, R_on is the total circuit resistance when the MOSFET is switched on (g = 1), and R_off is the very large resistance when the MOSFET is switched off. Note that R_off is treated as infinity in theory but is kept finite for the initial analysis. Writing R(t) = R̄ + (ΔR/2)(2g(t) − 1), with R̄ = (R_on + R_off)/2 and ΔR = R_on − R_off, and applying the Fourier transform to Eqs. (1) and (2) while keeping only the diaphragm velocity and the electric current as the primary variables gives

Z_m(ω) v̂(ω) = 2A p̂_I(ω) − Bl Î(ω),  Z̄_e(ω) Î(ω) + (ΔR/4π)(Î ∗ Ĝ)(ω) = Bl v̂(ω),   (3)

where Z_m(ω) = iωM + (D + 2ρ₀c₀A) + K/(iω), Z̄_e(ω) = iωL + R̄ + 1/(iωC), over-hats denote Fourier transforms, ∗ denotes convolution, and Ĝ(ω) is the Fourier transform of 2g(t) − 1 (a square wave with zero bias). The presence of the convolution term, Î ∗ Ĝ, is the essence of the modulation device. It makes the solution of Eq. (3) complicated, as the responses at one frequency are coupled to the material properties at all other frequencies. Preliminary analysis is nevertheless possible for simple modulations. For a periodic square wave 2g(t) − 1 with period 2π/ω_m, Ĝ(ω) takes the form of a delta comb with sinc-type weights,

Ĝ(ω) = Σ_{n = −∞}^{+∞} [2/(iπ(2n − 1))] δ(ω − (2n − 1)ω_m),

where δ is the Dirac delta function. Note that the convolution Î ∗ Ĝ shifts the angular frequency of the electric current Î by (2n − 1)ω_m.
The amplitude factor 1/(2n − 1) implies the dominance of the lowest (fundamental) orders, n = 0 and 1, and this is confirmed by the experimental results presented below. However, the convergence of the series involving (2n − 1)⁻¹ is likely to be slow, so a direct time-domain solution is used in the numerical study.
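The 1/(2n − 1) weighting of the odd harmonics, and the absence of even ones, can be checked numerically (a minimal sketch; the record length and modulation frequency are arbitrary choices):

```python
import numpy as np

# Check the 1/(2n-1) law for the harmonics of the zero-mean square wave 2g(t)-1.
N = 1 << 16
t = np.linspace(0.0, 1.0, N, endpoint=False)
f_m = 8.0                                     # modulation frequency, cycles per record
square = np.sign(np.sin(2 * np.pi * f_m * t))          # 2g(t) - 1
spec = np.abs(np.fft.rfft(square)) * 2 / N             # single-sided amplitudes

# Odd harmonics (2n-1)f_m carry amplitude (4/pi)/(2n-1); even ones vanish.
odd = [spec[int((2 * n - 1) * f_m)] for n in range(1, 5)]
even = spec[int(2 * f_m)]
```

`odd` comes out close to 4/π · [1, 1/3, 1/5, 1/7] ≈ [1.273, 0.424, 0.255, 0.182], while `even` is essentially zero: the lowest orders dominate, but the 1/(2n − 1) tail decays slowly, which is why a direct time-domain solution is preferred over truncating the series.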
Two factors in Î ∗ Ĝ determine the extent of frequency spread by the modulation. First, when g(t) follows a completely random time sequence, Ĝ(ω) becomes a constant for all frequencies. Second, the finite jump of the circuit resistance during MOSFET switching (over nanoseconds) produces an electric current impulse with spectral spill-over to all frequencies. In terms of acoustic impedance, the system jumps between the states of the bare diaphragm impedance and the shunted impedance, which differ by ΔZ = (Bl)²/(A Z_e), the electromagnetically induced acoustic impedance increment. The meta-layer may be said to operate on two quantized impedance states. Together with the randomized switching, the AML offers a giant modulation ratio (see the 'Modulation ratio' section).

Fig. 1 Schematic of the proposed acoustic meta-layer (AML) and the conceptual diagram of frequency conversion. a AML formed by a shunted loudspeaker cascading a MOSFET unit (see Supplementary Note 1). G, D, and S represent the gate, drain, and source ports of the MOSFET. The voltage supplied to G (V_g) determines the effective resistance between D and S. b Schematic of the MOSFET state function: g(t) = 1 for the switch-on state, when the gating voltage V_g > 2 V (see (a)), and g(t) = 0 for the switch-off state. c Schematic of harmonic scattering in the waveguide and the measurement system. Four waves are established in the waveguide: the incident (p_I), reflected (p_Ru), transmitted (p_T), and downstream reflected (p_Rd) waves. The downstream transmitted wave contains components at f_−, f_s, and f_+. d Schematic of the randomized AML with a random gating voltage, converting part of a tonal input to white noise.
Following these frequency-domain observations, we now return to the time-domain governing Eqs. (1) and (2), which can be numerically solved for the velocity response v(t). The transmitted and reflected sound pressures on the two sides of the diaphragm are 48

p_T(t) = ρ₀c₀ v(t),  p_R(t) = p_I(t) − ρ₀c₀ v(t).   (4)

Analysis is mainly conducted for the transmitted sound, p_T, for both harmonic and randomized modulations. The rate of energy scattering from the source frequency f_s, which may cover a band f_s ∈ [f_s1, f_s2], to all orders of sidebands is calculated from the diaphragm velocity spectrum v̂(f),

α_s = 1 − ∫_{f_s1}^{f_s2} |v̂(f)|² df / ∫_0^{∞} |v̂(f)|² df.

The variation of α_s with respect to various control parameters is analyzed following the presentation of the experimental results.
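This energy bookkeeping can be sketched as follows (the function, frequency grid, and amplitudes are invented for illustration; in the paper α_s is evaluated from the simulated or measured velocity spectrum):

```python
import numpy as np

def scattering_efficiency(v_hat, freqs, band):
    """alpha_s = 1 - (energy in the source band) / (total energy), evaluated
    on a discrete velocity spectrum v_hat (a sketch of the paper's alpha_s)."""
    energy = np.abs(v_hat) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 1.0 - energy[in_band].sum() / energy.sum()

# Invented example: residual tone at 160 Hz plus sidebands at 19 and 301 Hz.
freqs = np.arange(0.0, 501.0, 1.0)
v_hat = np.zeros_like(freqs, dtype=complex)
v_hat[160] = 1.0                 # residual source component
v_hat[19] = v_hat[301] = 0.66    # first-order sidebands
alpha_s = scattering_efficiency(v_hat, freqs, (155.0, 165.0))
```

With these made-up amplitudes alpha_s comes out near 0.47, roughly the regime reported for the measured device, where each sideband carries a bit over 40% of the residual tone's energy.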
To solve the coupled Eqs. (1) and (2) in the time domain, we use a mechanics-based normalization scheme in which time is measured by the in vacuo oscillation frequency of the diaphragm, √(K/M), and a normalized electric charge variable q is defined so that q has the dimension of mechanical displacement. The governing equations (1) and (2) then take the compact first-order form

dU/dt = −D_M U + F,   (7)

where U is the state vector containing η, dη/dt, q, and dq/dt, F is the excitation vector whose only non-zero entry is F = 2p_I(t)A/K, the incident wave pressure expressed in the displacement dimension, and D_M is the damping matrix consisting of four dimensionless parameters (Eq. (8)). Here, d_m = (D + 2ρ₀c₀A)/√(MK) represents the total mechanical loading, with both the diaphragm damping and the sound-radiation factor 2ρ₀c₀A included; d_e is an electrical damping coefficient; B_x is the magnetic coupling strength; and k_e is the quadratic ratio of the electrical resonance frequency, 1/√(LC), to the mechanical resonance frequency, √(K/M), i.e., k_e = M/(KLC), which may also be regarded as an electric spring constant. Equation (7) degenerates into a set of three equations when the capacitor is absent and there is no need to calculate the electric charge from the current, or into two equations for η and dη/dt when the MOSFET is switched off. The time-domain solution is easiest when it is separated into the MOSFET-on, R(t) = R_on, g(t) = 1, and MOSFET-off, R_off = ∞, g(t) = 0, states. Details of the time-domain solution for the state vector U are given in the 'Methods' section. The transmitted and reflected waves are then obtained by Eq. (4).
The purpose of the current study is to demonstrate the essential coupling effects rather than a full parametric analysis. To this end, we only allow the dimensionless mechanical loading, d_m, to vary. Physically, this can be achieved by modifying the mechanical damping of the diaphragm, D, as well as by modifying the acoustic boundary conditions. For instance, if the fluid medium (air) on the two sides of the diaphragm is substituted by a lighter gas such as helium, the radiation impedance ρ₀c₀A is greatly reduced. Likewise, an increased diaphragm mass or stiffness for a given surrounding fluid also reduces d_m, cf. Eq. (8). A simple parametric study of d_m is conducted following the presentation of the experimental results.
Modulation ratio. We first scrutinize the static acoustic impedance of the meta-layer in the MOSFET-on and MOSFET-off states using the impedance tube illustrated in Fig. 1c. The results are shown in Fig. 2. A modulation ratio is defined as the ratio of the system acoustic impedances in the shunt-on and shunt-off states,

α_m(ω) = |Z_on(ω)| / |Z_off(ω)|.

Figure 2a shows the damping and reactance of the AML in the MOSFET-on and MOSFET-off states, respectively, which are used to calculate the impedance ratio shown in Fig. 2b. The maximum ratio is found to be α_m = 45 at 135 Hz, while lower ratios extend over the entire frequency range. This peak α_m is at least two orders of magnitude higher than those deployed by the pioneering techniques in the literature: 0.14-0.21 for vibration and sound 33,34 , or 10⁻⁴-10⁻³ in optics 24,25 . The modulation mechanism of the AML is therefore described as a giant modulation, which will be desirable in the future applications described in the 'Introduction' and indeed beyond. For instance, one technique for achieving time-reversal-based holography is to create an instantaneous time mirror 49 by suddenly changing the global wave speed, leading to back-propagation of waves without using an antenna array on an enclosing boundary. A larger time disruption of the wave speed, or modulation ratio in the current context, gives more substantial back-propagation. Figure 2c shows that, when the MOSFET is switched on, the meta-layer changes from an acoustically soft state to an acoustically rigid state within the frequency range of 92-184 Hz, in which the transmittance changes fivefold, as shown in Fig. 2d. The sound passband due to structural resonance is transformed into a stopband due to the extremely large acoustic damping induced by the shunt circuit. The layer acts like an instant phase-change material. Therefore, when the 'on' and 'off' states of the MOSFET oscillate in time, the incident sound wave is scattered in the frequency spectrum.
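The two quantized impedance states and the resulting modulation ratio can be sketched numerically. All parameter values below are illustrative assumptions (not the measured values of the paper), and the shunt increment uses the standard shunted-loudspeaker form ΔZ = (Bl)²/(A Z_e):

```python
import numpy as np

# Illustrative sketch of the two quantized impedance states and the modulation
# ratio alpha_m(w) = |Z_on / Z_off|. Every parameter value here is an assumed
# placeholder, chosen only to place the resonances in the audible range.
Bl = 3.0                               # force factor, T*m (assumed)
A = 2.2e-3                             # cross-section area, m^2 (assumed)
R, L_sh, C_sh = 0.5, 1.0e-3, 2.0e-3    # shunt R (ohm), L (H), C (F) (assumed)
M, D, K = 5.0e-3, 0.3, 1.8e3           # diaphragm mass, damping, stiffness (assumed)

f = np.linspace(30.0, 500.0, 4000)
w = 2.0 * np.pi * f
Z_off = (1j * w * M + D + K / (1j * w)) / A      # diaphragm alone (shunt open)
Z_e = R + 1j * w * L_sh + 1.0 / (1j * w * C_sh)  # electrical impedance
dZ = Bl ** 2 / (A * Z_e)                         # shunt-induced increment
Z_on = Z_off + dZ                                # shunt closed
alpha_m = np.abs(Z_on / Z_off)
f_peak = f[np.argmax(alpha_m)]
```

With these made-up parameters the ratio peaks at several tens near the mechanical resonance, qualitatively reproducing the giant-modulation behavior; the measured peak of 45 at 135 Hz depends on the actual driver and circuit.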
Note that the gating voltage does not provide energy for the material property change, nor does the meta-layer radiate sound on its own, except for tiny sounds caused by the leakage current and voltage ripples of the MOSFET. Measured and simulated results are compared in Fig. 3c for the total transmitted sound. Discrepancies may be attributed to factors present in the experiment but not accounted for in the numerical simulation, such as the frequency dependence of the diaphragm damping and the electrical resistor, and the residual sound reflection from the finite-length anechoic wedges illustrated in Fig. 1.
For the transmitted waves, the ratios of the sound energy at the difference and sum frequencies to that at the signal frequency are 43.6% and 44.4%, respectively. The energy scattering efficiency is therefore α_s = 1 − 1/(1 + 0.436 + 0.444) ≈ 0.47. Simulations show that α_s increases significantly when d_m → 0; in fact, α_s = 0.9 is reached at d_m = 0.14. The relatively low scattering efficiency of the current test rig implies that the diaphragm is overloaded for the purpose of scattering energy from the signal frequency to the sidebands. A further parametric study shows that, when the electrical damping d_e is reduced from the experimental setting of d_e = 0.094 to d_e = 0 (Line 1, dotted), α_s increases over the whole range of d_m, but the increment is not significant. When d_e is increased to 0.3 (Line 2, thin solid) and further to 1.0 (Line 3, dashed), there is a general trend of decreasing α_s. This implies that reducing both the mechanical and the electrical damping increases α_s. Line 4 (dot-dashed) uses a different electric spring constant (k_e = 2) but the same d_e as the experiment; the resulting α_s is lower, confirming that the k_e value of the experimental setting is the better choice.
A separate question of technical interest is whether the scattering to the side frequency bands can be controlled to achieve a low-pass effect, in which the scattered energy lies mainly below the source frequency (f < f_s), or a high-pass effect (f > f_s). This question is discussed in detail in Supplementary Note 2. The conclusion is that such control is possible with multiple devices arranged in series, for instance. The control mechanism derives from the acoustic interference between devices at the sideband frequencies, which is tunable by the phase difference between the modulating time sequences g(t) fed to the different devices.
Conversion of audible sound to infrasound. Using the setup shown in Fig. 1c, we demonstrate the conversion of an audible sound to infrasound (< 20 Hz), which is imperceptible to human ears, by using a modulation frequency very close to the source frequency: f_s = 160 Hz and f_m = 141 Hz, generating f_− = 160 − 141 = 19 Hz. The other sideband frequency is f_+ = 160 + 141 = 301 Hz, which represents a pathway of frequency doubling towards ultrasound. The analysis below focuses on the infrasound. Figure 4a, b shows the incident and reflected waves in the upstream region. The incident wave is slightly distorted by the generated infrasound at 19 Hz, which cannot be fully absorbed by the anechoic wedge. The reflected wave is more distorted due to the superposition of the blocked field and the scattered field of the AML 48 . The downstream waves are decomposed into the right-traveling waves in Fig. 4c and the left-traveling waves in Fig. 4d, showing the dominance of the infrasound. Further decomposition into the three frequency components is given in Fig. 4e, f with a right-hand-side Y-axis, illustrating the relative amplitude of each component. The small high-frequency ripples in Fig. 4f confirm that the waves at f_s and f_+ are mostly absorbed by the downstream wedge while the infrasound bounces back.
More experimental results can be found in Supplementary Note 3, which shows that the meta-layer with the same circuit parameters is effective for broadband incident waves ranging from 40 to 640 Hz, a bandwidth of 4 octaves that is tunable by impedance design. Conversion of such broadband noise to imperceptible infrasound at 19 Hz implies that the giant modulation mechanism of the meta-layer can be further developed into an alternative technology for broadband low-frequency noise control, by converting low-frequency audible sound to infrasound.
Apart from being inaudible to human ears, infrasound can bend around obstacles and travel long distances in the atmosphere without significant decay, with a half-amplitude distance of about 100 km at 20 Hz. The demonstrated frequency conversion is therefore a potential method for long-distance wave energy transmission.
By scaling up the parameters of the meta-layer, our prediction using the time-domain model of Eq. (1) (see Supplementary Note 4) shows the conversion of an audible sound at f_s = 5 kHz to ultrasounds at f_− = 20 kHz and f_+ = 30 kHz when a modulation frequency of f_m = 25 kHz is used. This could become an alternative technology to break the spatial-resolution limit described by Rayleigh's criterion, and to overcome the conflict between penetration depth and spatial resolution in ultrasound imaging: low-frequency sound penetrates tissue well, but high frequency is needed for fine image resolution. To realize this, further effort is needed to fabricate a much smaller device that avoids diaphragm vibration modes, for example by using a MEMS speaker as the base for shunt modulation.
Random modulation. The voltage sequence for random modulation is obtained by setting the positive and negative values of a band-limited white signal to V_g = 6 V (g = 1) and V_g = 0 (g = 0), respectively, as illustrated in Fig. 5a.

[Fig. 4 caption, residual fragment: the setup follows Fig. 1c, with a photo in Supplementary Note 1; panels a-d share the Y-axis at the left, while the Y-axes of e and f are labeled at the right; all amplitudes are normalized by the incident pressure amplitude (7 Pa).]

Figure 5b is the actual trace of the transmitted wave covering the whole 60 seconds. Figure 5c is a zoomed-in view of the time window marked in Fig. 5b around the randomization onset near the 15th second, showing the transition of the output pressure from the modulation-off state to the modulation-on state. The incident sound has an amplitude of 4.56 Pa. Significant distortion by the AML occurs as soon as the modulation is activated. Figure 5d shows a typical chunk of the signal around the 17th second, decomposed into the residual source frequency (f_s = 135 Hz, 1.36 Pa) and the randomized sound pressure, which has an amplitude of 1.19 Pa. The latter represents some 43.4% of the total transmitted sound energy. Considering the use of a single AML unit, this percentage of energy transfer from the tonal sound to broadband noise is deemed efficient. Figure 5e-g share the same vertical coordinate for frequency. Figure 5e shows the spectra of the incident tone (dashed line, labeled f_s) and the modulation signal (solid line, labeled f_m). Figure 5f is the spectrogram of the measured transmitted sound, with the color scale covering the wide range of 20-100 dB. Sound pressures below 20 dB (0.2 mPa) are displayed with the color at 20 dB and treated as background noise. Time-segment (i) features a single spectral peak at f_s = 135 Hz, as expected.
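The thresholding step can be sketched as follows (the sampling rate, record length, and the 50-100 Hz modulation band are assumptions, the band chosen to be consistent with the reported sidebands):

```python
import numpy as np

# Build the random gating sequence: band-limit white noise, then threshold at
# zero so that V_g = 6 V (g = 1) for positive values and V_g = 0 (g = 0) for
# negative ones. Rate, duration, and band edges are illustrative assumptions.
rng = np.random.default_rng(0)
rate = 4000
n = 16 * rate                         # 16 s record
f1, f2 = 50.0, 100.0                  # modulation band f_m in [f1, f2], Hz

spec = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1.0 / rate)
spec[(freqs < f1) | (freqs > f2)] = 0.0   # brick-wall band limitation
x = np.fft.irfft(spec, n)                 # band-limited white signal

g = (x > 0).astype(float)                 # Heaviside thresholding
Vg = 6.0 * g                              # gating voltage fed to the MOSFET
```

The result is a two-level sequence that spends close to half its time in each state, with switching statistics set entirely by the chosen band; this is the pre-defined "key" that the device owner can store and replay.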
Segment (ii) shows a somewhat weakened incident wave at frequency f_s, plus two bright bands of finite width, f_− = 35-85 Hz and f_+ = 185-235 Hz, which correspond clearly to the spectrum of the transmitted wave shown in Fig. 5g. Segment (iii) has the modulation on but no incident sound. Zero output is expected here but, in reality, the gate leakage current of the MOSFET and voltage ripples caused by the imperfect electrical ground lead cause the AML to radiate a tiny amount of sound at around 28 dB. Segment (iv) contains only the electronic noise of the measurement system.
The most interesting point of this demonstration is that the output pressure is partially random when the random modulation is on, as shown in Segment (ii) of Fig. 5f. A random wave is fundamentally different from a harmonic one. First, it is unpredictable in the time domain, whereas harmonic waves are governed by deterministic dynamics. Specifically, in this study, as long as the modulation key g(t) is hidden, the randomized output waves cannot be decoded. With one unit of the randomized AML, the transmitted sound retains around 1 − α_s = 56.6% of its energy at the source frequency f_s. When multiple meta-layers are cascaded, they are expected to deplete the signature at the source frequency quickly, and the final product is a multiple convolution of the keys. In this sense, randomized AMLs provide encrypted sound waves, which can be useful for applications such as underwater communication. Meanwhile, from a physiological perspective, white noise (random sound) is generally sleep-promoting, as many of us experience with the soothing sound of rain. In contrast, harmonic sound, such as that from turbomachinery, leads to serious annoyance and insomnia. As low-frequency noise is hard to control with conventional static materials, the randomized AML may become a starting point for a different noise-control approach that converts detrimental sound into more acceptable or even beneficial sound. Another fundamental distinction is bandwidth: a harmonic sound wave has zero bandwidth, while a random sound occupies a finite bandwidth.

Conclusions
This study paves the way towards changing the frequency of a sound wave at will by modulation techniques. We now take a broad view of the AML operating in the MOSFET-on and MOSFET-off configurations, each with its own spatial eigenmode. The meta-layer is a time crystal introducing time discontinuities 34 that shift the frequency of waves, similar to the wavenumber change induced by a spatial discontinuity. First, we explored using the meta-layer to alter the sound frequency in a linear manner, which had been anticipated 34,35 but not previously realized. The efficiency of energy scattering in the frequency domain is two orders of magnitude higher than that of traditional nonlinear materials, such as ultrasound contrast agents. Initial parametric analysis points towards further increasing the rate of energy scattering by choosing a proper system mechanical loading. It is also shown that the use of multiple devices can confine the scattered sound to selected frequency components. By virtue of this, much of the research based on nonlinear harmonic generation, such as the acoustic rectifier 12,13 , can be revisited with linear counterparts. Two special cases, converting audible sound to infrasound and to ultrasound, are demonstrated in experiment and calculation, respectively.
The second demonstration of the present study is the randomized AML with pseudo-random properties, which distinguishes itself from all existing deterministic materials. By random modulation, the meta-layer assembles a set of random time-discontinuities into a single structure, leading to unpredictable responses in the time domain. A monochromatic incident wave is thereby scattered into a broadband wave in the frequency domain. Harmonic generation, using ultrasound contrast agents or other nonlinear materials, has achieved great success in medical imaging. As bandwidth normally represents the information volume of a wave as a signal carrier, the ability of the randomized AML to extend a wave of zero bandwidth to one of finite bandwidth may find application in future holographic imaging systems.
Other potential applications include tonal noise dispersion, linear acoustic diodes, encrypted underwater communication, misleading Doppler-based detection, parametric amplifiers, super-resolution imaging and holography, and many others yet to be put forward.

Experiments.
A commercially available moving-coil loudspeaker is used as the base on which to build the meta-layer. The moving-coil diaphragm normally has a DC resistance of 4-8 Ω, which is too large for the meta-layer; a negative impedance converter is used to reduce the DC resistance to a desired positive value. A MOSFET controlled by a gate voltage is used to connect the coil and the shunt circuit. The meta-layer is clamped in the impedance tube, dividing it into upstream and downstream regions. A side-branch sound source is mounted in the upstream duct wall. Two 3.5-m-long wedges form the two ends of the impedance tube to avoid excessive end reflections. The wedges are long enough for most audible frequencies but inadequate for infrasound. Due to the reflection by the meta-layer, standing waves form upstream; these are decomposed into the incident and reflected waves from the signals collected by a microphone pair (GRAS, 8 cm apart) following established procedures 50 . Downstream, two more microphones are used to decompose the waves and monitor the downstream reflection coefficient. The full diagram of the meta-layer device and the impedance tube measurement system can be found in Supplementary Note 1.
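The two-microphone decomposition can be illustrated with a minimal frequency-domain sketch (a standard plane-wave method; the function name and the synthetic amplitudes are assumptions, not the paper's processing code):

```python
import numpy as np

# Sketch of the standard two-microphone plane-wave decomposition used in
# impedance tubes: solve for the incident (+x) and reflected (-x) complex
# amplitudes from two microphone spectra at one frequency.
def decompose(P1, P2, x1, x2, f, c0=343.0):
    """Return (incident, reflected) complex amplitudes at frequency f
    from complex pressures P1, P2 measured at positions x1, x2 (metres)."""
    k = 2.0 * np.pi * f / c0
    M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    A_inc, B_ref = np.linalg.solve(M, np.array([P1, P2]))
    return A_inc, B_ref

# Round-trip check with a synthetic 160 Hz standing wave, mics 8 cm apart:
f0, c0 = 160.0, 343.0
k0 = 2.0 * np.pi * f0 / c0
A0, B0 = 1.0, 0.35 * np.exp(1j * 0.8)     # assumed incident/reflected amplitudes
x1, x2 = 0.0, 0.08
P1 = A0 * np.exp(-1j * k0 * x1) + B0 * np.exp(1j * k0 * x1)
P2 = A0 * np.exp(-1j * k0 * x2) + B0 * np.exp(1j * k0 * x2)
A_rec, B_rec = decompose(P1, P2, x1, x2, f0)
```

The 2x2 system becomes ill-conditioned when k(x2 − x1) approaches a multiple of π, which is why the microphone spacing (8 cm here) bounds the usable frequency range of the decomposition.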
The mechanical and electrical parameters of the diaphragm in Eqs. (1) and (2) are obtained by fitting the acoustic impedance of the diaphragm shown in Fig. 2a for the frequency range of 30-500 Hz.
Time-domain solution for a single device. Equation (7) can be solved for the diaphragm vibration and the shunt circuit response in the time domain. This can be done with the MATLAB ODE45 solver if the resistance function R(t) is smoothed into a continuous step at the switching events, but the time step then becomes extremely small around each switching. Alternatively, one can treat the switching as a sudden event with R_off = ∞ and solve the equations for the MOSFET-on and MOSFET-off states separately. The latter approach is adopted and described below.
The governing equations during the MOSFET-on state (g = 1) are given in Eq. (7) with four elements in the state vector U. This reduces to two elements during the MOSFET-off state (g = 0), whose governing equations are purely mechanical and are given in Eq. (10). The electric current vanishes at the precise moment of switching, while all other variables in U remain unchanged. The electric charge is held constant during the entire MOSFET-off period, and the values of dq/dt = 0 and q are given as initial values for the four-element state vector U in the next MOSFET-on period.
For both states, time-marching from time step t_n to t_{n+1} is given by the usual solution of the first-order differential equations, Eq. (7) or (10), where the excitation vector F has the incident pressure term as its only non-zero entry. The matrix exponential may be evaluated by eigenvalue decomposition, as shown in Eq. (13). Here, Λ is the diagonal matrix containing the four eigenvalues of the damping matrix, and [e^{±Λ(t−t_n)}] is the diagonal matrix whose j-th entry is calculated from the j-th eigenvalue Λ_j as exp(±Λ_j(t − t_n)). For a real-valued problem, we expect the eigenvalues to emerge as conjugate pairs representing the eigen-oscillations of the diaphragm coupled to the shunt circuit. The columns of matrix H are the corresponding eigenvectors. When the MOSFET changes from the on-state to the off-state, the physical event is rapid, with a time constant of L/R_off, which is typically less than a microsecond and is therefore considered immediate. During the switch-off event, the charge q is held constant but the current is reset as dq/dt = 0. The electric energy consumed by the resistor R_off in the rapid switch-off process is (1/2)LI², where I is the electric current immediately before switch-off.
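The eigenvalue-decomposition evaluation of the matrix exponential can be sketched in a few lines. Here a 2×2 damped-oscillator state matrix stands in for the paper's 4×4 system; the function name and matrix values are illustrative assumptions:

```python
import numpy as np

def expm_eig(A, dt):
    """Evaluate exp(A*dt) via the decomposition A = H Λ H^{-1}.
    Assumes A is diagonalizable; for a real A the eigenvalues come in
    conjugate pairs, so the imaginary residue of the product is noise."""
    lam, H = np.linalg.eig(A)        # eigenvalues Λ_j and eigenvector matrix H
    return ((H * np.exp(lam * dt)) @ np.linalg.inv(H)).real

# damped-oscillator state matrix for [x, v], a stand-in for the 4x4 system
A = np.array([[0.0, 1.0],
              [-2.0e6, -50.0]])
Phi = expm_eig(A, 1e-4)              # one-step state-transition matrix
```

A quick sanity check of the construction is the semigroup property, expm_eig(A, 2e-4) ≈ Phi @ Phi, and expm_eig(A, 0) equal to the identity.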
Calculation for multiple devices in series. To calculate the scattering by AMLs arranged in series, the above solution procedure for a single device has to be extended to cover the wave propagation between devices separated by a finite distance. Two approaches are possible. In the first, the wave equations are discretized in the time domain. The sound pressure and its spatial gradient (or velocity) at all nodes are the unknowns to be resolved simultaneously with the vibration and electrical variables in Eq. (7). For small inter-device distances, the nodes are not too many and the matrix size of the coupled system is manageable (say, below 50 × 50). The method of eigenvalue decomposition shown in Eq. (13) can still be deployed. Alternatively, one can treat the inter-device standing wave as a system of two waves with two unknown amplitudes, similar to the so-called wave-element method, taking advantage of the non-dispersive nature of sound waves in this configuration. The vibration of each device is solved separately, using the radiation and reflection by neighboring devices as the excitation, i.e., the term F in Eqs. (7) and (10). This approach is described below, and the results for multiple-device studies are given in Supplementary Note 2. Figure 6 illustrates multiple devices in series. Figure 6a shows the four waves acting on a single layer, two arriving at and two radiating from the device. Figure 6b depicts a three-layer scenario; Fig. 6a is exactly what happens to the middle meta-layer. Some waves are absent for the left- and right-hand-side devices, which are treated as special cases of Fig. 6a. If two neighboring meta-layers are separated by a distance ΔL, as labeled in Fig. 6b, the same modulation frequency f_m can be used with a fixed MOSFET switching time difference, Δτ.
The timing of modulation sets the clock for the sideband frequency components of the scattered wave field but not for the source frequency; the latter has no fixed phase relation with the modulation since f_s ≠ f_m. The sideband components do depend on Δτ, or the modulation phase difference ω_mΔτ, where ω_m = 2πf_m.
In what follows, we first consider a two-device system, denoted in the formulation by subscripts 'a' and 'b'; the formulation for three or more devices follows naturally. The equations describing the downstream device 'b' are identical to the single-device equations given in Eqs. (1) and (2), except that the incident wave felt by the diaphragm, p_I(t) in Eq. (1), is no longer the incident sound from the far upstream but the sound radiated by the upstream device 'a', hence ρ_0c_0v_a(t − ΔL/c_0), where ΔL/c_0 is the time delay needed for the wave to reach the downstream device. The time of sound radiation, t − ΔL/c_0, is also called the retarded time in the acoustic literature. Note that ρ_0c_0v_a(t − ΔL/c_0) accounts only for the first arrival of the radiation by device 'a'. Once radiated, further reverberation between the two devices impacts both 'a' and 'b'; such subsequent reverberation is found by treating the devices as rigid walls, following the mathematical principle of linear superposition. Subsequent arrivals of reverberation at each device take place at a fixed time interval of 2ΔL/c_0. For coding convenience, the time-marching step size can be chosen such that the inter-device distance ΔL covers an integer number of time steps, N_0 = F_sΔL/c_0, where F_s is the sampling rate and 1/F_s is the time step size. The first arrival of waves radiated from the neighboring device takes N_0 time steps, while the retarded time for the reaction on the radiator itself is 2N_0 time steps. Denoting the current discrete time step as t_j = j/F_s, where j is an integer, the velocity v_a(t_j − ΔL/c_0) may be rewritten in discrete form as v_a(j − N_0); the full reverberation effect on device 'b' by this velocity is the sum of all past radiations arriving at device 'b' at the same current time index j, hence ∑_{J=1,3,5,…} v_a(j − JN_0), where J may be called the echo index when J > 1.
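The odd-index echo summation can be coded directly from a stored velocity history. A minimal sketch, with an illustrative spacing and sampling rate chosen so that N_0 is an integer (the function name and numbers are assumptions for demonstration):

```python
import numpy as np

c0 = 343.0                    # sound speed [m/s]
rho0_c0 = 413.0               # characteristic impedance of air [Pa s/m]
dL = 0.343                    # inter-device distance [m] (illustrative)
Fs = 48000                    # sampling rate [Hz]
N0 = round(Fs * dL / c0)      # delay in samples; here exactly 48

def reverb_on_b(v_a, j, N0=N0):
    """Pressure on device 'b' at time index j from all past radiations of 'a':
    the first arrival J = 1 plus odd echo indices J = 3, 5, ..., truncated
    when the retarded index falls before the start of the record."""
    total, J = 0.0, 1
    while j - J * N0 >= 0:
        total += v_a[j - J * N0]
        J += 2
    return rho0_c0 * total

# impulse check: a unit velocity sample at step 0 reaches 'b' at j = N0, 3*N0, ...
v_a = np.zeros(5 * N0 + 1)
v_a[0] = 1.0
```

With the impulse history above, the sum is non-zero only at odd multiples of N_0, which is exactly the alternating arrival pattern described in the text (odd echoes strike the neighbor, even echoes return to the radiator).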
Note that the upper limit for J is truncated by the initial time when the far-upstream incident wave first strikes device 'a'. Likewise, the echoes coming back to device 'a' itself are calculated in the same manner, except that the summation is taken over the even values J = 2, 4, 6, …. Finally, the incident waves arriving at devices 'a' and 'b' at the current time step j are assembled from these summations. Extension to configurations with N_D > 2 is straightforward and is not repeated here. Note that the effects of the immediate radiation pressure, J = 0, from v_a on device 'a' and from v_b on device 'b', are already accounted for by the radiation impedance, which is half of the term 2ρ_0c_0A in Eq. (1). Note also that the initial impact on device 'b' from device 'a', the term with index J = 1, or ρ_0c_0v_a(t − ΔL/c_0), carries power and excites device 'b' to vibrate. Both J = 0 and J = 1 represent the initial radiation and should not be called echoes. In this wave-decomposition scheme, subsequent echoes with J > 1 occur between rigid walls and hence contribute mainly to the reactive part of the inter-device coupling. For a short inter-device distance ΔL, the effect of subsequent echoes is similar to that of a spring stiffness. The essence of the inter-device coupling may be captured by considering J = 0, 1 only. Numerical experiments with such simplified modeling show that the results for the frequency up- and down-conversion are very similar to those obtained with the full echo-summation model (see Supplementary Note 2).
The resulting system of equations for the displacements of the two diaphragms and the two shunt circuits is mainly characterized by time delays, for which the characteristic (eigenvalue) equation is rather nonlinear even if the shunt circuits operate without the MOSFET. A good choice of solution is the time-domain procedure described for the single device. At each time step, the dynamics of each device is solved using its own system properties, and the inter-device coupling is implemented through the excitation term, the array F in Eq. (7) or (10), for the retarded time indices j − JN_0, J > 0, which are known at the current time-marching step j.

Fig. 6 Schematic diagram of the multiple-layer configuration for idle wave suppression. a Waves coupled to one layer. For each layer, the excitation consists of the incident wave and the waves reflected from downstream, and its vibration radiates waves both downstream and upstream. b Multiple meta-layers in the one-dimensional waveguide, where a fixed time difference can be introduced between modulation signals of the same waveform applied to different meta-layers. ΔL is the distance between two neighboring layers.