Abstract
Neurons convert external stimuli into action potentials, or spikes, and encode the contained information into the biological nervous system. Despite the complexity of neurons and the synaptic interactions in between, rate models are often adopted to describe neural encoding with modest success. However, it is not clear whether the firing rate, the reciprocal of the time interval between spikes, is sufficient to capture the essential features of the neuronal dynamics. Going beyond the usual relaxation dynamics in Ginzburg-Landau theory for statistical systems, we propose that neural activities can be captured by the U(1) dynamics, integrating the action potential and the “phase” of the neuron together. The gain function of the Hodgkin-Huxley neuron and the corresponding dynamical phase transitions can be described within the U(1) neuron framework. In addition, the phase dependence of the synaptic interactions is illustrated and the mapping to the Kinouchi-Copelli neuron is established. It suggests that the U(1) neuron is the minimal model for single-neuron activities and serves as the building block of the neuronal network for information processing.
Introduction
Human beings rely on their nervous systems to detect external stimuli and respond appropriately. Neurons are the fundamental units in the nervous system and deserve careful and thorough characterizations of their responses to external stimuli^{1,2,3,4}. However, because neurons display considerable diversity in morphological and physiological properties, it is rather challenging to pin down the essential degrees of freedom even when studying the single-neuron dynamics. While neural encoding and decoding^{4} are not fully understood yet, the action potentials, or spikes, of the neurons upon external stimuli are the apparent means to pass information onto the nervous system. In consequence, the firing rate of the neuronal spiking is often used for data analysis and modeling^{5,6,7,8,9}. The firing-rate models are appealing due to their simplicity and accessibility for numerical simulations.
Although the firing-rate models are helpful descriptions for neural circuits, it is not clear whether the neuronal spiking alone is sufficient to capture the essential features of neuronal activities. At the microscopic scale, the electric activities of a single neuron arise from a variety of ionic flows passing through the relevant ion channels embedded in the membrane of the nerve cell^{10,11}. The conductance-based approach, such as the Hodgkin-Huxley model^{12}, provides an effective description for the emergence of action potentials by including the gating variables of the ion channels involved^{2}. Above some current threshold, the neuronal spikes start to appear. Hodgkin proposed to classify neurons into type I or type II depending on whether the firing rate changes continuously or discontinuously above the current threshold^{13}. While the conductance-based model with ion-channel dynamics explains the emergence of the neuronal spikes and provides a better description of biological details, it blurs the priority of various degrees of freedom in a single neuron, rendering a clear understanding of neuronal dynamics intractable.
The dynamical transitions in a single neuron above the current threshold pose another challenge. Both the biologically based models, such as the Hodgkin-Huxley model and its generalizations, and the reduced neuron models, including the Ermentrout-Kopell model^{14}, the FitzHugh-Nagumo model^{15}, the Izhikevich model^{16,17} and so on, exhibit rich types of dynamical phase transitions above the firing threshold. There are three major types of dynamical phase transitions found in single-neuron dynamics: saddle-node on invariant circle (SNIC), supercritical Hopf bifurcation and subcritical Hopf bifurcation^{2,3}. Ermentrout^{18} showed that type I neurons undergo a SNIC dynamical transition at the threshold, while Izhikevich^{3} pointed out that type II neurons may go through all three different bifurcations. It is therefore desirable to build a theoretical framework that captures the essential degrees of freedom for spiking neurons and incorporates all types of dynamical phase transitions systematically.
In this Article, we propose that the essential degrees of freedom for a spiking neuron are the membrane potential and the temporal sequence. It is rather remarkable that both can be integrated into a unified theoretical framework described by a single complex dynamical variable, from which the U(1) dynamics emerges naturally. The real part of the complex dynamical variable represents the potential of the neuron and the phase describes the temporal sequence during the firing process. When describing the neuronal dynamics of the complex dynamical variable, our U(1) neuron model not only reproduces the action potentials of the Hodgkin-Huxley neuron but also captures the gain function of the firing rate in response to the external current.
It is known that the classification of spiking neurons is closely related to the bifurcation of neuronal dynamics. The gain function of the type I neuron is continuous at the current threshold while that of the type II neuron exhibits a discontinuous jump at the threshold. The U(1) neuron described by the complex variable provides a natural explanation for the classification. The bifurcations on the complex plane, either in the radial or phase directions, lead to various transitions among the resting, excitable, and firing states of a single neuron. In short, the U(1) neuron not only captures the single-neuron activity upon external stimuli, but also provides a coherent understanding of the dynamical phase transitions between different types of neuronal activities.
The major impact of the U(1) neuron is not to provide a realistic description of a spiking neuron (although it can be done, as shown in the later paragraphs), just as the Fermi liquid theory does not aim to provide a precise quantitative description of metals. The key is to capture the essential features in neuronal dynamics so that model building for different purposes can be facilitated with these ingredients. For instance, within the U(1) neuron description, we find the phase dynamics of spiking neurons is nonuniform during the firing process and the firing rate is thus dictated by the bottleneck (the phase regime with smaller angular velocity). In addition, it is known that the neuron reacts differently when stimulated at different points of the firing process. Going beyond the usual Kuramoto-like interactions, the U(1) neuron framework provides a systematic approach to describe the phase dependence of the synaptic interactions on the presynaptic and postsynaptic neurons. With the phase dependence in mind, the refractory effect can be incorporated seamlessly into the U(1) neuron framework. In fact, we show that the Kinouchi-Copelli neuronal network is equivalent to the discrete version of the U(1) neuronal network and that the spontaneous asynchronous firing (SAF) state, crucial for information processing, can be realized when the synaptic strength is strong enough.
The remainder of the paper is organized as follows. In Section II, we first compare the artificial and biological neuronal networks and point out the importance of neuronal dynamics. In Section III, we discuss the mode-locking phenomena in biological neurons. In Section IV, we go beyond the usual Ginzburg-Landau theory and construct the theoretical foundation for the U(1) neuron. In Section V, we demonstrate how the Hodgkin-Huxley neuron can be described within the theoretical framework of the U(1) neuron. In Section VI, we reveal the importance of phase dependence in the synaptic interactions and establish the equivalence of the Kinouchi-Copelli neuron and the discrete version of the U(1) neuron. Finally, we extend the single-neuron approach to neuronal networks and show that the spontaneous asynchronous firing phase, beneficial for information processing, emerges in the Kinouchi-Copelli neuronal network.
Artificial and biological neurons
We first briefly explain the difference between artificial and biological neurons as shown in Fig. 1. In biological nervous systems, recurrent neuronal networks are often found, while the artificial neural networks used in deep learning^{19,20} usually belong to the feedforward type with well-defined input, output and hidden layers. Despite the difference between the network structures, the neuronal dynamics within the rate-model framework is captured by the set of coupled nonlinear differential equations,

$$\tau \frac{dA_n}{dt} = -A_n + G\Big( I_n + \sum_m W_{nm} A_m \Big),$$
where \(A_n\) denotes the firing rate of the single neuron or the activity of the neuronal population at the nth node, \(I_n\) is the external current injected into the nth neuron and \(W_{nm}\) is the synaptic weight from the mth neuron to the nth neuron. The relaxation time \(\tau\) is the typical time scale for a single neuron returning to its resting state when the external stimulus is turned off. The gain function G relates the firing rate with the stimuli, including the external current and the synaptic currents from connected neurons. The variety of individual neurons is ignored here for simplicity but can be included without technical difficulty.
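As a minimal numerical sketch, the relaxation dynamics above can be integrated with a forward-Euler step; the three-neuron network, the sigmoid gain function and the weights below are hypothetical choices for illustration only:

```python
import numpy as np

def simulate_rate_network(W, I, tau=10.0, dt=0.1, steps=5000):
    """Forward-Euler integration of tau dA/dt = -A + G(I + W A)."""
    G = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid gain function
    A = np.zeros(len(I))                      # start from quiescence
    for _ in range(steps):
        A += (dt / tau) * (-A + G(I + W @ A))
    return A

# Hypothetical three-neuron recurrent network with arbitrary weights.
W = np.array([[ 0.0, 0.5, -0.3],
              [ 0.4, 0.0,  0.2],
              [-0.1, 0.6,  0.0]])
I = np.array([0.5, -0.2, 0.1])
A = simulate_rate_network(W, I)
```

After a few tens of relaxation times the activities settle onto the stationary solution discussed next, where each \(A_n\) satisfies \(A_n = G(I_n + \sum_m W_{nm} A_m)\).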
Even with the simple relaxation dynamics, the coupled nonlinear differential equations are already extremely complicated. Because the synaptic dynamics (how \(W_{nm}\) evolves with time) is typically much slower than the neuronal dynamics, the stationary solution plays a significant role in some cases,

$$A_n = G\Big( I_n + \sum_m W_{nm} A_m \Big).$$
Note that the true dynamics drops out completely in the above relations. In the feedforward network, these are exactly the input-output relations commonly used in artificial neural networks. The synaptic weight \(W_{nm}\) can be adjusted by employing an appropriate algorithm to minimize the cost function, but the evolution of \(W_{nm}\) over different epochs does not represent the true dynamics of the neuronal network.
The deep neural network^{19} has enjoyed great success in recent years and made strong impacts on many areas of science and technology. However, as explained in the previous paragraph, it captures slower processes such as learning but does not include the reactive information processing at shorter time scales. If the neuronal dynamics is properly included, would the neuronal network with both types of dynamics exhibit a different class of intelligence? The first step toward answering this important question is to capture the essential features in neuronal dynamics before constructing networks with complicated structures. It will become evident later that the activity (firing rate) \(A_n\) is insufficient to describe the dynamics of a single neuron and more degrees of freedom must be included to account for proper synaptic interactions.
Modelocking in a single neuron
Mode locking^{21,22,23} is a common phenomenon in physical and biological systems^{24,25,26,27,28} with nonlinear dynamics and may play an important role in neural information processing. In a wide variety of neuron models, including the Hodgkin-Huxley model, the FitzHugh-Nagumo model, the Izhikevich model and some integrate-and-fire models, the firing rate \(\nu\) is locked to integer multiples of the oscillatory frequency \(\nu _{\mathrm{ac}}\) of the external ac current. As shown in Fig. 2, the Hodgkin-Huxley neuron shows robust mode-locking behavior in the presence of the time-dependent current stimulus,

$$I(t) = I + I_{\mathrm{ac}} \sin (2\pi \nu _{\mathrm{ac}} t),$$
where I and \(I_{\mathrm{ac}}\) represent the strengths of the dc and ac currents respectively. It is rather remarkable that, even with a moderate \(I_{\mathrm{ac}}\), the gain function of the Hodgkin-Huxley neuron changes drastically and robust firing-rate plateaus appear with \(\nu = n \nu _{\mathrm{ac}}\), where \(n=0,1,2,\cdots\). The action potentials on the firing-rate plateaus indicate clear mode-locking behaviors as shown in Fig. 2b.
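As an illustrative sketch (not the simulation behind Fig. 2), the Hodgkin-Huxley neuron can be integrated directly with the standard parameter set (\(C=1\), \(g_{\mathrm{Na}}=120\), \(g_K=36\), \(g_L=0.3\), \(E_{\mathrm{Na}}=50\), \(E_K=-77\), \(E_L=-54.387\) in the usual units); counting upward crossings of 0 mV yields the spike count under the combined dc and ac drive:

```python
import numpy as np

def hh_firing_count(I_dc, I_ac=0.0, nu_ac=0.0, T=1000.0, dt=0.01):
    """Euler integration of the standard Hodgkin-Huxley neuron under
    I(t) = I_dc + I_ac*sin(2*pi*nu_ac*t); returns the spike count.
    Times in ms, currents in uA/cm^2, nu_ac in Hz."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # near-resting initial condition
    spikes, above = 0, False
    for step in range(int(T / dt)):
        t = step * dt
        am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
        an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
        I = I_dc + I_ac * np.sin(2.0 * np.pi * nu_ac * t / 1000.0)
        dV = (I - gNa * m**3 * h * (V - ENa)
                - gK * n**4 * (V - EK) - gL * (V - EL)) / C
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        V += dt * dV
        if V > 0.0 and not above:                # upward crossing of 0 mV
            spikes += 1
        above = V > 0.0
    return spikes
```

Sweeping `I_dc` with a finite `I_ac` and a chosen `nu_ac` (illustrative values, not those of Fig. 2) reproduces the plateau structure \(\nu = n\nu_{\mathrm{ac}}\) qualitatively.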
The mode-locking phenomena have been known in the neuroscience community for quite a long time, but their deeper implications seem to have been neglected. First of all, the mode-locking phenomena provide a stable method to transfer information between neurons in the presence of stochastic noises. Furthermore, the mode-locking phenomena emerge from the underlying nonlinear phase dynamics, or the so-called U(1) dynamics.
To unveil the underlying phase dynamics, we introduce a complex dynamical variable to describe the neuronal dynamics,

$$z(t) = r(t)\, e^{i \varphi (t)},$$
where r(t) denotes the firing amplitude and \(\varphi (t)\) represents the phase during the firing process. We propose that the essential degrees of freedom for a spiking neuron are the membrane potential \(V(t) = r(t) \cos \varphi (t)\) and the phase \(\varphi (t)\), as shown in Fig. 3. The phases \(\varphi =0, \pi\) correspond to the potential maximum and minimum respectively. When a spiking neuron completes a full firing cycle, it can be viewed as a winding process of \(\Delta \varphi = 2\pi\) in the phase dynamics.
In general, the angular velocity \(\Omega (\varphi )\) is not constant and depends on the phase,

$$\frac{d\varphi}{dt} = \Omega (\varphi ).$$
As an illustrative example, we extract the nonuniform phase dynamics of the Hodgkin-Huxley neuron within our U(1) neuron framework. The phase dynamics reveals many interesting features, as shown in Fig. 4. First of all, the angular velocity \(\Omega (\varphi )\) is nonuniform, showing a bottleneck (small angular velocity) starting around \(\varphi =\pi\) and a whirlwind (large angular velocity) slightly below \(\varphi = 0\). A nontrivial correlation between the membrane potential V(t) and the phase \(\varphi (t)\) is thus established: the phase dynamics is fast near the potential maximum while it slows down near the potential minimum.
Upon changing the external current stimulus, the overall shape of \(\Omega (\varphi )\) remains more or less the same, indicating that the nonuniform phase dynamics found here is an intrinsic property of the Hodgkin-Huxley neuron. Zooming into the finer differences caused by different current stimuli, a larger current increases the angular velocity slightly near the bottleneck regime while slowing down the phase dynamics around the whirlwind regime. This means that an increase of the injected current suppresses the nonuniformity of the angular velocity.
The firing rate \(\nu\) can be determined from the phase dynamics as well. Because the complete firing cycle corresponds to a \(2 \pi\) phase winding, the firing rate of a spiking neuron is

$$\nu = \frac{1}{\Delta t_f}, \qquad \Delta t_f = \int_0^{2\pi} \frac{d\varphi}{\Omega (\varphi )},$$
where \(\Delta t_f\) is the time duration between adjacent firing events. From the above rate-phase relation, it is clear that the firing rate is dominated by the angular velocity in the bottleneck regime. Thus, a slight increase of \(\Omega (\varphi )\) in the bottleneck gives rise to a significant upsurge in the firing rate \(\nu\). On the other hand, the relatively large changes in the whirlwind regime for different current stimuli are irrelevant to the firing rate.
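The rate-phase relation can be verified numerically. Assuming a hypothetical angular-velocity profile \(\Omega(\varphi)=\omega_0+a\cos\varphi\) (bottleneck at \(\varphi=\pi\), whirlwind at \(\varphi=0\) for \(a>0\)), for which the period is known in closed form as \(\Delta t_f = 2\pi/\sqrt{\omega_0^2-a^2}\), one can also check that a small boost of \(\Omega\) in the bottleneck raises \(\nu\) far more than the same boost in the whirlwind:

```python
import numpy as np

def firing_rate(omega, n_grid=200000):
    """nu = 1 / Delta t_f with Delta t_f = integral of dphi / Omega(phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    dphi = 2.0 * np.pi / n_grid
    return 1.0 / np.sum(dphi / omega(phi))

omega0, a = 1.0, 0.9
nu = firing_rate(lambda p: omega0 + a * np.cos(p))
nu_exact = np.sqrt(omega0**2 - a**2) / (2.0 * np.pi)

# A small localized boost of Omega, centered either at the bottleneck
# (phi = pi) or at the whirlwind (phi = 0).
bump = lambda p, c: 0.05 * np.exp(-4.0 * (1.0 - np.cos(p - c)))
nu_bottleneck = firing_rate(lambda p: omega0 + a * np.cos(p) + bump(p, np.pi))
nu_whirlwind  = firing_rate(lambda p: omega0 + a * np.cos(p) + bump(p, 0.0))
```

The same perturbation changes the firing rate by orders of magnitude more when applied in the bottleneck, which is the statement made in the text.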
The nonuniform phase dynamics revealed by the U(1) neuron framework presents a natural explanation for the observed mode-locking phenomena. It is worth emphasizing that a uniform angular velocity \(\Omega (\varphi ) = \omega\) can be gauged away by redefining the complex dynamical variable \(z(t) \rightarrow z(t) \exp (i\omega t)\), but the phase-dependent angular velocity \(\Omega (\varphi )\) is an intrinsic property of the neuron and cannot be gauged away. The simple yet stimulating findings presented in Fig. 4 encourage us to go beyond the rate models and to integrate both the membrane potential V(t) and the phase \(\varphi (t)\) into a coherent theoretical framework.
The U(1) Neuron
Extending the conventional Ginzburg-Landau theory, the dynamical equation for the complex dynamical variable z(t) contains two parts,
where the Lyapunov function \(L(z,{\overline{z}})\) is a real-valued potential with U(1) symmetry and \(R(z,{\overline{z}})\) is another real-valued function describing the nonuniform rotation of the phase. It can be shown that the Lyapunov function \(L(z,{\overline{z}})\) decreases throughout the temporal evolution, seeking the potential minimum in analogy with the free energy in thermal equilibrium. Thus, the radial dynamics of the firing amplitude r(t) is relatively simple and, in most cases, can be understood with the usual Ginzburg-Landau theory with slight modifications. However, the presence of the nonuniform phase dynamics \(R(z,{\overline{z}})\) goes beyond the relaxation dynamics (seeking specific potential minima) and gives rise to interesting nonequilibrium phenomena, as anticipated in excitable neuronal systems.
To make the radial and phase dynamics explicit, one can choose the biased double-well potential in the Ginzburg-Landau theory (covering both the first-order and the second-order phase transitions) supplemented with the Fourier expansion for the phase dynamics,
Here \(v_2\), \(v_3\), \(v_4\) are real while \(c_n\) are in general complex. Note that, unlike the Lyapunov function, the U(1) symmetry does not hold for the phase-rotation function \(R(z,{\overline{z}})\). Separating the complex Eq. (7) into amplitude and phase parts, the dynamical equations read
where \(|c_n|\) and \(\phi _{n}\) are the amplitudes and phases of the complex coefficients \(c_n\). Note that the radial dynamics contains no phase dependence and F(r) serves as the conservative force driving the firing amplitude to the potential minima at \(r=r^*\) satisfying the equilibrium condition \(F(r^*) =0\). The phase dynamics can be rather unconventional because the angular velocity \(\Omega (\varphi )\) is nonuniform^{29}, a direct consequence of the spike-like action potential.
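A minimal sketch of the coupled amplitude and phase equations, with the hypothetical choices \(F(r)=r(1-r)(r-1/4)\) (stable solutions at \(r=0\) and \(r=1\), an unstable one at \(r=1/4\), mimicking the bistable case) and \(\Omega(\varphi)=1.5+\cos\varphi\) (a bottleneck near \(\varphi=\pi\)), produces a spike train \(V(t)=r(t)\cos\varphi(t)\):

```python
import numpy as np

def u1_neuron(r0, phi0, T=50.0, dt=1e-3):
    """Euler integration of dr/dt = F(r), dphi/dt = Omega(phi)."""
    F = lambda r: r * (1.0 - r) * (r - 0.25)   # hypothetical radial force
    Omega = lambda p: 1.5 + np.cos(p)           # hypothetical nonuniform angular velocity
    n = int(T / dt)
    r, phi = np.empty(n), np.empty(n)
    r[0], phi[0] = r0, phi0
    for k in range(1, n):
        r[k] = r[k-1] + dt * F(r[k-1])
        phi[k] = phi[k-1] + dt * Omega(phi[k-1])
    return r, phi, r * np.cos(phi)              # V(t) = r cos(phi)

r, phi, V = u1_neuron(r0=0.5, phi0=np.pi)
```

Starting above the unstable amplitude \(r=1/4\), the trajectory converges to the limit cycle \(r^*=1\) and V(t) oscillates; starting below it (e.g. `r0=0.1`), the amplitude relaxes to the resting state \(r^*=0\).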
The above dynamical equations for the firing amplitude and phase provide a coherent understanding of different types of dynamical phase transitions in spiking neurons. Three major types of dynamical phase transitions are illustrated in Fig. 5: the saddle-node-on-invariant-circle (SNIC) transition is associated with bifurcations in the phase dynamics, while the supercritical and subcritical Hopf transitions are driven by bifurcations in the radial dynamics of the firing amplitude. While these dynamical phase transitions are found in various neuron models in the literature, it is rather satisfying that all three types of transitions emerge naturally within the U(1) neuron framework.
Let us focus on the SNIC transition first. As shown in Fig. 5, in the presence of a finite amplitude \(r^*\), the equilibrium condition \(\Omega (\varphi ^*)=0\) gives rise to a pair of saddle-node fixed points in the phase dynamics. Below the current threshold, the saddle-node structure makes the neuron excitable. Upon increasing current injection, the saddle-node pair gets close, merges into a critical point and eventually disappears on the limit cycle. Because the limit cycle already exists below the current threshold, the firing amplitude exhibits a discontinuous jump. It is known in statistical physics that the dynamics slows down indefinitely in the vicinity of a critical point. In consequence, near the SNIC transition, the time duration between adjacent firing events diverges, indicating a vanishing firing rate. Thus, the firing rate changes continuously across the current threshold, corresponding to the type I neuron.
On the other hand, in the regime where the angular velocity remains positive \(\Omega >0\), the dynamical transitions are driven by the competitions among the equilibrium points in the radial dynamics determined by \(F(r^*)=0\). In the supercritical Hopf transition, a stable limit cycle (nonzero \(r^*\) solution) appears at the current threshold, rendering the resting state (\(r^*=0\) solution) unstable. Because the limit cycle grows out from the resting state, the amplitude changes continuously across the current threshold as shown in Fig. 5. Because there is no critical point involved here, the firing rate associated with the emergent limit cycle is finite in general cases. Thus, the gain function exhibits a discontinuous jump at the current threshold and corresponds to the type II neuron.
The subcritical Hopf transition arises when a pair of limit cycles appears at the current threshold. Due to the simultaneous presence of the stable limit cycle (firing state) and the stable fixed point (resting state), the neuron is bistable. As shown in Fig. 5, both the firing rate and the amplitude are discontinuous at the current threshold. Although neurons undergoing the subcritical Hopf transition can be classified as type II, their dynamics are more complicated in comparison with the supercritical Hopf transition, where neurons are either in the resting state or the firing state.
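The contrast between the continuous (type I) and discontinuous (type II) gain functions can be reproduced with textbook normal forms rather than the full neuron models: the SNIC phase equation \(d\varphi/dt = I - \cos\varphi\) fires at the rate \(\sqrt{I^2-1}/2\pi\) just above the threshold \(I=1\), rising continuously from zero, whereas a Hopf oscillator with a constant angular velocity \(\omega\) starts firing at the finite rate \(\omega/2\pi\):

```python
import numpy as np

def snic_rate(I, n_grid=200000):
    """Firing rate of dphi/dt = I - cos(phi): zero below I = 1,
    nu = 1 / integral dphi/(I - cos phi) above it (type I onset)."""
    if I <= 1.0:
        return 0.0
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    return 1.0 / np.sum((2.0 * np.pi / n_grid) / (I - np.cos(phi)))

def hopf_rate(I, I_th=1.0, omega=1.0):
    """Type II caricature: silent below threshold, finite rate above it."""
    return omega / (2.0 * np.pi) if I > I_th else 0.0
```

Just above threshold the SNIC rate is arbitrarily small (continuous gain function), while the Hopf rate jumps straight to \(\omega/2\pi\) (discontinuous gain function).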
Fitting the Hodgkin-Huxley Neuron
In this section, we demonstrate how the Hodgkin-Huxley neuron can be described by the U(1) neuron in a wide range of parameter regimes. In the bifurcation diagram^{30} of the Hodgkin-Huxley neuron, a subcritical Hopf transition first occurs at the current threshold \(I=9.780\) \(\mu\)A/cm\(^2\), followed by a supercritical Hopf transition at \(I=154.527\) \(\mu\)A/cm\(^2\). These dynamical phase transitions can be captured within the U(1) neuron framework.
Let us focus on the radial dynamics of the firing amplitude first. Because the resting state (\(r = 0\)) is always present, the potential flow in the Ginzburg-Landau theory can be constructed in the following way. As shown in Fig. 6, we introduce a cubic function f(r) with three zeroes at \(r = 0, p_1, p_2\),

$$f(r) = A\, r (r-p_1)(r-p_2),$$
The parameters \(p_1\) and \(p_2\) can be chosen as the amplitudes of the stable and unstable limit cycles, and A should be negative because the ionic sources are limited. The force in the amplitude equation is then obtained from a coordinate transformation involving the external input current I, which switches the neuron between the firing and resting states:
Here s(I) depends on the external current and serves as a fitting function to reproduce the correct dynamical phase transitions in the targeted parameter regime.
The solutions of \(F(r)=0\) describe the fixed point and the amplitudes of the limit cycles,
Note that only nonnegative solutions are physical because the firing amplitude cannot be negative. The \(r=0\) solution is always present, as anticipated. The other two solutions appear as a pair, representing the stable and unstable limit cycles.
The remaining task in mapping the Hodgkin-Huxley neuron onto the U(1) neuron is to equate the firing amplitudes in both descriptions,
by numerical fitting. For convenience, we suppose that the stable solution reaches its maximum when the fitting function s(I) equals zero. Therefore, the first step is to take the derivative of \(r_{\mathrm{stable}}\) with respect to s and set it to zero to find the extreme point of \(r_{\mathrm{stable}}(s)\).
Setting the above derivative to zero gives the expression of s at the maximum of \(r_{\mathrm{stable}}\).
Now we invoke the assumption stated above that the maximum of the stable solution occurs when the fitting function s(I) equals zero. Setting \(s=0\), we find the relation between the parameters \(p_1\) and \(p_2\).
However, since we previously defined \(p_1\) and \(p_2\) as the amplitudes of the stable and unstable limit cycles, \(p_2\) should always be smaller than \(p_1\), see Fig. 6. As a result, we only keep the \(p_2 = \frac{p_1}{2}\) solution. From the bifurcation diagram of the Hodgkin-Huxley neuron, we know that the maximal firing amplitude is 53.0632 mV when the current is 7.67 \(\mu\)A/cm\(^2\). Therefore, we obtain the parameter values \(p_1 = 53.0632\) and \(p_2 = 26.5316\). Next, we anchor the maximum of the stable solution \(r_{\mathrm{stable}}[s=0]\) to the maximal firing amplitude of the Hodgkin-Huxley neuron \(r_{\mathrm{HH}}(I=7.67)\) to set a reference for the \(s\)-\(I\) relation. Then we map currents I smaller than 7.67 to negative s and those larger than 7.67 to positive s according to Eq. (15).
As a result, we obtain a number of (I, s) pairs to fit. Keeping the lower-order terms in the fitting function s(I),
it is sufficient to match the firing amplitude rather well, as shown in Fig. 7. The coefficients \(c_n\) in the phase dynamics can be found by numerical fitting as well. Keeping the Fourier expansion to the seventh order in the phase dynamics, the action potential of the Hodgkin-Huxley neuron is almost identical to that of the effective U(1) neuron presented in Fig. 8.
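The fitting step can be sketched as follows; the (I, s) pairs below are synthetic placeholders (the actual pairs come from inverting the amplitude relation against the Hodgkin-Huxley bifurcation diagram, which is not reproduced here), and a low-order polynomial fit supplies s(I):

```python
import numpy as np

# Synthetic (I, s) pairs standing in for the values obtained by inverting
# the amplitude relation against the Hodgkin-Huxley bifurcation diagram.
I_ref = 7.67                                   # current of maximal amplitude (uA/cm^2)
I_data = np.array([2.0, 5.0, 7.67, 20.0, 60.0, 120.0, 154.0])
s_data = 0.02 * (I_data - I_ref) + 1e-4 * (I_data - I_ref) ** 2  # placeholder mapping

coeffs = np.polyfit(I_data, s_data, deg=2)     # keep lower-order terms only
s_fit = np.poly1d(coeffs)
```

By construction `s_fit(7.67)` is essentially zero, so currents below 7.67 map to negative s and currents above it to positive s, as required by the anchoring procedure.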
To capture the spike-like profile of the action potential, it is necessary to include higher-order terms in the phase dynamics. However, in cases where the precise shape of the action potential is irrelevant, the precision of the Fourier expansion can be relaxed. As shown in Fig. 9, the gain function of the U(1) neuron up to the seventh order in the phase dynamics matches that of the Hodgkin-Huxley neuron rather well. The gain function up to the second order in the phase dynamics still keeps the qualitative trend with a reasonable compromise in quantitative precision. The parameters in the Fourier expansion to the second order in the phase-rotation function are found to be
Phasedependent synaptic interactions
With the established theoretical framework for a single neuron, we move on to investigate the synaptic interactions between connected neurons. As the nonuniform angular velocity in phase dynamics plays a significant role for the single neuron, we anticipate that the synaptic interactions also carry nontrivial phase dependence.
Suppose the phases of the presynaptic and postsynaptic neurons are labeled as \(\varphi _1(t)\) and \(\varphi _2(t)\) respectively. The Kuramoto interaction^{31} carries the phase dependence \(\cos (\varphi _1-\varphi _2)\) and tends to synchronize both neurons. In terms of the complex dynamical variables, the synaptic interaction \(V_K(z_1,{\overline{z}}_1,z_2,{\overline{z}}_2)\) can be written as
where \(K_{12}\) denotes the synaptic strength of the Kuramoto interaction between the neurons. Note that the above interaction is U(1) symmetric, i.e. it is invariant under a constant phase shift for all neurons.
The above synaptic interaction is simple and widely used in the generalized Kuramoto models^{31} for studying synchronization phenomena. However, the U(1) symmetry imposes a rather unrealistic constraint on the neuronal dynamics. A constant phase shift means a temporal shift in the firing process, which is certainly not a symmetry of any realistic neuron. Furthermore, even though synchronization phenomena are widely observed in many biological systems, synchronization is detrimental to information processing. In fact, Parkinson’s disease is correlated with excessively strong oscillatory synchronization in brain areas such as the thalamus and basal ganglia^{32,33,34}, while a healthy brain is asynchronous in these areas.
Let us try to model the phase dependence of the synaptic interactions from realistic neuronal properties. When the presynaptic neuron fires, it provides a synaptic current and enhances the chance for the postsynaptic neuron to fire. This process can be approximated qualitatively by the phase factor \(1+\cos \varphi _1\) because it reaches its maximum at \(\varphi _1=0\) (spike) and vanishes at \(\varphi _1=\pi\) (hyperpolarized). It is also known that the postsynaptic neuron is sensitive to external stimuli in the hyperpolarized period while almost insensitive around the spike. Thus, there is another phase factor \(1-\cos \varphi _2\) arising from the postsynaptic neuron. Combining the phase factors for the presynaptic and postsynaptic neurons, we anticipate the synaptic interaction carries the overall phase dependence \((1+\cos \varphi _1)(1-\cos \varphi _2)\). In terms of the complex dynamical variables, the synaptic interaction \(V(z_1,{\overline{z}}_1,z_2,{\overline{z}}_2)\) can be written as
where \(W_{12}\) denotes the strength of the synaptic interaction between the neurons. After taking the realistic neuronal properties into account, the synaptic interaction is no longer U(1) symmetric and the nonphysical constraints are lifted. It is rather interesting that the above synaptic interaction is similar to the Bardeen-Cooper-Schrieffer interaction, as the U(1) symmetry is also broken in superconductors. The dynamical behaviors of neuronal networks with this type of phase-dependent synaptic interaction avoid the ultimate fate of synchronization, are effective for information processing, and remain open for further investigation.
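The proposed phase factor can be tabulated directly; the short sketch below evaluates the synaptic gain \((1+\cos\varphi_1)(1-\cos\varphi_2)\) and confirms that it peaks when the presynaptic neuron spikes (\(\varphi_1=0\)) while the postsynaptic neuron is hyperpolarized (\(\varphi_2=\pi\)), and vanishes whenever the postsynaptic neuron is at its own spike (\(\varphi_2=0\)):

```python
import numpy as np

def synaptic_gain(phi1, phi2):
    """Phase dependence of the synaptic interaction:
    (1 + cos phi1) gates the presynaptic spike,
    (1 - cos phi2) gates the postsynaptic sensitivity."""
    return (1.0 + np.cos(phi1)) * (1.0 - np.cos(phi2))

phi = np.linspace(0.0, 2.0 * np.pi, 401)
P1, P2 = np.meshgrid(phi, phi, indexing="ij")
gain = synaptic_gain(P1, P2)   # gain surface over the full torus of phases
```

The maximum value 4 occurs at \((\varphi_1,\varphi_2)=(0,\pi)\), and the whole line \(\varphi_2=0\) is a zero of the gain, reflecting the insensitivity of a spiking postsynaptic neuron.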
The refractory effects in spiking neurons also lead to phase-dependent synaptic interactions. Kinouchi and Copelli proposed a q-state neuron model to account for the observed refractory effects^{35}. As shown in Fig. 10, the resting state is labeled as state 0 and the firing state is labeled as state 1. The other states \(2,3,\cdots ,q-1\) are refractory and thus insensitive to external stimuli. Time evolution is discrete in the Kinouchi-Copelli model, and the update for the whole system is synchronous. A neuron i in state 0 can be excited to state 1 in the next time step in two ways: (1) excited by the external stimulus, driven by a Poisson process with rate r, or (2) excited by a neighboring neuron j, which was in the excited state in the previous time step, with probability \(p_{ij}\). However, the dynamics is deterministic after excitation. If the neuron is in state n (with \(n \ge 1\)), it evolves to state \(n+1\) sequentially in the next time step until the last refractory state \(q-1\) leads back to state 0 (resting state). Using \(S_i(t)\) to represent the state of neuron i at time step t, we summarize the dynamical rules for a q-state Kinouchi-Copelli neuronal network (KCNN) in mathematical form:

If \(0<S_i(t-1)<q-1\), then \(S_i(t)=S_i(t-1)+1\);

If \(S_i(t-1)=q-1\), then \(S_i(t)=0\);

If \(S_i(t-1)=0\), then \(S_i(t) = 1\) if triggered by either the external stimulus or neighboring firing neurons at the last time step; otherwise, \(S_i(t) = 0\).
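The three rules above translate into a single update function (the excitation event is passed in as a boolean computed from the external Poisson stimulus and the firing neighbors):

```python
def kc_step(state, q, excited):
    """One synchronous time step of a single q-state Kinouchi-Copelli neuron.
    state: 0 (resting), 1 (firing), 2..q-1 (refractory)."""
    if 0 < state < q - 1:       # firing/refractory states advance deterministically
        return state + 1
    if state == q - 1:          # last refractory state returns to rest
        return 0
    return 1 if excited else 0  # a resting neuron fires only when triggered
```

For q = 5, a triggered neuron traverses the cycle 1, 2, 3, 4, 0, after which it waits in the resting state for the next excitation.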
Because the U(1) neuron provides the general theoretical framework for nonuniform phase dynamics, it is not surprising that the Kinouchi-Copelli neuron can be mapped to the U(1) neuron as well.
In the absence of external stimuli, the Kinouchi-Copelli neuron is excitable, described by a pair of saddle and node fixed points on the limit cycle as shown in Fig. 10. The resting state corresponds to the stable node, labeled as state 0. When the neuron is activated by external stimuli, the angular velocity \(\Omega (\varphi )\) is lifted up and the neuron fires. The phase swings to \(\varphi =0\) (spike) with maximum potential, labeled as state 1. The change of the angular velocity upon external stimuli resembles that of the Hodgkin-Huxley neuron studied in the previous section. After firing, the neuron enters the refractory period and gradually relaxes back to the resting state. Cutting the refractory period into \(q-1\) time steps of equal intervals, the refractory states \(2,3,\cdots ,q-1\) are defined accordingly. In short, the Kinouchi-Copelli neuron can be viewed as a discrete version of the U(1) neuron going through the SNIC transition upon external stimuli.
Now we turn to the phase dependence of the synaptic interaction caused by the refractory effect. For simplicity, we place the neurons on a two-dimensional grid to form the Kinouchi-Copelli neuronal network (KCNN). The phase dependence of the presynaptic neuron takes the usual form of pulse coupling: when the presynaptic neuron fires (state 1), it gives rise to a finite probability p of activating the postsynaptic neuron. In a two-dimensional grid, each postsynaptic neuron is connected to 4 presynaptic neurons. Thus, it is convenient to introduce the branching ratio \(\sigma\), defined as the sum of all activation probabilities, \(\sigma = 4 p\), to parameterize the dynamical behaviors of the KCNN. The postsynaptic neuron also brings about another phase dependence due to refractory effects: only when the postsynaptic neuron is in the resting state (state 0) can it be activated to fire upon external stimuli.
Although the phase dependence of the synaptic interactions in the KCNN is not the same as that in Eq. (22), the broken U(1) symmetry is manifest. Therefore, a spontaneous asynchronous firing (SAF) phase is anticipated when the synaptic weight is strong enough. This is indeed true. We perform numerical simulations for the KCNN with different branching ratios as shown in Fig. 11. The initial configurations are randomly set with sparsely distributed activated neurons. When the branching ratio is larger than some critical value, the firing activity is enhanced and the neuronal activity is described by the SAF phase. On the other hand, when the branching ratio is smaller than the critical value, the firing activities are suppressed, entering the quiescent phase with diminishing neuronal activities. The dynamical phase transition and the associated statistical analysis for the KCNN will be presented in detail elsewhere.
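A compact simulation of the two-dimensional KCNN illustrates both regimes; the grid size, the value of q and the branching ratios below are illustrative choices, and the external Poisson drive is switched off so that only the initial seeds sustain the activity:

```python
import numpy as np

def kcnn_activity(sigma, q=5, L=40, steps=60, seed=0):
    """Synchronous update of a q-state Kinouchi-Copelli network on an
    L x L torus; returns the fraction of firing neurons at each step."""
    rng = np.random.default_rng(seed)
    p = sigma / 4.0                                   # per-neighbor activation probability
    S = np.where(rng.random((L, L)) < 0.02, 1, 0)     # sparse initial firing seeds
    fractions = []
    for _ in range(steps):
        firing = (S == 1)
        # number of firing von-Neumann neighbors for every site
        n_fire = (np.roll(firing, 1, 0) + np.roll(firing, -1, 0)
                  + np.roll(firing, 1, 1) + np.roll(firing, -1, 1))
        # a resting neuron escapes activation with prob (1 - p) per firing neighbor
        p_act = 1.0 - (1.0 - p) ** n_fire
        triggered = (S == 0) & (rng.random((L, L)) < p_act)
        # advance firing/refractory states; q-1 wraps back to rest
        S = np.where(S > 0, (S + 1) % q, np.where(triggered, 1, 0))
        fractions.append(firing.mean())
    return np.array(fractions)
```

With a subcritical branching ratio the activity dies out within a few refractory cycles, while a supercritical one sustains the spontaneous asynchronous firing phase.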
Unlike the synchronous phase, the SAF phase can encode and decode information effectively and is therefore of vital importance for studying neuronal networks. To realize and analyze this phase, the phase dependence of the synaptic interactions must be properly taken into account, and the U(1) neuron framework provides a natural and convenient approach to this daunting challenge.
Discussions and conclusions
The notion of a “minimum model” is rather common in physics – constructing the simplest model that captures the essential degrees of freedom of the system. When adopting the minimum-model approach, one does not intend to describe all detailed features observed in experiments. Instead, one is interested in extracting the most important features and factors of the dynamical system. The firing-rate model, widely used in computational neuroscience, is a good example of the minimum-model approach because only one feature (the firing rate) is kept. However, our findings here demonstrate that rate models miss important features and, thus, the appropriate minimum model should be the U(1) neuron model proposed here. The U(1) neuron model integrates the dynamics of the firing amplitude and phase together and serves as a better minimum model that captures the essential ingredients of neuronal dynamics. In fact, we study the dynamics of the Hodgkin-Huxley neuron within the U(1) neuron framework and show that the action potential, the gain function and the associated dynamical phase transitions can all be described consistently by the U(1) neuron model.
The central result of the U(1) neuron is shown in Fig. 4. The non-uniform phase dynamics exhibits “whirlwind” (fast dynamics) and “bottleneck” (slow dynamics) features, which can be understood as being far from or close to the critical point. For instance, in the vicinity of the critical point, the bottleneck feature emerges due to critical slowing down. One thus anticipates that the neuron is sensitive to external stimuli there, consistent with experimental observations. On the other hand, the whirlwind regime is relatively far away from the critical point, so the dynamics remains robust and insensitive to external stimuli, also consistent with current observations. The U(1) neuron model thus provides a coherent understanding of the neuronal sensitivity during different firing periods.
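The whirlwind/bottleneck contrast can be made quantitative with the standard SNIC normal form \(\dot{\varphi } = \mu - \cos \varphi\), used here as an illustrative stand-in for the angular velocity \(F(\varphi )\) of the U(1) neuron (the placement of the slow point at \(\varphi = 0\) and the value of \(\mu\) are our choices): just above threshold the phase crawls through the bottleneck and whips through the diametrically opposite whirlwind.

```python
import math

def snic_cycle_times(mu, dt=1e-3, width=0.5):
    """Euler-integrate dphi/dt = mu - cos(phi) over one full cycle and
    measure the time spent near the bottleneck (phi ~ 0, slow) versus
    the whirlwind (phi ~ +/-pi, fast) region, each of angular width
    2*width."""
    phi, t_slow, t_fast = -math.pi, 0.0, 0.0
    while phi < math.pi:
        if abs(phi) < width:
            t_slow += dt                    # crawling through the bottleneck
        elif abs(abs(phi) - math.pi) < width:
            t_fast += dt                    # whipping through the whirlwind
        phi += (mu - math.cos(phi)) * dt
    return t_slow, t_fast

# Just above the SNIC threshold (mu slightly > 1), critical slowing
# down makes the bottleneck dominate the period:
t_slow, t_fast = snic_cycle_times(mu=1.01)
print(t_slow > 10 * t_fast)  # True
```

Deeper into the oscillatory regime (larger \(\mu\)) the bottleneck softens and the sensitivity window shrinks, matching the robustness of the whirlwind segment described above.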
The phase dependence is also important when studying synaptic interactions between neurons. The U(1) neuron framework provides a systematic approach to construct the phase-dependent synaptic interaction between presynaptic and postsynaptic neurons. For instance, denote the phases of the presynaptic and postsynaptic neurons as \(\varphi _1, \varphi _2\) respectively. Since the presynaptic neuron delivers a pulse of synaptic current when it fires, this process can be approximated by the phase-dependent factor \((1+\cos \varphi _1)\). Similarly, because the postsynaptic neuron is sensitive to stimuli during the hyperpolarized period but insensitive during the depolarized period, this can be approximated by a factor of \((1-\cos \varphi _2)\). Combining both factors captures the dependence on the presynaptic and postsynaptic phases. Note that the synaptic interactions discussed here differ from those of the usual Kuramoto model and its relatives, which depend only on the phase difference through \(\cos (\varphi _1-\varphi _2)\) and thus readily trigger a synchronization instability among neurons. The phase-dependent synaptic interactions are much richer within the U(1) neuron framework and avoid the ultimate fate of synchronization. Therefore, they may lead to asynchronous phases that are more effective for information processing. These important issues remain open for future investigations.
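The two coupling factors above can be compared directly with the Kuramoto form; the combined gain \((1+\cos \varphi _1)(1-\cos \varphi _2)\) is our straightforward product of the two factors named in the text:

```python
import math

def u1_coupling(phi1, phi2):
    """Phase-dependent synaptic gain built from the two factors in the
    text: the presynaptic factor (1 + cos phi1) peaks at the spike
    (phi1 = 0), while the postsynaptic factor (1 - cos phi2) vanishes
    at the depolarized spike (phi2 = 0) and peaks in the
    hyperpolarized period (phi2 = pi)."""
    return (1 + math.cos(phi1)) * (1 - math.cos(phi2))

def kuramoto_coupling(phi1, phi2):
    """Kuramoto-type coupling for comparison: it depends only on the
    phase difference, which drives synchronization."""
    return math.cos(phi1 - phi2)

print(u1_coupling(0.0, math.pi))  # 4.0 -> maximal transmission
print(u1_coupling(math.pi, 0.0))  # 0.0 -> no transmission
# The Kuramoto form is invariant under a global phase shift
# phi -> phi + c, while the U(1) gain is not: the U(1) symmetry is
# explicitly broken by the synaptic interaction.
print(kuramoto_coupling(1.0, 2.0) == kuramoto_coupling(1.5, 2.5))  # True
```

Because the gain is pinned to the absolute phases rather than their difference, a globally phase-locked (synchronized) state is no longer singled out by the coupling.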
Finally, we would like to address the role of network structure. As demonstrated in this article, we assemble the Kinouchi-Copelli neurons, discrete versions of the U(1) neuron, on a two-dimensional grid to form a neuronal network. Our numerical simulations show that the SAF phase, crucial for information processing, can be realized in the square-lattice KCNN. Compared to the fully-connected KCNN, which can be solved by the mean-field approximation, the square-lattice KCNN demonstrates a better capability for spatial information processing, such as that performed by the neuronal networks in the retina. There is plenty of room between these two extreme network structures. It is expected that, as longer-ranged synaptic interactions appear, the collective modes of the neurons are enhanced while the spatial resolution is compromised. To make the neuronal network smarter, it is crucial to abandon the naive fully-connected network structure. It remains an interesting challenge to explore the various dynamical phases on different network structures.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Galizia, C. G. & Lledo, P.-M. Neurosciences – From Molecule to Behavior: A University Textbook (Springer-Verlag, Berlin, 2013).
Gerstner, W., Kistler, W. M., Naud, R. & Paninski, L. Neuronal dynamics: From single neurons to networks and models of cognition (Cambridge University Press, 2014).
Izhikevich, E. M. Dynamical systems in neuroscience (MIT press, 2007).
Dayan, P. & Abbott, L. F. Theoretical Neuroscience: computational and mathematical modeling of neural systems (MIT Press, 2014).
Wilson, H. R. & Cowan, J. D. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1 (1972).
Ermentrout, B. Reduction of conductance-based models with slow synapses to neural nets. Neural Comput. 6, 679 (1994).
Gerstner, W. Time structure of the activity in the neural network models. Phys. Rev. E 51, 738 (1995).
Shriki, O., Hansel, D. & Sompolinsky, H. Rate models for conductance-based cortical neuronal networks. Neural Comput. 15, 1809 (2003).
Ostojic, S. & Brunel, N. From spiking neuron models to linear-nonlinear models. PLoS Comput. Biol. 7, e1001056 (2011).
Choe, S. Potassium channel structures. Nat. Rev. Neurosci. 3, 115 (2002).
Naylor, C. E. et al. Molecular basis of ion permeability in a voltage-gated sodium channel. The EMBO J. 35, 820 (2016).
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500 (1952).
Hodgkin, A. L. The local electric changes associated with repetitive action in a non-medullated axon. J. Physiol. 107, 165 (1948).
Ermentrout, G. B. & Kopell, N. Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J. Appl. Math. 46, 233 (1986).
FitzHugh, R. Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1, 445 (1961).
Izhikevich, E. M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569 (2003).
Izhikevich, E. M. Which model to use for cortical spiking neurons?. IEEE Trans. Neural Netw. 15, 1063 (2004).
Ermentrout, B. Type I membranes, phase resetting curves, and synchrony. Neural Comput. 8, 979 (1996).
Goodfellow, I., Bengio, Y. & Courville, A. Deep learning (MIT Press, 2016).
Haykin, S. Neural Networks and Learning Machines (Pearson, 2009).
Jensen, M. H., Bak, P. & Bohr, T. Transition to chaos by interaction of resonances in dissipative systems. I. Circle maps. Phys. Rev. A 30, 1960 (1984).
Flaherty, J. E. & Hoppensteadt, F. C. Frequency entrainment of a forced van der Pol oscillator. Stud. Appl. Math. 58, 5 (1978).
Fletcher, N. H. Mode locking in nonlinearly excited inharmonic musical oscillators. J. Acoust. Soc. Am. 64, 1566 (1978).
Guevara, M. R., Glass, L. & Shrier, A. Phase locking, period-doubling bifurcations, and irregular dynamics in periodically stimulated cardiac cells. Science 214, 1350 (1981).
Aihara, K., Numajiri, T., Matsumoto, G. & Kotani, M. Structures of attractors in periodically forced neural oscillators. Phys. Lett. A 116, 313 (1986).
Takahashi, N., Hanyu, Y., Musha, T., Kubo, R. & Matsumoto, G. Global bifurcation structure in periodically stimulated giant axons of squid. Physica D 43, 318 (1990).
Gray, C. M. & McCormick, D. A. Chattering cells: Superficial pyramidal neurons contributing to the generation of synchronous oscillations in the visual cortex. Science 274, 109 (1996).
Szucs, A., Elson, R. C., Rabinovich, M. I., Abarbanel, H. D. I. & Selverston, A. I. Nonlinear behavior of sinusoidally forced pyloric pacemaker neurons. J. Neurophysiol. 85, 1623 (2001).
Matthews, P. C. & Strogatz, S. H. Phase diagram for the collective behavior of limit-cycle oscillators. Phys. Rev. Lett. 65, 1701 (1990).
Xie, Y., Chen, L., Kang, Y. M. & Aihara, K. Controlling the onset of Hopf bifurcation in the Hodgkin-Huxley model. Phys. Rev. E 77, 061921 (2008).
Acebrón, J. A., Bonilla, L. L., Vicente, C. J. P., Ritort, F. & Spigler, R. The Kuramoto model: A simple paradigm for synchronization phenomena. Rev. Mod. Phys. 77, 137 (2005).
Paré, D., Curró Dossi, R. & Steriade, M. Neuronal basis of the parkinsonian resting tremor. Neuroscience 35, 217 (1990).
Nini, A., Feingold, A., Slovin, H. & Bergman, H. Neurons in the globus pallidus do not show correlated activity in the normal monkey, but phase-locked oscillations appear in the MPTP model of parkinsonism. J. Neurophysiol. 74, 1800 (1995).
Tass, P. et al. The causal relationship between subcortical local field potential oscillations and parkinsonian resting tremor. J. Neural Eng. 7, 016009 (2010).
Kinouchi, O. & Copelli, M. Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2, 348 (2006).
Acknowledgements
The authors would like to express their gratitude to Ching-Lung Hsu, Chin-Yuan Lee, Georg Northoff and Wei-Ping Pan for helpful discussions and comments. We acknowledge support from the Ministry of Science and Technology through grants MOST 109-2112-M-007-026-MY3, MOST 109-2124-M-007-005 and MOST 110-2112-M-005-011. Financial support and the friendly environment provided by the National Center for Theoretical Sciences in Taiwan are also greatly appreciated.
Author information
Authors and Affiliations
Contributions
H.H.L. proposed the research topics and supervised the project. C.Y.L. worked out most analytic derivation and numerical simulations. P.H.C. carried out the numerical simulations on the neuronal network. W.M.H. joined the discussions and provided suggestions. C.Y.L. and H.H.L. wrote the manuscript and all authors reviewed the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lin, C.-Y., Chen, P.-H., Lin, H.-H. et al. U(1) dynamics in neuronal activities. Sci. Rep. 12, 17629 (2022). https://doi.org/10.1038/s41598-022-22526-0