Nano-topography Enhances Communication in Neural Cell Networks

Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have heretofore been studied separately. Understanding whether surface nano-topography can direct the assembly of nerve cells into computationally efficient networks may provide new tools and criteria for tissue engineering and regenerative medicine. In this work, we used information theory approaches and functional multi-calcium imaging (fMCI) techniques to examine how information flows in neural networks cultured on surfaces with controlled topography. We found that substrate roughness S_a affects network topology. In the low nanometer range, S_a = 0–30 nm, information increases with S_a. Moreover, we found that the energy density of a network of cells correlates with the topology of that network. This reinforces the view that information, energy and surface nano-topography are tightly inter-connected and should not be neglected when studying cell-cell interactions in neural tissue repair and regeneration.


Supporting information file #1: Topological analysis of cultured neural cell networks
To have an indication of the connectivity properties of the nodes in a graph and to distinguish between graphs of different types (regular, random or small world), we quantified some network parameters including the clustering coefficient and the characteristic path length.
In graph theory, the clustering coefficient (C) is a measure of the degree to which nodes in a graph tend to cluster together. C ranges from 0 (none of the possible connections among the nodes are realized) to 1 (all possible connections are realized and the nodes group together to form a single aggregate). The clustering coefficient of a generic node i is defined as C_i = 2 n_i / [k_i (k_i − 1)], where k_i is the number of neighbors of node i, n_i is the number of existing connections between those neighbors, and k_i (k_i − 1)/2 is the maximum number of connections, or combinations, that can exist among k_i nodes. Notice that the clustering coefficient is defined locally, and a global value, C_c, is derived by averaging over all the nodes that compose the graph.
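As an illustration, the local and global clustering coefficients defined above can be computed from a binary adjacency matrix as follows (a minimal NumPy sketch on a toy 4-node graph, not data from the paper):

```python
import numpy as np

def local_clustering(A, i):
    """Clustering coefficient of node i for a symmetric 0/1 adjacency matrix A."""
    neighbors = np.flatnonzero(A[i])          # indices of the k_i neighbors of i
    k = len(neighbors)
    if k < 2:
        return 0.0
    sub = A[np.ix_(neighbors, neighbors)]     # connections among those neighbors
    n_links = sub.sum() / 2                   # each undirected link counted twice
    return 2.0 * n_links / (k * (k - 1))      # C_i = 2 n_i / [k_i (k_i - 1)]

def global_clustering(A):
    """Global C_c: average of the local coefficients over all nodes."""
    return float(np.mean([local_clustering(A, i) for i in range(A.shape[0])]))

# Toy graph: a triangle (0, 1, 2) plus a pendant node 3 attached to node 0.
# Node 0 has neighbors 1, 2, 3; only the pair (1, 2) is connected,
# so C_0 = 2 * 1 / (3 * 2) = 1/3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], float)
print(local_clustering(A, 0))
print(global_clustering(A))
```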
The characteristic path length (cpl) is defined as the average number of steps along the shortest paths over all possible pairs of network nodes. We shall call the minimum distance between a generic pair of nodes the shortest path length (spl), which is expressed as an integer number of steps.
of links. In the Waxman model used here, the probability P of a link between two generic nodes i and j decays exponentially with their distance d(i, j), P(i, j) = α e^(−d(i, j)/β). For the present configuration, these parameters were set to α = 1 and β = 0.025. The probability P varies between 0 for a pair of nodes at an ideally infinite distance, and 1 for a pair of nodes at an ideally zero distance.
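A minimal sketch of this link probability, assuming the Waxman form P(i, j) = α e^(−d(i, j)/β) with the stated values α = 1 and β = 0.025 (the exact parameterization used in the paper may differ):

```python
import numpy as np

# Assumed Waxman link probability: P(i, j) = alpha * exp(-d(i, j) / beta),
# with d(i, j) the (normalized) Euclidean distance between nodes i and j.
ALPHA, BETA = 1.0, 0.025

def link_probability(pos_i, pos_j, alpha=ALPHA, beta=BETA):
    d = np.linalg.norm(np.asarray(pos_i) - np.asarray(pos_j))
    return alpha * np.exp(-d / beta)

print(link_probability((0.0, 0.0), (0.0, 0.0)))  # 1.0 at zero distance
print(link_probability((0.0, 0.0), (1.0, 0.0)))  # vanishes for distant nodes
```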
The information about the connections among the nodes in a graph is contained in the adjacency matrix A = (a_ij), where the indices i and j run through the number of nodes in the graph; a_ij = 1 if there exists a connection between i and j, and a_ij = 0 otherwise. In the analysis, reciprocity between nodes is assumed: if information can flow from i to j, it can reversely flow from j to i. In the framework of graph theory, such a network is called an undirected graph.
Notice that this property translates into symmetry of A, being a_ij = a_ji. Moreover, a_ii = 0, i.e. self-connections are excluded.
We showed above how to derive the distances between the nodes in the networks. On the basis of the link probability P, we may decide whether a pair of nodes is connected; to this end we use the rule a_ij = 1 if P(i, j) ≥ 1 − ε, and a_ij = 0 otherwise, in which ε is a constant that we have chosen as 0.05, so that the threshold probability for a connection is P = 0.95.
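Putting the two steps together, a hypothetical construction of the adjacency matrix from node positions might look like this (the node positions and the Waxman parameterization are illustrative assumptions):

```python
import numpy as np

# Sketch: keep a link only when the assumed Waxman probability
# P(i, j) = alpha * exp(-d / beta) exceeds the threshold 1 - eps = 0.95.
EPS = 0.05

def adjacency_from_positions(pos, alpha=1.0, beta=0.025, eps=EPS):
    pos = np.asarray(pos, float)
    n = len(pos)
    A = np.zeros((n, n), int)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            p = alpha * np.exp(-d / beta)
            if p >= 1.0 - eps:               # connected pair
                A[i, j] = A[j, i] = 1        # undirected graph: symmetric A
    return A                                 # zero diagonal: no self-links

pos = [(0.0, 0.0), (0.001, 0.0), (0.5, 0.5)]  # two close nodes, one far away
A = adjacency_from_positions(pos)
print(A)
```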
With these premises, we now show how to calculate the spl for a pair of nodes i and j.
In A, the row entries a_i,k and the column entries a_k,j account for all the pairs of nodes which are connected to i and j, respectively. The sums of the products a_i,k a_k,j over all the nodes k are stored in a new matrix A^2 = A·A, with (A^2)_ij = Σ_k a_i,k a_k,j for all i and j; A^2 has the same dimensions as A. Now multiply by A repeatedly, A^(m+1) = A^m·A, until all the terms of A^m are non-zero: the smallest power m for which the term in position (i, j) becomes non-zero is the spl between node i and node j. Finally, the characteristic path length is calculated as the average of the spl over all the pairs of nodes 1,2 .
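The matrix-power scheme described above can be sketched as follows (a minimal NumPy illustration; it assumes a connected graph, so that every pair is eventually reached):

```python
import numpy as np

def shortest_path_lengths(A):
    """spl matrix: the smallest power m with (A^m)[i, j] != 0 is spl(i, j)."""
    n = A.shape[0]
    spl = np.zeros((n, n), int)
    reached = np.eye(n, dtype=bool)          # a node reaches itself in 0 steps
    Am = np.eye(n, dtype=int)
    for m in range(1, n):                    # a shortest path needs < n steps
        Am = Am @ A                          # entries of A^m count m-step walks
        newly = (Am != 0) & ~reached
        spl[newly] = m
        reached |= newly
        if reached.all():
            break
    return spl

def characteristic_path_length(A):
    """cpl: average of spl over all distinct pairs of nodes."""
    spl = shortest_path_lengths(A)
    iu = np.triu_indices(A.shape[0], k=1)
    return float(spl[iu].mean())

# Path graph 0-1-2: spl(0, 2) = 2, so cpl = (1 + 2 + 1) / 3.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], int)
print(characteristic_path_length(A))
```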
Once obtained the C_c and cpl values, we defined a precise measure of 'small-world-ness', the 'small-world-ness' coefficient (SW), based on the trade-off between high local clustering and short path length 3 . A small-world network has a mean shortest path length comparable to, but greater clustering of nodes than, an equivalent Erdős–Rényi (E-R) random graph with the same m and n (an E-R graph is constructed by uniquely assigning each edge to a node pair with uniform probability). Let cpl_rand and C_rand be the mean shortest path length and the mean clustering coefficient for the E-R random graphs, obtained by averaging the cpl and the C_c of 20 uniform realizations, and cpl_graph and C_graph the corresponding quantities for the graphs derived using the methods described above. We can calculate γ = C_graph / C_rand and λ = cpl_graph / cpl_rand. Thus, the 'small-world-ness' coefficient is SW = γ / λ. The categorical definition of small-world network above implies λ ≥ 1, γ ≫ 1 which, in turn, gives SW > 1.
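A self-contained sketch of the SW computation (the E-R baseline, the ring-lattice test graph and all parameter values are illustrative assumptions, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_clustering(A):
    """Mean local clustering coefficient of a 0/1 symmetric adjacency matrix."""
    vals = []
    for i in range(len(A)):
        nb = np.flatnonzero(A[i])
        k = len(nb)
        vals.append(0.0 if k < 2 else A[np.ix_(nb, nb)].sum() / (k * (k - 1)))
    return float(np.mean(vals))

def cpl(A):
    """Characteristic path length via Floyd-Warshall shortest paths."""
    n = len(A)
    d = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    iu = np.triu_indices(n, 1)
    finite = d[iu][np.isfinite(d[iu])]
    return float(finite.mean())

def erdos_renyi(n, m):
    """E-R graph: m edges uniquely assigned to node pairs, uniformly at random."""
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    A = np.zeros((n, n), int)
    for idx in rng.choice(len(pairs), size=m, replace=False):
        i, j = pairs[idx]
        A[i, j] = A[j, i] = 1
    return A

def small_world_ness(A, n_rand=20):
    """SW = gamma / lambda, baselines averaged over n_rand E-R graphs."""
    n, m = len(A), int(A.sum() // 2)
    C_rand = np.mean([mean_clustering(erdos_renyi(n, m)) for _ in range(n_rand)])
    L_rand = np.mean([cpl(erdos_renyi(n, m)) for _ in range(n_rand)])
    gamma = mean_clustering(A) / C_rand      # clustering ratio
    lam = cpl(A) / L_rand                    # path-length ratio
    return gamma / lam                       # SW > 1 indicates small-world

# A ring lattice with a few long-range shortcuts typically gives SW > 1.
n = 30
A = np.zeros((n, n), int)
for i in range(n):
    for step in (1, 2):                      # ring lattice of degree 4
        j = (i + step) % n
        A[i, j] = A[j, i] = 1
for i, j in [(0, 15), (5, 20), (10, 25)]:    # shortcuts shorten the paths
    A[i, j] = A[j, i] = 1
print(small_world_ness(A))
```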

Supporting information file #3: Simulating information in neural networks
We used a generalized leaky integrate and fire model 4,5 to simulate trains of signals in bidimensional neural networks as described in Reference 6 . Here, the nodes of the grid represent the nuclei of the cells and are extracted from confocal images of cultured neural networks at DIV 11 (Supporting Information Figure 3.1). Nodes of the grid are therefore connected using the Waxman model as described in the methods of the paper and references 1,7 . In doing so, we obtain undirected graphs where the edges of the graphs represent cell-cell connections (Supporting Information Figure 3.2).
Then, nodes were randomly picked from the network and excited with a random (Supporting Information Figure 3.3a) or a periodic (Supporting Information Figure 3.3b) signal in time. Upon excitation, spikes propagate in cascade through the grid.
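The two kinds of stimulus can be sketched as 0/1 pulse trains over discrete time steps (the firing probability and the period below are illustrative, hypothetical values):

```python
import numpy as np

# Hypothetical stimuli of the two kinds described above: a random
# (Poisson-like) and a periodic pulse train, as 0/1 arrays over time steps.
rng = np.random.default_rng(1)
n_steps = 1000

random_stim = (rng.random(n_steps) < 0.05).astype(int)  # ~5% pulse probability
periodic_stim = np.zeros(n_steps, dtype=int)
periodic_stim[::50] = 1                                 # one pulse every 50 steps

print(periodic_stim.sum())   # 20 pulses over 1000 steps
```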

Supporting Information Figure 3.3
To analyse the flow of signals in the graphs, we used a generalized leaky integrate and fire model 4,5 . In individual neurons, electric pulses excite the neuron until the response (potential) at the postsynaptic sites reaches and surpasses a limiting value (that is, a threshold); then, the target neuron produces an impulse (an action potential) that propagates in turn to another neuron. This process is described by the following equation, in which the membrane potential V obeys a function of time alone: C_m dV/dt = −g_L (V − V_rest) + I(t), where C_m is the capacitance of the membrane, g_L is the conductance, and V_rest is the resting potential of the neuron. The current I(t) is the stimulus that excites the neuron until the membrane potential reaches a threshold V_th and an action potential is released from the system. Neurons in a grid are described by a set of coupled differential equations that generalizes the model
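A minimal forward-Euler sketch of the single-neuron equation above (all parameter values are illustrative, not those of the paper):

```python
import numpy as np

# Single-neuron leaky integrate-and-fire sketch:
#   C_m dV/dt = -g_L (V - V_rest) + I(t),
# with a spike emitted, and V reset to V_rest, when V crosses V_th.
C_m, g_L = 1.0, 0.1           # membrane capacitance and leak conductance
V_rest, V_th = -70.0, -55.0   # resting and threshold potentials (mV)

def simulate_lif(I, dt=0.1):
    """Integrate the LIF equation for a stimulus array I;
    return the voltage trace and the spike times."""
    V, trace, spikes = V_rest, [], []
    for step, I_t in enumerate(I):
        V += dt * (-g_L * (V - V_rest) + I_t) / C_m
        if V >= V_th:                 # threshold reached: fire ...
            spikes.append(step * dt)
            V = V_rest                # ... and reset the membrane potential
        trace.append(V)
    return np.array(trace), spikes

# A constant suprathreshold current drives V toward V_rest + I/g_L = -50 mV,
# which is above V_th, so the neuron fires periodically.
trace, spikes = simulate_lif(np.full(2000, 2.0))
print(len(spikes))
```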

Supporting information file #4: Energy landscape of small world networks
We generated graphs having a number of nodes equal to n = 150 on squared grids (Supporting Information). The connections between the nodes are obtained using the Waxman model described above (S1.2), with a probability threshold for a link between two generic nodes of P = 0.95 (S1.3). In doing so, we modulated the network topology, and thus the networks' small-world-ness, between SW ~ 0.6 and SW ~ 3 (Supporting Information Table 4.1). Therefore, we associated a harmonic potential

Supporting information file #5: Time evolution of cell density
The time evolution of neural cell density in a one-dimensional domain is described by the partial differential equation 14