Quantifying the Number of Discriminable Coincident Dendritic Input Patterns through Dendritic Tree Morphology

Current developments in neuronal physiology are unveiling novel roles for dendrites. Experiments have shown mechanisms of non-linear NMDA-dependent synaptic activation that can discriminate input patterns through the waveforms of the excitatory postsynaptic potentials. In parallel, synaptic clustering of inputs appears to be the principal cellular strategy for grouping correlated inputs. Dendritic branches thus seem to work as independent input-discriminating units, potentially reflecting an extraordinary repertoire of pattern memories. However, it is unclear how these observations impact our comprehension of the structural correlates of memory at the cellular level. This work investigates the discrimination capabilities of neurons through computational biophysical models in order to extract a predictive law for the dendritic input discrimination capability (M). Using this rule we compared neurons from a neuron reconstruction repository (neuromorpho.org). The comparisons showed that primate neurons hold no preeminence in terms of M and that M is not uniformly distributed among neuron types. Remarkably, neocortical neurons had substantially less memory capacity than those from non-cortical regions. In conclusion, the proposed rule predicts the inherent neuronal spatial memory, gathering potentially relevant anatomical and evolutionary considerations about brain cytoarchitecture.

Neurites are important neuronal compartments that distinctly characterize the cytoarchitecture of nervous tissues and mediate intercellular communication. Specifically, dendrites are complex tree-shaped structures that take part in neurotransmission through specialized membrane protrusions (spines), which represent the preferential sites for neurotransmitter reception. Remarkably, dendritic spines and trees are considered part of the morphological correlates of structural plasticity, and causal modifications of dendritic tree morphology (synaptogenesis, spinogenesis and branch remodeling) have been related to learning 1,2 . Hence, the dendritic tree inherently represents an attractive perspective from which to study structural learning and long-term memory at the cellular level [3][4][5][6] .
From a functional point of view, dendrites were long regarded as passive electrotonic compartments that conveyed and integrated the electrical field variations triggered by ionic channel openings at the post-synaptic terminals. However, recent studies highlighted that in dendritic trees a rich repertoire of ionic channel mechanisms modulates incoming and back-propagated signals (dendritic spikes) through local voltage-dependent ionic channels 7 . Indeed, a prominent work reported that mechanisms of non-linear synaptic N-Methyl-D-aspartic acid (NMDA) dependent activation can discriminate input patterns along the branches of dendritic trees. The authors argued that "pyramidal cell dendrites can act as processing compartments for the detection of synaptic sequences" 6,8,9 , a tangible property observable in the waveforms of the excitatory post-synaptic potentials (EPSPs). Furthermore, by means of biophysical models, other authors showed that neurons with larger dendritic trees have greater computational power [10][11][12] , however without supplying a quantitative analysis. In such a perspective, dendritic branches acting as computational blocks for neural information processing could potentially sustain significant computational loads that are missing from present analytic perspectives. In the last decades many works focused on the electrodynamical properties of the dendritic tree; nonetheless, it is not yet clear how morphological features of dendritic trees relate to, or may sustain, their functional counterparts.
Complementarily, a recent line of research showed that functionally relevant, strongly correlated synaptic inputs are organized in clusters of synapses within dendritic branches, thus promoting robust propagation of large dendritic depolarizations [13][14][15] . This evidence, generally referred to as synaptic clustering 6,16 , comes from several experimental setups (including in vivo) and has been observed in many brain regions. The synaptic clustering hypothesis therefore provides a spatial constraint for correlated input, strongly restricting the theoretical number of possible input configurations along dendrites.
In this work we revisit the idea that dendritic trees are not simple input integrators but, more broadly, recognizers of input patterns, and that such recognition takes place in each dendritic branch. This work has two main aims: the first is to quantitatively assess the impact of these novel facts about dendrites in terms of the number of recognizable input patterns per neuron. The second is to evaluate the functional consequences of the resulting quantitative relationship within the current neuroanatomical data.
In our computational framework, neuron models are composed of two parts: the specification of the cell geometry and the definition of the biophysical properties. Since such properties comprise many fundamental parameters that can strongly affect the results, and most of them are experimentally inaccessible, we designed an optimization strategy, based on genetic algorithms, that maximized the number of discriminable input patterns by exploring a parameter space of five variables: the spine density, the spine spatial distribution, the membrane resting potential, and the NMDA and AMPA receptor concentrations.
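The optimization loop can be sketched as follows. This is a minimal illustration, not the authors' actual code: the discrete value grids, the toy fitness function and all names are our assumptions; in the real framework the fitness of a parameter set is the M value returned by a full NEURON simulation.

```python
import random

random.seed(0)

# Five-variable parameter space explored by the genetic algorithm
# (the discrete grids below are illustrative assumptions).
SPACE = {
    "v_rest":        [-85, -83, -81, -79, -77],  # membrane resting potential (mV)
    "spine_dist":    ["linspace", "uniform"],    # spine spatial distribution
    "spine_density": ["CL/2", "CL", "CL*2"],     # relative to Cuntz's-law density
    "ampa":          ["NS", "NS/2"],             # receptors per spine count
    "nmda":          ["NS", "NS/2"],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(ind, rate=0.2):
    out = dict(ind)
    for k in SPACE:
        if random.random() < rate:
            out[k] = random.choice(SPACE[k])
    return out

def crossover(a, b):
    return {k: (a[k] if random.random() < 0.5 else b[k]) for k in SPACE}

def maximize_M(fitness, pop_size=20, generations=30):
    """Ordinary elitist GA: keep the best quarter, refill with children."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

# Toy surrogate for M (the real fitness would run a NEURON simulation):
def toy_M(ind):
    return (ind["v_rest"] == -81) + (ind["spine_density"] == "CL") \
         + (ind["nmda"] == "NS/2") + (ind["ampa"] == "NS")

best = maximize_M(toy_M)
```

Because the elite is preserved each generation, the best parameter set found never degrades across generations.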
In a recent study, Cuntz et al. proposed a scaling law relating the total dendritic length, the number of branching points and the number of synapses 17,18 . By exploiting this law, the putative number of spines for each dendritic branch can be extracted to infer the spine distribution along the dendrite segments. Since Cuntz's law has not yet received exhaustive experimental support, we additionally investigated different values of synaptic density to address possible effects. The spatial distribution of synapses represented a further open question, because it is still debated whether dendritic spines are placed according to deterministic schemes (e.g. the 3D helix-shaped arrangement in Purkinje cells) or to random arrangements [19][20][21][22] . Finally, we included other biophysical properties, namely the membrane resting potential and the number of NMDA and α-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors, because they could have relevant consequences for input discriminability.
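The two candidate placement schemes can be sketched as below. This is our illustration, not the paper's code: function and parameter names are ours, and the real framework derives the per-branch spine count from Cuntz's law rather than from a free density argument.

```python
import random

def place_spines(branch_length_um, density_per_um, mode="linspace", rng=None):
    """Spine positions (µm) along a branch under the two models considered:
    equidistant ('linspace') or uniformly random ('uniform')."""
    n = max(1, round(branch_length_um * density_per_um))
    if mode == "linspace":
        # constant spacing, inversely proportional to the spine density
        step = branch_length_um / n
        return [step * (i + 0.5) for i in range(n)]
    # uniformly random locations along the branch
    rng = rng or random.Random(42)
    return sorted(rng.uniform(0, branch_length_um) for _ in range(n))
```

For example, a 100 µm branch at 0.5 spines/µm yields 50 equidistant spines spaced 2 µm apart under the first model, and 50 unordered draws from [0, 100] µm under the second.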
To target the first aim, we quantified the number of discriminable patterns in relation to two relevant morphological properties: the number of dendritic branches and the total dendritic length. To address the second aim, we retrieved the morphological data from the largest open repository of neuron reconstructions (neuromorpho.org). We provide a set of potential inferences by performing comparisons across neuron types, animal species and brain regions, suggesting new perspectives on the roles of dendritic morphological features within animal species and brain region phylogeny.

Results
In this work we set out to quantify the discriminability of correlated and spatially clustered dendritic inputs starting from morphological features of dendritic trees. We used a purely computational approach based on the NEURON simulator and on the large repository of neuron reconstructions, neuromorpho.org (Fig. 1F). Primarily, we asked whether a general rule linking discriminability capacity and morphological features could be extracted; subsequently, we adopted such a rule for a comparative neuroanatomical inspection spanning species, neuron types and central nervous system regions. Two dendritic input patterns are considered discriminable according to a simple criterion which establishes whether a relevant number of data points (equivalent to 10 ms, see Materials and Methods for further details and Fig. 2A-C) differs between the two somatic waveforms.

Input Discriminability. We developed a computational framework to investigate the morphological correlates of input discriminability in dendrites. To this aim, we designed an ordinary genetic algorithm to tune a set of biophysical properties to be combined with geometries obtained from reconstructed neurons. These properties included the density of AMPA and NMDA receptors, the synaptic density, the spine spatial distribution and the membrane resting potential. Indeed, the number of AMPA and NMDA receptors has been shown to be critical for input discrimination, as has the resting potential 8 . Furthermore, because it is generally unknown how spines are located along dendrites [19][20][21][22] , we considered two models for the spatial distribution of spines along the dendrites: equidistant and uniformly random. In the first, the spine distance is constant and inversely proportional to the spine density, while in the second the spine locations are drawn from a uniform random distribution.
At last, to establish the spine density we exploited Cuntz's law, which relates the total dendritic length, the volume and the number of synapses. We also evaluated the potential effects on the results of assuming different density values. The estimation of the discriminability capacity of a cell (M) involved heavy computations that can last several weeks, which precluded a broad analysis of the entire neuromorpho dataset (more than ten thousand neuronal reconstructions). For this reason, we selected a sample of 100 neurons randomly chosen from the entire dataset. A subset of this sample is shown in Table 1.

Figure 1 (caption fragment). … patterns of simultaneous activation for the seven spines that correspond to the six different waveforms in (D). (E) A scheme of the computational framework where boxes in red represent the variable input files, boxes in black represent constant input files (where they connect to the central ellipse) and the blue box represents the only output file. Reconstructed neurons are first converted into the NEURON geometry syntax; then, once the synaptic positions along the dendritic tree and the set of active synapses are specified, the NEURON simulation produces a set of somatic voltages that are analyzed by the discriminability algorithm (see Materials and Methods) to quantify how distinguishable the waveforms are.

Figure 2. Explanation of the method devised to quantify waveform discriminability. In this toy example, 208 activation patterns along a fixed dendritic branch are used. (A) The 208 somatic waveforms can be qualitatively grouped into three groups (yellow, red and purple). (B) The method first computes a similarity matrix, which can be seen as the adjacency matrix of a graph. (C) The number of connected components, i.e. the number of complete disjoint subgraphs, corresponds to the number of previously visually identified discriminable waveforms (purple, red, yellow). (D,F) All waveforms discriminable, respectively, by the branches highlighted in blue and red of the dendritic tree of the cell (Cell-1a, Mouse, Ventral Thalamus) displayed in (E). The soma centroid is highlighted in purple. (G) The result of the discriminability analysis for all dendritic branches of the cell.

Scientific Reports | 5:11543 | DOI: 10.1038/srep11543
We first inspected some basic relationships among the number of dendritic branches, the total dendritic length and the number of spines (Fig. 3A-C) across the entire neuromorpho repository, assuming the spine density implied by Cuntz's law. We found weak correlations between the number of branches and the number of spines (Fig. 3A, R = 0.231, p < 0.007, permutation test) and between the total dendritic length and the number of branches (Fig. 3C, R = 0.326, p < 0.003, permutation test), but a substantial correlation between the total dendritic length and the number of spines (Fig. 3B, R = 0.560, p < 0.000, permutation test). These results indicated that at least two morphological features (e.g. the number of branches and the number of spines) are required to capture most of the dendritic morphological information, and this drove our search for an analytical law.
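A correlation permutation test of this kind can be sketched as below. This is our illustration, not the paper's code; the number of permutations and the two-sided use of |R| are assumptions.

```python
import random

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the correlation between x and y:
    shuffle y repeatedly and count how often the shuffled |R| reaches the
    |R| of the original pairing."""
    rng = rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    y = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(pearson_r(x, y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one smoothing avoids p = 0
```

For strongly correlated data the shuffled correlations cluster near zero, so the observed |R| is rarely (if ever) reached and the p-value is small.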
The computational approach adopted in this work did not allow the appraisal of the biophysical properties of each reconstructed neuron; hence we adopted an optimization strategy, based on genetic algorithms, which selected within a parameter space the parameters that maximized M. By applying this framework to a randomly selected pool of 100 cells, we found that the M values fit very well an a·n·log n + b law (adjusted r-square = 0.996, Fig. 3D), where n is the number of spines per branch. One more relationship (a·x² + b·x + c) reached the same goodness of fit but had three parameters, so we preferred the simpler one; the linear model a·x + b fitted worse (adjusted r-square = 0.865).
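Because each candidate model is linear in its parameters once the feature is transformed, the comparison can be sketched with a closed-form least-squares fit. This is our illustration: the data below are synthetic, generated from an assumed a·n·log n + b ground truth, not taken from the simulations.

```python
import math

def fit_affine(f, y):
    """Least-squares fit of y ≈ a*f + b for a transformed feature f."""
    n = len(y)
    mf, my = sum(f) / n, sum(y) / n
    a = sum((x - mf) * (t - my) for x, t in zip(f, y)) \
        / sum((x - mf) ** 2 for x in f)
    return a, my - a * mf

def adjusted_r2(y, yhat, n_params):
    """Coefficient of determination penalized for the parameter count."""
    n = len(y)
    my = sum(y) / n
    ss_res = sum((t - h) ** 2 for t, h in zip(y, yhat))
    ss_tot = sum((t - my) ** 2 for t in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

# Synthetic per-branch data following an assumed a*n*log(n) + b ground truth
ns = list(range(10, 500, 10))
y = [6.17 * n * math.log(n) - 3.07 for n in ns]

f_nlogn = [n * math.log(n) for n in ns]
a, b = fit_affine(f_nlogn, y)
r2_nlogn = adjusted_r2(y, [a * x + b for x in f_nlogn], n_params=2)

a2, b2 = fit_affine(ns, y)   # plain linear model for comparison
r2_lin = adjusted_r2(y, [a2 * x + b2 for x in ns], n_params=2)
```

On data that truly follow the n·log n law, the n·log n fit recovers the generating coefficients exactly, while the linear model leaves residual structure and a lower adjusted r-square.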
Subsequently, we inferred that the equivalent value of M for a neuron was the sum of the M values of its dendritic branches, i.e. M = Σ_i (6.17·n_i·log n_i − 3.07), where n_i is the number of spines on the i-th branch. Having a computationally fast equation to accurately estimate the number of discriminable input patterns entirely from the morphological features of the neuron, we explored the consequences of this law across the entire neuromorpho dataset.
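Under this rule, estimating M for a reconstructed neuron reduces to a sum over its branches. A minimal sketch follows; the logarithm base is not stated in the text, so the natural log is assumed here, and the spine counts in the usage example are illustrative.

```python
import math

def branch_M(n_spines, a=6.17, b=3.07):
    """Discriminable patterns for one branch under the fitted law:
    M_branch = a * n * log(n) - b  (natural log assumed)."""
    return a * n_spines * math.log(n_spines) - b

def neuron_M(spines_per_branch):
    """Whole-neuron M: sum of the per-branch contributions."""
    return sum(branch_M(n) for n in spines_per_branch if n > 1)

# Illustrative toy neuron: 40 branches with 100 spines each
M = neuron_M([100] * 40)
```

With these assumed numbers, each branch contributes roughly 2.8 × 10³ patterns and the toy neuron totals on the order of 10⁵, consistent with the magnitudes reported below for real reconstructions.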
Hence we analyzed the behavior of M in comparison to the other morphological features and found that it was tightly correlated with the number of spines (Fig. 4A, R = 0.982, p < 0.000, permutation test) and with the total dendritic length (Fig. 4C, R = 0.617, p < 0.000, permutation test), and weakly related to the number of branches (Fig. 4B). We then examined how M depended on the selected biophysical parameters and morphologies. We found that the resting potential strongly influenced M: in the range [−83, −79] mV, the number of discriminable inputs was considerably higher (p < 0.000, Kruskal-Wallis test, Fig. 4D). Essentially, hyperpolarized cells recognized more input patterns. In addition, we found that the synaptic density suggested by Cuntz's law (CL) was the density that produced the best input discriminability (p < 0.000, Kruskal-Wallis test, Fig. 4E). In particular, the CL density maximized M for neurons of most types and species, except for the neocortical neurons of primates, where CL/2 performed better.

Table 1. Sample of the analyzed 100 reconstructed neurons used to extract the M law. The 5 biophysical parameters of each neuron were selected by a genetic algorithm that maximizes M. The first column indicates the species, the cell type and the nervous system region from which the cell was extracted. The second column reports the name used in the neuromorpho.org repository. The third column contains the membrane resting potential (V_rest, mV). The fourth column represents the spine spatial distribution (SP), which could be equidistant (Linspace, LS) or uniformly random (UN). The fifth column shows the spine density, where CL indicates the value suggested by Cuntz's law. The sixth and seventh columns report the number of AMPA and NMDA receptors allocated along the branches; NS stands for number of spines, meaning that each spine had the receptor, whereas NS/2 indicates that only half of the spines had the specific receptor. The last column represents the maximum value of M obtained for the cell.
Further, we analyzed the behavior of M for different concentrations of AMPA and NMDA receptors. Fig. 4F shows that, when each spine had an AMPA receptor while the number of randomly assigned NMDA receptors varied, cells maximized their input discriminability when the NMDA concentration was near 50% of the number of spines (p < 0.000, Kruskal-Wallis test). Conversely, varying the AMPA concentration produced weaker, though still significant, effects on M (p < 0.007, Kruskal-Wallis test, Fig. 4G), with the maximum of M occurring at an AMPA concentration of 100%. This result suggests that NMDA receptors were more influential than AMPA receptors for input discriminability and that a specific AMPA/NMDA ratio (2:1) brought the cell into the best functional regime for input discrimination. Finally, we also analyzed the two models for the spatial distribution of spines along dendrite segments and found that the equidistant spine model (Linspace) preferentially maximized M (p < 0.000, ranksum test, Fig. 4H). Even in this case, random spine locations were typically preferred in neocortical neurons, without, however, any obvious distinction of species, neuron type or cortical layer.

Neuroanatomical Comparisons. Having established a quantitative interpretation of the dendritic tree in terms of storage capacity, we proceeded by comparing the number of dendritic branches across animal species, brain regions and neuron types. Although the biophysical parameters adopted in the previous analysis were chosen from a computational perspective that may lack full biological plausibility, we performed this comparative analysis relying on the consistency and robustness of the results.
Neuron reconstructions were taken from the neuromorpho.org repository, the largest collection of publicly accessible neuronal reconstructions, gathering 10004 neurons of 18 cell types, in 17 brain regions and from 15 animal species (neuromorpho version 5.6, up to May 2014). Across the entire collection, neurons had an average M value of 501549 (SD = 857135, the root of the phylogenetic tree in Fig. 3B), with substantial variance across the different classifications. In general, therefore, by the electrodynamical mechanisms inserted in the neuron reconstruction models, a single neuron can distinguish more than half a million correlated inputs dispersed along its dendritic branches.

Species.
We first compared dendritic tree morphologies across animal species, selecting 15 species out of the 20 present in the neuromorpho repository and putting aside scarcely represented species (agouti, cricket, rabbit, turtle and lobster, with fewer than 15 reconstructed neurons each). The number of samples and the brain regions they came from are reported in Table 2, while Fig. 5A-B shows the phylogenetic trees of the analyzed animal species. Leaves of the trees contain capitalized words giving the exact species name, and nodes between the root and the leaves represent the scientific classification in kingdom, phylum, class, order, family and genus (where applicable). Numbers below the names report features of the neurons of that species (the total dendritic length in Fig. 5A or M in Fig. 5B; the second number indicates the standard deviation). All pairwise comparisons below were performed with the non-parametric Wilcoxon ranksum test and p-values were smaller than 0.000 except where otherwise specified.
An early phylum-level classification showed that neurons from Chordata had M values ~90% higher than Arthropoda and 2275% higher than Nematoda. Interestingly, Rodentia had higher M values (+138%) than Primates. Cyprinidae (+24%) and Ambystomatidae (+61%) also had higher M values in comparison to Primates. In particular, human M values exceeded only those of C. elegans (+1027%) and blowflies (+241%) and were statistically equivalent to monkeys (P = 0.172), drosophila (P = 0.525) and elephants (P = 0.317). Such results were quite unexpected because primate and elephant brains are classified as more developed 23 in terms of cognitive ability, awareness, etc. We therefore tried to weight the previous ranking by multiplying the average M value of each species by the number of neurons of its central nervous system (where available). Fig. 3C illustrates the new scenario: despite the low per-neuron M values, the human central nervous system gained the first position, scoring more than 10^16 recognizable patterns, followed by elephant, monkey, cat, rat, mouse, zebrafish, drosophila and C. elegans. In terms of basic morphological features, Primate neurons also have fewer dendritic arborizations (−54%) than Rodentia and fewer (−77%) than Diptera. Remarkably, human neurons have a number of dendritic branches comparable with goldfish (+0.03%, P = 0.187), monkey (−26%, P = 0.074) and elephant (−13%, P = 0.240), and more branches than C. elegans (+730%) and crickets (+273%).

Table 2. Features of neurons extracted from the selected 15 animal species. The second column indicates the number of cells used from that species, the third column indicates the average number of dendritic branches (the second number is the standard deviation). The last column reports the brain regions from which the selected cells were extracted.
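The weighting step amounts to multiplying each species' average per-neuron M by an estimate of its total CNS neuron count. A sketch with order-of-magnitude counts from the literature; the per-neuron M values below are placeholders of roughly the magnitude reported in this study, not the paper's actual inputs.

```python
# Approximate CNS neuron counts (order-of-magnitude literature figures)
NEURON_COUNT = {"human": 8.6e10, "mouse": 7.1e7, "c_elegans": 302}

# Average per-neuron M: illustrative placeholders only
AVG_M = {"human": 2.5e5, "mouse": 5.0e5, "c_elegans": 2.0e4}

def cns_capacity(species):
    """Weighted rank: average per-neuron M times CNS neuron count."""
    return AVG_M[species] * NEURON_COUNT[species]

ranking = sorted(NEURON_COUNT, key=cns_capacity, reverse=True)
```

Even with a lower per-neuron M, the sheer number of human neurons pushes the whole-CNS figure past 10^16 in this toy calculation, illustrating how the weighting reverses the per-neuron ranking.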
Lastly, we asked whether the reduced discrimination capability found in human neurons held when considering only neocortical neurons instead of entire nervous systems. Again, human neocortical neurons had an average M value of 245978 (SD = 135640) while rodent neocortical neurons averaged 487140 (SD = 800201), confirming the generally lower capacity of human neurons, in comparison to rodent neurons, to discriminate dendritic input patterns.
These comparisons highlighted the surprisingly low rank of human neurons among the analyzed animal species, suggesting that the markedly superior cognitive abilities of humans may not be related to the richness of dendritic storage mechanisms. In conclusion, the results of this section indicate specific evolutionary strategies adopted in primates to augment their memory (i.e. the ability to distinguish patterns), which result in an increased number of neurons with a concurrent reduction of single-neuron memory capacity.
In the current ranking, bipolar and multipolar types are represented by few reconstructions, each from a specific region: the former from the Nucleus laminaris of the chicken brainstem and the latter from the rat perirhinal cortex. Such results should therefore be weighed against the narrowness of these samples. However, neurons of the Nucleus laminaris are crucially involved in essential sound localization functions, especially in birds and reptiles. Furthermore, such neurons are coincidence detectors of sound information and constitute a fundamental processing stage of binaural hearing. Similarly, perirhinal cortices integrate high-level multisensory inputs from many sensory cortices in all mammals, and the high M values could be the result of an increasing evolutionary demand to efficiently distinguish abstract information. Another cell type which deserves further mention is the Von Economo neuron, which had the lowest M values. Also called spindle neurons, von Economo neurons are implicated in emotions and social behaviors, and their reduced capacity to discriminate input patterns could underline their hypothesized role as communicators among high-order cortical areas in large-brained animals.
Brain Regions. In the last comparative analysis we investigated the neuronal discrimination capability of different brain regions. We first divided brain regions into cortical and non-cortical ones and subsequently selected only those that were abundantly represented (at least two species and more than 10 neurons in total).
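The selection criterion (at least two species and more than 10 neurons in total) can be written as a simple filter; the region names and counts below are hypothetical, purely to illustrate the rule.

```python
# Hypothetical region metadata: which species contribute, and how many cells
regions = {
    "neocortex":   {"species": {"human", "mouse", "rat"}, "n_cells": 3200},
    "hippocampus": {"species": {"mouse", "rat"},          "n_cells": 1500},
    "lobster_stg": {"species": {"lobster"},               "n_cells": 8},
}

# Keep regions represented by >= 2 species and > 10 neurons in total
selected = [name for name, info in regions.items()
            if len(info["species"]) >= 2 and info["n_cells"] > 10]
```

In this toy metadata, the single-species, 8-cell region is excluded while the two abundantly represented regions pass the filter.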
Lastly, we analyzed the M distribution among the most represented human cortical regions (Table 6) and found that frontal lobe (+12%) and parietal lobe (+11%) neurons had greater M values than other neurons, while prefrontal cortex (P = 0.446) and temporal lobe (P = 0.649) neurons displayed no significant differences. Occipital lobe (−9%) and anterior cingulate cortex (−65%) neurons had significantly lower M values on average. Although most comparisons were significant, the relative differences were much less pronounced and M values appeared more uniform than in the previous distributions.
By integrating all data, we found that non-cortical neurons had much higher M values than cortical neurons in general and than human cortical neurons in particular (+96% and +210%, respectively), and that M values of human cortical neurons were smaller (−37%) than those of non-human neurons. The results of this section confirmed the smaller capability of human neurons to discriminate input patterns through synaptic clustering, potentially suggesting that network mechanisms of memory allocation (instead of subcellular ones) were preferred in the evolutionary lineage of primates (monkeys and humans) and big mammals (elephants). Furthermore, such a hypothesis can explain the profound discrepancies between cortical and non-cortical brain regions.

Discussion
In this paper, we investigated the computational implications of a class of neuronal models which enable autonomous recognition of input patterns within their dendritic branches through differentiated somatic voltage waveforms. We found a predictive rule that remained invariant across a sample of 100 neuron reconstructions from the neuromorpho repository. Indeed, the total number of patterns discriminable by the whole dendritic tree (M) could be approximated by a Σ_i (a·n_i·log n_i + b) law, where n_i is the number of spines along the i-th dendritic branch and a and b are two constants. By exploring the entire neuromorpho repository, we found a set of remarkable comparative results spanning animal species, neuron types and brain regions. Interestingly, primates did not exhibit the highest number of discriminable patterns per neuron, even when considering solely neocortical neurons, but humans outperformed other species when M was weighted by the total number of neurons. In addition, cortical regions had a smaller number of discriminable patterns per neuron in comparison to non-cortical regions, possibly indicating different memory allocation strategies. It could be inferred that non-cortical neurons rely on subcellular mechanisms, in contrast to the cortical multicellular/network mechanisms, a distinguishing strategy that may explain primate versus non-primate imbalances.

Issues in Biophysical Parameter Tuning. The biophysical modeling of neurons requires plenty of parameters, from the geometry of the cell to numerous electrochemical specifications of each compartment. The only available knowledge was the cell geometry resulting from the neuronal reconstructions. We therefore chose a set of parameters critical for input discriminability and fixed the remaining parameters across all simulated models. This approach has important implications concerning the biological plausibility of the results. For instance, some neurons do not have a random spatial distribution of spines, while our computational framework might have selected that distribution because it increased the M estimate. Likewise, many neurons are not known to operate in the specific resting potential ranges determined by the algorithm, at least in normal physiological conditions. However, our neuroanatomical comparisons showed a rich repertoire of consistent results which corroborate the proposed framework. First, synaptic density is considerably higher in rodent brains than in primate brains 24 , so it is reasonable to expect that M is considerably higher for rodents, as emphasized by our comparisons. In addition, the average dendritic length of rodent neurons is substantially higher than in primates, suggesting more important contributions to input processing [10][11][12] . Furthermore, cortical motor neurons have giant dendritic trees which finely modulate the impinging complex interplay of central afferents to achieve a balanced output into the corticospinal tracts; a comparable design is also evident in spinal cord motor neurons. This agrees with our comparisons, which reported the highest M values for cortical and spinal motoneurons.
Finally, it is reasonable to expect that subcortical regions would have higher M values than cortical regions, because neuronal density and dendritic lengths are considerably greater in non-cortical structures than in the cortex.

Information Processing Considerations.
Memory and, in general, the ability to store information is an essential evolutionary trait requiring complex associations among spatio-temporally arranged inputs. Such signals, widely heterogeneous, imply the storage of increasing amounts of information. This growing repertoire of inputs conflicts with many biological constraints 25 . In fact, one fundamental limitation is represented by the metabolic cost of neuron signaling, which limits the number of neurons.
Therefore, in this contradictory scenario, it became crucial to provide neurons with compensatory high memory storage. In this work, we found that N dendritic branches, each with n_i spines, allow for the discrimination of more than Σ_{i=1..N} n_i·log n_i distinct patterns. This can represent a plausible computational breakthrough, since neurons with several thousand spines along their dendritic branches can recognize hundreds of thousands of different synaptic activation combinations. In addition, from a computational perspective, neurons and neuronal circuits also meet the storage demand by compression 26 , suggesting that information can be encoded cheaply. In our experiments, the model performed data reduction of input patterns by encoding large input patterns in voltage waveforms lasting 100-150 milliseconds. From a theoretical perspective, a neuron would maximize its input discriminability (M) by collapsing its dendritic tree into a single long branch. However, this simple strategy would impoverish the number of active inputs, because one of the fundamental roles of dendrites is to provide adequate spatial coverage of the neighboring space, which is instead achievable by a tree structure 17 .
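A quick order-of-magnitude comparison illustrates the compression at work. This is our arithmetic, under the assumption of a natural logarithm in the fitted law; the figures are not taken from the paper's simulations.

```python
import math

n = 1000                                         # spines on a single branch
raw_patterns = 2 ** n                            # all possible coincident subsets
discriminable = 6.17 * n * math.log(n) - 3.07    # fitted-law estimate (ln assumed)

# A >300-digit space of possible activations is collapsed onto roughly
# 4e4 distinguishable somatic waveforms: massive lossy compression.
```

The branch cannot (and need not) report which of the ~2^1000 subsets fired; it maps them onto on the order of tens of thousands of distinguishable somatic responses.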
Neuroanatomical Considerations. The importance of the information stored in the different brain regions deserves a dedicated discussion. Neurons are supposed to represent information at several levels of abstraction 27 ; hence it is reasonable to assume that some information is more crucial than other. Unexpectedly, cortical neurons have less discrimination capability than subcortical neurons. The surprise is justified by the fact that the neocortex processes information for higher-order tasks, and we thus expected greater memory power in its neurons. A possible explanation may lie in different storage forms of information, with degrees of density progressively increasing as neurons rank higher in a network. An alternative explanation may also account for the mismatch between the numbers of discriminable patterns of cortical versus non-cortical neurons. Namely, cortical regions appear to be the most distributed information-processing systems and, as such, show higher resilience to biological insults, that is, superior fault tolerance coupled with a higher degree of graceful degradation, thus allowing for the instantiation of potential vicarious or compensatory mechanisms. The higher the rank of a brain region in signal elaboration, the higher its fault tolerance may be, as an individual and species preservation strategy; losses of high-M-capacity neurons could be equivalent to less severe functional losses. In addition, from a broader perspective taking into account the whole neuro-glio-vascular compartment, vascular failures can be equiprobable across tissue volumes, and because the cortex occupies an abundant portion of the total brain volume (77% in humans) 28,29 , this strategy would minimize information losses in brain failures.
Another interesting consideration concerns the metabolic costs of synaptic transmission: the human brain accounts for only 2% of the body's mass but uses more than 20% of its total energy. In particular, about 55% of the adenosine triphosphate (ATP) consumed by the brain supplies pre- and post-synaptic mechanisms 30 . In addition, it has been estimated that, for a single vesicle release, more than 42% of the energy is drained by NMDA signaling and 40% by non-NMDA signaling (excluding metabotropic signaling, e.g. mGluR) 31 . Therefore, energy-expensive neurons with high memory storage could be metabolically unsuitable in brains with more than a billion neurons. We also propose a further possible interpretation of the surprising discrepancy between the higher evolutionary rank of cortical neurons and their lower memory capabilities. Greater memory storage in a neuron could be achieved at the expense of fast plasticity and fast responses in highly loaded networks. The higher the load of a circuitry (such as cortical circuitries convey), the faster the expected response and adaptivity requirements. A dendritic receptor distribution or branching that enables large memory loads could conflict with the need for transience that multiple simultaneous tasks might require; hence the selective drive toward rapidly adapting neurons rather than memory-loaded units. The neurodynamic profiles and neurochemistry of the cortex could support this hypothesis. Namely, the strong cortical neuromodulatory component (serotonergic and cholinergic above all) behaves like an overall addresser of cortical outputs, where the fast components (e.g. the glutamatergic AMPA-NMDA drive at the synapses) could represent the continuously engaged component for fast adaptation to current conditions.
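The metabolic figures cited at the start of this paragraph combine as follows; this is our back-of-envelope arithmetic, not an estimate from refs 30,31.

```python
body_energy_to_brain = 0.20     # brain: >20% of whole-body energy use
brain_atp_to_synapses = 0.55    # ~55% of brain ATP to synaptic mechanisms

# => synaptic signaling alone accounts for roughly 11% of whole-body energy
synaptic_share = body_energy_to_brain * brain_atp_to_synapses
```

Roughly one tenth of the body's entire energy budget going to synaptic machinery makes the selective pressure against metabolically expensive high-storage neurons concrete.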
A heavy neuromodulatory component could conflict with accumulating storage in neurons, where memories should be expressed as they were stored, unaffected by waves of modulators. Complementarily, modulation could instead represent the fast-written, fast-deleted slate hosting the responses of low-memory network neurons.
Limitations and Developments. This work is intended as an exploratory study inspecting the potential of dendritic morphological features and synaptic clustering in a computational fashion. Many experiments and improvements are required to conclusively settle its results. First of all, the predictive rule for M has been extracted from only 1% of the available repository, because computational times were deeply constraining, growing nonlinearly with the total dendritic length. Second, although many comparisons were statistically significant, the distribution of neuron reconstruction samples among species, cell types and brain regions was strongly non-uniform. We believe that new versions of the dataset will improve and correct statistics and results (neuromorpho.org has recently released version 5.7 of the repository, with more than 3000 additional reconstructions).
Finally, although about 80% of neuronal activations are glutamatergic, other important neurotransmitters (GABA, acetylcholine, dopamine, serotonin, etc.) could play an important role in input discriminability.

Materials and Methods
One of the aims of this work was to figure out how dendritic morphological features impact the capability of neurons to discriminate coincident input patterns, taking into account the input grouping mechanism of synaptic clustering along single branches. We used a computational framework that combines Matlab routines with external calls to the NEURON simulator (Fig. 1). Neuron reconstructions are first downloaded into local directories and loaded through a modified version of the load_tree function of the Matlab TREES toolbox 17 . Subsequently, the cell geometry file is generated by the neuron_tree function (a modified version allows better interoperability with the NEURON environment), while other TREES toolbox functions collect morphological statistics (len_tree.m, vol_tree.m, dissect_tree.m). Furthermore, a couple of files specifying the biophysical behaviors of membranes, channels and synapses are loaded into the NEURON environment, attaching to the cell morphology the active and passive dendritic conductances and the AMPA and NMDA receptors at the synaptic points. The source code of the entire computational framework can be downloaded at https://sites.google.com/site/antoniogiulianozippo/codes.
Discriminability of Somatic Waveforms. One of the working hypotheses of this work was that clusters of synaptic activations along a single branch of the dendritic tree provoke unequivocal voltage waveforms at the soma. Since the central aim was to quantify the discriminability of the somatic waveforms, we designed a formal notion of waveform discriminability and developed an algorithm to efficiently estimate it. Let W = {w_1, …, w_n} be the set of somatic waveforms collected from NEURON simulations, each represented with 2 integer digits and 6 floating digits expressing millivolts.
Typical patch-clamp electrophysiological recording setups carry noise levels of 10-20 μV; for this reason we truncated the collected voltage waveforms to the second decimal digit, obtaining an equivalent precision of 10 μV. In addition, the somatic waveform recordings lasted 200 ms with dt set to 25 μs, gathering N = 8000 voltage data points for each waveform. The threshold d was set to 400 (equivalent to 10 ms); higher values of d tended to discriminate fewer waveforms and, vice versa, smaller values of d induced more discriminated waveforms.
Finally, we defined a fast algorithm that evaluates large sets of waveforms and returns the number of discriminable ones. It builds a distance matrix, later used to isolate the groups of similar waveforms, and exploits disjoint-set data structures with the union heuristic to identify the representative waveforms as the number of produced disjoint complete graphs 33 .
Algorithm 1: Algorithm for the estimation of the number of discriminable waveforms, where d ∈ {1, …, N} is the threshold for the discriminability and P ∈ {1, …, N} is the estimated number of discriminable waveforms. Binary operators (==, >) applied to a matrix return a matrix of boolean values, and the function get_connected_components(X) returns the number of connected components of the graph G represented by the adjacency matrix X.
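The core of Algorithm 1 can be sketched in a few lines of Python. The sketch assumes that the pairwise distance is the number of time points at which two truncated waveforms differ, and that two waveforms fall into the same equivalence class when this count is below the threshold d; the union-find forest stands in for the disjoint-set structure cited in the text, and rounding stands in for the truncation to 10 μV precision.

```python
import numpy as np

def count_discriminable(waveforms, d=400, precision=2):
    """Estimate the number of discriminable waveforms (sketch of Algorithm 1).
    waveforms: (n, N) array-like of somatic voltages in mV.
    Two waveforms are considered indistinguishable when, after rounding to
    0.01 mV (10 uV), they differ at fewer than d time points."""
    W = np.round(np.asarray(waveforms, dtype=float), precision)
    n = W.shape[0]
    parent = list(range(n))          # disjoint-set forest

    def find(i):                     # root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):                 # merge the two classes
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):
        for j in range(i + 1, n):
            if np.count_nonzero(W[i] != W[j]) < d:   # similar -> same class
                union(i, j)

    # number of connected components = number of discriminable waveforms
    return len({find(i) for i in range(n)})
```

As in the text, raising d merges more waveforms and therefore lowers the count of discriminable ones.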
Fig. 2 shows a toy example with 208 somatic waveforms generated by 208 random activation sequences. As visually appreciable in Fig. 2A, the cell essentially exhibited three shapes (red, purple and yellow). In a first phase, the algorithm computes the similarity matrix over all waveform pairs (Fig. 2B). Interpreting the obtained matrix as a graph, the algorithm then computes the number of connected components (Fig. 2C), which coincides with the number of distinguishable waveforms.
Algorithm 1 was implemented in Matlab using the CUDA computational framework, which speeds up the execution time by up to a hundredfold (60× on average).
Estimating Synaptic Distribution over Dendritic Trees. The density and the spatial distribution of axodendritic synapses are generally unknown. In a recent prominent work, Cuntz et al. proposed and partially validated a simple rule relating the total length of a dendritic tree, the number of synapses and the dendrite volume 18 : L = c n^(2/3) V^(1/3), where L is the total wiring length, c a proportionality constant, n the number of synapses and V the total volume. Assuming that each synapse has a spherical basin of influence of radius r, so that V = n (4/3)π r³, the equation becomes L = c n r (4π/3)^(1/3). Since we needed the number of putative spines that an entire dendrite should have, we solved the previous equation for n, obtaining n = L / (c r (4π/3)^(1/3)). Thus we can calculate the number of putative synapses adduced by the dendritic morphology, and we derived an equation to distribute spines over dendrites and branches: n_i = n l_{b_i} / L, where l_{b_i} is the length of the i-th branch and n_i is the number of spines in the i-th branch. Although this perspective shows notable similarities with the available literature (for instance, mouse cortical synaptic density ranges from 0.5 to 2.1 spines per μm 29,34 , and our approach predicted a mean of 1.54 with a standard deviation of 0.7), we also considered synaptic density values higher and lower than those predicted by the Cuntz equation to evaluate possible conditionings on the results.
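Under the spherical-basin assumption the spine count is linear in the total wiring length, so spines can be allotted to branches proportionally to branch length. A minimal Python sketch, where the values of c and r are hypothetical placeholders for illustration only (not the constants used in the paper):

```python
import math

def putative_synapse_count(L, c=1.0, r=0.5):
    """n = L / (c * r * (4*pi/3)**(1/3)), obtained from L = c * n**(2/3) * V**(1/3)
    with V = n * (4/3) * pi * r**3 (spherical basins of influence).
    c and r are placeholder values; L is the total wiring length in um."""
    return L / (c * r * (4.0 * math.pi / 3.0) ** (1.0 / 3.0))

def distribute_spines(branch_lengths, n_total):
    """n_i = n * l_i / L: spines allotted proportionally to branch length."""
    L = sum(branch_lengths)
    return [n_total * l / L for l in branch_lengths]
```

Note that n/L is constant in this formulation, i.e. the rule predicts a uniform linear spine density, consistent with the constant density range quoted above.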
Given n_i for each segment of the dendritic compartments of each reconstructed neuron, we first performed a theoretical combinatorial analysis of the possible number of combinations of correlated inputs, which grows as a binomial coefficient in the number of spines. Hence a preliminary theoretical examination proposed an exponential law for the number of possible input activations. Such a relationship produces unfeasible instances even with a few tens of spines; therefore, discarding the exhaustive search of all possible activation patterns, we had to devise an alternative strategy compatible with current computational architectures. For this reason, we developed a stochastic optimization algorithm to face the intractable number of possible input patterns.
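The size of the search space can be checked directly; the figures below use the toy case from the later example (7 activation points among 40 spines):

```python
from math import comb

# Number of coincident activation patterns of k inputs among n spines
# grows as the binomial coefficient C(n, k).
n_patterns = comb(40, 7)
print(n_patterns)  # 18643560: already unfeasible for exhaustive simulation
```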

Stochastic Estimation of the Number of Discriminable Patterns.
Our strategy rests on the assumption that if an instance of our NEURON model, exerted with n different activation patterns, recognizes m ≤ n of them, then when the same model is exerted with a number N > n of patterns it should recognize a number M ≥ m of them; otherwise, the model has already expressed its maximum number (m) of discriminable patterns. Essentially, we assumed that the number of discriminable patterns is a monotonically growing function of the number of presented patterns. Under this assumption, we elaborated a stochastic estimation strategy that looks for a plateau of this function, corresponding to its maximum. Specifically, the algorithm starts by probing the discriminability of the dendritic branch for two incremental numbers of randomly generated activation patterns; if the discrete derivative of the two values is positive the algorithm goes on, otherwise, if the derivative is equal to or smaller than zero, it stops and returns the maximum value available at that time. The pseudocode below illustrates the basic computational steps of the presented model. The files passed as arguments correspond to the list of files needed by the NEURON simulations (Fig. 1F): the specification of the neuron morphology (neuron_reconstruction.hoc), of the biophysical compartment properties (biophysical_model.hoc), of the synaptic properties (synapses_specs.hoc) and of the synaptic locations (synapses_locs.dat). The routine returns the estimated number M of patterns discriminable by the dendritic segment. T is the putative number of spines computed by equation (4). The function NEURON_run() triggers the execution of the NEURON model simulation for 200 ms (dt = 25 μs, synaptic release at 50 ms) and returns a set of somatic waveforms, each related to a random synaptic activation sequence.
The function discriminability_analysis(V) returns the number of discriminable patterns according to the criteria presented above and implemented in Algorithm 1. Finally, D represents the current derivative estimation, the stop criterion of the while loop.
The functioning of the algorithm can be better illustrated by a toy example. Suppose we have to estimate the number of discriminable waveforms of a given branch with 40 spines elicited by 7 different activation points (theoretically, there exist C(40, 7) = 18643560 possible combinations!). The algorithm first generates 10 random activation sequences and estimates a current value of M (say 5). It then repeats this step with 20 random activation sequences, returning a second estimation of M (say 7). Since the difference between the two estimations is positive (D = 7 − 5 = 2), the greedy strategy imposes running further, searching for higher values of M. Thus the algorithm proceeds with 30 random sequences and so forth, until the current estimation of M is lower than or equal to the previous one. At this point, the algorithm ends, returning the highest observed value of M.
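The plateau search in the toy example can be sketched as follows, with run_patterns standing in as a hypothetical stub for a NEURON run followed by the discriminability analysis; the step size of 10 mirrors the toy example and is illustrative.

```python
def estimate_M(run_patterns, step=10, max_rounds=100):
    """Greedy plateau search: probe with an increasing number of random
    activation sequences until the estimate of M stops growing.
    run_patterns(k): hypothetical stub returning the number of discriminable
    patterns observed with k random activation sequences."""
    best, prev = 0, -1
    for round_ in range(1, max_rounds + 1):
        m = run_patterns(round_ * step)   # probe with 10, 20, 30, ... sequences
        if m <= prev:                     # discrete derivative <= 0: plateau
            break
        prev = m
        best = max(best, m)
    return best
```

With a stub that saturates (e.g. min(k // 4, 7)), the loop stops at the first non-increasing estimate and returns the plateau value.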
Fig. 2D-G shows the estimation of M for the cell Cell-1a (displayed in Fig. 2E) from the neuromorpho repository 35,36 ; in particular, the discriminable somatic EPSPs for branches 19 and 1 are shown in Fig. 2D. The pool size of candidate solutions (Size) was initially set to 100 and kept constant along the runs, as was the number of iterations (N), fixed to 500. At each step the algorithm obtains the estimations of M for each candidate solution within the pool. The general scheme of the algorithm is composed of three steps: the first selects the two best solutions (the two highest M estimations) through the SelectBest2 function; the second step (CrossOver) randomly swaps the values of the previously chosen solutions; the last step (RandomMutations) imposes, with a low probability (0.1 for each of the 5 parameters), random modifications on the two new candidate solutions. The functions SelectWorst2, AddInPopulation and RemoveFromPopulation serve to keep the pool size constant. The last step calls the function SelectBest, which returns the 5 parameters that maximize M.
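The three-step genetic scheme can be sketched as follows, with fitness standing in for the stochastic estimation of M; the [0, 1] parameter ranges and the uniform per-coordinate swap in the crossover are assumptions made for illustration, not details taken from the paper.

```python
import random

def genetic_search(fitness, n_params=5, pool_size=100, iters=500,
                   p_mut=0.1, rng=random.Random(0)):
    """Sketch of the genetic optimization of M over 5 model parameters.
    fitness(params): hypothetical stub for the stochastic estimation of M."""
    pool = [[rng.random() for _ in range(n_params)] for _ in range(pool_size)]
    for _ in range(iters):
        scored = sorted(pool, key=fitness, reverse=True)
        a, b = scored[0][:], scored[1][:]            # SelectBest2
        for i in range(n_params):                    # CrossOver: random swaps
            if rng.random() < 0.5:
                a[i], b[i] = b[i], a[i]
        for child in (a, b):                         # RandomMutations, p = 0.1
            for i in range(n_params):
                if rng.random() < p_mut:
                    child[i] = rng.random()
        pool = scored[:-2] + [a, b]                  # worst 2 replaced: size kept
    return max(pool, key=fitness)                    # SelectBest
```

Dropping the two worst solutions and reinserting the two offspring keeps the pool size constant, mirroring the SelectWorst2 / AddInPopulation / RemoveFromPopulation bookkeeping described above.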
Statistical Tests. The significance of correlation coefficients was asserted by a permutation test. Given two data sequences, we counted how many times out of 10000 trials, randomly shuffling the element positions of one sequence, we obtained a correlation coefficient greater than the observed one. If the ratio of trials passing this condition was smaller than 0.05 we rejected the null hypothesis, otherwise we accepted it.
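A permutation test of this kind can be sketched as follows (plain Pearson correlation assumed):

```python
import random

def perm_test_corr(x, y, n_trials=10000, rng=random.Random(0)):
    """Permutation test for a correlation coefficient: the p-value is the
    fraction of trials in which shuffling the pairing of the two sequences
    yields a correlation at least as large as the observed one."""
    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        du = sum((a - mu) ** 2 for a in u) ** 0.5
        dv = sum((b - mv) ** 2 for b in v) ** 0.5
        return num / (du * dv)

    observed = corr(x, y)
    ys = list(y)
    hits = 0
    for _ in range(n_trials):
        rng.shuffle(ys)              # break the pairing at random
        if corr(x, ys) >= observed:
            hits += 1
    return hits / n_trials           # reject the null hypothesis when p < 0.05
```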
Statistical comparisons among samples were computed with the non-parametric Wilcoxon signed-rank test at a significance level of 0.05. To compare different distributions of M taking values in distinctive sets, we normalized the M values (M*) by mapping them into the interval [0, 1] using the feature scaling technique 37 .
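The min-max feature scaling used for M* amounts to a one-line transformation:

```python
def feature_scale(values):
    """Min-max feature scaling: M* = (M - min) / (max - min),
    mapping the values into the interval [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```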