This month, I discuss two papers about the axonal initial segment, one paper about the limits of current connectionist networks, and one paper on learning in spiking neurons. The first paper is a short review by myself and my experimental collaborator Maarten Kole on the electrical impact of the position of the AIS. The second one is also a review, but on the molecular organization of the AIS, in particular in light of recent findings from super-resolution microscopy. The third paper is mostly a criticism of current artificial neural networks, showing that they are clueless at seemingly simple tasks, such as determining whether two identical shapes appear in an image. The fourth one is a theoretical paper introducing an interesting idea according to which the discontinuity of spiking is actually a useful feature, which allows a neuron to estimate its causal effect on signals (e.g. reward signals).
From the lab
1. Kole MHP, Brette R (2018). The electrical significance of axon location diversity.
In this short review, we describe the electrical impact of the position of the axonal initial segment (AIS). In many cases, the axon actually stems from a dendrite rather than the soma, which tends to increase the distance between the soma and the AIS. This distance has an impact that is predicted by resistive coupling theory (see Brette (2013) and Teleńczuk et al. (2017)). In particular, it makes the neuron more excitable (lower threshold for a more distal AIS). This effect is in fact rarely observed experimentally. The reason, in my view, is that the voltage threshold is homeostatically tuned, e.g. by acting on the phosphorylation of Nav channels. Another important impact of AIS distance is that it controls the amount of current transmitted to the soma at spike initiation, and therefore the depolarization of the soma. This current can vary by at least an order of magnitude across cell types and conditions, probably by several. In fact, it makes sense that this current is matched to the size of the cell body and proximal dendrites (capacitive area), as we have argued in Hamada et al. (2016). A third aspect appears when the axon stems from a dendrite: in that case, synapses on the axon-bearing dendrite have quite special properties, because the synaptic current leaks to the soma, and the axon lies between the synapse and the soma. Perhaps surprisingly, what determines the postsynaptic potential at the AIS is mainly the position of the axon on the dendrite, rather than the position of the synapse (except for proximal synapses).
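To get a feel for the magnitudes involved, here is a minimal back-of-the-envelope sketch of the resistive coupling idea in Python. All parameter values are illustrative assumptions of mine, not figures from the review: the AIS is treated as a point current source at some distance from the soma, coupled to it through the axial resistance of the intervening stretch of axon.

```python
import math

# Illustrative assumptions (not values from the review):
rho = 1.5    # axial resistivity, ohm*m (~150 ohm*cm, a commonly assumed value)
D = 1e-6     # axon diameter, m (1 um)
dV = 0.05    # AIS-soma voltage difference at spike initiation, V (~50 mV)

def soma_current(distance):
    """Current transmitted to the soma when the AIS sits `distance` meters away."""
    # Axial resistance of a cylindrical axon segment of length `distance`
    Ra = 4 * rho * distance / (math.pi * D**2)
    return dV / Ra  # Ohm's law: current flowing from AIS to soma, in A

for d_um in (5, 50):
    i_nA = soma_current(d_um * 1e-6) * 1e9
    print(f"AIS at {d_um} um from soma: ~{i_nA:.1f} nA delivered to the soma")
```

With these (assumed) numbers, moving the AIS from 5 µm to 50 µm away from the soma reduces the transmitted current tenfold, which illustrates how AIS position alone can account for an order-of-magnitude variation.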
This is a well-written review on the nanoscale molecular organization of the axonal initial segment. It includes a discussion of molecular transport and filtering by the AIS, as well as molecular mechanisms of development and structural plasticity (changes in position and length), pointing to the most recent studies.
3. Ricci M, Kim J, Serre T (2018). Not-So-CLEVR – Visual Relations Strain Feedforward Neural Networks. (Comment on PubPeer; Reviews on OpenReview)
This paper points out some important limitations of current connectionist models, including deep learning networks. While those models can now identify objects in photos with excellent accuracy, they struggle with tasks that seem totally trivial to us, such as deciding whether two shapes are identical or different. Current neural network algorithms are totally clueless at this type of task. More generally, they are unable to identify relations between objects in an image. To me, this is related to a fundamental limitation of the cell assembly concept, which I discuss in my essay Is coding a relevant metaphor for the brain?. In classical connectionism, and more generally in the classical neural coding view, percepts are thought to be represented in the brain by the activation of a cell or cell assembly. But a cell assembly is an unstructured set, much like the “bag-of-words” model of text retrieval. It can represent elements of a perceptual scene, but not relations between elements. Thus, to represent relations with neural activity, a dynamical aspect is needed (hence the proposition of binding by synchrony, which is useful but insufficient).
4. Lansdell BJ, Kording KP (2018). Spiking allows neurons to estimate their causal effect. (Comment on PubPeer).
This paper introduces an interesting idea borrowed from econometrics, on how to estimate the causal effect of a neuron’s spiking on a signal (e.g. a future reward signal, but potentially any signal).
One could for example compute the spike-triggered average of the signal, but this does not give the causal effect of spikes, because spikes could be correlated with other things that also have an impact on the signal. For example, imagine that you want to recover the postsynaptic potential for a given synapse, that is, the causal effect of the presynaptic spike on the postsynaptic membrane potential. Networks can display fast oscillations due to inhibitory feedback; in that case, the spike-triggered average of the membrane potential would show oscillations, even if the postsynaptic potential is a decaying function. To find the causal effect, the idea discussed in the paper is the following: compare the average signal observed when the neuron has barely spiked (just above threshold) to the one observed when the neuron has just missed the spiking threshold. Because spiking is discontinuous while the underlying variable that causes it (the membrane potential) is continuous, these two cases should be equivalent except for the fact that the neuron has spiked. It follows that the difference in observed signal should be just the causal effect of the spike.
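The comparison can be illustrated with a toy simulation. Below is a minimal Python sketch (all variable names and parameter values are my own assumptions, not taken from the paper): a confounding variable drives both the membrane potential and the reward, so the naive spike-triggered estimate is biased, whereas comparing trials just above threshold with trials just below it recovers the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 200_000
theta = 1.0          # spiking threshold (arbitrary units)
causal_effect = 1.0  # true effect of a spike on the reward signal

# Confound: a network variable that drives both the membrane potential
# and the reward, so spikes correlate with reward beyond their causal effect.
confound = rng.normal(0.0, 1.0, n_trials)
v = confound + rng.normal(0.0, 0.5, n_trials)          # membrane potential
spiked = v > theta
reward = (causal_effect * spiked + 2.0 * confound
          + rng.normal(0.0, 0.5, n_trials))

# Naive spike-triggered estimate: contaminated by the confound.
naive = reward[spiked].mean() - reward[~spiked].mean()

# Discontinuity estimate: compare trials that barely crossed threshold
# with trials that barely missed it.
w = 0.05  # window width around the threshold
just_above = (v > theta) & (v < theta + w)
just_below = (v < theta) & (v > theta - w)
rdd = reward[just_above].mean() - reward[just_below].mean()

print(f"naive estimate: {naive:.2f}, discontinuity estimate: {rdd:.2f}")
```

In this toy run the naive estimate overshoots the true effect severalfold, while the discontinuity estimate lands close to it. The window width trades off bias (wide window, the two groups differ in the confound) against variance (narrow window, few trials).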
This is a nice idea. The question is how this might apply to the problem of learning for a neuron. Although the authors discuss a learning rule, that rule addresses the problem of learning to estimate the causal effect of the spike, not the learning of the neuron’s parameters (e.g. synaptic weights). The relation between the two problems is unfortunately not straightforward.