March 2018

Editorial

This month, I discuss two papers about the axonal initial segment, one paper about the limits of current connectionist networks, and one paper on learning in spiking neurons. The first paper is a short review by myself and my experimental collaborator Maarten Kole on the electrical impact of the position of the AIS. The second one is also a review, but on the molecular organization of the AIS, in particular with the recent findings from super-resolution microscopy. The third paper is mostly a criticism of current artificial neural networks, showing that they are clueless at seemingly simple tasks, such as determining whether two identical shapes appear in an image. The fourth one is a theoretical paper introducing an interesting idea according to which the discontinuity of spiking is actually a useful feature that allows a neuron to estimate its causal effect on signals (eg reward signals).

From the lab

1. Kole MHP, Brette R (2018). The electrical significance of axon location diversity.

In this short review, we describe the electrical impact of the position of the axonal initial segment (AIS). In many cases, the axon actually stems from a dendrite rather than the soma, which tends to increase the distance between the soma and AIS. This distance has an impact that is predicted by resistive coupling theory (see Brette (2013) and Teleńczuk et al. (2017)). In particular, it makes the neuron more excitable (lower threshold for a more distal AIS). This effect is in fact rarely observed experimentally. The reason, in my view, is that the voltage threshold is homeostatically tuned, eg by acting on the phosphorylation of Nav channels. Another important impact of AIS distance is that it controls the amount of current transmitted to the soma at spike initiation, and therefore the depolarization at the soma. This variation can span at least an order of magnitude across cell types and conditions, probably several. In fact, it makes sense that this current is matched to the size of the cell body and proximal dendrites (capacitive area), as we have argued in Hamada et al. (2016). A third aspect appears when the axon stems from a dendrite: in that case, synapses on the axon-bearing dendrite have quite special properties, because the synaptic current leaks to the soma and the axon lies between the synapse and the soma. Perhaps surprisingly, what determines the postsynaptic potential at the AIS is mainly the position of the axon on the dendrite, rather than the position of the synapse (except for proximal synapses).
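To make the two main predictions of resistive coupling theory concrete, here is a minimal numerical sketch (illustrative parameter values, not those of the review): the axial resistance between soma and AIS grows with distance, so the transmitted current shrinks, while the threshold lowers logarithmically with distance.

```python
import numpy as np

# Minimal sketch of resistive coupling predictions (illustrative values).
rho = 1.5      # axial resistivity (Ohm.m)
d = 1e-6       # axon diameter (m)
k = 5e-3       # Nav activation slope factor (V), ~5 mV
dV = 0.04      # voltage gradient between AIS and soma at initiation (V)

for x in [10e-6, 20e-6, 40e-6]:          # AIS distance from the soma
    Ra = 4 * rho * x / (np.pi * d**2)    # axial resistance of the soma-AIS path
    I_soma = dV / Ra                     # current transmitted to the soma
    theta_shift = -k * np.log(x / 10e-6) # logarithmic threshold shift (Brette 2013)
    print(f"x = {x*1e6:2.0f} um: Ra = {Ra/1e6:3.0f} MOhm, "
          f"I = {I_soma*1e9:.1f} nA, threshold shift = {theta_shift*1e3:5.1f} mV")
```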

Articles

2. Leterrier C (2018). The axon initial segment – an updated viewpoint. (Comment on PubPeer)

This is a well-written review on the nanoscale molecular organization of the axonal initial segment. It includes a discussion of molecular transport and filtering by the AIS, as well as molecular mechanisms of development and structural plasticity (changes in position and length), pointing to the most recent studies.

3. Ricci M, Kim J, Serre T (2018). Not-So-CLEVR – Visual Relations Strain Feedforward Neural Networks. (Comment on PubPeer; Reviews on OpenReview)

This paper points out some important limitations of current connectionist models, including deep learning networks. While those models can now identify objects in photos with excellent accuracy, they struggle on tasks that seem totally trivial for us, such as deciding whether two shapes are identical or different. Current neural network algorithms are totally clueless on this type of task. More generally, they are unable to identify relations between objects in an image. To me, this is related to a fundamental limitation of the cell assembly concept, which I discuss in my essay Is coding a relevant metaphor for the brain?. In classical connectionism, and more generally in the classical neural coding view, percepts are thought to be represented in the brain by the activation of a cell or cell assembly. But a cell assembly is an unstructured set, much like the “bag-of-words” model of text retrieval. It can represent elements in a perceptual scene, but not relations between elements. Thus, to represent relations with neural activity, a dynamical aspect is needed (hence the proposition of binding by synchrony, which is useful but insufficient).

4. Lansdell BJ, Kording KP (2018). Spiking allows neurons to estimate their causal effect. (Comment on PubPeer).

This paper introduces an interesting idea borrowed from econometrics, on how to estimate the causal effect of a neuron’s spiking on a signal (eg a future reward signal, but potentially any signal).

One could for example compute the spike-triggered average of the signal, but this does not give the causal effect of spikes, because spikes could be correlated with other things that also have an impact on the signal. For example, let us imagine that you want to recover the postsynaptic potential for a given synapse, that is, the causal effect of the presynaptic spike on the postsynaptic membrane potential. Networks can display fast oscillations due to inhibitory feedback; in that case, the spike-triggered average of the membrane potential would show oscillations, even if the postsynaptic potential is a decaying function. To find the causal effect, the idea discussed in the paper is the following: compare the average signal observed when the neuron has barely spiked (just above threshold) to the one observed when the neuron has just missed the spiking threshold. Because spiking is discontinuous while the underlying variable that causes it (the membrane potential) is continuous, these two cases should be equivalent except for the fact that the neuron has spiked. It follows that the difference in observed signal should be just the causal effect of the spike.
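This is the regression discontinuity design of econometrics. Here is a toy simulation of my reading of the idea (made-up variables, not the authors' model), showing how the naive spike-triggered estimate is biased by a confound while the around-threshold comparison recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100000
v = rng.normal(0.0, 1.0, n)             # underlying drive (membrane potential), a.u.
threshold = 1.0
spiked = v > threshold
causal_effect = 0.5                     # true effect of a spike on the signal
signal = causal_effect * spiked + 0.3 * v + rng.normal(0.0, 0.1, n)
#        ^ spike contribution           ^ confound: the signal also tracks the drive

# Naive spike-triggered difference is biased by the confound:
naive = signal[spiked].mean() - signal[~spiked].mean()

# Regression discontinuity: compare trials just above vs. just below threshold.
eps = 0.05
above = spiked & (v < threshold + eps)
below = ~spiked & (v > threshold - eps)
rdd = signal[above].mean() - signal[below].mean()

print(f"naive: {naive:.2f}, discontinuity estimate: {rdd:.2f} (true effect: 0.5)")
```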

This is a nice idea. The question is how this might apply to the problem of learning for a neuron. Although the authors discuss a learning rule, that rule addresses the problem of learning to estimate the causal effect of the spike, not the learning of the neuron’s parameters (e.g. synaptic weights). The relation between the two problems is unfortunately not straightforward.

January 2018

Editorial

This month, I discuss a thesis that addresses theoretical questions about the morphology of neurons (1). It tries to understand why neurons of higher invertebrates have a different morphology from those of vertebrates. Another paper (2) asks the following question: what would happen if sodium channels did not accumulate in the initial segment? The authors use a computational model and a genetic approach. Finally, one paper (3) introduces a much needed framework to compare models of binaural hearing.

Thesis

1. Hesse J (2017). Implications of neuronal excitability and morphology for spike-based information transmission.

This is a PhD thesis in theoretical neurophysiology. It addresses several topics, but I will only discuss the part about the morphology of neurons. The thesis addresses the following question: why is the soma of higher invertebrates externalized, that is, why do these neurons have a unipolar morphology (dendrite and axon forming a single process, with the soma attached by a stem)? The core of the work was published as a separate paper (Hesse and Schreiber, 2015), but the thesis contains a more in-depth discussion, in particular some comparative biology. The approach is to look at electrical transmission from dendrite to axon, and the theoretical argument is the following. If the soma is on the path between dendrite and axon, then there is an attenuation due to a passive current proportional to the surface of the soma. If the soma is externalized through a long stem, then the leak is proportional to the surface of a characteristic length of neurite, which varies with its diameter. Thus, for a large soma, it is more advantageous to externalize the soma; for a small soma, it is better if it is centralized. More precisely, soma area is proportional to d_soma^2 and stem characteristic area is proportional to d_stem^3/2, and it is the ratio of these two numbers that determines whether the soma should be externalized or not (the paper mentions the ratio d_soma^2/d_stem, because the characteristic length is inserted in the critical value rather than in the ratio). The empirical data show the expected trend. The thesis discusses other aspects that are not electrical.

One critical question in the theory is the relation between stem diameter and soma diameter. A simple idea proposed in the thesis is that the soma produces proteins at a rate proportional to its volume d_soma^3, and those proteins must flow through a section of area d_stem^2, so the diameter of the stem should scale as d_soma^3/2. This is actually roughly consistent with the data shown in table S1 of the paper (although this interesting empirical fit does not appear in the paper or thesis).

If we then look at what was defined as the “soma-to-neurite” ratio in the paper, which is d_soma^2/d_stem, then we find that it should scale as d_soma^1/2: so, larger somas should be externalized. However, if we look at the more relevant ratio, which is soma area over area of a characteristic length of stem, then we actually find d_soma^(-1/4), and we obtain the opposite conclusion. Thus, the theory would actually predict that larger somas should not be externalized; in other words, it is not so clear that the size of the soma explains its externalization.
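For the record, here is the arithmetic behind these two exponents, using the scalings stated above (d_stem ∝ d_soma^3/2 from the protein-flux argument):

```latex
\frac{d_\mathrm{soma}^{2}}{d_\mathrm{stem}}
  \sim \frac{d_\mathrm{soma}^{2}}{d_\mathrm{soma}^{3/2}}
  = d_\mathrm{soma}^{1/2}
\qquad\text{vs.}\qquad
\frac{d_\mathrm{soma}^{2}}{d_\mathrm{stem}^{3/2}}
  \sim \frac{d_\mathrm{soma}^{2}}{\left(d_\mathrm{soma}^{3/2}\right)^{3/2}}
  = d_\mathrm{soma}^{-1/4}
```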

The thesis also has an interesting discussion of other scaling relations in neuronal morphology.

Articles

2. Lazarov E, Dannemeyer M, Feulner B, Enderlein J, Gutnick MJ, Wolf F, Neef A (2017). Axonal spike initiation can be maintained with low axonal Na channel density, but temporal precision of spiking is lost. (Comment on PubPeer)

This study looks at the effect on spike initiation of a genetic mutation that specifically impacts an AIS-specific protein (beta-IV spectrin). That mutation seems to reduce the density of Nav channels at the AIS to close to the somatic density (with the caveat that this observation is based on immunochemistry, and it is not obvious how to compare the axonal and somatic fluorescence signals quantitatively). Electrophysiologically, the consequences are: higher threshold, lower onset rapidness, lower spiking precision, and poorer high-frequency tracking properties.

The authors insist on the fact that the results show that a high density of Nav channels is not necessary for axonal initiation of spikes. Indeed, the mutant cells generally show a biphasic phase plot characteristic of axonal initiation. They found the same thing in a biophysical model (from Hallermann et al. 2012), where Nav density is modified to be the same in the AIS and soma. This, however, is not particularly surprising since Nav channels have a lower activation voltage in the AIS than in the soma (both in the model and in empirical observations). The authors cite an old paper (Moore 1983) that proposes another potential reason why spikes initiate in the axon, which is interesting: if the soma and AIS have the same Nav channel density, spikes would still initiate in the AIS because less current leaks in the direction of the axon than towards the dendrite (both resistance and capacitance). This, however, is only true if there are no dendritic Nav channels; the model used here actually has the same Nav density in the soma and dendrites, so the reason why spikes initiate in the AIS in this case is the lower activation voltage of axonal channels.

3. Dietz M, Lestang JH, Majdak P, Stern RM, Marquardt T, Ewert SD, Hartmann WM, Goodman DFM (2017). A framework for testing and comparing binaural models. (Comment on PubPeer)

This paper introduces an open computational framework (on github) to test binaural models on empirical data. This is a very valuable initiative, as many models have been developed but it is currently very difficult to compare them, in particular because they are written in different languages. The idea of the framework is to use a file-based approach: the model is expected to read a stereo wave file and output a response (which can be a decision or spike trains, for example). The rest is handled by the framework. The code seems to be in its infancy and there are not many data sets, but hopefully this will grow. I would like in particular to see more ecological sets, eg with natural binaural signals and an absolute localization task.
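To illustrate the file-based idea, here is a minimal sketch of what such a model might look like (hypothetical file names and output format, and a toy cross-correlation decision stage; the actual I/O conventions are those defined by the framework on GitHub):

```python
import numpy as np
from scipy.io import wavfile

def run_model(stimulus_path, output_path):
    rate, stereo = wavfile.read(stimulus_path)          # stereo wave file, shape (n, 2)
    left = stereo[:, 0].astype(float)
    right = stereo[:, 1].astype(float)

    # Toy decision stage: lag of maximal cross-correlation as an ITD estimate.
    max_lag = rate // 1000                              # search +/- 1 ms
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.dot(left, np.roll(right, lag)) for lag in lags]
    itd = lags[int(np.argmax(xcorr))] / rate            # in seconds

    np.savetxt(output_path, [itd])                      # response file read by the framework

# Usage (assumes a stimulus file provided by the framework):
run_model("stimulus.wav", "response.txt")
```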

There is also a valuable review of binaural models. I have a few remarks on some points of the review. In the open questions, it is written that there are binaural neurons that respond at non-zero ITDs and that this is challenging for some models. I believe the authors meant that there are more neurons that respond at non-zero ITDs than at zero ITDs; in all models, there are neurons that respond at non-zero ITDs. When introducing hemispheric rate difference models, the authors cite van Bergeijk (1962) and von Békésy (1930); these two papers are indeed always cited in this context. But this is actually wrong, because those models are quite different from the hemispheric model. One similarity is that the number of active neurons is counted (instead of picking the most active one), but this is done on top of a delay-line model, with heterogeneous delays covering the physiological range – so in reality quite close to the Jeffress model. The discussion of the interaural group delay seems to have missed our work on this topic: 1) Bénichoux V, Rébillat M, Brette R (2016). On the variation of interaural time differences with frequency, where we explain how this produces an additional cue for realistic binaural signals, and most importantly 2) Bénichoux V, Fontaine B, Karino S, Franken TP, Joris PX, Brette R (2015). Neural tuning matches frequency-dependent time differences between the ears, where we explain how coincidence detection between mismatched places on the two cochleae produces selectivity for a specific interaural group delay (or envelope ITD).

I am looking forward to seeing more data integrated into this framework.

December 2017

Editorial

This month, I have read two short books that I both found interesting from a theoretical neuroscience perspective. The first one is a defense of systems biology by Denis Noble (1). It is written in clear language and convincingly argues that biological organisms or cells can only be understood as systems, not just of interacting elements, but also of interacting levels (eg the molecular, cellular and organism levels). In a recent essay (3), Fields and Levin go further and point out that since only cells, and not genomes, reproduce, it is inevitable that heritable properties are not confined to the genome. This implies that the genome cannot be seen as a code for the organism, but perhaps more appropriately as a resource for the cell. The other book is actually a philosophical review of various books on consciousness, authored by both philosophers and scientists (2). It is a good entry into modern ideas on consciousness. Finally, a recent essay by Yves Frégnac criticizes the industrial-scale projects that have recently emerged in neuroscience (4). One of the points made in that article is that the field is not mature for this type of project because there is no strong theory to integrate data or even to identify the relevant data to acquire.

Books

1. Denis Noble (2006). The Music of Life.

This short book by the physiologist Denis Noble is a criticism of the “selfish gene” metaphor, and more broadly of the metaphor of the genetic program or code. A gene does not code for a phenotype. A gene encodes the primary structure of a protein, which does not even specify the chemical function of that protein, because the function depends on how the protein is folded, spliced, etc. As to the function of a protein in an organism, that depends on the context provided by the organism. The same gene can have different effects in different species, or in different cell types of the same species. An organism, or a cell, is a system of many interacting components, and there is no preferential level of analysis. One important concept described in the book is “downward causation”, which is illustrated by the cardiac rhythm and the cardiac action potential. We can try to describe the cardiac rhythm at the molecular level: it is caused by the opening and closing of ionic channels. But there is no rhythm at the molecular level, no intrinsically rhythmic molecule: the rhythm emerges from the interaction of the ionic channels, and more precisely from their interaction with the cellular environment. The way it emerges is through a feedback from the cellular level, where the membrane potential is formed (a macroscopic phenomenon), to the molecular level of ionic channels, which produce local transmembrane currents. This is what Noble calls “downward causation”: a causal link from a large spatial scale (the cell) to a smaller one (the molecule). Importantly, this cannot be reduced to a causal chain of molecular events: it is not a chain but a loop, which also involves non-molecular events on a larger scale. Thus, the cardiac rhythm (and more generally the action potential) cannot be understood without considering the entire system, at two different scales (cell and ionic channel). It is a powerful motivation for systems approaches in biology.
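The loop can be made concrete with a toy pacemaker model (a Morris-Lecar-style sketch with textbook parameters, not Noble's cardiac model): neither the voltage equation nor the channel equation oscillates on its own; the rhythm lives in the loop between the cell-level variable V and the channel-level variable w.

```python
import numpy as np

# Toy two-variable pacemaker (Morris-Lecar style, illustrative parameters).
C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0          # uF/cm2, mS/cm2
ECa, EK, EL, I = 120.0, -84.0, -60.0, 90.0    # mV, uA/cm2
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

V, w, dt = -60.0, 0.0, 0.05                   # mV, -, ms
for step in range(40001):
    minf = 0.5 * (1 + np.tanh((V - V1) / V2))   # Ca channel: gated by V (downward)
    winf = 0.5 * (1 + np.tanh((V - V3) / V4))   # K channel: gated by V (downward)
    tauw = 1 / np.cosh((V - V3) / (2 * V4))
    # ...while V is in turn driven by the channel currents (upward): the loop.
    dV = (I - gL*(V-EL) - gCa*minf*(V-ECa) - gK*w*(V-EK)) / C
    dw = phi * (winf - w) / tauw
    V, w = V + dt*dV, w + dt*dw
    if step % 5000 == 0:
        print(f"t = {step*dt:6.0f} ms, V = {V:6.1f} mV")
```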

2. John Searle (1990). The Mystery of Consciousness.

This is an unusual book, as it is a collection of book reviews. Searle critically reviews six recent books on consciousness, from both philosophers and scientists: Crick, Edelman, Penrose, Dennett, Chalmers and Rosenfield. It is a good entry into the modern literature on consciousness because it exposes some of the philosophical issues and positions. Searle also has his particular view, which not everyone will agree with (for example, Searle repeatedly states that feelings cause behavior, and that this is simply a fact of experience; but that is not true, see e.g. Ned Block’s “On a confusion about a function of consciousness”). Here is a summary of a few ideas and arguments. Both Crick and Edelman, two eminent biologists who were awarded the Nobel prize in different domains, propose physiological explanations of consciousness of the type: when physiological phenomenon X happens (eg synchronized firing of some distant pyramidal cells), consciousness happens. This, however, only describes correlates of consciousness; it does not explain why they make us conscious. Chalmers takes this point seriously, and argues that in fact, no explanation of this type can ever explain why we are conscious; we can only say: this particular physiological phenomenon is (empirically) associated with consciousness. Therefore, he defends a form of property dualism, specifically that certain types of information processing go with a conscious state – this is seen as an additional law of nature (see my criticism of this position here). This is the position defended by Koch and Tononi (IIT). Dennett essentially denies the existence of experience. That is, he argues that consciousness (phenomenal consciousness, i.e., “what it feels like”) is an illusion. Searle criticizes this view, and rightly so in my opinion, because it is a self-contradictory claim. An illusion is still something that we experience. A computer program doesn’t have illusions when it bugs. To feel something that conflicts with reality is still feeling something, so consciousness cannot be an illusion.

Articles

3. Fields C and Levin M (2017). Multiscale memory and bioelectric error correction in the cytoplasm–cytoskeleton–membrane system. (Comment on PubPeer)

This cell biology essay was not an easy read for me, a theoretical neuroscientist, but I found it very original and insightful. It criticizes the concept of the genome as a program. Indeed, genes by themselves only determine the primary structure of potential proteins (actually not even that if we consider splicing), and this does not determine protein function, let alone the way a cell works. But the authors go further by drawing the implications of a simple fact: the genome does not reproduce, only a cell does.

When a cell divides, the daughters get not only the genome, but also the membrane and cytoskeleton. The authors call this the “architectome” and argue that it also carries heritable properties that are independent of the genome. Given that it is always an entire cell that reproduces, this makes sense. In a few cases, this has been demonstrated, for example in Paramecium. The authors thus propose a fresh perspective: the genome is not the code or program for the cell; it is a resource for the cell, which dips into the genome to produce the proteins it needs. Although the new metaphor also has its limitations, it has the great merit of providing an alternative view to the “genetic program”.

4. Frégnac Y (2017). Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? (Comment on PubPeer)

In this essay, Yves Frégnac criticizes the industrial-scale projects that have recently appeared in neuroscience, for example the Human Brain Project and the Brain Initiative. This type of criticism is often heard among scientists but more rarely read in academic journals. The essay uses different angles of attack. One is that these large-scale data mining projects are driven by technology and not by concepts, and while technological tools are obviously useful in science, the risk is that a lot of data will be produced, but quite possibly not the right data. This is a very tangible risk since no one seems to have any idea what to do with the data. The data-driven view is based on the epistemological fallacy that data preexist theory (see my blog series on computational neuroscience). This is wrong: data can be surprising, but they always rely on a choice of what data to acquire, and that choice is theoretically motivated. Here is one example from the historical literature on excitability. To demonstrate the ionic theory of action potentials, Hodgkin thought of the following experiment: immerse an axon in oil, and measure conduction velocity; it should decrease because of the increase in extracellular resistance (it does). You might record the electrical activity of the whole brain with voltage sensors covering every neuron, and you would still not have those data. A second argument is that purely reductionist (or “bottom-up”) approaches are not appropriate to study complex systems: such study must be guided by the understanding of higher levels (eg Marr’s “algorithmic” or “computational” level). Here is a relevant quote: “the danger of the large-scale neuroscience initiatives is to produce purely descriptive ersatz of brain, sharing some of the internal statistics of the biological counterpart, at best the two first order moments (mean and variance), but devoid of self-generated cognitive abilities.” Such approaches are probably doomed to fail. A third argument is that studies in invertebrates suggest that bottom-up approaches vastly underestimate the complexity of the problem; for example, we know from those studies that neuromodulators can totally change the function of neural circuits, and so knowing the connectome will not be sufficient to infer function (see for example this talk by Vladimir Brezina). Generally, industrial-scale bottom-up approaches will not work because we do not have the beginning of a strong brain theory, which is necessary to produce the relevant data and to subsequently integrate them.

One of the dangers identified in this article is that funds will be captured by those large-scale efforts. I think there is a broader threat, which is that it will also impact the criteria for hiring academics, and as an indirect result of those incentives, push all young scientists towards that kind of science, whether or not they are funded by those large-scale efforts. Given the attraction of “high-impact” journals to flashy techniques, with papers showing impressive technologies but limited scientific results, this seems to be already happening.

November 2017

Editorial

In this issue, I discuss three recent papers on axons. The first one is a review on recent findings about the molecular organization of the axon, in particular its periodic organization observed thanks to super-resolution microscopy. The second one examines spike conduction in axons using a high-density electrode array, which seems to be a very interesting source of data. The third one is a new paper on plasticity of the axon initial segment. This time it is the blocking of a Kv channel that triggers the displacement of the AIS, which is accompanied by (but, in my view, does not cause) a homeostatic change in excitability.

The fourth paper is epistemological, and argues that we should abandon statistical significance in favor of more informative measures, rather than try to fix it with lower p-values.

Articles

1. Leterrier C, Dubey P and Roy S (2017) – The nano-architecture of the axonal cytoskeleton. (Comment on Pubpeer).

This is a very interesting review on recent discoveries about the axonal cytoskeleton, which have been made possible in particular by super-resolution microscopy. One of these discoveries is the actin rings that are periodically spaced along the axon. The roles of these rings are not entirely clear, but mechanical robustness is probably one of them. As channels attach to the cytoskeleton, one also wonders whether these periodic rings might be involved in the regulation of channel density.

2. Radivojevic M, Franke F, Altermatt M, Müller J, Hierlemann A, Bakkum DJ (2017). Tracking individual action potentials throughout mammalian axonal arbors. (Comment on Pubpeer)

This study uses a high-density multielectrode array (MEA) to analyze the propagation of an action potential in the axon of cultured neurons. The device has 11011 electrodes, and can record from 126 simultaneously. The authors trigger a spike extracellularly, then record the spike-triggered response many times with different electrode configurations to get the response of the entire MEA, which makes it a very interesting set of data. Normally a single electrode signal is not sufficient to record axonal spikes in a single trial, but the trick is to increase the signal-to-noise ratio by using several electrodes to detect spikes (the SNR increases as the square root of the number of electrodes). This is done using template matching. This allows the authors to measure propagation velocity and jitter not just between two points, as was previously done, but all along the axon. Although it is not commented on, it is interesting to see for example that velocity is apparently not constant; it looks as if there are jumps (plateaus in Fig. 4f). As expected, the jitter in spike time increases with distance from the initiation site. A simple model, used by the authors, indeed predicts that variance increases linearly with distance (by assuming that each axonal compartment introduces an independent noise). However, it is hard to say whether the data follow this model, because no alternative model is tested. Here is one: as the authors later show, conduction velocity depends on previous history (slowing down at high firing rate); if there is jitter in conduction velocity, then variance should grow quadratically with distance. By the way, there is a small error in the reporting of jitter: it should be in s/m^(1/2) (because variance is in s^2/m, according to the authors’ model), not s/m. Finally, the authors show that spikes slow down at high rate, something which was known before but not with this level of detail. The authors mention a few possible mechanisms; I would add inactivation of Nav channels.
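The contrast between the two noise models can be written compactly (my notation: Δx is the compartment size, σ² the delay variance per compartment, v the conduction velocity):

```latex
\text{compartment noise:}\quad \mathrm{Var}[t(x)] = \frac{x}{\Delta x}\,\sigma^{2} \propto x
\qquad\text{vs.}\qquad
\text{velocity jitter:}\quad \mathrm{Var}[t(x)] \approx \frac{x^{2}}{v^{4}}\,\mathrm{Var}[v] \propto x^{2}
```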

3. Lezmy J, Lipinsky M, Khrapunsky Y, Patrich E, Shalom L, Peretz A, Fleidervish IA, Attali B (2017). M-current inhibition rapidly induces a unique CK2-dependent plasticity of the axon initial segment. (Comment on PubPeer)

Recent studies have shown that the axon initial segment (AIS) can move, extend or shrink in response to various treatments. Here the authors show that inhibiting the M-current (a hyperpolarizing K+ current) induces a distal shift of the AIS together with changes in excitability. There are several interesting findings in this study. First, blocking the current immediately depolarizes the neuron and increases the input resistance, which logically reduces the rheobase (threshold current). But then these parameters go back to their initial values over an hour or so, although the M-current is still blocked, so some compensation has occurred. Concurrently, the AIS shifts distally (first the Nav and Kv channels together, then the ankyrin-G); the initiation site shifts accordingly. Finally, they show that the relocation is blocked by inhibiting CK2. The authors use a model to support their interpretation that the distal shift of the AIS causes the compensatory reduction in excitability. The model shows two effects that I showed theoretically in my 2013 paper (Brette (2013) Sharpness of spike initiation in neurons explained by compartmentalization): 1) if only Nav channels are considered, moving the AIS distally actually increases excitability (lowers the threshold); 2) if there is a hyperpolarizing current in the AIS, the opposite effect is seen (this is in my supplementary methods). Thus the authors propose that (2) is happening. However, in my view the data in Fig. 5C support a different interpretation. What is seen there is quite surprising: when the M-current is blocked, the spike threshold does not change at all, and then after a couple of hours the spike threshold lowers. This explicitly supports (1) and contradicts (2). If the threshold doesn’t change when the M-current is blocked, then that means that this current doesn’t actually hyperpolarize the AIS relative to the soma. If the effect that the authors propose underlay the reduction in excitability, then the spike threshold should increase, not decrease. Thus it seems that the distal movement of the AIS actually increases excitability, but something else (expression/phosphorylation of another channel?) reduces it.

4. McShane BB, Gal D, Gelman A, Robert C, Tackett JL (2017). Abandon Statistical Significance. (Comment on PubPeer).

Recently, there has been a lot of discussion about issues of reproducibility in the biomedical and psychological literature. Some people argue that the threshold for statistical significance should be lowered, say to p = 0.01 instead of 0.05. This paper argues, and in my opinion rightly so, that the use of statistical significance should be abandoned. One of the main arguments, which I have also defended in a blog post, is that the null hypothesis is not credible. When any manipulation on a living being is performed, it is unrealistic to hypothesize that the effect will be exactly 0. It might be small, yes, but not exactly zero. And if it is not zero, then with a sufficient number of observations there is a 100% probability that you find a statistically significant effect, whatever threshold you use. The term “significance” is misleading; a ridiculously small effect would still be statistically significant, provided enough observations. It doesn’t prove much, apart from the fact that you are ready to sacrifice hundreds of animals just to get published by glamour journals. Conversely, finding that something is not significant means literally nothing: it could be that you just haven’t repeated the experiment a sufficient number of times – in fact it must be this interpretation, given that the null hypothesis is not realistic. So, I agree with the authors that statistical significance should be abandoned, in favor of more meaningful statistical measures (for example effect size).
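The point about sample size is easy to demonstrate with a simulation (a toy example of my own; any tiny but non-zero effect behaves this way):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.02           # 2% of a standard deviation: negligible in practice

for n in [100, 10000, 1000000]:
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    print(f"n = {n:>7}: p = {p:.3g}")
# With n = 100 the effect is "not significant"; with n = 1,000,000 it is
# "significant" at any conventional threshold. The effect size never changed.
```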

October 2017

Editorial

This issue features three papers on electrophysiology (1-3) and one on motor control (4). The first one describes electrical communication between bacteria, based on K+ channels that are (indirectly) voltage-dependent. This shows the universality of electrical communication based on ionic channels, something that is not specific to neurons. The second one shows how to build a simple, low-cost dynamic clamp system (where the injected current depends in real time on the measured voltage), using a cheap microcontroller (not an Arduino, but similar). The third one is a (primarily) modeling study on the extracellular field produced by an axon bundle, showing how its terminal zone can produce strong fields. Finally, I discuss a theoretical paper which shows how spiking neurons can control a simple physical system (an inverted pendulum).

Articles

1. Prindle A, Liu J, Asally M, Garcia-Ojalvo J and Süel GM (2015). Ion channels enable electrical communication in bacterial communities. (Comment on PubPeer)

This paper describes oscillations of membrane potential and extracellular potassium in a bacterial population (shown indirectly with an optical sensor), which show radial synchronization (ie the same Vm for cells at the same radius). The proposed mechanism is as follows. The wave is initiated by some metabolic factor that makes a K+ channel open, hyperpolarizing the cell. This releases K+ into the extracellular environment. The extracellular increase in K+ brings the Nernst potential of K+ closer to zero, so all neighboring cells are depolarized. The K+ channel is voltage-dependent (indirectly in the model): it opens when the cell is depolarized, producing a hyperpolarization that again releases K+ extracellularly. With appropriate nonlinearities, the result is a propagating wave of K+ and Vm, which is faster than diffusion. There is a simple Hodgkin-Huxley type model in the supplementary methods. Some of it might be a little questionable (eg the K+ reversal potential increases linearly rather than logarithmically with concentration; but that might be ok for small ion fluxes and probably doesn’t change the results qualitatively), but it is generally sensible. It is a chain of cells coupled through the extracellular environment. It would be interesting to extend the model to a disk and see whether one can account for radial synchrony.
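On the side remark about the reversal potential, here is the comparison made concrete (illustrative concentrations, not those of the paper):

```python
import numpy as np

RT_F = 26.7e-3                      # RT/F at ~37 C, in volts
K_in = 300.0                        # intracellular K+ (mM), illustrative
for K_out in [8, 16, 32, 64]:       # extracellular K+ (mM)
    E_K = RT_F * np.log(K_out / K_in)   # Nernst potential
    print(f"[K+]out = {K_out:2d} mM: E_K = {E_K*1e3:6.1f} mV")
# Each doubling of [K+]out shifts E_K by the same ~18.5 mV; a linear
# dependence on concentration is only a local approximation.
```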

This is interesting for at least two reasons. One is that there is electrical communication based on ionic channels not just in neurons but also in bacteria; so probably in all living cells. Another is that the mode of communication is neither gap junctions (direct electrical coupling) nor synapses (through neurotransmitters), but changes in the ionic composition of the extracellular environment. These changes should occur also in the nervous system, so could it be that neurons also communicate in this way?

2. Desai NS, Gray R, Johnston D (2017). A Dynamic Clamp on Every Rig. (Comment on PubPeer)

This paper presents a low-cost dynamic clamp system implemented with a Teensy microcontroller, which works independently of the recording PC. It makes using the dynamic clamp much simpler, whereas one would otherwise need an operating system with a real-time kernel. The associated website is unusually good! It has a detailed parts list, construction methods, code, advice, etc.
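For readers unfamiliar with the dynamic clamp, its core is a fast feedback loop. Here is a minimal sketch (hypothetical I/O functions standing in for the ADC/DAC; on a Teensy this would be C++ with a hardware timer):

```python
import time

g = 5e-9           # simulated conductance (S)
E = -80e-3         # reversal potential (V), e.g. a K+ conductance
dt = 50e-6         # update period (s), i.e. a 20 kHz loop

def read_membrane_potential():    # placeholder for the amplifier/ADC read
    return -65e-3

def write_current(i):             # placeholder for the DAC/amplifier write
    pass

for _ in range(20000):            # 1 s of simulated conductance injection
    v = read_membrane_potential()
    write_current(g * (E - v))    # injected current depends on V in real time
    time.sleep(dt)                # on the microcontroller: a hardware timer
```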

3. McColgan T, Liu J, Kuokkanen PT, Carr CE, Wagner H, Kempter R (2017). Dipolar extracellular potentials generated by axonal projections. (Comment on PubPeer)

The authors show that the terminal zone of an axon bundle can generate a strong dipolar extracellular field. This is particularly the case in the auditory brainstem of barn owls (and most likely of mammals), where there is a strong extracellular potential (several mV) locked to the sound, called the neurophonic. The idea is quite simple. In the terminal zone, the axons bifurcate and then terminate, so that the number of axons first increases, then decreases. If the wavelength of the propagating wave is right, then current is drawn into the region where axons bifurcate and exits where they terminate. This is shown numerically and theoretically, and compared to data in the barn owl nucleus laminaris. One point I am wondering about is the role of axon diameters in the phenomenon; indeed, at an axon bifurcation, the diameters of the daughter branches tend to be smaller than that of the parent branch, so one might wonder whether that might not counterbalance the increase in axon number.

4. Kang TS, Banerjee A (2017). Learning Deterministic Spiking Neuron Feedback Controllers. (Comment on PubPeer)

The authors study how spiking neurons can control an inverted pendulum. Each spike produces a force acting on the pendulum (like a muscle twitch), and the observed variables (the angle and its derivative) are inputs to the neurons (it is a single layer). The question is how to set the parameters (input gains) so that the system is stable. This is an interesting problem, which is not straightforward, despite the simplicity of the architecture. The authors simply define an error function and derive a gradient descent on the parameters, which seems to work. It seems, however, that the gradient depends on detailed aspects of the system, so it is not so clear that this is a good solution. Nevertheless, it is interesting because it addresses a problem of learning that is not representational but directly related to behavior, in contrast with most modeling studies on synaptic plasticity and learning.
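To fix ideas, here is a toy version of the closed loop (my simplification: two integrate-and-fire neurons with hand-tuned gains and no learning; in the paper, the gains are learned by gradient descent). Each spike delivers a fixed impulse to the pendulum, like a twitch from one of two antagonist muscles:

```python
import numpy as np

dt = 1e-3
g_over_L = 9.8                           # gravity / pendulum length (1/s^2)
theta, omega = 0.2, 0.0                  # initial tilt (rad) and angular velocity
v = np.zeros(2)                          # membrane potentials (threshold = 1)
W = np.array([[ 5000.0,  1000.0],        # neuron 0: fires when falling rightward
              [-5000.0, -1000.0]])       # neuron 1: fires when falling leftward
kick = np.array([-0.01, 0.01])           # impulse on omega per spike (the "twitch")

for step in range(4001):
    v += dt * (W @ np.array([theta, omega]))   # integrate the state inputs
    v = np.maximum(v, 0.0)
    spikes = v >= 1.0
    v[spikes] = 0.0                            # reset after a spike
    omega += dt * g_over_L * np.sin(theta) + kick @ spikes
    theta += dt * omega
    if step % 1000 == 0:
        print(f"t = {step*dt:.0f} s, theta = {theta:+.4f} rad")
```

On average the two neurons implement a spike-based proportional-derivative feedback (about -50*theta - 10*omega here), which is enough to balance the pendulum; the paper's contribution is to learn such gains rather than tune them by hand.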

September 2017

Editorial

This issue features two epistemological papers (1-2), one paper on spatial navigation (3), and two papers on automatic patch clamp (4-5). The first one is a critique of the neural coding metaphor, which I just wrote. This critique connects to the more general problem of reductionism in neuroscience (or in biology more generally), about which Tony Bell wrote an interesting essay (2). The coding metaphor indeed implies that there exists a separation between representation and decision/action, but a nervous system cannot be split up in this way. Similarly, Bell argues, seeing the brain as a computer is not very meaningful.

The next paper I discuss (3) is not a neuroscience paper, but an old robotics paper where the authors describe a simple way in which an agent can navigate in crowded environments, by avoiding places it has visited. This is seen in some species (eg slime mold) which leave a trail behind them. A wild but interesting speculation is that the spatial memory system of vertebrates (place cells) might result from an internalization of such mechanisms.

Finally, I discuss two simultaneously published papers on automatic patch clamp that describe more or less the same algorithm (4-5), a rather straightforward but useful improvement where the targeted cell is visually tracked to adjust the trajectory of the pipette as it is moved down.

From the lab

1. Brette (2017). Is coding a relevant metaphor for the brain?

In this essay, I argue that the neural coding metaphor is often inappropriate and misleading. First, it is a dualist metaphor, because for something to count as “information”, that thing must be mapped to some other thing outside the brain. Information in the sense of Shannon is information for an external observer, not for the receiver. A more relevant notion of information is captured by the metaphor of perception as science making (finding laws and structure), rather than perception as encoding. Second, the relation between the “input” and “output” of a neuron is circular (through synaptic connections or through the effect of action on sensory signals), and therefore modeling perception as a feedforward process is inappropriate. Spikes are not messages, they are actions on other neurons and on the body.

Articles

2. Bell (1999). Levels and loops: the future of artificial intelligence and neuroscience. (Comment on PubPeer)

This is an interesting epistemological paper which discusses two important ideas in neuroscience. One is the ubiquity of loops. For example, the output of one neuron ultimately influences its own inputs because of cycles in synaptic networks. Sensory signals drive action, and action changes sensory signals. The same loops are seen at all levels (molecular, etc). The interdependency of all elements of a living system makes reductionist accounts inappropriate. One of these accounts is the coding metaphor, in which neurons are presumed to encode properties of the world, in a feedforward way (see my essay on the coding metaphor).

The second idea is a criticism of the computer metaphor of the brain, or of living systems in general. More specifically, in Bell’s words: “the prevalent tendency to view biological organisms as machines in the exact technical sense in which computers are machines, i.e. in the sense that they are physical instantiations of finite models which do not permit physical interactions beneath the level of their machine parts (e.g. the logic gate) to influence their functionality”. Empirically, we find interactions between and across all levels, and this makes the machine metaphor not very insightful.

3. Balch and Arkin (1993). Avoiding the Past: A Simple but Effective Strategy for Reactive Navigation.

This is a paper from the reactive robotics field, where the authors describe a simple way to navigate in crowded environments. A classic problem is when there is a U-shaped barrier between the current position and a target position: if the agent goes straight towards the target, it gets stuck inside the barrier – this is known as the “fly at the window” problem. It can be solved by planning and detailed knowledge of the environment, but this paper shows another efficient solution, which is much simpler and used by some species such as slime molds (Reid et al., 2012). Here the robot maintains a spatial memory of places it has visited. A place it has visited becomes repulsive (in practice the algorithm computes the spatial gradient of a trace). The robot then avoids its own recent trajectory, and thus solves the U-shaped barrier problem. One might try to think of parallels between this system and place cells in the hippocampus (see an old blog post of mine on this).
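Here is a toy grid-world version of the strategy (my implementation, not the authors' robot controller): the agent deposits a trace wherever it goes and greedily minimizes goal distance plus trace repulsion; the repulsion is what lets it escape the barrier, where a purely greedy agent would get stuck.

```python
import numpy as np

size = 50
trace = np.zeros((size, size))
wall = {(25, y) for y in range(10, 41)}          # barrier between start and goal
pos, goal = (5, 25), (45, 25)

def cost(p):
    attraction = np.hypot(goal[0] - p[0], goal[1] - p[1])  # pull toward the goal
    return attraction + 3.0 * trace[p]                     # visited places repel

for _ in range(2000):
    trace[pos] += 1.0                            # leave a trail behind
    x, y = pos
    candidates = [(x+dx, y+dy) for dx, dy in [(1,0), (-1,0), (0,1), (0,-1)]
                  if 0 <= x+dx < size and 0 <= y+dy < size
                  and (x+dx, y+dy) not in wall]
    pos = min(candidates, key=cost)              # greedy descent on the combined field
    if pos == goal:
        break
print("reached" if pos == goal else "stopped at", pos)
```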

4. Suk et al. (2017). Closed-Loop Real-Time Imaging Enables Fully Automated Cell-Targeted Patch-Clamp Neural Recording In Vivo. (Comment on PubPeer)

This is an improvement of previously developed automatic patch-clamp systems. The algorithm in Wu et al. (2016) could patch a visually identified cell, but it required some human intervention in about half of the cases. The main reason is that pipette movements induce movements of the targeted cell, and so the trajectory of the pipette needs to be adjusted. The straightforward solution is to track the movements of the cell and adjust accordingly. This is what is done here. The algorithm is made very simple by the (more complicated) experimental design, in which both the pipette and the cell are fluorescent and a 2-photon microscope is used. This way, tracking the cell is essentially a matter of tracking a fluorescent blob (the focal plane is where intensity is maximal). The authors mention that they did not manage to do it without fluorescence. Fluorescence (Alexa) in the pipette is used in several ways: first to locate the pipette tip before brain penetration, then to check that the pipette is not clogged (there is a fluorescent plume flowing out of the pipette), and finally to check whether break-in was successful. There is also a small improvement in sealing, where the pressure is alternated if sealing fails, before the sealing procedure starts again. A similar tracking algorithm has been proposed simultaneously by Annecchino et al. (2017).

5. Annecchino et al. (2017). Robotic Automation of In Vivo Two-Photon Targeted Whole-Cell Patch-Clamp Electrophysiology. (Comment on PubPeer)

This is an improvement of automatic patch-clamp systems to patch a visually identified cell, which is very similar to a simultaneously published algorithm by Suk et al. (2017). It uses image processing to track movements of the cell induced by movements of the pipette, both being fluorescent. The image processing is more complex than in Suk et al. (2017); it might be able to handle more crowded images. Unfortunately, it is described quite briefly in the main text and not detailed in the methods (for some reason, the methods describe the hardware but not the algorithms; the same is unfortunately true of the previous most closely related paper, Wu et al. (2016), which oddly enough is not cited). The code, however, is public, although written for proprietary software (LabVIEW). Strangely, the paper introduces a pressure controller as a novelty (compared to fixed pressure containers in Kodandaramaiah et al. (2012)), but this was already done by Desai et al. (2015) as well as by Wu et al. (2016) (both uncited).

July 2017

Editorial

This month, I discuss four rather diverse papers. The first paper is a recent review about how the structure of neural networks changes spontaneously in vivo, which raises some questions about our view of memory engrams. The second one is an intriguing study showing that anticipated eye movements have an influence on the eardrums; it questions our view of the senses as separate modalities. The next two are about the neurobiology of unicellular organisms. I use the term neurobiology because these organisms perform sensory transduction and produce action potentials (presumably, in the case of (3)), leading to motor reactions. They are not very well known, but in my view very interesting for theoretical neuroscience.

Articles

1. Chambers and Rumpel (2017). A stable brain from unstable components – Emerging concepts, implications for neural computation. (Comment on PubPeer)

The authors review recent experimental evidence showing that in vivo, in the absence of any particular task (in particular any learning task), synapses and the functional properties of single neurons are not stable. For example, spines disappear and reappear; more significantly in my view, motor tuning and place fields drift. Synaptic changes are still observed when ion channel activity is blocked. This might suggest that they are intrinsic, as the authors point out, although in fact it does not mean that in normal conditions these changes are independent of activity; it could well be that the fluctuations are entrained by activity, in the same way as the response of an intrinsically noisy neuron is entrained by a time-varying current (Mainen and Sejnowski, 1995; see also Brette & Guigon, 2003 for some theory). The more significant point, I think, is that the functional properties of neurons, e.g. tuning properties, seem to drift over time. This raises questions about the idea of a cell assembly as a memory engram. If a particular assembly encodes a particular memory, then after some time this same assembly should mean something completely different. Imagine, to take a caricatural example, that a memory of a red car is stored as a network of two connected neurons, the red neuron and the car neuron. After two weeks the red neuron becomes a green neuron. When cued with a car, I now remember a green car.

In theoretical neuroscience, one question which has been the subject of many studies is how synaptic structure can be stable enough to sustain memories while plastic enough to allow learning. Maybe this is not the right question; maybe the right question is: how can learning persist over a time scale longer than that of the functional dynamics of networks?

2. Gruters et al. (2017). The eardrum moves when the eyes move: multisensory effect on the mechanics of hearing. (Comment on PubPeer)

This is an intriguing paper showing that the eardrums move in conjunction with the eyes. Specifically, when the eyes saccade to the left, the eardrums move to the right (and conversely), and then oscillate at 30 Hz for a few cycles (possibly more, as the damping could be the result of averaging). These oscillations are not that small, equivalent to 57 dB. Eardrum movements seem to start slightly before eye movements, which suggests that they result from anticipatory control by the central nervous system (rather than feedback or coupling). Naturally, one wonders what influence this might have on auditory perception, in particular on the spatial perception of sounds. The fact that the oscillation is at the bottom of the audible spectrum might argue for a small role; on the other hand, one wonders what function this anticipatory control might serve if not perceptual. More generally, it makes me wonder to what extent results obtained on anesthetized animals (which form the majority of our knowledge on the auditory system), where the efferent system is down, are meaningful for the physiological condition. Intriguing!

3. Wan & Goldstein (2017). Run stop shock, run shock run: Spontaneous and stimulated gait-switching in a unicellular octoflagellate. (Comment on PubPeer).

The world of unicellular organisms is fascinating. In this paper, the authors show that a unicellular octoflagellate (eight flagella) of about 17 µm in length displays three different gaits: run, shock (change of direction) and rest, corresponding to different beating modes of the flagella. The shock is a very quick reaction that can also be triggered mechanically. This reminds me of the avoidance reaction of Paramecium (Eckert & Naitoh, 1972), and I would bet that it occurs by stimulus-induced depolarization followed by an action potential. It would be interesting to stick an electrode in those!

4. Iwatsuki & Naitoh (1988). Behavioural Responses to Light in Paramecium bursaria in Relation to its Symbiotic Green Alga Chlorella. (Sorry I did not find it on PubPeer!)

To continue on the theme of unicellular neurobiology, this old paper discusses the photosensitive behavior of Paramecium bursaria. This is a unicellular organism (a ciliate) which lives in symbiosis with green algae (ie, it cultivates plants inside its cytoplasm) (see the former issue on endosymbiosis). As a result, it tends to accumulate in light. The way it works is very interesting. It uses the avoidance reaction, in which an action potential triggers an abrupt change in direction. This happens in reaction to various stimuli, for example mechanical stimuli. Here the avoidance reaction is triggered when light intensity decreases; thus, the organism avoids shade and stays in light. It seems that the algae somehow hijack the avoidance reaction system through products of photosynthesis. It is not clear whether photosynthesis products directly trigger a depolarization, or whether they modulate an existing photosensitive system in Paramecium – indeed, several species of Paramecium have a photophobic reaction to light increase.

June 2017

Editorial

This issue features two books (1,2), a PhD thesis (3) and one article (4). The first book is about the relation between artificial intelligence and human intelligence. Although it was written a long time ago about a different kind of artificial intelligence (expert systems), a number of its arguments are still relevant today. Recently, IEEE Spectrum asked a number of artificial intelligence experts: “When will we have computers as capable as the brain?”. Most of them (but not all) seem to think that it will happen within a few decades or less. This book suggests a more humble answer. The second book presents an unorthodox view of evolution based on endosymbiosis, the idea that major steps in evolution come from the union of organisms into a new one, rather than from mutations.

For the first time, this issue features a PhD thesis (3), on patch-clamp automation. Indeed, why not select a thesis in a journal? A thesis is a substantial peer-reviewed and published study, often more detailed and useful than articles. This one shows impressive work in robotics, enhancing automatic patch clamp with automated pipette change (tricky!).

Finally, this issue features one article, showing that the expression of different ionic channels is coordinated in vertebrates (4).

Books

1. Dreyfus HL and Dreyfus SE (1986). Mind over machine.

This book written in the 1980s is a classic criticism of expert systems as a model of human cognition. The major trend in artificial intelligence at that time was logical inference systems based on rules designed by interrogating human experts. It may seem a little outdated, but there are a few interesting elements. First, there is the historical perspective. Artificial intelligence had had a few successes, which motivated claims that machines would soon achieve the level of human intelligence. It also triggered huge investments, both public and private. But these goals were never achieved. All these approaches applied to very limited domains of expertise and failed to produce general-purpose intelligence. To me there is a striking parallel with the situation today, with a number of respected leaders announcing exactly the same thing, that machines will soon outperform and perhaps even replace humans. As with expert systems, the new connectionist generation of artificial intelligence has had impressive successes, and in many ways outperforms the previous logic-based systems. But these systems still apply to the limited domains for which they have been trained, and there is no sign that any machine understands anything. Machine translation, for example, works remarkably well today, based on modern statistical learning techniques and massive data, but none of these algorithms understands what a car or love is; the field still stumbles on the symbol grounding problem. So we should be more humble, because nothing but our wishful imagination lets us presume that these successes in statistical learning will extend to problems of a different kind, namely the design of autonomous intelligent beings.

Second, the authors argue that there are fundamental differences between the way expert systems and the human mind work. In particular, they criticize the computational view of the mind as the processing of symbols, and argue that the mind rather seems to operate by a holistic, pattern-matching process (following phenomenologist philosophy). This might seem like a trivial point today to connectionists, but this view still underlies much of cognitive science, and in fact, in my view, the criticism is still relevant to connectionism. Indeed, while a typical neural network might take signals as input rather than symbols (eg an image), it is still cast in an input-output processing framework, in which the output is a symbol (eg the label of a face, some category) and not a signal.

The third interesting point in the book is about the way humans acquire skills, in contrast to machines. In expert systems, knowledge is fed into the system directly in the form of rules, obtained by interrogating human experts. This may match how humans learn from the experience of other humans, trying to apply rules that are taught to them. But as the authors argue, while beginners start by applying rules, they quickly start relying less and less on rules, and more on holistic perception of situations, leading them to often break rules. This pattern diverges from the way learning is conceptualized in connectionism – the corresponding paradigm would be supervised learning, which is rigid and does not involve guidance.

Overall, although the arguments in the book were targeted to expert systems, many of them still apply to current artificial intelligence – there is a big gap between mind (or biology) and machine.

2. Lynn Margulis (1998). Symbiotic planet.

Lynn Margulis was an unorthodox biologist who demonstrated that mitochondria, the power plants of cells, result not from random mutations as neo-Darwinian theory would suggest, but from endosymbiosis. In other words, mitochondria are bacteria that have been engulfed by a cell and live in symbiosis with it. In this book, she presents her theory that the most important steps in evolution come from endosymbiosis, not from mutations, in particular the evolution from prokaryotes to eukaryotes, for which there is now convincing evidence. It is a very interesting and refreshing counterpoint to the Darwinian dogma (see the May 2017 issue).

Thesis

3. Holst (2016). In vivo serial patch clamp robotics for cell-type identification in the mouse visual cortex.

This thesis takes patch clamp automation (see the March 2017 issue) one step further, by allowing the robot to change the pipette. This means storing the pipettes on a carousel, filling them with intracellular solution using a pressure controller, placing them on a custom electrode holder, and measuring their geometry (this part has been published in Stockslager et al., 2017). The designs are quite sophisticated. Amazingly, it seems to work! There is also an improved algorithm for break-in that uses electrical feedback to stop the suction when break-in is detected, and overall a lot of interesting content in this thesis.

Articles

4. Tran T, Unal CG, Zaborszky L, Rotstein H, Kirkwood A and Golowasch J (2017). Ionic current correlations are ubiquitous across phyla. (Comment on PubPeer)

This is a short paper showing that in mice, a number of ionic conductances vary across cells in a correlated way. This is shown in particular in hippocampal granule cells, which are very compact (important for interpreting the results, because of space-clamp issues). This phenomenon had previously been demonstrated in invertebrates; other work had shown that the voltage-dependence of different channels is also correlated (McAnelly & Zakon, 2000). Another interesting finding is that conductances vary with the circadian rhythm.

The co-variation of conductances has important consequences for modeling. It means in particular that conductances are not genetically set: they are plastic, as is virtually everything in the cell. The fact that they co-vary, rather than vary independently, suggests that this is not random variation, or more precisely that there is some regulation which ensures that the parameters “make sense”, that is, produce a functional cell. For example, in an isopotential cell, the electrophysiological properties vary moderately if all conductances are scaled by the same number (ie you get similar spikes, but possibly a different excitability threshold). This kind of scaling could result from global homeostatic regulation, for example (see e.g. O’Leary et al. (2014) and other work from Marder’s lab). The data in this paper, however, suggest that the regulation of conductances is more complex than a global scaling. Some conductance pairs are not correlated. In other cases, the linear regression has a positive intercept, so the relation is not linear but affine. Generally, there is also a fair amount of variability around the linear regression, which might be noise from various sources, but which might also simply be the signature of a more complex multidimensional dependence (linear or nonlinear).
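To illustrate the scaling point, here is a minimal simulation (my own illustration, not from the paper): scaling all conductances of a standard Hodgkin-Huxley model by a common factor s leaves the spike waveform roughly similar but shifts the rheobase roughly in proportion to s.

```python
# Minimal sketch: scale all HH conductances by s and find the rheobase by
# bisection. Standard squid-axon parameters (mV, ms, uA/cm^2, mS/cm^2).
import numpy as np
from scipy.integrate import odeint

def hh(y, t, s, I):
    v, m, h, n = y
    am = 0.1*(v+40)/(1-np.exp(-(v+40)/10)); bm = 4*np.exp(-(v+65)/18)
    ah = 0.07*np.exp(-(v+65)/20);           bh = 1/(1+np.exp(-(v+35)/10))
    an = 0.01*(v+55)/(1-np.exp(-(v+55)/10)); bn = 0.125*np.exp(-(v+65)/80)
    ina = s*120*m**3*h*(v-50)   # all maximal conductances scaled by s
    ik  = s*36*n**4*(v+77)
    il  = s*0.3*(v+54.4)
    dv = I - ina - ik - il      # C = 1 uF/cm^2
    return [dv, am*(1-m)-bm*m, ah*(1-h)-bh*h, an*(1-n)-bn*n]

def rheobase(s):
    t = np.linspace(0, 100, 2000)
    lo, hi = 0.0, 50.0
    for _ in range(30):         # bisection on the step-current amplitude
        I = 0.5*(lo + hi)
        v = odeint(hh, [-65, 0.05, 0.6, 0.32], t, args=(s, I))[:, 0]
        hi, lo = (I, lo) if v.max() > 0 else (hi, I)   # spike = overshoot
    return 0.5*(lo + hi)

for s in (0.5, 1.0, 2.0):
    print(f"conductance scale {s}: rheobase ~ {rheobase(s):.1f} uA/cm^2")
```

The waveform comparison is left out for brevity, but plotting v(t) just above rheobase for each s shows similar spike shapes on a compressed time scale (the membrane time constant scales as 1/s).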

(By the way, in case the authors read this comment, the caption of Fig. 2 is incomplete on this version.)

May 2017

Editorial

In this issue I have selected papers that are not in theoretical neuroscience, but which I think provide fresh ideas for theoretical neuroscientists. The first paper (1) is a critique of the “genetic code” idea, which argues that the cell, and in particular genetic networks, must be understood as a system, in contrast with the reductionist idea that one gene (DNA sequence) determines one phenotype (see also my blog piece on evolution as optimization, below). The second paper (2) is related, in that it proposes an original view of gene expression and cellular adaptation, in which genetic networks are seen as stochastic dynamical systems in which the amount of noise is regulated by external factors, leading to adaptation by stochastic search. In my view, this suggests an alternative to the idea of plasticity as gradient descent. The third paper (3) describes a technique to record the propagation of action potentials over days in cultured neurons, which provides a tool to investigate the development of excitability, for which there is currently no theoretical model (and in fact very little is known). Finally, the fourth paper (4) introduces a new model animal, Hydra, in which the activity of all neurons can be simultaneously recorded with calcium imaging, which should give some interesting material for theoreticians.

Blog

Is optimization a good metaphor of evolution?

Optimality arguments are often used in theoretical neuroscience, in reference to evolution. I point out in this text the limitations of the optimization metaphor for understanding evolution.

Articles

1. Noble D (2011). Neo-Darwinism, the Modern Synthesis and selfish genes: are they of use in physiology? (Comment on PubPeer)

I believe this is an important essay for theoretical neuroscientists, as it explains modern evolutionary ideas. It is essentially a critique of the idea of a “genetic code”, specifically of the reductionist idea that a gene (in the modern sense of a DNA sequence) encodes a particular phenotype, an idea that has been popularized in particular by Dawkins’ “selfish gene” metaphor. Denis Noble argues that this reductionist view is simply wrong, because the product of a gene depends not only on other genes but also on the cell itself. For example, the same DNA sequence can produce different functions in different species. Noble cites an experiment where the nucleus of a goldfish is placed in the enucleated egg of a carp; the resulting adult fish is not a goldfish, as the genetic code theory would predict, but something between carp and goldfish (in many cases with other species, no viable adult results). The author points out that DNA does not reproduce, only a cell does, and he concludes: “Organisms are interaction systems, not Turing machines”. In addition, not all transmitted material is in the nucleus. There are also transmitted cytoplasmic factors, for example organelles (mitochondria). In fact, there is a theory, well established in the case of mitochondria, that major evolutionary changes are due not to mutations but to endosymbiosis, the fusion of different organisms into a new one (see Lynn Margulis, Symbiotic planet). It seems to me that a strikingly analogous critique can be formulated against the idea of a “neural code”.

2. Kupiec JJ (1997). A Darwinian theory for the origin of cellular differentiation. (Comment on PubPeer)

This paper is 20 years old, but a recent paper provides experimental support for the theory (Richard et al., 2016). Although this may seem quite far from theoretical neuroscience, I believe the ideas sketched in this paper are very interesting for the questions of learning and plasticity. Kupiec has written a number of papers and books on his theory; another one worth reading is Kupiec (2010), “On the lack of specificity of proteins and its consequences for a theory of biological organization”, where he points out that a given protein can interact with many molecular partners, which is at odds with the idea of a genetic program. This criticism is linked to Denis Noble’s critique of the genetic code (Noble 2011).

The general idea of this 1997 essay is the following (I may oversimplify and interpret it in my own way, as I have not read all his works on the subject). The expression of genetic networks is actually noisy (this is well established). This noise can make the genetic network spontaneously jump between several stable attractors (which we call cell types). Now the interesting idea: the amount of noise is itself regulated, and depends on how well adapted the cell is to its environment (growth factors): less noise when the cell is well adapted. This makes the cell adapt to its environment. The Darwinian flavor comes in when you consider that healthy cells reproduce more.

Why I think this is inspiring for theoretical neuroscience: when trying to connect learning and plasticity, one common idea is to define a functional criterion to be minimized, and then to propose plasticity rules that tend to minimize that criterion. A typical choice is gradient descent. The problem I see is that the cell has no means of knowing the gradient, so there is something a little magical in those plasticity rules. Kupiec’s theory suggests an alternative idea, which is to see plasticity as a stochastic optimization process due to noise in genetic networks. This reminds me of chemotaxis in microbes (eg E. coli): the microbe swims towards the point of highest concentration by a simple method, which consists in turning less often when the concentration increases. There is also some relation to Kenneth Harris’s neural marketplace idea (Harris, 2008).
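Here is a toy version of the idea (my reading, not Kupiec's actual model): a cellular parameter performs a random walk whose amplitude is set by a maladaptation signal; no gradient is ever computed, yet the parameter settles at the optimum because the noise vanishes there.

```python
# Toy "adaptation by regulated noise": the step size of a random walk on a
# parameter x scales with a maladaptation signal E(x). All names and
# constants are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
target = 3.0                        # hypothetical optimal parameter value
E = lambda x: (x - target)**2       # maladaptation ("stress") signal

x = 0.0
for _ in range(2000):
    sigma = 0.5 * np.sqrt(E(x))     # more noise when badly adapted
    x += rng.normal(0.0, sigma)
print(f"x = {x:.4f} (target {target})")
```

Like run-and-tumble chemotaxis, this is a purely local, gradient-free strategy: the walk slows down wherever the cell is well adapted, which is enough to concentrate it near the optimum.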

3. Tovar et al (2017). Recording action potential propagation in single axons using multi-electrode arrays. (Comment on PubPeer)

The authors use an MEA on cultured neurons to record the propagation of action potentials in single neurons, and how it changes over days. The electrode signals quite clearly allow identifying the initiation site (presumably the AIS) and axonal processes. The authors show, for example (using TTX), that reducing the available Na+ conductance reduces conduction velocity without affecting the reliability of propagation or the amplitude of the signals. As stated in the discussion, what I find particularly interesting is that this might allow investigating the development of excitability. From a theoretical neuroscience perspective, I see this as a very interesting form of learning (learning to propagate spikes to terminals) for which there is unfortunately very little experimental data. For example: does excitability develop jointly with the growth of axonal processes, or does it come after? Is it activity-dependent? How does the cell know where to place the channels? (Note that there is some relationship with a paper that I discussed in the previous issue, Williams et al., 2016.) The authors suggest that excitability develops after growth, because they interpret a change in an electrode signal as the axonal process changing from passive to active. Unfortunately, the interpretation is not entirely straightforward, because there is no simultaneous imaging of axonal morphology. This would indeed be a major addition, but it is not so easy with dense cultures. The discussion points to a number of other studies relevant to this problem.

4. Dupre C, Yuste R (2017). Non-overlapping Neural Networks in Hydra vulgaris. (Comment on PubPeer).

This paper introduces the cnidarian Hydra as a model system for neuroscience, showing that the activity of the entire nervous system (a decentralized nervous system called a “nerve net”) can be measured with calcium imaging at single-neuron resolution, as the animal is small and transparent. In principle, the ability to record (and potentially optically stimulate) the entire nervous system is interesting in that it might help go beyond the reductionist approaches that are popular in neuroscience (tuning curves, etc.) but are not appropriate for understanding biological organisms, which have a systemic organization. This point was in fact made a long time ago in a more general setting, for example by proponents of cybernetics (see e.g. General System Theory by von Bertalanffy, 1968).

Thus, there is potential in the possibility of measuring the activity of an entire nervous system. This study is a technical demonstration of feasibility. There are however a number of issues that need to be solved: the temporal resolution (100 ms, which probably does not allow resolving the temporal coordination of spikes); the fact that only correlations between activity and behavior are measured, while understanding interactions requires manipulating neurons (possible with optogenetics) and/or the environment; the fact that behavior is not natural, because the animal is constrained between two coverslips (perhaps an online tracking method could be imagined?); and, perhaps most importantly, the fact that the effect of an action potential on the animal’s movement is unknown, and probably involves rather complicated biomechanical and hydrodynamic aspects. Finally, it is often implicitly assumed that behavior is entirely determined by nervous activity. But what might come up is that the nervous system (as well as non-electrical components, eg metabolism) interacts with a complex biomechanical system (the theme of embodiment, see e.g. Pfeifer & Bongard, 2006).

April 2017

Editorial

In this issue, I have selected one paper on building a cheap lab with 3D-printed equipment (1), and four papers related to axonal excitability and plasticity. Williams et al. (2) formalize and analyze a model of intracellular trafficking and its regulation, which might apply to axonal plasticity and protein turnover. I picked an old paper by Stanford (3) suggesting that there might in particular be axonal plasticity for timing, in a way that equalizes the conduction time from different retinal ganglion cells to their targets. A review by Wefelmeyer et al. (4) summarizes recent findings about the structural plasticity of the axonal initial segment. Finally, I selected a paper by Gouwens and Wilson (5), who used theory and experiments to study the geometry of spike initiation in Drosophila neurons.

Articles

1. Chagas AM, Godino LP, Arrenberg AB, Baden T (2017). The 100 € lab: A 3-D printable open source platform for fluorescence microscopy, optogenetics and accurate temperature control during behaviour of zebrafish, Drosophila and C. elegans. (Comment on PubPeer).

This is quite exciting: the authors demonstrate a 3D-printed platform, with some basic electronics (Arduino, Raspberry Pi), which includes a microscope, manipulators, Peltier-based temperature control, and everything necessary for optogenetics, fluorescence imaging, and behavioral tracking, all of this for about 200 €. The optics are apparently not great (about 10 µm precision) but could be replaced. This could be a way to convince theoreticians to do their own experiments!

2. Williams AH, O’Donnell C, Sejnowski TJ, O’Leary T (2016). Dendritic trafficking faces physiologically critical speed-precision tradeoffs. (Comment on PubPeer).

Plasticity and protein turnover require intracellular transport of molecules. How can molecules be delivered at the right place? A popular model is the “sushi belt” model: material moves along a belt (microtubules) and synapses pick from it at a variable rate. There are different ways to regulate the amount of material delivered at different places: for example, regulating the rate of capture, or regulating the trafficking rates (the speed of the belt, although here the analogy with the sushi belt does not work so well). This model, which is not a mathematical model but rather a vague conceptual one, raises very interesting theoretical questions, which are examined in this paper. For example, how is it possible to ensure that material is delivered at different sites in appropriate amounts, based only on local demand signals? If a synapse picks from the belt, wouldn’t that affect delivery to all downstream synapses? The authors formalize the sushi belt model mathematically, and examine essentially two variations, one where the trafficking rates are regulated, another where the capture rates are regulated. The study shows that it is in fact not at all trivial to make the model functional, in terms of precision (delivering the right amount of material) and delivery speed. I suspect that there are better ways to regulate the trafficking and capture rates than those proposed there, but in any case this study has the merit of formalizing the model and some of the functional problems. Although the model was conceived for dendritic trafficking, I suppose it should also apply to the axon, for example for the maintenance of excitability via protein turnover. Note that there are other theoretical studies of intracellular trafficking, in particular by Paul Bressloff (e.g. Bressloff and Levien, 2015).
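Here is a minimal formalization of the belt, simplified from my reading of the paper (these are not the authors' exact equations): cargo u_i moves down a chain of compartments at a trafficking rate a, and each site captures cargo into a local pool at rate k_i, which encodes demand.

```python
# Minimal "sushi belt": anterograde transport along N compartments plus local
# capture. Rates and the demand profile are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N = 20
a = 1.0                       # trafficking ("belt") rate
k = np.full(N, 0.05)          # capture rates = local demand
k[5] = 0.5                    # one high-demand site near the soma

def rhs(t, y):
    u = y[:N]                                        # cargo on the belt
    du = np.empty(N)
    du[0] = 1.0 - (a + k[0]) * u[0]                  # constant somatic supply
    du[1:] = a * (u[:-1] - u[1:]) - k[1:] * u[1:]    # transport + capture
    return np.concatenate([du, k * u])               # c_i: captured pools

sol = solve_ivp(rhs, (0.0, 500.0), np.zeros(2 * N), rtol=1e-8)
print("steady-state capture flux:", np.round(k * sol.y[:N, -1], 3))
```

Running this makes the downstream-interaction problem concrete: the greedy site 5 depletes the belt, so every site beyond it receives less; matching delivery to demand with only local signals is exactly the difficulty the paper analyzes.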

3. Stanford LR (1987). Conduction Velocity Variations Minimize Conduction Time Differences Among Retinal Ganglion Cell Axons. (Comment on PubPeer).

This 30-year-old paper is not very well known, but I find it fascinating. In the retina, the axons of ganglion cells converge to the optic disk, where they form the optic nerve. The optic nerve is myelinated, but the part of the axons within the retina is not. Because all axons first meet at the optic disk, there is a conduction delay that depends on how far the cell is from the optic disk. The surprising result in this paper is that the conduction time in the optic nerve (from the retina to the LGN) is inversely correlated with the conduction time in the retina, so that the total conduction time is invariant (arguably, there are not so many data points, just 12 cells). This suggests the existence of developmental plasticity mechanisms that adjust axon diameter (or the distance between nodes of Ranvier) for synchrony.
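To make the compensation explicit, here is a back-of-the-envelope restatement (the notation is mine, not Stanford's):

```latex
% Cell i has an unmyelinated intraretinal path of length d_i (velocity u)
% and a myelinated optic-nerve path of length L (velocity v_i).
% Invariance of the total conduction time T requires
\[
T = \frac{d_i}{u} + \frac{L}{v_i}
\quad\Longrightarrow\quad
v_i = \frac{L}{T - d_i/u},
\]
% so cells farther from the optic disk (larger d_i) must have faster,
% presumably thicker, optic-nerve axons.
```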

4. Wefelmeyer W, Puhl CJ, Burrone J (2016). Homeostatic Plasticity of Subcellular Neuronal Structures: From Inputs to Outputs. (Comment on PubPeer)

This review highlights recent findings on the structural plasticity of synapses and the axonal initial segment (AIS). I was especially interested in the AIS part. Several recent studies show that the AIS can change position and length under different manipulations, for example optogenetic stimulation or high-potassium depolarization. These structural changes are associated with changes in excitability, which the authors present as homeostatic, although they recognize that the results are not so clear. In particular, structural plasticity depends on cell type (distal displacement in some cell types, proximal displacement in others), and other plastic changes (eg in the expression of ionic channels) occur and act as confounding factors. For example, high-potassium depolarization makes the AIS of cultured hippocampal neurons move distally (Grubb & Burrone, 2010). I have shown that this displacement should in principle make the neuron (slightly) more excitable (Brette, 2013), but the opposite is seen in those neurons. There were however strong changes in membrane properties, so the causal relations are not so obvious, all the more so as other changes, such as Nav channel phosphorylation, might have occurred too. The authors cite Gulledge & Bravo (2016) to argue that attenuation between the soma and AIS could be responsible for the decreased excitability, but that paper was a simulation study where the axon diameter was fixed (1 µm) while the somatodendritic morphology was changed; in reality, small neurons also have small axons, so that the situation analyzed in Brette (2013) still applies, in principle. Another interesting finding reviewed in this paper is that GABAergic synapses on the AIS do not move when the AIS moves, so that the number of synapses between the soma and the initiation site can change, which changes the effect of inhibitory inputs. All these observations call for theoretical studies, where the relation between geometrical factors and excitability is analyzed. Finally, I would like to point out that one of our recent studies (Hamada et al., 2016) shows that structural plasticity of the AIS can have a homeostatic effect not on excitability per se, but on the transmission of the axonal spike to the soma.
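For reference, the resistive-coupling prediction, as I remember it from Brette (2013); see that paper for the exact derivation, and note that the notation here is mine:

```latex
% With the AIS at axial distance x from the soma, the somatic spike
% threshold is predicted to decrease logarithmically with distance,
\[
\theta(x) \approx \theta_0 - k_a \ln\frac{x}{x_0},
\]
% where k_a (a few mV) is the Boltzmann slope factor of Nav channel
% activation. A distal shift of the AIS should therefore slightly lower
% the threshold, which is why the decreased excitability observed after
% distal displacement points to confounding factors.
```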

5. Gouwens, NW and Wilson, RI (2009). Signal Propagation in Drosophila Central Neurons. (Comment on PubPeer)

Spike initiation in invertebrate neurons is quite different from that in vertebrate neurons. In the typical vertebrate neuron, synaptic currents from the dendrites are gathered at the soma, and spikes are initiated in the axon, which starts from the soma. In the typical invertebrate neuron, such as the one studied here (a Drosophila central neuron), a neurite emerges from the soma, then bifurcates into a dendritic tree and an axon. There is immunochemical evidence of an initial segment-like structure near the bifurcation point in Drosophila neurons (Trunova et al., 2011). This study confirms it with electrophysiological evidence and modeling. Morphologies are reconstructed, and passive responses to currents are measured at the soma. Optimization finds values for the passive properties; there are significant sources of uncertainty, but these are well addressed in the paper. The authors then show that spikes recorded at the soma are small, implying that the initiation zone is distal, and they use the model, plus recordings of larger action potentials in other types of Drosophila neurons, to estimate the location of the spike initiation site, which is found to be near the axon-dendrite bifurcation. Finally, they show that the resting potential is due mainly to Na+ and K+, as in other invertebrate neurons (Marmor, 1975).
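As a toy version of the passive-fitting step (my own illustration, not the authors' code), one can fit a single-compartment RC response to a current step by least squares; the actual study fits reconstructed multicompartment morphologies.

```python
# Toy passive fit: recover input resistance and membrane time constant from
# a noisy response to a small hyperpolarizing current step.
import numpy as np
from scipy.optimize import curve_fit

I_step = -0.01                             # nA, hypothetical test current
def rc_response(t, R, tau, v_rest):        # R in MOhm, tau in ms -> V in mV
    return v_rest + R * I_step * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 100.0, 500)           # ms
rng = np.random.default_rng(1)
data = rc_response(t, 200.0, 20.0, -60.0) + rng.normal(0.0, 0.1, t.size)

(R, tau, v_rest), _ = curve_fit(rc_response, t, data, p0=[100.0, 10.0, -65.0])
print(f"R = {R:.0f} MOhm, tau = {tau:.1f} ms, rest = {v_rest:.1f} mV")
```

With a full morphology, the same least-squares principle applies, but the forward model is a multicompartment cable and the uncertainty analysis (as the authors discuss) becomes essential.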