May 2017


In this issue I have selected papers that are not in theoretical neuroscience, but which I think provide fresh ideas for theoretical neuroscientists. The first paper (1) is a critique of the “genetic code” idea, which argues that the cell, and in particular genetic networks, must be understood as a system, in contrast with the reductionist idea that one gene (DNA sequence) determines one phenotype (see also my blog piece on evolution as optimization, below). The second paper (2) is related, in that it proposes an original view of gene expression and cellular adaptation, in which genetic networks are seen as stochastic dynamical systems whose amount of noise is regulated by external factors, yielding adaptation by stochastic search. In my view, this suggests an alternative to the idea of plasticity as gradient descent. The third paper (3) describes a technique to record the propagation of action potentials over days in cultured neurons, which provides a tool to investigate the development of excitability, for which there is currently no theoretical model (and in fact very little is known). Finally, the fourth paper (4) introduces a new model animal, Hydra, in which the activity of all neurons can be simultaneously recorded with calcium imaging, which should give some interesting material for theoreticians.


Is optimization a good metaphor of evolution?

Optimality arguments are often used in theoretical neuroscience, in reference to evolution. I point out in this text the limitations of the optimization metaphor for understanding evolution.


1. Noble D (2011). Neo-Darwinism, the Modern Synthesis and selfish genes: are they of use in physiology? (Comment on PubPeer)

I believe this is an important essay for theoretical neuroscientists, where modern evolutionary ideas are explained. It is essentially a critique of the idea of a “genetic code”, specifically of the reductionist idea that a gene (in the modern sense of a DNA sequence) encodes a particular phenotype, an idea that has been popularized in particular by Dawkins’ “selfish gene” metaphor. Denis Noble argues that this reductionist view is simply wrong, because the product of a gene depends not only on other genes but also on the cell itself. For example, the same DNA sequence can produce different functions in different species. Noble cites an experiment where the nucleus of a goldfish is placed in the enucleated egg of a carp, and the adult fish that results is not a goldfish, as the genetic code theory would predict, but something between carp and goldfish (in many cases with other species, no viable adult results). The author points out that DNA does not reproduce, only a cell does, and he concludes: “Organisms are interaction systems, not Turing machines”. In addition, not all transmitted material is in the nucleus. There are also transmitted cytoplasmic factors, for example organelles (mitochondria). In fact, there is a theory, well established in the case of mitochondria, that major evolutionary changes are due not to mutations but to endosymbiosis, the fusion of different organisms into a new one (see Lynn Margulis, Symbiotic Planet). It seems to me that a strikingly analogous critique can be formulated against the idea of a “neural code”.

2. Kupiec JJ (1997). A Darwinian theory for the origin of cellular differentiation. (Comment on PubPeer)

This paper is 20 years old but there is a recent paper providing experimental support to the theory (Richard et al., 2016). Although this may seem quite far from theoretical neuroscience, I believe the ideas sketched in this paper are very interesting for the questions of learning and plasticity. Kupiec has written a number of papers and books on his theory; another one worth reading is Kupiec (2010), “On the lack of specificity of proteins and its consequences for a theory of biological organization”, where he points out the fact that a given protein can interact with many molecular partners, which is at odds with the idea of a genetic program. The criticism is linked to Denis Noble’s critique of the genetic code (Noble 2011).

The general idea of this 1997 essay is the following (I may oversimplify and interpret it in my own way, as I have not read all his works on the subject). The expression of genetic networks is actually noisy (this is well established). This noise can make the genetic network spontaneously jump between several stable attractors (which we call cell types). Now the interesting idea: the amount of noise is actually regulated and depends on how well adapted the cell is to its environment (growth factors): less noise when the cell is well adapted. This makes the cell adapt to its environment. The Darwinian flavor comes in when you consider that healthy cells reproduce more.
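To make the idea concrete, here is a minimal sketch (my own toy formulation, not Kupiec’s equations): a bistable “gene expression” variable jumps between two attractors under noise, and the noise amplitude is an assumed decreasing function of how well the current state matches the environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptation_demo(target=1.0, steps=20000, dt=0.01):
    # Double-well "gene expression" variable with attractors at -1 and +1.
    # Noise shrinks as the state approaches the environmentally favored
    # state 'target' (the assumed regulation), so the system searches
    # stochastically and then locks in once adapted.
    x = -target  # start in the "wrong" attractor
    for _ in range(steps):
        drift = x - x**3                       # bistable dynamics
        fitness = np.exp(-(x - target) ** 2)   # 1 when adapted, ~0 otherwise
        sigma = 1.5 * (1.0 - fitness)          # less noise when adapted
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

print(adaptation_demo(target=1.0))  # ends near +1
```

No gradient is ever computed: the only feedback is a scalar “how adapted am I” signal that modulates the noise, which is the alternative to gradient descent suggested below.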

Why I think this is inspiring for theoretical neuroscience: when trying to connect learning and plasticity, one common idea is to define a functional criterion that is to be minimized, and then to propose plasticity rules that tend to minimize that criterion. A typical choice is gradient descent. The problem I see is that the cell has no means of knowing the gradient, so there is something a little magical in those plasticity rules. Kupiec’s theory suggests an alternative idea, which is to see plasticity as a stochastic optimization process due to noise in genetic networks. This reminds me of chemotaxis in microbes (eg E. coli): the microbe swims towards the point of highest concentration by a simple method, which consists in turning less often when concentration increases. There is also some relation to Kenneth Harris’s neural marketplace idea (Harris, 2008).
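The chemotaxis analogy is easy to simulate. Below is a deliberately crude run-and-tumble sketch (illustrative only, not a calibrated E. coli model; the tumble probabilities and “concentration” field are made up): the walker tumbles less often when the signal is increasing, and this alone drives it toward the source.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_and_tumble(steps=5000, dt=0.1):
    # Swim straight at unit speed; tumble (pick a new random direction)
    # with low probability when the "concentration" increased since the
    # last step, and high probability otherwise.
    source = np.array([10.0, 0.0])
    pos = np.zeros(2)
    angle = rng.uniform(0, 2 * np.pi)
    c_prev = -np.linalg.norm(pos - source)  # concentration ~ -distance
    for _ in range(steps):
        pos = pos + dt * np.array([np.cos(angle), np.sin(angle)])
        c = -np.linalg.norm(pos - source)
        p_tumble = 0.02 if c > c_prev else 0.2  # tumble less when climbing
        if rng.random() < p_tumble:
            angle = rng.uniform(0, 2 * np.pi)
        c_prev = c
    return np.linalg.norm(pos - source)  # final distance to the source

print(run_and_tumble())  # much closer than the initial distance of 10
```

Note that the only information used is whether the signal went up or down, never its gradient, which is exactly the appeal of this class of plasticity rules.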

3. Tovar et al (2017). Recording action potential propagation in single axons using multi-electrode arrays. (Comment on PubPeer)

The authors use an MEA on cultured neurons to record the propagation of action potentials in single neurons, and how it changes over days. The electrode signals quite clearly allow identifying the initiation site (presumably the AIS) and axonal processes. The authors show for example (using TTX) that reducing the available Na+ conductance reduces conduction velocity without affecting the reliability of propagation or the amplitude of the signals. As stated in the discussion, what I find particularly interesting is that it might allow investigating the development of excitability. From a theoretical neuroscience perspective, I see this as a very interesting form of learning (learning to propagate spikes to terminals) for which there is unfortunately very little experimental data. For example: does excitability develop jointly with the growth of axonal processes, or does it come after? Is it activity-dependent? How does the cell know where to place the channels? (Note that there is some relationship with a paper that I discussed in the previous issue, Williams et al., 2016.) The authors suggest that excitability develops after growth, because they interpret a change in an electrode signal as the axonal process changing from passive to active. Unfortunately, the interpretation is not entirely straightforward because there is no simultaneous imaging of axonal morphology. This would indeed be a major addition, but not so easy with dense cultures. The discussion points to a number of other studies relevant to this problem.

4. Dupre C, Yuste R (2017). Non-overlapping Neural Networks in Hydra vulgaris. (Comment on PubPeer).

This paper introduces the cnidarian Hydra as a model system for neuroscience, showing that the activity of the entire nervous system (a decentralized nervous system called a “nerve net”) can be measured with calcium imaging at single-neuron resolution, as the animal is small and transparent. In principle, the ability to record (and potentially optically stimulate) the entire nervous system is interesting in that it might help go beyond the reductionist approaches that are popular in neuroscience (tuning curves, etc.) but are not appropriate for understanding biological organisms, which have a systemic organization. This point was in fact made a long time ago in a more general setting, for example by proponents of cybernetics (see e.g. General System Theory by Von Bertalanffy, 1968).

Thus, there is potential in the possibility of measuring the activity of an entire nervous system. This study makes a technical demonstration of feasibility. There are however a number of issues that need to be solved: the temporal resolution (100 ms, which probably does not allow resolving the temporal coordination of spikes); the fact that only correlations between activity and behavior are measured, while understanding interactions requires manipulating neurons (possible with optogenetics) and/or the environment; behavior is not natural because the animal is constrained between two coverslips (perhaps we might imagine an online tracking method?); perhaps more importantly, the effect of an action potential on the animal’s movement is unknown, and probably involves some rather complicated biomechanical and hydrodynamic aspects. Finally, it is often implicitly assumed that behavior is entirely determined by nervous activity. But what might come up is that the nervous system (as well as non-electrical components, such as metabolism) interacts with a complex biomechanical system (the theme of embodiment, see e.g. Pfeifer & Bongard, 2006).

April 2017


In this issue, I have selected one paper on building a cheap lab with 3D-printed equipment (1), and 4 papers related to axonal excitability and plasticity. Williams et al. (2) formalize and analyze a model of intracellular trafficking and its regulation, which might apply to axonal plasticity and protein turnover. I picked an old paper by Stanford (3) suggesting that there is axonal plasticity for timing in particular, acting in a way that equalizes the conduction time between different retinal ganglion cells and their targets. A review by Wefelmeyer et al. (4) summarizes recent findings about the structural plasticity of the axon initial segment. Finally, I selected a paper by Gouwens and Wilson (5), who used theory and experiments to study the geometry of spike initiation in Drosophila neurons.


1. Chagas AM, Godino LP, Arrenberg AB, Baden T (2017). The 100 € lab: A 3-D printable open source platform for fluorescence microscopy, optogenetics and accurate temperature control during behaviour of zebrafish, Drosophila and C. elegans. (Comment on PubPeer).

This is quite exciting: the authors demonstrate the use of a 3D-printed platform, with some basic electronics (Arduino, Raspberry Pi), which includes a microscope, manipulators, Peltier heating, and everything necessary for optogenetics, fluorescence imaging and behavioral tracking, all of this for about 200 €. The optics are apparently not great (about 10 µm precision) but could be replaced. This could be a way to convince theoreticians to do their own experiments!

2. Williams AH, O’Donnell C, Sejnowski TJ, O’Leary T (2016). Dendritic trafficking faces physiologically critical speed-precision tradeoffs. (Comment on PubPeer).

Plasticity and protein turnover require intracellular transport of molecules. How can molecules be delivered at the right place? A popular model is the “sushi belt” model: material moves along a belt (microtubules) and synapses pick from it at a variable rate. There are different ways to regulate the amount of material that is delivered at different places, for example by regulating the rate of capture, or by regulating trafficking rates (the speed of the belt; although here the analogy with the sushi belt does not work so well). This model, which is not a mathematical model but rather a vague conceptual one, raises very interesting theoretical questions, which are examined in this paper. For example, how is it possible to ensure that material is delivered at different sites in appropriate amounts, based only on local demand signals? If a synapse picks from the belt, wouldn’t that affect delivery to all downstream synapses? The authors formalize the sushi belt model mathematically, and examine essentially two variations, one where the trafficking rates are regulated, another where capture rates are regulated. The study shows that it is in fact not at all trivial to make the model functional, in terms of precision (delivering the right amount of material) and delivery speed. I suspect that there are better ways to regulate the trafficking and capture rates than proposed there, but in any case this study has the merit of formalizing the model and some of the functional problems. Although the model was conceived for dendritic trafficking, I suppose it should also apply to the axon, for example for the maintenance of excitability via protein turnover. Note that there are other theoretical studies on intracellular trafficking, in particular by Paul Bressloff (e.g. Bressloff and Levien, 2015).
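As a toy illustration of the downstream-competition question (my own discretization, not the equations of Williams et al.): advect material along a chain of compartments and let each one capture a fraction of what passes by. A greedy proximal site then starves the distal ones.

```python
import numpy as np

def sushi_belt(capture_rates, belt_speed=1.0, steps=2000, dt=0.01):
    # Material is injected at the proximal end, advected along the belt,
    # and each compartment captures a fraction of what passes by.
    n = len(capture_rates)
    on_belt = np.zeros(n)
    captured = np.zeros(n)
    rates = np.asarray(capture_rates, dtype=float)
    for _ in range(steps):
        inflow = np.concatenate(([1.0], on_belt[:-1]))  # 1.0 = somatic supply
        on_belt = on_belt + dt * belt_speed * (inflow - on_belt)
        picked = dt * rates * on_belt   # local capture from the belt
        on_belt -= picked
        captured += picked
    return captured

# a greedy proximal synapse starves the downstream ones
print(sushi_belt([2.0, 0.1, 0.1, 0.1]))
```

In this sketch the first compartment accumulates far more material than the others, which is the precision problem in miniature: purely local capture rules couple all downstream delivery.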

3. Stanford LR (1987). Conduction Velocity Variations Minimize Conduction Time Differences Among Retinal Ganglion Cell Axons. (Comment on PubPeer).

This 30-year-old paper is not very well known, but I find it fascinating. In the retina, axons of ganglion cells converge to the optic disk, where they form the optic nerve. The optic nerve is myelinated, but the part of the axons within the retina is not. Because all axons first meet at the optic disk, there is a conduction delay that depends on how far the cell is from the optic disk. The surprising result of this paper is that the conduction time in the optic nerve (from the retina to the LGN) is inversely correlated with the conduction time in the retina, so that the total conduction time is invariant (arguably, there are not so many data points, just 12 cells). This suggests the existence of developmental plasticity mechanisms that adjust axon size (or the distance between nodes of Ranvier) for synchrony.
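The invariance amounts to a simple delay budget: total delay = (intraretinal path)/(unmyelinated velocity) + (optic nerve path)/(myelinated velocity). With made-up numbers (not the paper’s data), a cell far from the optic disk can match the total delay of a nearby cell if its optic-nerve conduction is faster:

```python
def total_delay(d_retina_mm, v_retina_m_s, v_nerve_m_s, d_nerve_mm=40.0):
    # conduction time in ms (mm divided by m/s gives ms):
    # unmyelinated intraretinal part + myelinated optic nerve part
    return d_retina_mm / v_retina_m_s + d_nerve_mm / v_nerve_m_s

# cell near the optic disk: short slow path, slower optic-nerve axon
near = total_delay(d_retina_mm=0.5, v_retina_m_s=0.5, v_nerve_m_s=8.0)
# cell far from the disk: long slow path, faster optic-nerve axon
far = total_delay(d_retina_mm=2.0, v_retina_m_s=0.5, v_nerve_m_s=20.0)
print(near, far)  # both 6.0 ms: the total delay is equalized
```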

4. Wefelmeyer W, Puhl CJ, Burrone J (2016). Homeostatic Plasticity of Subcellular Neuronal Structures- From Inputs to Outputs. (Comment on PubPeer)

This review highlights recent findings on structural plasticity of synapses and the axon initial segment (AIS). I was especially interested in the AIS part. Several recent studies show that the AIS can change position and length with different manipulations, for example optogenetic stimulation or high-potassium depolarization. These structural changes are associated with changes in excitability, which the authors present as homeostatic, although they recognize the results are not so clear. In particular, structural plasticity depends on cell type (distal displacement in some cell types, proximal displacement in others), and other plastic changes (eg expression of ionic channels) occur and act as confounding factors. For example, high-potassium depolarization makes the AIS of cultured hippocampal neurons move distally (Grubb & Burrone, 2010). I have shown that this displacement should in principle make the neuron (slightly) more excitable (Brette, 2013), but the opposite is seen in those neurons. There were however strong changes in membrane properties, so the causal relations are not so obvious, especially since other changes, such as Nav channel phosphorylation, might have occurred too. The authors cite Gulledge & Bravo (2016) to argue that attenuation between the soma and AIS could be responsible for the decreased excitability, but that paper was a simulation study where axon diameter was fixed (1 µm) while somatodendritic morphology was changed; in reality small neurons also have small axons, so the situation analyzed in (Brette, 2013) still applies, in principle. Another interesting finding reviewed in this paper is that GABAergic synapses on the AIS do not move when the AIS moves, so the number of synapses between the soma and the initiation site can change, which changes the effect of inhibitory inputs. All these observations call for theoretical studies in which the relation between geometrical factors and excitability is analyzed.
Finally, I would like to point out that one of our recent studies (Hamada et al., 2016) shows that structural plasticity of the AIS can have a homeostatic effect not on excitability per se, but on the transmission of the axonal spike to the soma.

5. Gouwens, NW and Wilson, RI (2009). Signal Propagation in Drosophila Central Neurons. (Comment on PubPeer)

Spike initiation in invertebrate neurons is quite different from vertebrate neurons. In the typical vertebrate neuron, synaptic currents from the dendrites are gathered at the soma, and spikes are initiated in the axon, which starts from the soma. In the typical invertebrate neuron, such as the one studied here (a Drosophila central neuron), a neurite emerges from the soma, then bifurcates into a dendritic tree and an axon. There is immunochemical evidence of an initial segment-like structure in Drosophila neurons near the bifurcation point (Trunova et al. 2011). This study confirms it with electrophysiological evidence and modeling. Morphologies are reconstructed, and passive responses to currents are measured at the soma. Optimization finds values for the passive properties – there are significant sources of uncertainty, but these are well addressed in the paper. Then they show that spikes in the soma are small, implying that the initiation zone is distal, and they use the model plus recordings of larger action potentials in other types of Drosophila neurons to estimate the spike initiation site, which is found to be near the axon-dendrite bifurcation. Finally, they show that the resting potential is due mainly to Na+ and K+, as in other invertebrate neurons (Marmor, 1975).

March 2017


This month: epistemology, motor control and automated patch clamp.

First, why epistemology in a theoretical neuroscience journal? Because epistemology of neuroscience is theoretical neuroscience. It is about reflecting on what it means to model behavior or the nervous system, and what methods and metaphors (eg “coding”) are relevant conceptual tools. It sets the frame in which meaningful questions can be asked and theories can be built. What is it that we want to explain when we make a model? Do we want to explain experimental data? If at a cocktail party I am asked about my work, I might respond for example that I try to understand how we localize sounds in space. Crucially, I do not respond that I develop models that try to explain the percentage of errors in a given psychophysical task when hearing tones through headphones in the lab. Yet in most studies, and in particular theoretical studies, we tend to forget the big picture. The model matches some experimental data, but does it actually address the hard problem, i.e., explaining real behavior? In the field of sound localization, most models are hopelessly bad at anything remotely related to sound localization in real settings, but they are good at discriminating tones (Goodman et al., 2013). We simply forget that a model of a sensory system is meant to explain how animals do the awesome things that they do, and not only to match a set of lab data on a trivial task (discriminating tones). Matching artificial experimental data provides constraints on models, but it is not the goal. Krakauer et al. (2) make this point in a recent essay and argue for more thorough studies of behavior (I would even say, ethology). An older paper by Tytell et al. (3) goes further and argues that one needs to realize that the nervous system is embodied and interacts with the physical world, and behavior is the result of this interaction. Crucially, it appears that the nervous system can tune the body, not only control it.

This last point has motivated us to look at how muscles produce movement and force. This is the subject of a theoretical paper by my student Charlotte Le Mouel where we argue that posture is actually tuned not for equilibrium but for potential movement (1). Muscles are controlled by spikes, and this control is often given as an example of rate coding. This in my view is an example of the confusions between correlation and causation often seen in the spike vs. rate debate (see my essay on the subject, Brette 2015). A nice 2006 study by Zhurov and Brezina (5) demonstrated in Aplysia that actually, spike timing is crucial in determining both the temporal pattern and the amplitude of muscular contraction, which is a deterministic function of spike pattern. A recent paper shows that it also appears to be the case in vertebrates (4).

Finally, this issue features 4 papers on automated patch-clamp (6-9). All have been published in the last 5 years. Why is this relevant to a theoretical neuroscience journal? Because I believe this might allow theoretical neuroscientists to dig into experiments themselves, which would be extremely beneficial. Patch-clamp is tedious, technical and labor-intensive. It is difficult to do both serious theory (and by this, I mean not only simulating models but also analyzing them and making predictions) and patch clamp experiments to test it. But for a few years now, it has become possible to automate most of the process – one must still prepare the tissue, the solutions, and pull electrodes. What is missing currently is: open source software for the automation, and perhaps a reduction of hardware costs (currently very expensive) using open hardware (eg 3D printed parts).

From the lab

1. Le Mouel C and Brette R (2017). Mobility as the purpose of postural control. (Comment on PubPeer).

As a first step into the development of sensorimotor models (for example orientation responses), we have looked at how muscles produce movement and force. This paper explains which muscles you should contract and in which order so as to produce certain movements efficiently, using elementary mechanical considerations (ie, we do not need muscle physiology). We then show how it explains muscular contraction patterns that are observed experimentally in humans in a variety of situations. Quite surprisingly (at least for us), we have found that posture seems to be adjusted not for stability per se, but to allow for efficient movements to be performed when necessary (eg when balance is perturbed). The work also questions the theory of muscular synergies, as it shows that skillful movement requires fine muscular control, both spatially and temporally.


2. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, Poeppel D (2017). Neuroscience Needs Behavior: Correcting a Reductionist Bias. (Comment on PubPeer).

From the perspective of a computational neuroscientist, I believe a very important point is made here. Models are judged on their ability to account for experimental data, so the critical question is what counts as relevant data? Data currently used to constrain models in systems neuroscience are most often neural responses to stereotypical stimuli, and results from behavioral experiments with well-controlled but unecological tasks, for example conditioned responses to variations in one dimension of a stimulus. This leads to models that might agree with laboratory data (by design) but that don’t work, i.e. that do not explain how the animal manages to do what it does. I have made this point in the specific context of sound localization (Brette, 2010; Goodman et al., 2013). More on PubPeer and Pubmed Commons.

3. Tytell ED, Holmes P, Cohen AH (2011). Spikes alone do not behavior make: why neuroscience needs biomechanics. (Comment on PubPeer).

This review makes the point that behavior results not only from neural activity but also from the mechanical properties of the body, or more broadly from the coupling between body and environment. A famous example in robotics is McGeer’s passive walker. The paper draws on many interesting examples from (mostly but not only) insect locomotion. I found that the most interesting part of this review was the discussion of active tuning of passive properties. That is, one way in which animals produce movement is not by directly controlling the different limbs, as we would imagine if we were to control a robot, but by modulating the passive mechanical properties of the musculoskeletal system. For example, if two antagonist muscles are contracted, they become stiffer, which changes their reactions to perturbations. These reactions are instantaneous, as they do not require the nervous system; they are called “preflexes”. The paper ends on the idea that the development of motor skill might rely on the tuning of preflexes, rather than on the development of central control. This opens very interesting paths for theoretical neuroscience.

4. Srivastava KH, Holmes CM, Vellema M, Pack A, Elemans CPH, Nemenman I, Sober SJ (2017). Motor control by precisely timed spike patterns. (Comment on PubPeer).

This study shows that the precise spike timing of vertebrate motoneurons has significant behavioral effects, by looking at breathing in songbirds, which is slow compared to the time scale of spike patterns. Long recordings are obtained with an MEA, together with air pressure and force recordings. Focusing on 20-ms bursts of 3 spikes, they show that shifting the middle spike by a few milliseconds has strong effects on muscle contraction and air pressure, due to nonlinearities in the neuromuscular transform. The findings support the view that firing rates correlate with various aspects of neural activity, but spikes causally determine neural activity and behavior (Brette 2015). This is a nice study, although the authors seem to have missed a previous study that shows very similar findings in more detail in an invertebrate (Aplysia) (Zhurov and Brezina, 2006).

5. Zhurov Y and Brezina V (2006). Variability of Motor Neuron Spike Timing Maintains and Shapes Contractions of the Accessory Radula Closer Muscle of Aplysia. (Comment on PubPeer).

This study shows that the precise spike timing of motoneurons controlling a feeding muscle of Aplysia has a strong effect on its contraction. This is surprising because that muscle is a slow muscle that contracts over seconds, but adding or removing just one spike has a very strong and immediate effect on contraction, as shown in Fig. 1C of the paper.

The muscle is controlled by just two neurons, so it is a nice model system. The authors also show that natural spike patterns are irregular, but the neuromuscular transform is deterministic, which means that shifting spikes has a reproducible effect on the pattern of contraction, which is not just a temporal shift but also a strong change in amplitude, due to nonlinear effects. The result is that natural patterns produce twice as much contraction as regular patterns of the same rate. In addition, these irregular patterns appear to be synchronized across the two sides of the animal, producing synchronized contractions. This is very convincing and supportive of spike-based theories of neural function (Brette 2015).

6. Kodandaramaiah SB, Franzesi GT, Chow BY, Boyden ES, Forest CR (2012). Automated whole-cell patch-clamp electrophysiology of neurons in vivo. (Comment on PubPeer).

This is the first demonstration of automatic patch-clamp in intact cells (i.e., not with patch-clamp chips, which work with cell suspensions). It was done in vivo, which is actually simpler than in vitro because it is blind: the pipette is lowered until a cell is detected, which is signaled by an increase in resistance. The full code and circuit designs are freely available, although the code is written in LabVIEW, which is proprietary software; it is also made for specific hardware (amplifier and acquisition board), although this can of course be adapted. An update with more detail has been published recently (Kodandaramaiah et al. 2016). The key element is the pressure controller, which allows the program to send positive or negative pressure and suction pulses through the pipette. There is a clever and very cheap design in this study: there are 4 tanks with specified pressures (I suppose using large pipettes that are manually filled with air), and a few electrovalves, controlled by an acquisition board, switch between the different tanks.

7. Desai NS, Siegel JJ, Taylor W, Chitwood RA, Johnston D (2015). MATLAB-based automated patch-clamp system for awake behaving mice. (Comment on PubPeer).

This is similar to the blind in vivo automatic patch-clamp technique of Kodandaramaiah et al. (2016), with a few differences. One is that it is written in MATLAB, which is also proprietary software. The more interesting difference, in my view, is the pressure controller. Instead of using 4 manually filled tanks, there is an automatic electronic system that adjusts the pressure to any specified value. It essentially mixes two pressure sources (+10 psi and -10 psi) using a PID controller programmed on an Arduino. The code is also freely available.
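For readers unfamiliar with the approach, here is a generic sketch of such a controller (a plain PID loop with made-up gains and first-order chamber dynamics, not the authors’ Arduino code): the PID output sets the mix between the +10 and -10 psi sources, and the chamber pressure converges to the setpoint.

```python
def run_pid(target_psi, steps=500, dt=0.01, kp=2.0, ki=5.0, kd=0.05):
    # Simulate a pressure chamber fed by a valve that blends a +10 psi
    # and a -10 psi source; a PID loop on the measured pressure sets
    # the blend. Gains and plant dynamics are invented for illustration.
    pressure, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = target_psi - pressure
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        u = max(-1.0, min(1.0, u))            # valve mix in [-1, 1]
        source = 10.0 * u                     # blend of +10 / -10 psi sources
        pressure += dt * (source - pressure)  # first-order chamber dynamics
    return pressure

print(run_pid(3.5))  # settles close to 3.5 psi
```

The integral term is what allows holding an arbitrary steady pressure between the two sources, which is the functional difference from the fixed-tank design of Kodandaramaiah et al.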

8. Wu Q, Kolb I, Callahan BM, Su Z, Stoy W, Kodandaramaiah SB, Neve R, Zeng H, Boyden ES, Forest CR, Chubykin AA (2016). Integration of autopatching with automated pipette and cell detection in vitro. (Comment on PubPeer).

This study adapts the automated patch-clamp technique introduced in Kodandaramaiah et al. (2016) to slices. The approach is visually guided (using simple computer vision algorithms); the motorized manipulator is also automatically calibrated with the camera, using a pipette detection algorithm. The paper claims a 2/3 success rate, instead of 1/3 for a human operator. The code is available in LabVIEW and Python, which is nice, but unfortunately the Python code is not in any usable form at this moment (no documentation and very few comments). I regret that a lot of technical detail is missing from the paper, in particular details of the computer vision algorithms and of the pressure control system. This control system is different from the previous one; instead of tanks with fixed pressure, it seems to use a single pump and a pressure sensor in a clever way to produce both positive and negative pressure. The drawing in Fig. 2C is the only information I could find about the system in the paper.

9. Kolb I, Stoy WA, Rousseau EB, Moody OA, Jenkins A, Forest CR (2016). Cleaning patch-clamp pipettes for immediate reuse. (Comment on PubPeer).

This is a simple but very interesting study where the authors show that it is possible to clean patch clamp pipettes in Alconox up to 10 times, and reuse the pipettes on different cells with no noticeable effect. This is what was missing to truly automate patch clamp, as it was previously necessary to manually change the pipette after every recorded cell (or failed attempt).

February 2017


This month, I have selected 4 papers on spike initiation (1-4), 1 classical paper on the theory of brain energetics (5), and 1 paper on bibliometrics (6). Three of the papers on spike initiation (1-3) have in common that they are about the relation between geometry (morphology of the neuron and spatial distribution of channels) and excitability. Spikes are initiated in a small region called the axon initial segment (AIS), which is very close to the soma. Thus there is a discontinuity in both the geometry (big soma, thin axon) and the spatial distribution of channels (lots in the AIS). This discontinuity has a great impact on excitability, but it has not been explored very deeply in theory. In fact, as I have discussed in a recent review (Brette, 2015), most theory on excitability (dynamical systems theory) has been developed on isopotential models, and so is largely obsolete. So there is much to do on a theory of spike initiation that takes into account the soma-AIS system.


From the lab

1. Hamada M, Goethals S, de Vries S, Brette R, Kole M (2016). Covariation of axon initial segment location and dendritic tree normalizes the somatic action potential. (Comment on PubPeer)

(Full disclosure: I am an author of this paper). In the lab, we are currently interested in the relation between neural geometry and excitability. In particular, what is the electrical impact of the location of the axon initial segment (AIS)? Experimentally, this is a difficult question because manipulations of AIS geometry (distance, length) also induce changes in Nav channel and other channel properties, in particular phosphorylation (Evans et al., 2015). So this is typically a good question for theorists. I have previously shown that moving the AIS away from the soma should make the neuron more excitable (lower spike threshold), everything else being equal (Brette, 2013). Here we look at what happens after axonal spike initiation, when the current enters the soma (I try to avoid the term “backpropagate”, see Telenczuk et al., 2016). The basic insight is simple: when the axonal spike is fully developed, the voltage gradient between soma and start of the AIS should be roughly 100 mV, and so the axonal current into the soma should be roughly 100 mV divided by resistance between soma and AIS, which is proportional to AIS distance. Next, to charge a big somatodendritic compartment, you need a bigger current. So we predict that big neurons should have a more proximal AIS. This is what the data obtained by Kole’s lab show in this paper (along with many other things, as our theoretical work is a small part of the paper – as often, most of the theory ends up in the supplementary).
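The scaling argument can be put in numbers (illustrative values, not those of the paper): with the axial resistance between soma and AIS proportional to AIS distance, doubling the distance halves the current delivered to the soma.

```python
import math

def axial_resistance_MOhm(distance_um, axon_diam_um=1.0, Ri_ohm_cm=150.0):
    # R = Ri * L / (pi * r^2) for a uniform cylinder, returned in megaohms;
    # axon diameter and cytoplasmic resistivity are assumed values
    r_cm = (axon_diam_um / 2.0) * 1e-4
    L_cm = distance_um * 1e-4
    return Ri_ohm_cm * L_cm / (math.pi * r_cm**2) * 1e-6

for d_um in (10, 20, 40):
    Ra = axial_resistance_MOhm(d_um)
    I_nA = 100.0 / Ra  # ~100 mV across Ra, in mV/MOhm = nA
    print(f"AIS at {d_um} um: Ra = {Ra:.1f} MOhm, current = {I_nA:.1f} nA")
```

With these assumed parameters the currents come out in the nA range, and the inverse relation between AIS distance and somatic current (hence the prediction that big neurons need a proximal AIS) is immediate.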


2. Evans MD, Tufo C, Dumitrescu AS and MS Grubb. (2017). Myosin II activity is required for structural plasticity at the axon initial segment. (Comment on PubPeer)

A number of studies have shown that the AIS can move over hours or days, with various manipulations such as depolarizing the neuron (as in this study) or stimulating it optogenetically. Two open questions: what are the molecular mechanisms involved in this displacement? Is it actually a displacement or is it just that stuff is removed at one end and inserted at the other end? The same lab previously addressed the first question, showing the involvement of somatic L-type calcium channels and calcineurin. This study shows that myosin (the stuff of muscle, except not the type expressed in muscles) is involved, which strongly suggests that it is an actual displacement; this is in line with previous studies showing that dendrites and axons are contractile structures (e.g. Roland et al. (2014)). This and previous studies start to provide building blocks for a model of activity-dependent structural plasticity of the AIS (working on it!).


3. Michalikova M, Remme MWH and R Kempter. (2017). Spikelets in Pyramidal Neurons: Action Potentials Initiated in the Axon Initial Segment That Do Not Activate the Soma. (Comment on PubPeer)

Using simulations of detailed models, the authors propose to explain the observation of spikelets in vivo (small all-or-none events) by the failed propagation of axonal spikes to the soma. They show that, under certain circumstances, a spike generated at the distal axonal initiation site may fail to reach the somatic threshold for AP generation, so that only the smaller axonal spike is observed at the soma. This paper provides a nice overview of the topic and I found the study convincing. There is in fact a direct relation to our paper discussed above (Hamada et al., 2016): this study shows how the axonal spike can fail to trigger the somatic spike, which explains why the AIS needs to be placed at the right position to prevent this. One can argue (speculatively) that if AIS position is indeed tuned to produce the right amount of somatic depolarization, then it should sometimes fail and result in a spikelet (algorithm: if no spikelet, move AIS distally; if spikelet, move AIS proximally).


4. Mensi S, Hagens P, Gerstner W and C Pozzorini (2016). Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons. (Comment on PubPeer)

I had to love this paper, because the authors essentially confirm experimentally every theoretical prediction we had made in our paper on spike threshold adaptation (Platkiewicz and Brette, 2011). Essentially, what we had done was to derive the dynamics of the spike threshold from the dynamics of Nav channel inactivation. There were a number of non-trivial predictions, such as the shortening of the effective integration time constant, sensitivity to input variance, the specific way in which the spike threshold depends on membrane potential, and the interaction between spike-triggered and subthreshold adaptation (which we touched upon in the discussion). This study uses a non-parametric model-fitting approach in cortical slices to empirically derive the dynamics of the spike threshold (indirectly, based on responses to fluctuating currents), and the results are completely in line with our theoretical predictions.
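The flavor of such threshold dynamics can be conveyed with a minimal caricature (not the model of either paper): the threshold relaxes toward a voltage-dependent steady-state value with some time constant, so that sustained depolarization raises the threshold. The functional form and all parameter values below are assumptions chosen for illustration only.

```python
import math

def simulate_threshold(V_trace, dt=0.1, tau=5.0,
                       theta0=-50.0, ka=5.0, Vi=-60.0, ki=5.0):
    """Minimal caricature of voltage-dependent threshold dynamics:
    tau * dtheta/dt = theta_inf(V) - theta, with a smooth, increasing
    steady-state threshold theta_inf(V). Units: mV, ms; values illustrative."""
    theta = theta0
    thetas = []
    for V in V_trace:
        theta_inf = theta0 + ka * math.log(1 + math.exp((V - Vi) / ki))
        theta += dt / tau * (theta_inf - theta)
        thetas.append(theta)
    return thetas

# Depolarizing the neuron raises the threshold (subthreshold adaptation):
rest = simulate_threshold([-70.0] * 500)
depol = simulate_threshold([-55.0] * 500)
print(rest[-1] < depol[-1])  # → True
```

Because the threshold tracks the recent membrane potential, only fast input fluctuations that outrun the threshold can trigger spikes, which is the enhanced sensitivity to rapid fluctuations reported in the paper.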


5. Attwell D and SB Laughlin (2001). An energy budget for signaling in the grey matter of the brain (Comment on PubPeer and Pubmed Commons).

This is an old but important paper on the energetics of the brain, in particular: how much does it cost to maintain the resting potential? How much does it cost to propagate a spike? The paper presents theoretical methods for making these estimations, and is also a good source for relevant empirical numbers. It is important, though, to look at follow-up studies, which have addressed some issues; for example, action potential efficiency is underestimated in this study. One problem in this study is the estimation of the cost of the resting potential, which I think is wrong (see my detailed comment on Pubmed Commons and the response of the authors). Unfortunately, I think it is really hard to estimate this cost by theoretical means; it would require knowing the resting permeability to the various ions, most importantly in the axon. More on PubPeer.
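The basic logic of such an estimate is simple: at steady state, the Na/K pump must extrude the resting Na+ influx, at a cost of 1 ATP per 3 Na+ ions. A hedged sketch, with an invented resting Na+ conductance (this is precisely the quantity that is hard to know, which is why the estimate is fragile):

```python
# Sketch of the pump-cost logic; the conductance value is an
# illustrative assumption, not a number from the paper.
E_NA = 0.050          # Na+ reversal potential (V)
V_REST = -0.070       # resting potential (V)
G_NA = 1e-9           # resting Na+ conductance (S), assumed
E_CHARGE = 1.602e-19  # elementary charge (C)

i_na = G_NA * (V_REST - E_NA)          # inward Na+ current (A), negative
na_ions_per_s = abs(i_na) / E_CHARGE   # Na+ ions entering per second
atp_per_s = na_ions_per_s / 3          # pump stoichiometry: 3 Na+ per ATP
print(f"{atp_per_s:.2e} ATP/s")        # → 2.50e+08 ATP/s
```

The answer scales linearly with G_NA, so an estimate of the resting cost is only as good as one's knowledge of the resting permeabilities, which is the point made above.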


6. Brembs B, Button K and M Munafò (2013). Deep Impact: Unintended consequences of journal rank. (Comment on PubPeer)

The authors look at the relation between journal rank (derived from the impact factor) and various indicators, for example reported effect sizes, statistical power, etc. In summary, they found that the only thing journal rank strongly correlates with is the proportion of retractions and fraud. Another interesting finding concerns the predictive power of journal rank on future citations. There is obviously a positive correlation, since the impact factor measures the number of citations, but it is in fact quite small (see my post on this). What is most interesting is that the predictive power started increasing in the 1960s, when the impact factor was introduced. This strongly suggests that, rather than being a quality indicator, the impact factor biases the citations of papers (it increases the visibility of otherwise equally good papers). This paper also shows evidence of manipulation of impact factors by journals (including Current Biology, whose impact factor went from 7 to 12 after its acquisition by Elsevier), and is generally a good source of references on the subject.