October 2017

Editorial

This issue features 3 papers on electrophysiology (1-3) and one on motor control (4). The first one describes electrical communication between bacteria, based on K+ channels that are (indirectly) voltage-dependent. This shows the universality of electrical communication based on ionic channels, something that is not specific to neurons. The second one shows how to build a simple low-cost dynamic clamp system (where the injected current depends in real time on the measured voltage), using a low-cost microcontroller (not Arduino, but similar). The third one is a (primarily) modeling study of the extracellular field produced by an axon bundle, showing how its terminal zone can produce strong fields. Finally, I discuss a theoretical paper which shows how spiking neurons can control a simple physical system (an inverted pendulum).

Articles

1. Prindle A, Liu J, Asally M, Garcia-Ojalvo J and GM Süel (2015). Ion channels enable electrical communication in bacterial communities. (Comment on PubPeer)

This paper describes oscillations of membrane potential and extracellular potassium in a bacterial population (shown indirectly with an optical sensor), which show radial synchronization (ie same Vm for cells at the same radius). The proposed mechanism is as follows. A wave is initiated by some metabolic factor which makes a K+ channel open, hyperpolarizing the cell. This releases K+ into the extracellular environment. The extracellular increase in K+ raises the Nernst potential for K+ (makes it less negative), so all neighboring cells are depolarized. The K+ channel is voltage-dependent (indirectly in the model): it opens when the cell is depolarized, producing a hyperpolarization that again releases K+ extracellularly. With appropriate nonlinearities, the result is a propagating wave of K+ and Vm, which is faster than diffusion. There is a simple Hodgkin-Huxley type model in the supplementary methods. Some of it might be a little questionable (eg the K+ reversal potential increases linearly rather than logarithmically with concentration; but that might be ok for small ion fluxes and probably doesn’t change the results qualitatively), but generally sensible. The model is a chain of cells coupled through the extracellular environment (a toy version is sketched below). It would be interesting to extend the model to a disk and see whether one can account for radial synchrony.
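
Here is a minimal sketch of this kind of model (my own toy chain with arbitrary, untuned parameters, not the published equations): each cell has a K+ conductance that opens upon depolarization, hyperpolarizes the cell and dumps K+ outside, and extracellular K+ diffuses to neighbors, whose K+ reversal potential then rises:

```python
# Toy chain of cells coupled through extracellular K+ (illustrative only).
import numpy as np

N = 100                       # cells along the chain
dt = 0.01                     # time step (ms, nominal)
V = np.full(N, -60.0)         # membrane potentials (mV)
n = np.zeros(N)               # K+ channel gating variable
K = np.full(N, 8.0)           # extracellular [K+] near each cell (mM)
gK, gL, EL = 30.0, 2.0, -60.0
tau_n, tau_K, D = 1.0, 5.0, 1.0
K0, alpha, beta = 8.0, 25.0, 0.2

V[:3] = -20.0                 # kick a few cells to start the wave

for t in range(20000):
    EK = -90.0 + alpha * np.log(K / K0)               # E_K rises with [K+]_out
    n_inf = 1.0 / (1.0 + np.exp(-(V + 50.0) / 3.0))   # opens on depolarization
    lap = np.zeros(N)                                 # K+ diffusion along chain
    lap[1:-1] = K[:-2] + K[2:] - 2 * K[1:-1]
    IK = gK * n * (V - EK)
    V += dt * (-IK - gL * (V - EL))
    n += dt * (n_inf - n) / tau_n
    K += dt * (beta * IK + D * lap - (K - K0) / tau_K)
```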

This is interesting for at least two reasons. One is that there is electrical communication based on ionic channels not just in neurons but also in bacteria, so probably in all living cells. Another is that the mode of communication is neither gap junctions (direct electrical coupling) nor synapses (through neurotransmitters), but changes in the ionic composition of the extracellular environment. Such changes should also occur in the nervous system, so could it be that neurons also communicate in this way?

2. Desai NS, Gray R, Johnston D (2017). A Dynamic Clamp on Every Rig. (Comment on PubPeer)

This paper presents a low-cost dynamic clamp system implemented with a Teensy microcontroller, which works independently of the recording PC. This makes using the dynamic clamp much simpler: one would otherwise need an operating system with a real-time kernel. The associated website is unusually good, with a detailed parts list, construction methods, code, advice, etc.
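
The core of any dynamic clamp is a fast loop that reads the membrane potential and injects the current of a simulated conductance. Here is a minimal sketch of that loop (my reconstruction of the principle, not the authors' code); on the microcontroller, the read and write would be ADC/DAC calls, and the "preparation" here is a toy RC membrane so that the sketch runs standalone:

```python
# Minimal dynamic clamp loop: the injected current depends in real time
# on the measured voltage. All parameters are illustrative.
C, gm, EL = 100.0, 5.0, -70.0    # toy membrane: capacitance (pF), leak (nS), rest (mV)
g_dc, E_dc = 3.0, -90.0          # simulated conductance added by the clamp
dt = 0.05                        # loop period (ms), i.e. a 20 kHz update rate

V = EL
for step in range(10000):
    I_dc = g_dc * (E_dc - V)               # "read" V, compute clamp current
    V += dt * (gm * (EL - V) + I_dc) / C   # "inject": the membrane integrates it
```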

3. McColgan T, Liu J, Kuokkanen PT, Carr CE, Wagner H, Kempter R (2017). Dipolar extracellular potentials generated by axonal projections. (Comment on PubPeer)

The authors show that the terminal zone of an axon bundle can generate a strong dipolar extracellular field. This is particularly the case in the auditory brainstem of barn owls (and most likely of mammals), where there is a strong extracellular potential (several mV) locked to the sound, called the neurophonic. The idea is quite simple. In the terminal zone, the axons bifurcate and then terminate, so that the number of axons first increases, then decreases. If the wavelength of the propagating wave is right, then current is drawn into the region where axons bifurcate and exits where they terminate. This is shown numerically and theoretically, and compared to data in the barn owl nucleus laminaris. One point I am wondering about is the role of axon diameters in the phenomenon; indeed, at an axon bifurcation, the diameters of daughter branches tend to be smaller than that of the primary branch, so one might wonder whether that might not counterbalance the increase in axon number.
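
To see why this produces a dipolar far field, one can sum point-source potentials V = I/(4πσr) over a band of current sinks (the bifurcation zone) and a band of sources (the termination zone). This toy computation (mine, not the authors' model) gives the characteristic dipolar profile along the bundle axis:

```python
# Dipolar field from a band of sinks followed by a band of sources.
import numpy as np

sigma = 0.3                           # extracellular conductivity (S/m)
sinks = np.linspace(0.0, 0.5, 50)     # mm: bifurcation zone (current enters)
sources = np.linspace(0.5, 1.0, 50)   # mm: termination zone (current exits)
I = 1e-9                              # current per point source (A)

def potential(x, z=0.1):
    """Extracellular potential at axial position x (mm), lateral offset z (mm)."""
    r_sink = np.hypot(x - sinks, z) * 1e-3    # distances in meters
    r_src = np.hypot(x - sources, z) * 1e-3
    return I / (4 * np.pi * sigma) * (np.sum(1 / r_src) - np.sum(1 / r_sink))

xs = np.linspace(-1.0, 2.0, 301)
V = np.array([potential(x) for x in xs])      # dipolar profile along the axis
```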

4. Kang TS, Banerjee A (2017). Learning Deterministic Spiking Neuron Feedback Controllers. (Comment on PubPeer)

The authors study how spiking neurons can control an inverted pendulum. Each spike produces a force acting on the pendulum (like a muscle twitch), and the observed variables (the angle and its derivative) are inputs to the neurons (it’s a single layer). The question is how to set the parameters (input gains) so that the system is stable. This is an interesting problem, which is not straightforward despite the simplicity of the architecture. The authors simply define an error function and derive a gradient descent on the parameters, which seems to work. It seems, however, that the gradient depends on detailed aspects of the system, so it is not so clear that it is a good solution. Nevertheless, it is interesting because it addresses a problem of learning that is not representational but directly related to behavior, in contrast with most modeling studies on synaptic plasticity and learning.
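
To make the setup concrete, here is a toy version (my sketch, not the paper's model): two leaky integrate-and-fire neurons read the angle and angular velocity, and each spike delivers a brief torque pulse of fixed sign, like a muscle twitch. The gains here are set by hand and not guaranteed to stabilize; the paper's point is precisely that finding good gains is the hard part:

```python
# Inverted pendulum driven by spikes from two sensing neurons (toy).
import numpy as np

dt, g, L, m = 1e-3, 9.8, 1.0, 1.0
theta, omega = 0.05, 0.0             # initial tilt (rad) and angular velocity
v = np.zeros(2)                      # membrane potentials
W = np.array([[ 50.0,  10.0],        # neuron 0 senses +theta, +omega
              [-50.0, -10.0]])       # neuron 1 senses the opposite sign
F = np.array([-8.0, 8.0])            # torque pulse per spike (pushes back)

for step in range(5000):
    x = np.array([theta, omega])
    v += dt * (-v / 0.02 + W @ x)    # leaky integration of the sensed state
    spikes = v > 1.0
    v[spikes] = 0.0                  # reset after each spike
    torque = F @ spikes              # each spike acts like a twitch
    omega += dt * (g / L * np.sin(theta) + torque / (m * L**2))
    theta += dt * omega
```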

September 2017

Editorial

This issue features two epistemological papers (1-2), one paper on spatial navigation (3), and two papers on automatic patch clamp (4-5). The first one is a critique of the neural coding metaphor, which I have just written. This critique connects to the more general problem of reductionism in neuroscience (or in biology more generally), about which Tony Bell wrote an interesting essay (2). The coding metaphor indeed implies that there exists a separation between representation and decision/action, but a nervous system cannot be split up in this way. Similarly, Bell argues, seeing the brain as a computer is not very meaningful.

The next paper I discuss (3) is not a neuroscience paper, but an old robotics paper where the authors describe a simple way in which an agent can navigate in crowded environments, by avoiding places it has visited. This is seen in some species (eg slime mold) which leave a trail behind them. A wild but interesting speculation is that the spatial memory system of vertebrates (place cells) might result from an internalization of such mechanisms.

Finally, I discuss two simultaneously published papers on automatic patch clamp that describe more or less the same algorithm (4-5): a rather straightforward but useful improvement where the targeted cell is visually tracked, so as to adjust the trajectory of the pipette as it is moved down.

From the lab

1. Brette (2017). Is coding a relevant metaphor for the brain?

In this essay, I argue that the neural coding metaphor is often inappropriate and misleading. First, it is a dualist metaphor, because for something to count as “information”, that thing must be mapped to some other thing outside the brain. Information in the sense of Shannon is information for an external observer, not for the receiver. A more relevant notion of information is captured by the metaphor of perception as science making (finding laws and structure), rather than perception as encoding. Second, the relation between the “input” and “output” of a neuron is circular (through synaptic connections or through the effect of action on sensory signals), and therefore framing perception as a feedforward process is inappropriate. Spikes are not messages; they are actions on other neurons and on the body.

Articles

2. Bell (1999). Levels and loops: the future of artificial intelligence and neuroscience. (Comment on PubPeer)

This is an interesting epistemological paper which discusses two important ideas in neuroscience. One is the ubiquity of loops. For example, the output of one neuron ultimately influences its own inputs because of cycles in synaptic networks. Sensory signals drive action, and action changes sensory signals. The same loops are seen at all levels (molecular, etc). The interdependency of all elements of a living system makes reductionist accounts inappropriate. One of these accounts is the coding metaphor, in which neurons are presumed to encode properties of the world, in a feedforward way (see my essay on the coding metaphor).

The second idea is a criticism of the computer metaphor of the brain, or of living systems in general. More specifically, in Bell’s words: “the prevalent tendency to view biological organisms as machines in the exact technical sense in which computers are machines, i.e. in the sense that they are physical instantiations of finite models which do not permit physical interactions beneath the level of their machine parts (e.g. the logic gate) to influence their functionality”. Empirically, we find interactions between and across all levels, and this makes the machine metaphor not very insightful.

3. Balch and Arkin (1993). Avoiding the Past: A Simple but Effective Strategy for Reactive Navigation.

This is a paper from the reactive robotics field, where the authors describe a simple way to navigate in crowded environments. A classic problem arises when there is a U-shaped barrier between the current position and a target position: if the agent goes straight towards the target, it gets stuck in the barrier – this is known as the “fly at the window” problem. It can be solved by planning and detailed knowledge of the environment, but this paper shows another efficient solution which is much simpler and is used by some species such as slime molds (Reid et al., 2012). Here the robot maintains a spatial memory of places it has visited. A place it has visited becomes repulsive (in practice the algorithm computes the spatial gradient of a trace). The robot then avoids its own recent trajectory, and thus solves the U-shaped barrier problem (a toy version is sketched below). One might try to think of parallels between this system and place cells in the hippocampus (see an old blog post of mine on this).
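
A minimal version of the idea (my reconstruction, not the paper's code): the agent leaves a decaying trace on visited cells and greedily moves to the neighbor that minimizes distance-to-goal plus a penalty proportional to the trace. A U-shaped barrier can be added as cells of infinite cost:

```python
# "Avoid the past" navigation on a grid (toy sketch).
import numpy as np

size = 50
goal = np.array([45, 25])
pos = np.array([5, 25])
trace = np.zeros((size, size))
moves = [np.array(m) for m in [(1, 0), (-1, 0), (0, 1), (0, -1)]]

for step in range(2000):
    trace[tuple(pos)] += 1.0        # mark the current place as visited
    trace *= 0.999                  # slow decay of the spatial memory
    best, best_cost = pos, np.inf
    for m in moves:
        p = np.clip(pos + m, 0, size - 1)
        cost = np.linalg.norm(goal - p) + 5.0 * trace[tuple(p)]
        if cost < best_cost:
            best, best_cost = p, cost
    pos = best
    if np.array_equal(pos, goal):
        break
```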

4. Suk et al. (2017). Closed-Loop Real-Time Imaging Enables Fully Automated Cell-Targeted Patch-Clamp Neural Recording In Vivo. (Comment on PubPeer)

This is an improvement of previously developed automatic patch-clamp systems. The algorithm in Wu et al. (2016) could patch a visually identified cell, but it required human intervention in about half of the cases. The main reason is that pipette movements induce movements of the targeted cell, so the trajectory of the pipette needs to be adjusted. The straightforward solution is to track the movements of the cell and adjust accordingly. This is what is done here. The algorithm is made very simple by the (more complicated) experimental design, where both the pipette and the cell are fluorescent and a 2-photon microscope is used. This way, tracking the cell is essentially a matter of tracking a fluorescent blob (the focal plane is the one where intensity is maximal); a minimal sketch is given below. The authors mention that they did not manage to do it without fluorescence. Fluorescence (Alexa) in the pipette is used in several ways: first to locate the pipette tip before brain penetration, then to check that the pipette is not clogged (there is a fluorescent plume flowing out of the pipette), and finally to check whether break-in was successful. There is also a small improvement in sealing, where the pressure is alternated if sealing fails, before the sealing procedure starts again. A similar tracking algorithm has been proposed simultaneously by Annecchino et al. (2017).
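
The kind of blob tracking involved can be surprisingly simple. Here is a guess at a minimal version (not the published pipeline): threshold the image and take the intensity-weighted centroid, and pick the focal plane with maximal total intensity:

```python
# Minimal fluorescent blob tracking (illustrative sketch).
import numpy as np

def track_blob(image, thresh_frac=0.5):
    """Return the (row, col) centroid of the bright blob in a 2D image."""
    img = image - image.min()
    mask = img > thresh_frac * img.max()
    w = img * mask                        # keep only the bright region
    rows, cols = np.indices(img.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

def best_focus(stack):
    """Index of the focal plane: total intensity is maximal in focus."""
    return int(np.argmax([plane.sum() for plane in stack]))
```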

5. Annecchino et al. (2017). Robotic Automation of In Vivo Two-Photon Targeted Whole-Cell Patch-Clamp Electrophysiology. (Comment on PubPeer)

This is an improvement of automatic patch-clamp systems to patch a visually identified cell, which is very similar to a simultaneously published algorithm by Suk et al. (2017). It uses image processing to track movements of the cell induced by movements of the pipette, both being fluorescent. The image processing is more complex than in Suk et al. (2017); it might be able to handle more crowded images. Unfortunately, it is described quite briefly in the main text and not detailed in the methods (for some reason, the methods describe the hardware but not the algorithms; the same is unfortunately true of the previous most closely related paper, Wu et al. (2016) – which oddly enough is not cited). The code, however, is public, although it is written for proprietary software (LabVIEW). Oddly enough, the paper introduces a pressure controller as a novelty (compared to the fixed-pressure containers in Kodandaramaiah et al. (2012)), but this was already done by Desai et al. (2015) as well as by Wu et al. (2016) (both uncited).

July 2017

Editorial

This month, I discuss four rather diverse papers. The first paper is a recent review about how the structure of neural networks changes spontaneously in vivo, which raises some questions about our view of memory engrams. The second one is an intriguing study showing that anticipated eye movements have an influence on the eardrums; it questions our view of the senses as separate modalities. The next two are about the neurobiology of unicellular organisms. I use the term neurobiology because these organisms perform sensory transduction and produce action potentials (presumably, in the case of (3)) that lead to motor reactions. They are not very well known, but in my view very interesting for theoretical neuroscience.

Articles

1. Chambers and Rumpel (2017). A stable brain from unstable components – Emerging concepts, implications for neural computation. (Comment on PubPeer)

The authors review recent experimental evidence showing that in vivo, in the absence of any particular task (in particular any learning task), synapses and functional properties of single neurons are not stable. For example, spines disappear and reappear; more significantly in my view, motor tuning and place fields drift. Synaptic changes are still observed when ion channel activity is blocked. As the authors point out, this might suggest that the changes are intrinsic, although in fact it does not mean that in normal conditions they are independent of activity; it could well be that the fluctuations are entrained by activity, in the same way as the response of an intrinsically noisy neuron is entrained by a time-varying current (Mainen and Sejnowski, 1995; see also Brette & Guigon, 2003 for some theory; a toy illustration is sketched below). The more significant point, I think, is that functional properties of neurons, e.g. tuning properties, seem to drift over time. This raises questions about the idea of a cell assembly as a memory engram. If a particular assembly encodes a particular memory, then after some time this same assembly should mean something completely different. Imagine, to take a caricatural example, that a memory of a red car is stored as a network of two connected neurons, the red neuron and the car neuron. After two weeks the red neuron becomes a green neuron. When cued with a car, I now remember a green car.
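
Here is the entrainment point as a toy simulation (my sketch, in the spirit of Mainen & Sejnowski, 1995, with arbitrary parameters): a leaky integrate-and-fire neuron with private intrinsic noise, driven by the same frozen fluctuating current on every trial, produces spike times that align across trials:

```python
# Spike-time entrainment of a noisy neuron by a frozen input (toy).
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.1, 10000                   # ms per step, number of steps
frozen = rng.normal(0.0, 1.5, T)     # the same input current on every trial

def trial():
    v, spikes = 0.0, []
    for t in range(T):
        noise = rng.normal(0.0, 0.5)             # private intrinsic noise
        v += dt * (-v / 20.0 + frozen[t] + noise)
        if v > 1.0:
            spikes.append(t * dt)
            v = 0.0
    return spikes

trials = [trial() for _ in range(10)]  # spike times cluster across trials
```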

In theoretical neuroscience, one question which has been the subject of many studies is how synaptic structure can be stable enough to sustain memories while being plastic enough to allow learning. Maybe this is not the right question; maybe the right question is: how can learning persist over a time scale longer than the functional dynamics of networks?

2. Gruters et al. (2017). The eardrum moves when the eyes move: multisensory effect on the mechanics of hearing. (Comment on PubPeer)

This is an intriguing paper showing that the eardrums move in conjunction with the eyes. Specifically, when the eyes saccade to the left, the eardrums move to the right (and conversely), and then oscillate at about 30 Hz for a few cycles (possibly more, as the apparent damping could be the result of averaging). These oscillations are not that small, equivalent to about 57 dB. Eardrum movements seem to start slightly before eye movements, which suggests that they result from anticipatory control by the central nervous system (rather than feedback or coupling). Naturally, one wonders what influence this might have on auditory perception, in particular on spatial perception of sounds. The fact that the oscillation is at the bottom of the audible spectrum might argue for a small role; on the other hand, one wonders what function this anticipatory control might serve if not perceptual. More generally, it makes me wonder to what extent results obtained in anesthetized animals (which form the majority of our knowledge on the auditory system), where the efferent system is down, are meaningful for the physiological condition. Intriguing!

3. Wan & Goldstein (2017). Run stop shock, run shock run: Spontaneous and stimulated gait-switching in a unicellular octoflagellate. (Comment on PubPeer).

The world of unicellular organisms is fascinating. In this paper, the authors show that a unicellular octoflagellate (eight flagella), about 17 µm in length, displays three different gaits: run, shock (change of direction) and rest, corresponding to different beating modes of the flagella. The shock is a very quick reaction that can also be triggered mechanically. This reminds me of the avoidance reaction of Paramecium (Eckert & Naitoh, 1972), and I would bet that this occurs by stimulus-induced depolarization followed by an action potential. It would be interesting to stick an electrode in those!

4. Iwatsuki & Naitoh (1988). Behavioural Responses to Light in Paramecium bursaria in Relation to its Symbiotic Green Alga Chlorella. (Sorry I did not find it on PubPeer!)

To continue on the theme of unicellular neurobiology, this old paper discusses the photosensitive behavior of Paramecium bursaria. This is a unicellular organism (a ciliate) which lives in symbiosis with green algae (ie, it cultivates plants inside its cytoplasm; see a former issue on endosymbiosis). As a result, it tends to accumulate in light. The way this works is very interesting. It uses the avoidance reaction, in which an action potential triggers an abrupt change in direction. This happens in reaction to various stimuli, for example mechanical stimuli. Here the avoidance reaction is triggered when light intensity decreases; thus, the organism avoids shade and stays in the light. It seems that the algae somehow hijack the avoidance reaction system through products of photosynthesis. It is not clear whether photosynthesis products directly trigger a depolarization, or whether they modulate an existing photosensitive system in Paramecium – indeed, several species of Paramecium have a photophobic reaction to light increase.

June 2017

Editorial

This issue features two books (1,2), a PhD thesis (3) and one article (4). The first book is about the relation between artificial intelligence and human intelligence. Although it was written a long time ago about a different kind of artificial intelligence (expert systems), a number of arguments are still relevant today. Recently, IEEE Spectrum asked a number of artificial intelligence experts: “When will we have computers as capable as the brain?”. Most of them (but not all) seem to think that it will happen within a few decades or less. This book suggests a more humble answer. The second book is about an unorthodox view of evolution based on endosymbiosis, the idea that major steps in evolution come from the union of organisms into a new one, rather than by mutations.

For the first time, this issue features a PhD thesis (3), on patch-clamp automation. Indeed, why not select a thesis for a journal? A thesis is a substantial peer-reviewed, published study, often more detailed and useful than articles. This one shows impressive work in robotics, enhancing automatic patch clamp with automated pipette change (tricky!).

Finally, this issue features one article, showing that different ionic conductances co-vary across cells in vertebrates (4).

Books

1. Dreyfus HL and Dreyfus SE (1986). Mind over machine.

This book written in the 1980s is a classic criticism of expert systems as a model of human cognition. The major trend in artificial intelligence at that time was logical inference systems based on rules designed by interrogating human experts. It may seem a little outdated, but there are a few interesting elements. First, there is the historical perspective. Artificial intelligence had had a few successes, which motivated claims that machines would soon achieve the level of human intelligence. It also triggered huge investments, both public and private. But these goals were never achieved. All these approaches applied to very limited domains of expertise and failed to produce general-purpose intelligence. To me there is a striking parallel with the situation today, with a number of respected leaders announcing exactly the same thing, that soon machines will outperform and perhaps even replace humans. As with expert systems, the new connectionist generation of artificial intelligence has had impressive successes, and in many ways outperforms the previous logic-based systems. But these systems still apply to limited domains for which they have been trained, and there is no sign that any machine understands anything. Machine translation, for example, works remarkably well today, based on modern statistical learning techniques and massive data, but none of these algorithms understands what a car or love is; the field still stumbles on the symbol grounding problem. So we should be more humble, because nothing but our wishful imagination lets us presume that these successes in statistical learning will extend to problems of a different kind, namely the design of autonomous intelligent beings.

Second, the authors argue that there are fundamental differences between the way expert systems and the human mind work. In particular, they criticize the computational view of the mind as the processing of symbols, and argue that the mind rather seems to operate by a holistic, pattern-matching process (following phenomenologist philosophy). This might seem like a trivial point today to connectionists, but this view still underlies much of cognitive science, and in fact in my view the criticism is still relevant to connectionism. Indeed, while a typical neural network might take signals as input rather than symbols (eg an image), it is still cast in an input-output processing framework, in which the output is a symbol (eg the label of a face, some category) and not a signal.

The third interesting point in the book is about the way humans acquire skills, in contrast to machines. In expert systems, knowledge is fed into the system directly in the form of rules, obtained by interrogating human experts. This may match how humans learn from the experience of other humans, trying to apply rules that are taught to them. But as the authors argue, while beginners start by applying rules, they quickly come to rely less and less on rules and more on holistic perception of situations, which often leads them to break the rules. This pattern diverges from the way learning is conceptualized in connectionism – the closest paradigm would be supervised learning, which remains the same rigid procedure throughout and does not capture this progression.

Overall, although the arguments in the book were targeted to expert systems, many of them still apply to current artificial intelligence – there is a big gap between mind (or biology) and machine.

2. Lynn Margulis (1998). Symbiotic planet.

Lynn Margulis was an unorthodox biologist who demonstrated that mitochondria, the power plants of cells, result not from random mutations as neo-darwinist theory would suggest, but from endosymbiosis. In other words, mitochondria are bacteria that have been engulfed in a cell and live in symbiosis with it. In this book, she presents her theory that the most important steps in evolution come from endosymbiosis, not from mutations, in particular the evolution from prokaryotes to eukaryotes, for which there is now convincing evidence. It is a very interesting and refreshing counterpoint to the darwinist dogma (see the May 2017 issue).

Thesis

3. Holst (2016). In vivo serial patch clamp robotics for cell-type identification in the mouse visual cortex.

This thesis takes patch-clamp automation (see the March 2017 issue) one step further, by allowing the robot to change the pipette. This means storing the pipettes on a carousel, filling them with intracellular solution using a pressure controller, placing them on a custom electrode holder, and measuring their geometry (this last part has been published as Stockslager et al., 2017). The designs are quite sophisticated. Amazingly, it seems to work! There is also an improved algorithm for break-in that uses electrical feedback to stop the suction when break-in is detected, and overall a lot of interesting content in this thesis.

Articles

4. Tran T, Unal CG, Zaborszky L, Rotstein H, Kirkwood A and Golowasch J (2017). Ionic current correlations are ubiquitous across phyla. (Comment on PubPeer)

This is a short paper showing that in mice, a number of ionic conductances vary across cells in a correlated way. This is shown in particular in hippocampal granule cells, which are very compact (important for interpreting the results, because of space-clamp issues). This phenomenon had been previously demonstrated in invertebrates; other work had shown that the voltage-dependence of different channels is also correlated (McAnelly & Zakon, 2000). Another interesting finding is that the conductances vary with the circadian rhythm.

The co-variation of conductances has important consequences in terms of modeling. It means in particular that conductances are not genetically set; they are plastic, like virtually everything in the cell. The fact that they co-vary, rather than vary independently, suggests that this is not random variation, or more precisely that there is some regulation that ensures that the parameters “make sense”, that is, produce a functional cell. For example, in an isopotential cell, the electrophysiological properties vary moderately if all conductances are scaled by the same number (ie you get similar spikes, but possibly a different excitability threshold; a quick numerical illustration is given below). This kind of scaling could result from global homeostatic regulation, for example (see e.g. O’Leary et al. (2014) and other work from Marder’s lab). The data in this paper, however, suggest that the regulation of conductances is more complex than a global scaling. Some conductance pairs are not correlated. In other cases, the linear regression has a positive intercept – so the relation is not linear but affine. Generally, there is also a fair amount of variability around the linear regression, which might be noise of various sources, but which might also simply be the signature of a more complex multidimensional dependence (linear or nonlinear).
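
A quick numerical illustration of the scaling point (standard Hodgkin-Huxley equations with the usual squid parameters; the scaling experiment itself is my toy): multiplying all conductances by a common factor leaves the spike shape roughly unchanged but shifts excitability:

```python
# Scale all conductances of a Hodgkin-Huxley model by a common factor.
import numpy as np

def hh_spike_count(scale, I_amp, T=50.0, dt=0.01):
    gNa, gK, gL = 120.0 * scale, 36.0 * scale, 0.3 * scale
    ENa, EK, EL, C = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    count = 0
    for _ in range(int(T / dt)):
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        V_old = V
        V += dt * (I_amp - gNa * m**3 * h * (V - ENa)
                   - gK * n**4 * (V - EK) - gL * (V - EL)) / C
        if V_old < 0.0 <= V:      # upward zero crossing = one spike
            count += 1
    return count

for s in (0.5, 1.0, 2.0):         # same input, globally scaled conductances
    print(s, hh_spike_count(s, I_amp=10.0))
```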

(By the way, in case the authors read this comment, the caption of Fig. 2 is incomplete on this version.)

May 2017

Editorial

In this issue I have selected papers that are not in theoretical neuroscience, but which I think provide fresh ideas for theoretical neuroscientists. The first paper (1) is a critique of the “genetic code” idea, which argues that the cell, and in particular genetic networks, must be understood as a system, in contrast with the reductionist idea that one gene (DNA sequence) determines one phenotype (see also my blog piece on evolution as optimization, below). The second paper (2) is related, in that it proposes an original view of gene expression and cellular adaptation, in which genetic networks are seen as stochastic dynamical systems in which the amount of noise is regulated by external factors, yielding adaptation by stochastic search. In my view, this suggests an alternative to the idea of plasticity as gradient descent. The third paper (3) describes a technique to record the propagation of action potentials over days in cultured neurons, which provides a tool to investigate the development of excitability, for which there is currently no theoretical model (and in fact very little is known). Finally, the fourth paper (4) introduces a new model animal, Hydra, in which the activity of all neurons can be simultaneously recorded with calcium imaging, which should give some interesting material for theoreticians.

Blog

Is optimization a good metaphor of evolution?

Optimality arguments are often used in theoretical neuroscience, in reference to evolution. I point out in this text the limitations of the optimization metaphor for understanding evolution.

Articles

1. Noble D (2011). Neo-Darwinism, the Modern Synthesis and selfish genes: are they of use in physiology? (Comment on PubPeer)

I believe this is an important essay for theoretical neuroscientists, where modern evolutionary ideas are explained. This is essentially a critique of the idea of a “genetic code”, specifically of the reductionist idea that a gene (in the modern sense of a DNA sequence) encodes a particular phenotype, an idea that has been popularized in particular by Dawkins’ “selfish gene” metaphor. Denis Noble argues that this is a reductionist view that is simply wrong, because the product of a gene depends not only on other genes but also on the cell itself. For example, the same DNA sequence can produce different functions in different species. Noble cites an experiment where the nucleus of a goldfish is placed in the enucleated egg of a carp, and the adult fish that results is not a goldfish, as the genetic code theory would predict, but something between carp and goldfish (in many cases with other species, no viable adult results). The author points out that DNA does not reproduce, only a cell does, and he concludes: “Organisms are interaction systems, not Turing machines”. In addition, not all transmitted material is in the nucleus. There are also transmitted cytoplasmic factors, for example organelles (mitochondria). In fact, there is a theory, which is well established for the case of mitochondria, that major evolutionary changes are due not to mutations but to endosymbiosis, the fusion of different organisms into a new one (see Lynn Margulis, Symbiotic Planet). It seems to me that a strikingly analogous critique can be formulated against the idea of a “neural code”.

2. Kupiec JJ (1997). A Darwinian theory for the origin of cellular differentiation. (Comment on PubPeer)

This paper is 20 years old but there is a recent paper providing experimental support to the theory (Richard et al., 2016). Although this may seem quite far from theoretical neuroscience, I believe the ideas sketched in this paper are very interesting for the questions of learning and plasticity. Kupiec has written a number of papers and books on his theory; another one worth reading is Kupiec (2010), “On the lack of specificity of proteins and its consequences for a theory of biological organization”, where he points out the fact that a given protein can interact with many molecular partners, which is at odds with the idea of a genetic program. The criticism is linked to Denis Noble’s critique of the genetic code (Noble 2011).

The general idea of this 1997 essay is the following (I may oversimplify and interpret in my way as I have not read all his works on the subject). The expression of genetic networks is actually noisy (this is well established). This noise can make the genetic network spontaneously jump between several stable attractors (which we call cell types). Now the interesting idea: the amount of noise is actually regulated and depends on how adapted the cell is to its environment (growth factors): less noise when the cell is well adapted. This makes the cell adapt to its environment. The Darwinian flavor comes in when you consider that healthy cells reproduce more.

Why I think this is inspiring for theoretical neuroscience: when trying to connect learning and plasticity, one common idea is to define a functional criterion that is to be minimized, and then to propose plasticity rules that tend to minimize that criterion. A typical choice is gradient descent. The problem I see is that the cell has no means of knowing the gradient, so there is something a little magical in those plasticity rules. Kupiec’s theory suggests an alternative idea, which is to see plasticity as a stochastic optimization process due to noise in genetic networks (a toy sketch is given below). This reminds me of chemotaxis in microbes (eg E. coli): the microbe swims towards the point of highest concentration by a simple method, which consists in turning less often when concentration increases. There is also some relation to Kenneth Harris’s neural marketplace idea (Harris, 2008).
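
Here is a toy sketch of this kind of noise-regulated stochastic search (my illustration, in the spirit of Kupiec's idea and of chemotaxis, not a published model): parameters take random steps whose size grows with the error, so the system wanders when maladapted and settles when adapted, and no gradient is ever computed:

```python
# Plasticity as stochastic search: noise level depends on adaptation.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.7, -1.2])        # unknown optimum (for this toy error)
w = np.zeros(2)                       # "synaptic" parameters

def error(w):
    return np.sum((w - target) ** 2)  # the cell only senses this scalar

for step in range(20000):
    sigma = 0.1 * error(w) + 1e-4     # more noise when less adapted
    w = w + rng.normal(0.0, sigma, size=2)

print(w, error(w))                    # w ends up near the target
```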

3. Tovar et al (2017). Recording action potential propagation in single axons using multi-electrode arrays. (Comment on PubPeer)

The authors use an MEA on cultured neurons to record the propagation of action potentials in single neurons, and how it changes over days. The electrode signals quite clearly allow identifying the initiation site (presumably the AIS) and axonal processes. The authors show for example (using TTX) that reducing the available Na+ conductance reduces conduction velocity without affecting the reliability of propagation or the amplitude of the signals. As stated in the discussion, what I find particularly interesting is that it might allow investigating the development of excitability. From a theoretical neuroscience perspective, I see this as a very interesting form of learning (learning to propagate spikes to terminals) for which there is unfortunately very little experimental data. For example: does excitability develop jointly with the growth of axonal processes, or does it come after? Is it activity-dependent? How does the cell know where to place the channels? (Note that there is some relationship with a paper that I discussed in a previous issue, Williams et al., 2016.) The authors suggest that excitability develops after growth, because they interpret a change in an electrode signal as the axonal process changing from passive to active. Unfortunately, the interpretation is not entirely straightforward because there is no simultaneous imaging of axonal morphology. This would indeed be a major addition, but not so easy with dense cultures. The discussion points to a number of other studies relevant to this problem.

4. Dupre C, Yuste R (2017). Non-overlapping Neural Networks in Hydra vulgaris. (Comment on PubPeer).

This paper introduces the cnidarian Hydra as a model system for neuroscience, showing that the activity of the entire nervous system (a decentralized nervous system called a “nerve net”) can be measured with calcium imaging at single-neuron resolution, as the animal is small and transparent. In principle, the ability to record (and potentially optically stimulate) the entire nervous system is interesting in that it might help go beyond the reductionist approaches that are popular in neuroscience (tuning curves, etc.) but are not appropriate for understanding biological organisms, which have a systemic organization. This point was in fact made a long time ago in a more general setting, for example by proponents of cybernetics (see e.g. General System Theory by Von Bertalanffy, 1968).

Thus, there is potential in the possibility of measuring the activity of an entire nervous system. This study makes a technical demonstration of feasibility. There are however a number of issues that need to be solved: the temporal resolution (100 ms, which probably does not allow resolving the temporal coordination of spikes); the fact that only correlations between activity and behavior are measured, while understanding interactions requires manipulating neurons (possible with optogenetics) and/or the environment; behavior is not natural because the animal is constrained between two coverslips (perhaps we might imagine an online tracking method?); and, perhaps more importantly, the effect of an action potential on the animal’s movement is unknown, and probably involves some rather complicated biomechanical and hydrodynamic aspects. Finally, it is often implicitly assumed that behavior is entirely determined by nervous activity. But what might come up is that the nervous system (as well as non-electrical components, e.g. metabolism) interacts with a complex biomechanical system (the theme of embodiment, see e.g. Pfeifer & Bongard, 2006).

April 2017

Editorial

In this issue, I have selected one paper on building a cheap lab with 3D-printed equipment (1), and four papers related to axonal excitability and plasticity. Williams et al. (2) formalize and analyze a model of intracellular trafficking and its regulation, which might apply to axonal plasticity and protein turnover. I picked an old paper by Stanford (3) suggesting that there might in particular be axonal plasticity for timing, in a way that equalizes the conduction time from different retinal ganglion cells to their targets. A review by Wefelmeyer et al. (4) summarizes recent findings about the structural plasticity of the axon initial segment. Finally, I selected a paper by Gouwens and Wilson (5), who used theory and experiments to study the geometry of spike initiation in Drosophila neurons.

Articles

1. Chagas AM, Godino LP, Arrenberg AB, Baden T (2017). The 100 € lab: A 3-D printable open source platform for fluorescence microscopy, optogenetics and accurate temperature control during behaviour of zebrafish, Drosophila and C. elegans. (Comment on PubPeer).

This is quite exciting: the authors demonstrate the use of a 3D printed platform, with some basic electronics (Arduino, Raspberry Pi), which includes a microscope, manipulators, Peltier heating, and everything necessary to do optogenetics and fluorescence imaging, and behavioral tracking, all of this for about 200 €. The optics are apparently not great (about 10 µm precision) but could be replaced. This could be a way to convince theoreticians to do their own experiments!

2. Williams AH, O’Donnell C, Sejnowski TJ, O’Leary T (2016). Dendritic trafficking faces physiologically critical speed-precision tradeoffs. (Comment on PubPeer).

Plasticity and protein turnover require intracellular transport of molecules. How can molecules be delivered to the right place? A popular model is the “sushi belt” model: material moves along a belt (microtubules) and synapses pick from it at a variable rate. There are different ways to regulate the amount of material that is delivered at different places, for example regulating the rate of capture, or regulating the trafficking rates (the speed of the belt; although here the analogy with the sushi belt does not work so well). This model, which is not a mathematical model but rather a vague conceptual model, raises very interesting theoretical questions, which are examined in this paper. For example, how is it possible to ensure that material is delivered at different sites in appropriate amounts, based only on local demand signals? If a synapse picks from the belt, wouldn’t that affect delivery to all downstream synapses? The authors formalize the sushi belt model mathematically (the model class is sketched below), and examine essentially two variations, one where the trafficking rates are regulated, another where the capture rates are regulated. The study shows that it is in fact not at all trivial to make the model functional, in terms of precision (delivering the right amount of material) and delivery speed. I suspect that there are better ways to regulate the trafficking and capture rates than proposed there, but in any case this study has the merit of formalizing the model and some of its functional problems. Although the model was conceived for dendritic trafficking, I suppose it should also apply to the axon, for example for the maintenance of excitability via protein turnover. Note that there are other theoretical studies on intracellular trafficking, in particular by Paul Bressloff (e.g. Bressloff and Levien, 2015).
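
The model class can be written down in a few lines (my sketch of the general idea, not the paper's exact equations): material diffuses along a chain of compartments and is captured at each site with a local rate; demand is implemented by raising the capture rate where material is needed:

```python
# Sushi belt sketch: transport along a chain plus local capture.
import numpy as np

n = 20
u = np.zeros(n); u[0] = 100.0        # mobile material, injected at one end
c = np.full(n, 0.01); c[15] = 0.05   # capture rates; one high-demand site
captured = np.zeros(n)
a, dt = 1.0, 0.01                    # trafficking rate, time step

for t in range(50000):
    lap = np.zeros(n)                # transport between neighbors
    lap[1:-1] = u[:-2] + u[2:] - 2 * u[1:-1]
    lap[0] = u[1] - u[0]             # reflecting boundaries
    lap[-1] = u[-2] - u[-1]
    u += dt * (a * lap - c * u)
    captured += dt * c * u           # delivered material accumulates
```

One can then ask exactly the questions raised above, e.g. how the final captured profile depends on the demand profile, and how much material upstream sites steal from downstream ones.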

3. Stanford LR (1987). Conduction Velocity Variations Minimize Conduction Time Differences Among Retinal Ganglion Cell Axons. (Comment on PubPeer).

This 30-year-old paper is not very well known, but I find it fascinating. In the retina, the axons of ganglion cells converge onto the optic disk, where they form the optic nerve. The optic nerve is myelinated, but the part of the axons within the retina is not. Because all axons first meet at the optic disk, there is a conduction delay that depends on how far the cell is from the optic disk. The surprising result in this paper is that the conduction time in the optic nerve (from the retina to the LGN) is inversely correlated with the conduction time in the retina, so that the total conduction time is invariant (arguably, there are not so many data points, just 12 cells; a back-of-the-envelope version is given below). This suggests the existence of developmental plasticity mechanisms that adjust axon size (or the distance between nodes of Ranvier) for synchrony.
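
Here is the arithmetic of the finding with illustrative numbers (mine, not the paper's data): for the total conduction time to be invariant, the myelinated velocity must covary with the intraretinal delay:

```python
# Back-of-the-envelope: what optic nerve velocity keeps total time constant?
v_retina = 0.5      # m/s, unmyelinated intraretinal axon (assumed)
d_nerve = 0.04      # m, retina-to-LGN path length (assumed)
t_total = 0.012     # s, target invariant total conduction time (assumed)

for d_retina_mm in (1.0, 3.0, 5.0):           # distance to the optic disk
    t_retina = d_retina_mm * 1e-3 / v_retina  # intraretinal delay
    v_nerve = d_nerve / (t_total - t_retina)  # required myelinated velocity
    print(f"{d_retina_mm} mm from disk -> nerve velocity {v_nerve:.1f} m/s")
```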

4. Wefelmeyer W, Puhl CJ, Burrone J (2016). Homeostatic Plasticity of Subcellular Neuronal Structures: From Inputs to Outputs. (Comment on PubPeer)

This review highlights recent findings on the structural plasticity of synapses and the axon initial segment (AIS). I was especially interested in the AIS part. Several recent studies show that the AIS can change position and length with different manipulations, for example optogenetic stimulation or high-potassium depolarization. These structural changes are associated with changes in excitability, which the authors present as homeostatic, although they recognize that the results are not so clear. In particular, structural plasticity depends on cell type (distal displacement in some cell types, proximal displacement in others), and other plastic changes (eg in the expression of ionic channels) occur and act as confounding factors. For example, high-potassium depolarization makes the AIS of cultured hippocampal neurons move distally (Grubb & Burrone, 2010). I have shown that this displacement should in principle make the neuron (slightly) more excitable (Brette, 2013), but the opposite is seen in those neurons. There were however strong changes in membrane properties, so the causal relations are not so obvious, all the more so as other changes, such as Nav channel phosphorylation, might have occurred too. The authors cite Gulledge & Bravo (2016) to point out that attenuation between the soma and AIS could be responsible for the decreased excitability, but that paper was a simulation study where the axon diameter was fixed (1 µm) while the somatodendritic morphology was changed; in reality small neurons also have small axons, so the situation analyzed in (Brette, 2013) still applies, in principle. Another interesting finding reviewed in this paper is that GABAergic synapses on the AIS do not move when the AIS moves, and therefore the number of synapses between the soma and the initiation site can change, which changes the effect of inhibitory inputs. All these observations call for theoretical studies, where the relation between geometrical factors and excitability is analyzed. Finally, I would like to point out that one of our recent studies (Hamada et al., 2016) shows that structural plasticity of the AIS can have a homeostatic effect not on excitability per se, but on the transmission of the axonal spike to the soma.

5. Gouwens, NW and Wilson, RI (2009). Signal Propagation in Drosophila Central Neurons. (Comment on PubPeer)

Spike initiation in invertebrate neurons is quite different from vertebrate neurons. In the typical vertebrate neuron, synaptic currents from the dendrites are gathered at the soma, and spikes are initiated in the axon, which starts from the soma. In the typical invertebrate neuron, such as the one studied here (a Drosophila central neuron), a neurite emerges from the soma and then bifurcates into a dendritic tree and an axon. There is immunochemical evidence of an initial segment-like structure in Drosophila neurons near the bifurcation point (Trunova et al. 2011). This study confirms it with electrophysiological evidence and modeling. Morphologies are reconstructed, and passive responses to currents are measured at the soma. Optimization finds values for the passive properties – there are significant sources of uncertainty, but these are well addressed in the paper. Then they show that spikes in the soma are small, implying that the initiation zone is distal, and they use the model plus recordings of larger action potentials in other types of Drosophila neurons to estimate the spike initiation site, which is found to be near the axon-dendrite bifurcation. Finally, they show that the resting potential is due mainly to Na+ and K+, as in other invertebrate neurons (Marmor, 1975).

March 2017

Editorial

This month: epistemology, motor control and automated patch clamp.

First, why epistemology in a theoretical neuroscience journal? Because the epistemology of neuroscience is theoretical neuroscience. It is about reflecting on what it means to model behavior or the nervous system, and what methods and metaphors (eg “coding”) are relevant conceptual tools. It sets the frame in which meaningful questions can be asked and theories can be built. What is it that we want to explain when we make a model? Do we want to explain experimental data? If at a cocktail party I am asked about my work, I might respond for example that I try to understand how we localize sounds in space. Crucially, I do not respond that I develop models that try to explain the percentage of errors in a given psychophysical task when hearing tones through headphones in the lab. Yet in most studies, and in particular theoretical studies, we tend to forget the big picture. The model matches some experimental data, but does it actually address the hard problem, i.e., explaining real behavior? In the field of sound localization, most models are hopelessly bad at anything remotely related to sound localization in real settings, but they are good at discriminating tones (Goodman et al., 2013). We simply forget that a model of a sensory system is meant to explain how animals do the awesome things that they do, and not only to match a set of lab data on a trivial task (discriminating tones). Matching artificial experimental data provides constraints on models, but it is not the goal. Krakauer et al. (2) make this point in a recent essay and argue for more thorough studies of behavior (I would even say, ethology). An older paper by Tytell et al. (3) goes further and argues that one needs to realize that the nervous system is embodied and interacts with the physical world, and that behavior is the result of this interaction. Crucially, it appears that the nervous system can tune the body, not only control it.

This last point has motivated us to look at how muscles produce movement and force. This is the subject of a theoretical paper by my student Charlotte Le Mouel, in which we argue that posture is actually tuned not for equilibrium but for potential movement (1). Muscles are controlled by spikes, and this control is often given as an example of rate coding. This, in my view, is an example of the confusion between correlation and causation often seen in the spike vs. rate debate (see my essay on the subject, Brette 2015). A nice 2006 study by Zhurov and Brezina (5) demonstrated in Aplysia that spike timing is actually crucial in determining both the temporal pattern and the amplitude of muscular contraction, which is a deterministic function of the spike pattern. A recent paper shows that this also appears to be the case in vertebrates (4).

Finally, this issue features 4 papers on automated patch clamp (6-9). All have been published in the last 5 years. Why is this relevant to a theoretical neuroscience journal? Because I believe this might allow theoretical neuroscientists to dig into experiments themselves, which would be extremely beneficial. Patch clamp is tedious, technical and labor-intensive. It is difficult to do both serious theory (and by this I mean not only simulating models but also analyzing them and making predictions) and patch clamp experiments to test it. But for a few years now, it has become possible to automate most of the process – one must still prepare the tissue and the solutions, and pull electrodes. What is currently missing is open source software for the automation, and perhaps a reduction of hardware costs (currently very high) using open hardware (eg 3D-printed parts).

From the lab

1. Le Mouel C and Brette R (2017). Mobility as the purpose of postural control. (Comment on PubPeer).

As a first step into the development of sensorimotor models (for example orientation responses), we have looked at how muscles produce movement and force. This paper explains which muscles you should contract and in which order so as to produce certain movements efficiently, using elementary mechanical considerations (ie, we do not need muscle physiology). We then show how it explains muscular contraction patterns that are observed experimentally in humans in a variety of situations. Quite surprisingly (at least for us), we have found that posture seems to be adjusted not for stability per se, but to allow for efficient movements to be performed when necessary (eg when balance is perturbed). The work also questions the theory of muscular synergies, as it shows that skillful movement requires fine muscular control, both spatially and temporally.

Articles

2. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, Poeppel D (2017). Neuroscience Needs Behavior: Correcting a Reductionist Bias. (Comment on PubPeer).

From the perspective of a computational neuroscientist, I believe a very important point is made here. Models are judged on their ability to account for experimental data, so the critical question is: what counts as relevant data? Data currently used to constrain models in systems neuroscience are most often neural responses to stereotypical stimuli, and results from behavioral experiments with well-controlled but non-ecological tasks, for example conditioned responses to variations in one dimension of a stimulus. This leads to models that might agree with laboratory data (by design) but that don’t work, i.e. that do not explain how the animal manages to do what it does. I have made this point in the specific context of sound localization (Brette, 2010; Goodman et al., 2013). More on PubPeer and Pubmed Commons.

3. Tytell ED, Holmes P, Cohen AH (2011). Spikes alone do not behavior make: why neuroscience needs biomechanics. (Comment on PubPeer).

This review makes the point that behavior results not only from neural activity but also from the mechanical properties of the body, or more broadly from the coupling between body and environment. A famous example in robotics is McGeer’s passive walker. The paper draws on many interesting examples from (mostly but not only) insect locomotion. I found that the most interesting part of this review was the discussion of active tuning of passive properties. That is, one way in which animals produce movement is not by directly controlling the different limbs, as we would imagine if we were to control a robot, but by modulating the passive mechanical properties of the musculoskeletal system. For example, if two antagonist muscles are contracted, they become stiffer, which changes their reactions to perturbations. These reactions are instantaneous, as they do not require the nervous system; they are called “preflexes”. The paper ends on the idea that the development of motor skill might rely on the tuning of preflexes, rather than on the development of central control. This opens very interesting paths for theoretical neuroscience.

4. Srivastava KH, Holmes CM, Vellema M, Pack A, Elemans CPH, Nemenman I, Sober SJ (2017). Motor control by precisely timed spike patterns. (Comment on PubPeer).

This study shows that the precise spike timing of vertebrate motoneurons has significant behavioral effects, by looking at breathing in songbirds, which is slow compared to the time scale of spike patterns. Long recordings are obtained with an MEA, together with air pressure and force recordings. Focusing on 20-ms bursts of 3 spikes, they show that shifting the middle spike by a few milliseconds has strong effects on muscle contraction and air pressure, due to nonlinearities in the neuromuscular transform. The findings support the view that firing rates correlate with various aspects of neural activity, but that spikes causally determine neural activity and behavior (Brette 2015). This is a nice study, although the authors seem to have missed a previous study that showed very similar findings, in more detail, in an invertebrate (Aplysia) (Zhurov and Brezina, 2006).

5. Zhurov Y and Brezina V (2006). Variability of Motor Neuron Spike Timing Maintains and Shapes Contractions of the Accessory Radula Closer Muscle of Aplysia. (Comment on PubPeer).

This study shows that the precise spike timing of motoneurons controlling a feeding muscle of Aplysia has a strong effect on its contraction. This is surprising because that muscle is a slow muscle that contracts over seconds, yet adding or removing just one spike has a very strong and immediate effect on contraction (see their Fig. 1C).


The muscle is controlled by just two neurons, so it is a nice model system. The authors also show that natural spike patterns are irregular, but the neuromuscular transform is deterministic, which means that shifting spikes has a reproducible effect on the pattern of contraction – not just a temporal shift, but also a strong change in amplitude, due to nonlinear effects (a toy illustration is sketched below). The result is that natural patterns produce twice as much contraction as regular patterns of the same rate. In addition, these irregular patterns appear to be synchronized across the two sides of the animal, producing synchronized contractions. This is very convincing and supportive of spike-based theories of neural function (Brette 2015).
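
The basic point can be illustrated with a toy neuromuscular transform (my sketch, not the paper's model): if contraction is a nonlinear function of the smoothed spike train, then shifting a single spike changes the amplitude of the contraction, not just its timing:

```python
# Toy neuromuscular transform: slow filter followed by a nonlinearity.
import numpy as np

dt, T, tau = 1.0, 3000.0, 300.0      # ms; slow muscle filter
t = np.arange(0.0, T, dt)

def contraction(spike_times):
    drive = np.zeros_like(t)
    for s in spike_times:
        drive += (t >= s) * np.exp(-(t - s) / tau) / tau
    return drive ** 2                # supralinear transform (assumed)

c_regular = contraction([1000.0, 1100.0, 1200.0])  # regular burst
c_shifted = contraction([1000.0, 1190.0, 1200.0])  # middle spike shifted
print(c_regular.max(), c_shifted.max())            # amplitudes differ
```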

6. Kodandaramaiah SB, Franzesi GT, Chow BY, Boyden ES, Forest CR (2012). Automated whole-cell patch-clamp electrophysiology of neurons in vivo. (Comment on PubPeer).

This is the first demonstration of automatic patch clamp in intact tissue (i.e., not with patch-clamp chips, which work on cell suspensions). It was done in vivo, which is actually simpler than in vitro because it is blind: the pipette is lowered until a cell is detected, which is signaled by an increase in resistance. The full code and circuit designs are freely available, although the code is in LabVIEW, proprietary software; it is also made for specific hardware (amplifier and acquisition board), although this can of course be adapted. An update with more detail has been published recently (Kodandaramaiah et al. 2016). The key element is the pressure controller, which allows the program to apply positive or negative pressure and suction pulses through the pipette. There is a clever design in this study, which is very cheap to build: there are 4 tanks with specified pressures (I suppose using large pipettes that are manually filled with air), and a few electrovalves, controlled by an acquisition board, switch between the different tanks.

7. Desai NS, Siegel JJ, Taylor W, Chitwood RA, Johnston D (2015). MATLAB-based automated patch-clamp system for awake behaving mice. (Comment on PubPeer).

This is similar to the blind in vivo automatic patch-clamp technique of Kodandaramaiah et al. (2016), with a few differences. One is that it is written in Matlab, also proprietary software. The more interesting difference, in my view, is the pressure controller. Instead of using 4 manually filled tanks, there is an automatic electronic system that adjusts the pressure to any specified value. It essentially mixes two pressure sources (+10 psi and -10 psi) using a PID controller programmed on an Arduino (a generic sketch of such a loop is given below). The code is also freely available.
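
For readers unfamiliar with PID control, here is a generic sketch of such a loop (made-up gains and a toy plant, not the authors' Arduino code): the controller adjusts the valve duty cycle until the measured pressure matches the setpoint:

```python
# Generic PID loop driving a toy pressure plant.
def pid_step(setpoint, measured, state, kp=0.5, ki=0.1, kd=0.05, dt=0.01):
    """One PID update; state carries (integral, previous_error)."""
    integral, prev_error = state
    error = setpoint - measured
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

state, pressure = (0.0, 0.0), 0.0
for _ in range(1000):
    duty, state = pid_step(2.0, pressure, state)       # target: +2 psi
    duty = max(min(duty, 1.0), -1.0)                   # valve saturation
    pressure += 0.01 * (10.0 * duty - pressure)        # toy plant dynamics
```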

8. Wu Q, Kolb I, Callahan BM, Su Z, Stoy W, Kodandaramaiah SB, Neve R, Zeng H, Boyden ES, Forest CR, Chubykin AA (2016). Integration of autopatching with automated pipette and cell detection in vitro. (Comment on PubPeer).

This study adapts the automated patch-clamp technique introduced in Kodandaramaiah et al. (2016) to slices. The approach is visually guided (using simple computer vision algorithms); the motorized manipulator is also automatically calibrated with the camera, using a pipette detection algorithm. The paper claims a 2/3 success rate, instead of 1/3 for a human operator. The code is available in LabVIEW and Python, which is nice, but unfortunately the Python code is not in any usable form at this moment (no documentation and very few comments). I regret that a lot of technical detail is missing from the paper, in particular details of the computer vision algorithms and of the pressure control system. This control system is different from the previous one; instead of tanks with fixed pressures, it seems to use a single pump and a pressure sensor in a clever way to produce both positive and negative pressure. The drawing in Fig. 2C is the only information I could find about the system in the paper.

9. Kolb I, Stoy WA, Rousseau EB, Moody OA, Jenkins A, Forest CR (2016). Cleaning patch-clamp pipettes for immediate reuse. (Comment on PubPeer).

This is a simple but very interesting study in which the authors show that patch-clamp pipettes can be cleaned in Alconox up to 10 times and reused on different cells with no noticeable effect. This was the missing piece for fully automated patch clamp, as it was previously necessary to change the pipette manually after every recorded cell (or failed attempt).

February 2017

Editorial

This month, I have selected 4 papers on spike initiation (1-4), 1 classical paper on the theory of brain energetics (5), and 1 paper on bibliometrics (6). Three of the papers on spike initiation (1-3) have in common that they deal with the relation between geometry (morphology of the neuron and spatial distribution of channels) and excitability. Spikes are initiated in a small region called the axon initial segment (AIS), which is very close to the soma. Thus there is a discontinuity in both the geometry (big soma, thin axon) and the spatial distribution of channels (lots of them in the AIS). This has a great impact on excitability, but it has not been explored very deeply in theory. In fact, as I have discussed in a recent review (Brette, 2015), most theory on excitability (dynamical systems theory) has been developed on isopotential models, and so is largely obsolete. There is therefore much to do on a theory of spike initiation that takes the soma-AIS system into account.

 

From the lab

1. Hamada M, Goethals S, de Vries S, Brette R, Kole M (2016). Covariation of axon initial segment location and dendritic tree normalizes the somatic action potential. (Comment on PubPeer)

(Full disclosure: I am an author of this paper.) In the lab, we are currently interested in the relation between neural geometry and excitability. In particular, what is the electrical impact of the location of the axon initial segment (AIS)? Experimentally, this is a difficult question because manipulations of AIS geometry (distance, length) also induce changes in the properties of Nav and other channels, in particular phosphorylation (Evans et al., 2015). So this is typically a good question for theorists. I have previously shown that moving the AIS away from the soma should make the neuron more excitable (lower spike threshold), everything else being equal (Brette, 2013). Here we look at what happens after axonal spike initiation, when the current enters the soma (I try to avoid the term “backpropagate”, see Telenczuk et al., 2016). The basic insight is simple: when the axonal spike is fully developed, the voltage gradient between the soma and the start of the AIS should be roughly 100 mV, so the axonal current into the soma should be roughly 100 mV divided by the axial resistance between soma and AIS, which is proportional to AIS distance. Next, to charge a big somatodendritic compartment, you need a bigger current. So we predict that big neurons should have a more proximal AIS; a back-of-the-envelope version of the calculation is sketched below. This is what the data obtained by Kole’s lab show in this paper (along with many other things, as our theoretical work is a small part of the paper; as often, most of the theory ends up in the supplementary material).
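Here is the back-of-the-envelope version of the argument, with typical textbook values rather than the parameters used in the paper:

```python
# Order-of-magnitude estimate of the axonal current entering the soma
# (illustrative values, not the fitted parameters of the paper).
import math

rho = 1.5        # intracellular resistivity, ohm.m (~150 ohm.cm)
d = 1e-6         # axon diameter, m
delta_V = 0.1    # soma-AIS voltage gradient during the spike, V (~100 mV)

for distance in (5e-6, 10e-6, 40e-6):   # AIS distance from soma, m
    # Axial resistance of a cylinder: R = 4*rho*L / (pi*d^2)
    R = 4 * rho * distance / (math.pi * d**2)
    I = delta_V / R
    print(f"AIS at {distance*1e6:.0f} um: R = {R/1e6:.0f} MOhm, "
          f"I = {I*1e9:.1f} nA")

# The current scales as 1/distance: a larger somatodendritic compartment
# needs a larger charging current, hence a more proximal AIS.
```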

Articles

2. Evans MD, Tufo C, Dumitrescu AS and MS Grubb. (2017). Myosin II activity is required for structural plasticity at the axon initial segment. (Comment on PubPeer)

A number of studies have shown that the AIS can move over hours or days, following various manipulations such as depolarizing the neuron (as in this study) or stimulating it optogenetically. Two questions remain open: what are the molecular mechanisms involved in this displacement? And is it actually a displacement, or is material simply removed at one end and inserted at the other? The same lab previously addressed the first question, showing the involvement of somatic L-type calcium channels and calcineurin. This study shows that myosin (the stuff of muscle, except not the type expressed in muscles) is involved, which strongly suggests that it is an actual displacement; this is in line with previous studies showing that dendrites and axons are contractile structures (e.g. Roland et al. (2014)). This and previous studies start to provide building blocks for a model of activity-dependent structural plasticity of the AIS (working on it!).

 

3. Michalikova M, Remme MWH and R Kempter. (2017). Spikelets in Pyramidal Neurons: Action Potentials Initiated in the Axon Initial Segment That Do Not Activate the Soma. (Comment on PubPeer)

Using simulations of detailed models, the authors propose to explain the observation of spikelets in vivo (small all-or-none events) by the failed propagation of axonal spikes to the soma. They show that, under certain circumstances, a spike generated at the distal axonal initiation site may fail to reach the somatic threshold for AP generation, so that only the smaller axonal spike is observed at the soma. The paper provides a nice overview of the topic and I found the study convincing. There is in fact a direct relation to our paper discussed above (Hamada et al., 2016): this study shows how the axonal spike can fail to trigger the somatic spike, which explains why the AIS needs to be placed at the right position to prevent this. One can argue (speculatively) that if AIS position is indeed tuned to produce the right amount of somatic depolarization, then this should occasionally fail and result in a spikelet (algorithm: if no spikelet, move the AIS distally; if spikelet, move it proximally; a toy version is sketched below).
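That speculative rule is simple enough to write down as a toy regulation loop (my own illustration; step size and units are arbitrary):

```python
# Toy version of the speculative AIS-placement rule above:
# spikelet -> the AIS is too distal, move it closer to the soma;
# no spikelet -> there is margin, move it further away.

def update_ais_position(position, spikelet_observed, step=0.5):
    """One regulation step; `position` is AIS distance from soma (um)."""
    if spikelet_observed:
        return max(position - step, 0.0)   # move proximally
    return position + step                 # move distally

# Over many spikes, the position would hover around the largest distance
# (i.e., lowest threshold, Brette 2013) that still reliably triggers
# the somatic spike.
```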

 

4. Mensi S, Hagens P, Gerstner W and C Pozzorini (2016). Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons. (Comment on PubPeer)

I had to love this paper, because the authors experimentally confirm essentially every theoretical prediction we made in our paper on spike threshold adaptation (Platkiewicz and Brette, 2011). What we had done was to derive the dynamics of the spike threshold from the dynamics of Nav channel inactivation. This led to a number of non-trivial predictions, such as the shortening of the effective integration time constant, sensitivity to input variance, the specific way in which the spike threshold depends on membrane potential, and the interaction between spike-triggered and subthreshold adaptation (which we touched upon in the discussion). This study uses a non-parametric model-fitting approach in cortical slices to empirically derive the dynamics of the spike threshold (indirectly, from responses to fluctuating currents), and the results are completely in line with our theoretical predictions.
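For intuition, the core of the theory can be condensed into a first-order equation in which the threshold relaxes towards a value set by Nav inactivation. Here is a minimal simulation of a model of this flavor (the functional form is simplified and all parameter values are illustrative, not the fitted ones):

```python
# Minimal simulation of an adaptive spike threshold of the kind derived
# in Platkiewicz & Brette (2011): theta relaxes towards theta_inf(V),
# a smooth increasing function of the membrane potential.
import math

def theta_inf(V, theta0=-50.0, ka=5.0, Vi=-60.0, ki=6.0):
    # Flat below Vi, roughly linear in V above it (illustrative form)
    return theta0 + ka * math.log(1.0 + math.exp((V - Vi) / ki))

def simulate_threshold(V_trace, tau=15.0, dt=0.1):
    """Euler integration of tau * dtheta/dt = theta_inf(V) - theta."""
    theta = theta_inf(V_trace[0])
    thetas = []
    for V in V_trace:
        theta += dt / tau * (theta_inf(V) - theta)
        thetas.append(theta)
    return thetas

# A slow depolarization drags the threshold up with it, so only input
# fluctuations faster than tau cross the threshold easily: enhanced
# sensitivity to rapid fluctuations, as Mensi et al. report.
```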

 

5. Attwell D and SB Laughlin (2001). An energy budget for signaling in the grey matter of the brain. (Comment on PubPeer and PubMed Commons).

This is an old but important paper on the energetics of the brain, addressing in particular: how much does it cost to maintain the resting potential? How much does it cost to propagate a spike? The paper explains some theoretical ideas for making these estimates, and is also a good source of relevant empirical numbers. It is important, though, to look at follow-up studies, which have addressed some issues; for example, action potential efficiency is underestimated in this study. One problem is the estimation of the cost of the resting potential, which I think is wrong (see my detailed comment on PubMed Commons and the response of the authors); the general style of this calculation is sketched below. Unfortunately, I think it is really hard to estimate this cost by theoretical means; it would require knowing the resting permeability to various ions, most importantly in the axon. More on PubPeer.
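To show the style of the calculation (and why it hinges on poorly known resting permeabilities), here is a minimal resting-cost estimate with a two-conductance model; all numbers are illustrative, not the paper's:

```python
# Sketch of a resting-potential cost estimate in the style of Attwell &
# Laughlin (illustrative numbers; the result is only as good as the
# assumed resting Na permeability, which is the crux of my criticism).
E_Na, E_K = 0.05, -0.09      # reversal potentials, V
V_rest = -0.07               # resting potential, V
R_in = 1e8                   # input resistance, ohm
e = 1.602e-19                # elementary charge, C

g_total = 1.0 / R_in
# Two-conductance model: V_rest = (g_Na*E_Na + g_K*E_K) / (g_Na + g_K)
ratio = (V_rest - E_K) / (E_Na - V_rest)   # g_Na / g_K
g_Na = g_total * ratio / (1.0 + ratio)
I_Na = g_Na * (V_rest - E_Na)              # resting Na influx (A, inward)
atp_per_s = abs(I_Na) / (3 * e)            # the pump extrudes 3 Na+ per ATP
print(f"~{atp_per_s:.1e} ATP molecules/s") # ~4e8/s with these numbers
```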

 

6. Brembs B, Button K and M Munafò (2013). Deep Impact: Unintended consequences of journal rank. (Comment on PubPeer)

The authors look at the relation between journal rank (derived from the impact factor) and various indicators, for example reported effect sizes, statistical power, etc. In summary, they find that the only thing journal rank strongly correlates with is the proportion of retractions and cases of fraud. Another interesting finding concerns the predictive power of journal rank on future citations. There is obviously a positive correlation, since the impact factor measures the number of citations, but it is really quite small (see my post on this). Most interestingly, this predictive power started increasing in the 1960s, when the impact factor was introduced. This strongly suggests that, rather than being a quality indicator, the impact factor biases the citations of papers (it increases the visibility of otherwise equally good papers). The paper also shows evidence of manipulation of impact factors by journals (including Current Biology, whose impact factor went from 7 to 12 after its acquisition by Elsevier), and is generally a good source of references on the subject.