In this issue I have selected papers that are not in theoretical neuroscience, but which I think provide fresh ideas for theoretical neuroscientists. The first paper (1) is a critique of the “genetic code” idea, arguing that the cell, and in particular genetic networks, must be understood as a system, in contrast with the reductionist idea that one gene (DNA sequence) determines one phenotype (see also my blog piece on evolution as optimization, below). The second paper (2) is related, in that it proposes an original view of gene expression and cellular adaptation, in which genetic networks are seen as stochastic dynamical systems whose amount of noise is regulated by external factors, yielding adaptation by stochastic search. In my view, this suggests an alternative to the idea of plasticity as gradient descent. The third paper (3) describes a technique to record the propagation of action potentials over days in cultured neurons, providing a tool to investigate the development of excitability, for which there is currently no theoretical model (and in fact very little is known). Finally, the fourth paper (4) introduces a new model animal, Hydra, in which the activity of all neurons can be simultaneously recorded with calcium imaging, which should give some interesting material for theoreticians.
Is optimization a good metaphor for evolution?
Optimality arguments are often used in theoretical neuroscience, in reference to evolution. I point out in this text the limitations of the optimization metaphor for understanding evolution.
1. Noble D (2011). Neo-Darwinism, the Modern Synthesis and selfish genes: are they of use in physiology? (Comment on PubPeer)
I believe this is an important essay for theoretical neuroscientists, in which modern evolutionary ideas are explained. This is essentially a critique of the idea of a “genetic code”, specifically of the reductionist idea that a gene (in the modern sense of DNA sequence) encodes a particular phenotype, an idea that has been popularized in particular by Dawkins’ “selfish gene” metaphor. Denis Noble argues that this reductionist view is simply wrong, because the product of a gene depends not only on other genes but also on the cell itself. For example, the same DNA sequence can produce different functions in different species. Noble cites an experiment where the nucleus of a goldfish is placed in the enucleated egg of a carp, and the adult fish that results is not a goldfish, as the genetic code theory would predict, but something between carp and goldfish (in many cases with other species, no viable adult results). The author points out that DNA does not reproduce, only a cell does, and he concludes: “Organisms are interaction systems, not Turing machines”. In addition, not all transmitted material is in the nucleus. There are also transmitted cytoplasmic factors, for example organelles (mitochondria). In fact, there is a theory, which is well established for the case of mitochondria, that major evolutionary changes are due not to mutations but to endosymbiosis, the fusion of different organisms into a new one (see Lynn Margulis, Symbiotic Planet). It seems to me that a strikingly analogous critique can be formulated against the idea of a “neural code”.
2. Kupiec JJ (1997). A Darwinian theory for the origin of cellular differentiation. (Comment on PubPeer)
This paper is 20 years old, but a recent paper provides experimental support for the theory (Richard et al., 2016). Although this may seem quite far from theoretical neuroscience, I believe the ideas sketched in this paper are very interesting for the questions of learning and plasticity. Kupiec has written a number of papers and books on his theory; another one worth reading is Kupiec (2010), “On the lack of specificity of proteins and its consequences for a theory of biological organization”, where he points out that a given protein can interact with many molecular partners, which is at odds with the idea of a genetic program. This criticism is linked to Denis Noble’s critique of the genetic code (Noble 2011).
The general idea of this 1997 essay is the following (I may oversimplify and interpret in my own way, as I have not read all his works on the subject). The expression of genetic networks is noisy (this is well established). This noise can make the genetic network spontaneously jump between several stable attractors (which we call cell types). Now the interesting idea: the amount of noise is itself regulated and depends on how adapted the cell is to its environment (growth factors): less noise when the cell is well adapted. This makes the cell adapt to its environment. The Darwinian flavor comes in when you consider that healthy cells reproduce more.
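A toy simulation can make this mechanism concrete (my own sketch, not from the paper; the double-well dynamics and all parameters are hypothetical): a bistable “expression” variable has two attractors (two cell types), and the noise driving it is suppressed when the current state matches the environment, so the state ends up trapped in the adapted attractor without any gradient computation.

```python
import random

def simulate(env=+1.0, steps=20000, seed=0):
    """Toy version of the idea: a bistable expression variable x
    (stable attractors near -1 and +1, i.e. two cell types) driven by
    noise whose amplitude shrinks when the state matches the environment."""
    rng = random.Random(seed)
    x = -1.0   # start in the 'wrong' attractor for env = +1
    dt = 0.01
    for _ in range(steps):
        drift = x - x ** 3                        # double-well dynamics
        adapted = max(0.0, min(1.0, x * env))     # ~1 when state matches env
        sigma = 1.0 - 0.95 * adapted              # noise suppressed when adapted
        x += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)  # Euler-Maruyama
    return x

x_final = simulate(env=+1.0)  # typically ends near the adapted attractor (+1)
```

The key design point is that nothing in the update rule points toward the “correct” attractor; noise simply stops kicking the state around once it happens to be adapted, which is enough for the state to settle there.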
Why I think this is inspiring for theoretical neuroscience: when trying to connect plasticity with learning, one common idea is to define a functional criterion to be minimized, and then to propose plasticity rules that tend to minimize that criterion. A typical choice is gradient descent. The problem I see is that the cell has no means of knowing the gradient, so there is something a little magic in those plasticity rules. Kupiec’s theory suggests an alternative, which is to see plasticity as a stochastic optimization process driven by noise in genetic networks. This reminds me of chemotaxis in microbes (e.g. E. coli): the microbe swims towards the point of highest concentration by a simple method, which consists in turning less often when concentration increases. There is also some relation to Kenneth Harris’s neural marketplace idea (Harris, 2008).
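The chemotaxis analogy can also be sketched in a few lines (again my own toy example; the concentration profile, speed, and tumbling rates are made up): a one-dimensional run-and-tumble walker that never computes a gradient, only whether things got better since the previous step, still climbs to the concentration peak.

```python
import random

def concentration(x):
    # Hypothetical attractant profile, peaked at x = 0
    return 1.0 / (1.0 + x * x)

def run_and_tumble(x=5.0, steps=20000, seed=1):
    """E. coli-style chemotaxis sketch: move at constant speed, and
    reverse direction ('tumble') more often when concentration drops.
    No gradient is computed, only a comparison with the recent past."""
    rng = random.Random(seed)
    direction = 1.0
    c_prev = concentration(x)
    for _ in range(steps):
        x += 0.01 * direction
        c = concentration(x)
        # Tumble rarely when things improve, often when they get worse
        p_tumble = 0.01 if c > c_prev else 0.5
        if rng.random() < p_tumble:
            direction = -direction
        c_prev = c
    return x

x_final = run_and_tumble()  # typically ends close to the peak at x = 0
```

Runs up the gradient are long (tumbling is rare) and runs down it are short, which biases the walk toward the peak; this is the same logic as noise that shrinks when the cell is well adapted.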
3. Tovar et al (2017). Recording action potential propagation in single axons using multi-electrode arrays. (Comment on PubPeer)
The authors use an MEA on cultured neurons to record the propagation of action potentials in single neurons, and how it changes over days. The electrode signals quite clearly allow identifying the initiation site (presumably the AIS) and axonal processes. The authors show, for example (using TTX), that reducing the available Na+ conductance reduces conduction velocity without affecting the reliability of propagation or the amplitude of the signals. As stated in the discussion, what I find particularly interesting is that this might allow investigating the development of excitability. From a theoretical neuroscience perspective, I see this as a very interesting form of learning (learning to propagate spikes to terminals) for which there is unfortunately very little experimental data. For example: does excitability develop jointly with the growth of axonal processes, or does it come after? Is it activity dependent? How does the cell know where to place the channels? (Note that there is some relationship with a paper that I discussed in the previous issue, Williams et al., 2016.) The authors suggest that excitability develops after growth, because they interpret a change in an electrode signal as the axonal process changing from passive to active. Unfortunately, the interpretation is not entirely straightforward because there is no simultaneous imaging of axonal morphology. This would indeed be a major addition, but not so easy with dense cultures. The discussion points to a number of other studies relevant to this problem.
4. Dupre C, Yuste R (2017). Non-overlapping Neural Networks in Hydra vulgaris. (Comment on PubPeer).
This paper introduces the cnidarian Hydra as a model system for neuroscience, showing that the activity of the entire nervous system (a decentralized nervous system called a “nerve net”) can be measured with calcium imaging at single-neuron resolution, as the animal is small and transparent. In principle, the ability to record (and potentially optically stimulate) the entire nervous system is interesting in that it might help go beyond the reductionist approaches that are popular in neuroscience (tuning curves, etc.) but are not appropriate for understanding biological organisms, which have a systemic organization. This point was in fact made a long time ago in a more general setting, for example by proponents of cybernetics (see e.g. General System Theory by von Bertalanffy, 1968).
Thus, there is potential in the possibility of measuring the activity of an entire nervous system. This study is a technical demonstration of feasibility. There are, however, a number of issues that need to be solved: the temporal resolution (100 ms, which probably does not allow resolving the temporal coordination of spikes); the fact that only correlations between activity and behavior are measured, while understanding interactions requires manipulating neurons (possible with optogenetics) and/or the environment; behavior is not natural because the animal is constrained between two coverslips (perhaps we might imagine an online tracking method?); and, perhaps more importantly, the effect of an action potential on the animal’s movement is unknown, and probably involves some rather complicated biomechanical and hydrodynamic aspects. Finally, it is often implicitly assumed that behavior is entirely determined by nervous activity. But it may well turn out that the nervous system (as well as non-electrical components, such as metabolism) interacts with a complex biomechanical system (the theme of embodiment; see e.g. Pfeifer & Bongard, 2006).