December 2017

Editorial

This month, I read two short books, both of which I found interesting from a theoretical neuroscience perspective. The first is a defense of systems biology by Denis Noble (1). It is written in clear language and convincingly argues that biological organisms or cells can only be understood as systems, not just of interacting elements, but also of interacting levels (e.g. the molecular, cellular and organismal levels). In a recent essay (3), Fields and Levin go further and point out that since only cells, and not genomes, reproduce, heritable properties cannot be confined to the genome. This implies that the genome cannot be seen as a code for the organism, but perhaps more appropriately as a resource for the cell. The other book is actually a collection of philosophical reviews of several books on consciousness, written by both philosophers and scientists (2). It is a good entry point into modern ideas on consciousness. Finally, a recent essay by Yves Frégnac criticizes the industrial-scale projects that have recently emerged in neuroscience (4). One of the points made in that article is that the field is not mature enough for this type of project, because there is no strong theory to integrate the data, or even to identify the relevant data to acquire.

Books

1. Denis Noble (2006). The Music of Life.

This short book by the physiologist Denis Noble is a criticism of the “selfish gene” metaphor, and more broadly of the metaphor of the genetic program or code. A gene does not code for a phenotype. A gene encodes the primary structure of a protein, which does not even specify the chemical function of that protein, because the function also depends on how the product is spliced, folded and modified. As for the function of a protein in an organism, that depends on the context provided by the organism. The same gene can have different effects in different species, or in different cell types of the same species. An organism, or a cell, is a system of many interacting components, and there is no preferential level of analysis. One important concept described in the book is “downward causation”, which is illustrated by the cardiac rhythm and the cardiac action potential. We can try to describe the cardiac rhythm at the molecular level: it is caused by the opening and closing of ionic channels. But there is no rhythm at the molecular level, no intrinsically rhythmic molecule: the rhythm emerges from the interaction of the ionic channels, and more precisely from their interaction with the cellular environment. It emerges through a feedback from the cellular level, where the membrane potential is formed (a macroscopic phenomenon), to the molecular level of the ionic channels, which produce local transmembrane currents. This is what Noble calls “downward causation”: a causal link from a large spatial scale (the cell) to a smaller one (the molecule). Importantly, this cannot be reduced to a causal chain of molecular events: it is not a chain but a loop, which also involves non-molecular events on a larger scale. Thus, the cardiac rhythm (and more generally the action potential) cannot be understood without considering the entire system at two different scales (the cell and the ionic channel). It is a powerful motivation for systems approaches in biology.
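To make the loop between scales concrete, here is a minimal toy simulation (my own illustration, not taken from the book). It uses the Morris-Lecar model, a two-variable caricature of an excitable cell with standard textbook parameters in the oscillatory regime, rather than Noble’s detailed cardiac models. Neither channel variable is rhythmic on its own: the calcium and potassium currents each just follow the voltage. The rhythm appears only in the loop where channel currents set the membrane potential and the membrane potential sets the channel gating.

```python
import numpy as np

# Morris-Lecar model: a two-variable caricature of an excitable cell.
# Molecular level: voltage-dependent gating of a Ca current and a K current.
# Cellular level: the membrane potential V, set by all currents together.
# Neither current is rhythmic on its own; the rhythm lives in the V <-> gating loop.

# Standard parameters (oscillatory regime); units: mV, ms, uF/cm^2, mS/cm^2
C, g_Ca, g_K, g_L = 20.0, 4.4, 8.0, 2.0
V_Ca, V_K, V_L = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_app = 100.0  # constant applied current (uA/cm^2)

def m_inf(V):   # instantaneous open fraction of Ca channels: depends on V (downward causation)
    return 0.5 * (1 + np.tanh((V - V1) / V2))

def w_inf(V):   # steady-state open fraction of K channels
    return 0.5 * (1 + np.tanh((V - V3) / V4))

def tau_w(V):   # relaxation time of K-channel gating
    return 1.0 / (phi * np.cosh((V - V3) / (2 * V4)))

dt, T = 0.05, 500.0          # time step and duration (ms)
V, w = -60.0, 0.0            # initial membrane potential and K gating
for step in range(int(T / dt)):
    # Channel currents (molecular level) depend on the cell-level variable V...
    I_ion = g_Ca * m_inf(V) * (V - V_Ca) + g_K * w * (V - V_K) + g_L * (V - V_L)
    dV = (I_app - I_ion) / C            # ...and V is driven by the summed currents,
    dw = (w_inf(V) - w) / tau_w(V)      # which in turn reshape the gating.
    V, w = V + dt * dV, w + dt * dw     # forward Euler step
    if step % 500 == 0:
        print(f"t = {step * dt:6.1f} ms   V = {V:7.2f} mV")
```

If you cut the feedback from the cell level to the channels in this sketch (for example by clamping V to a constant), the gating variables settle to fixed values and the rhythm disappears; that is the sense in which the oscillation belongs to the loop across scales rather than to any molecule.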

2. John Searle (1997). The Mystery of Consciousness.

This is an unusual book, as it is a collection of book reviews. Searle critically reviews six recent books on consciousness, by both philosophers and scientists: Crick, Edelman, Penrose, Dennett, Chalmers and Rosenfield. It is a good entry point into the modern literature on consciousness because it exposes some of the philosophical issues and positions. Searle also has his own particular view, which not everyone will agree with (for example, Searle repeatedly states that feelings cause behavior, and that this is simply a fact of experience; but that is not true, see e.g. Ned Block’s “On a confusion about a function of consciousness”). Here is a summary of a few ideas and arguments. Both Crick and Edelman, two eminent biologists who were awarded the Nobel prize in different domains, propose physiological explanations of consciousness of the type: when physiological phenomenon X happens (e.g. synchronized firing of some distant pyramidal cells), consciousness happens. This, however, only describes correlates of consciousness; it does not explain why they make us conscious. Chalmers takes this point seriously and argues that, in fact, no explanation of this type can ever explain why we are conscious; we can only say that this particular physiological phenomenon is (empirically) associated with consciousness. He therefore defends a form of property dualism, specifically that certain types of information processing go with a conscious state – this is seen as an additional law of nature (see my criticism of this position here). This is the position defended by Koch and Tononi (IIT). Dennett essentially denies the existence of experience. That is, he argues that consciousness (phenomenal consciousness, i.e., “what it feels like”) is an illusion. Searle criticizes this view, and rightly so in my opinion, because the claim is self-contradictory. An illusion is still something that we experience; a computer program does not have illusions when it malfunctions. To feel something that conflicts with reality is still to feel something, so consciousness cannot be an illusion.

Articles

3. Fields C and Levin M (2017). Multiscale memory and bioelectric error correction in the cytoplasm–cytoskeleton–membrane system. (Comment on PubPeer)

This cell biology essay was not an easy read for me, a theoretical neuroscientist, but I found it very original and insightful. It criticizes the concept of the genome as a program. Indeed, genes by themselves only determine the primary structure of potential proteins (actually not even that, if we consider splicing), and this does not determine protein function, let alone the way a cell works. But the authors go further by drawing out the implications of a simple fact: the genome does not reproduce; only a cell does.

When a cell divides, the daughter cells inherit not only the genome, but also the membrane and the cytoskeleton. The authors call this the “architectome” and argue that it also carries heritable properties that are independent of the genome. Given that it is always an entire cell that reproduces, this makes sense. In a few cases this has been demonstrated, for example in Paramecium. The authors thus propose a fresh perspective: the genome is not the code or program for the cell; it is a resource for the cell, which draws on the genome to produce the proteins it needs. Although the new metaphor also has its limitations, it has the great merit of providing an alternative view to the “genetic program”.

4. Frégnac Y (2017). Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? (Comment on PubPeer)

In this essay, Yves Frégnac criticizes the industrial-scale projects that have recently appeared in neuroscience, for example the Human Brain Project and the Brain Initiative. This type of criticism is often heard among scientists but more rarely read in academic journals. The essay uses several angles of attack. One is that these large-scale data mining projects are driven by technology and not by concepts; and while technological tools are obviously useful in science, the risk is that a lot of data will be produced, but quite possibly not the right data. This is a very tangible risk, since no one seems to have any idea what to do with the data. The data-driven view rests on the epistemological fallacy that data preexist theory (see my blog series on computational neuroscience). This is wrong: data can be surprising, but they always rely on a choice of what data to acquire, and that choice is theoretically motivated. Here is one example from the historical literature on excitability. To demonstrate the ionic theory of action potentials, Hodgkin thought of the following experiment: immerse an axon in oil and measure conduction velocity; it should decrease because of the increase in extracellular resistance (it does; the cable-theory reasoning is sketched after this paragraph). You might record the electrical activity of the whole brain with voltage sensors covering every neuron, and you would still not have those data. A second argument is that purely reductionist (or “bottom-up”) approaches are not appropriate for studying complex systems: such studies must be guided by an understanding of higher levels (e.g. Marr’s “algorithmic” or “computational” level). Here is a relevant quote: “the danger of the large-scale neuroscience initiatives is to produce purely descriptive ersatz of brain, sharing some of the internal statistics of the biological counterpart, at best the two first order moments (mean and variance), but devoid of self-generated cognitive abilities.” Such approaches are probably doomed to fail. A third argument is that studies in invertebrates suggest that bottom-up approaches vastly underestimate the complexity of the problem; for example, we know from those studies that neuromodulators can totally change the function of neural circuits, so knowing the connectome will not be sufficient to infer function (see for example this talk by Vladimir Brezina). More generally, industrial-scale bottom-up approaches will not work because we do not have the beginning of a strong brain theory, which is necessary to produce the relevant data and to subsequently integrate them.
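As an aside, here is the back-of-the-envelope cable-theory reasoning behind Hodgkin’s oil experiment (my reconstruction, not a derivation taken from the essay). Writing $r_i$ and $r_e$ for the intracellular and extracellular resistances per unit length of axon, and $r_m$ and $\tau_m$ for the membrane resistance and time constant, the space constant and the dimensional scaling of conduction velocity are roughly

\[
\lambda = \sqrt{\frac{r_m}{r_i + r_e}}, \qquad v \sim \frac{\lambda}{\tau_m} = \frac{1}{\tau_m}\sqrt{\frac{r_m}{r_i + r_e}}.
\]

Immersing the axon in oil shrinks the conducting volume around it, which increases $r_e$; the space constant $\lambda$, and with it the conduction velocity, should therefore decrease, which is what Hodgkin observed.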

One of the dangers identified in this article is that funds will be captured by these large-scale efforts. I think there is a broader threat: they will also affect the criteria for hiring academics and, as an indirect result of those incentives, push young scientists towards that kind of science, whether or not they are funded by those large-scale efforts. Given the attraction of “high-impact” journals to flashy techniques, with papers showcasing impressive technology but limited scientific results, this already seems to be happening.
