This issue features two epistemological papers (1-2), one paper on spatial navigation (3), and two papers on automatic patch clamp (4-5). The first is a critique of the neural coding metaphor, which I have just written. This critique connects to the more general problem of reductionism in neuroscience (or in biology more generally), about which Tony Bell wrote an interesting essay (2). The coding metaphor indeed implies that there is a separation between representation and decision/action, but a nervous system cannot be split up in this way. Similarly, Bell argues, seeing the brain as a computer is not very meaningful.
The next paper I discuss (3) is not a neuroscience paper but an old robotics paper, in which the authors describe a simple way for an agent to navigate in crowded environments: avoiding the places it has visited. A similar strategy is seen in some species (e.g. the slime mold), which leave a trail behind them. A wild but interesting speculation is that the spatial memory system of vertebrates (place cells) might result from an internalization of such mechanisms.
Finally, I discuss two simultaneously published papers on automatic patch clamp that describe more or less the same algorithm (4-5): a rather straightforward but useful improvement in which the targeted cell is visually tracked, so that the trajectory of the pipette can be adjusted as it is moved down.
From the lab
1. Brette (2017). Is coding a relevant metaphor for the brain?
In this essay, I argue that the neural coding metaphor is often inappropriate and misleading. First, it is a dualist metaphor, because for something to count as “information”, that thing must be mapped to some other thing outside the brain. Information in the sense of Shannon is information for an external observer, not for the receiver. A more relevant notion of information is captured by the metaphor of perception as science-making (finding laws and structure), rather than perception as encoding. Second, the relation between the “input” and “output” of a neuron is circular (through synaptic connections or through the effect of action on sensory signals), and therefore describing perception as a feedforward process is inappropriate. Spikes are not messages; they are actions on other neurons and on the body.
Bell's essay (2) is an interesting epistemological piece which discusses two important ideas in neuroscience. One is the ubiquity of loops. For example, the output of one neuron ultimately influences its own inputs, because of cycles in synaptic networks. Sensory signals drive action, and action changes sensory signals. The same loops are seen at all levels (molecular, etc.). The interdependency of all the elements of a living system makes reductionist accounts inappropriate. One such account is the coding metaphor, in which neurons are presumed to encode properties of the world in a feedforward way (see my essay on the coding metaphor).
The second idea is a criticism of the computer metaphor of the brain, or of living systems in general. More specifically, in Bell’s words: “the prevalent tendency to view biological organisms as machines in the exact technical sense in which computers are machines, i.e. in the sense that they are physical instantiations of finite models which do not permit physical interactions beneath the level of their machine parts (e.g. the logic gate) to influence their functionality”. Empirically, we find interactions between and across all levels, and this makes the machine metaphor not very insightful.
3. Balch and Arkin (1993). Avoiding the Past: A Simple but Effective Strategy for Reactive Navigation.
This is a paper from the field of reactive robotics, in which the authors describe a simple way to navigate in crowded environments. A classic problem arises when there is a U-shaped barrier between the current position and a target position: if the agent goes straight towards the target, it gets stuck in the barrier – this is known as the “fly at the window” problem. It can be solved with planning and detailed knowledge of the environment, but this paper shows another efficient solution which is much simpler and is used by some species such as slime molds (Reid et al., 2012). Here the robot maintains a spatial memory of the places it has visited, and a place it has visited becomes repulsive (in practice, the algorithm computes the spatial gradient of a trace). The robot then avoids its own recent trajectory, and thus solves the U-shaped barrier problem. One might try to think of parallels between this system and place cells in the hippocampus (see an old blog post of mine on this).
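The "avoid the past" strategy is easy to sketch in code. The toy reconstruction below is mine, not the authors' algorithm: instead of a continuous trace field and its spatial gradient, it uses a discrete grid where every visit adds a repulsive penalty, and the agent greedily moves to the neighboring cell that minimizes distance-to-goal plus that penalty. The grid size, the weight `w` and the `navigate` helper are all illustrative assumptions.

```python
import math

def navigate(start, goal, walls, size=15, w=1.0, max_steps=5000):
    """Greedy navigation that avoids the past: each visited cell
    accumulates a repulsive penalty, so the agent backs out of
    dead ends instead of getting stuck behind a U-shaped barrier."""
    trace = {}                      # visit counts: the "trail" left behind
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        trace[pos] = trace.get(pos, 0) + 1
        candidates = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nxt = (pos[0] + dr, pos[1] + dc)
                if (dr, dc) == (0, 0) or nxt in walls:
                    continue
                if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                    continue
                # attraction to the goal + repulsion from visited places
                score = math.dist(nxt, goal) + w * trace.get(nxt, 0)
                candidates.append((score, nxt))
        pos = min(candidates)[1]
        path.append(pos)
    return None                     # goal not reached within the budget

# U-shaped barrier between the agent (left) and its target (right):
walls = ({(r, 8) for r in range(4, 11)}
         | {(4, c) for c in range(5, 9)}
         | {(10, c) for c in range(5, 9)})
path = navigate(start=(7, 2), goal=(7, 12), walls=walls)
```

With the penalty switched off (`w=0`) the agent oscillates forever between the two cells closest to the goal inside the barrier; with it, visited cells become increasingly repulsive, and the agent backs out of the cup and goes around.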
The fourth paper, by Suk et al. (2017), is an improvement of previously developed automatic patch-clamp systems. The algorithm of Wu et al. (2016) could patch a visually identified cell, but it required human intervention in about half of the cases. The main reason is that pipette movements induce movements of the targeted cell, so the trajectory of the pipette needs to be adjusted. The straightforward solution is to track the movements of the cell and adjust accordingly, and this is what is done here. The algorithm is made very simple by the (more complicated) experimental design, in which both the pipette and the cell are fluorescent and a 2-photon microscope is used. In this way, tracking the cell is essentially a matter of tracking a fluorescent blob (the cell is in focus when its intensity is maximal). The authors mention that they did not manage to do it without fluorescence. The fluorescence (Alexa) in the pipette is used in several ways: first to locate the pipette tip before brain penetration, then to check that the pipette is not clogged (there is a fluorescent plume flowing out of the pipette), and finally to check whether break-in was successful. There is also a small improvement in sealing: if sealing fails, the pressure is alternated before the sealing procedure starts again. A similar tracking algorithm was proposed simultaneously by Annecchino et al. (2017).
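To illustrate how simple the tracking becomes once the cell is a fluorescent blob on a dark background, here is a toy sketch (my own, not the authors' code; the function names and the thresholding fraction are assumptions): the cell's position is the intensity-weighted centroid of the bright pixels, focus is the z-plane where intensity peaks, and the pipette's target is shifted by the drift of that centroid.

```python
def blob_centroid(img, frac=0.5):
    """Intensity-weighted centroid of the bright blob in a 2D image
    (a list of rows); pixels below frac * peak count as background."""
    peak = max(max(row) for row in img)
    m = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v >= frac * peak:
                m += v
                sx += v * x
                sy += v * y
    return (sx / m, sy / m)

def best_focus(stack):
    """Index of the z-plane where the blob is brightest: for a
    fluorescent blob, focus is where intensity is maximal."""
    return max(range(len(stack)), key=lambda z: max(max(r) for r in stack[z]))

def adjust_target(target_xy, img_before, img_now):
    """Shift the pipette's target by the drift of the cell's centroid,
    compensating the cell movement induced by the pipette's movement."""
    x0, y0 = blob_centroid(img_before)
    x1, y1 = blob_centroid(img_now)
    return (target_xy[0] + (x1 - x0), target_xy[1] + (y1 - y0))
```

For example, if the blob's centroid drifts by one pixel in x and two in y between two frames, `adjust_target` shifts the pipette's target by the same amounts.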
5. Annecchino et al. (2017). Robotic Automation of In Vivo Two-Photon Targeted Whole-Cell Patch-Clamp Electrophysiology. (Comment on PubPeer)
This is an improvement of automatic patch-clamp systems to patch a visually identified cell, very similar to the simultaneously published algorithm of Suk et al. (2017). It uses image processing to track the movements of the cell induced by the movements of the pipette, both of which are fluorescent. The image processing is more complex than in Suk et al. (2017); it might be able to handle more crowded images. Unfortunately, it is described quite briefly in the main text and not detailed in the methods (for some reason, the methods describe the hardware but not the algorithms; the same is unfortunately true of the most closely related previous paper, Wu et al. (2016) – which, oddly enough, is not cited). The code, however, is public, although written for proprietary software (LabVIEW). Oddly enough, the paper introduces a pressure controller as a novelty (compared to the fixed-pressure containers of Kodandaramaiah et al. (2012)), but this was already done by Desai et al. (2015) as well as by Wu et al. (2016) (both uncited).