June 2017

Editorial

This issue features two books (1, 2), a PhD thesis (3) and one article (4). The first book is about the relation between artificial intelligence and human intelligence. Although it was written a long time ago about a different kind of artificial intelligence (expert systems), a number of its arguments are still relevant today. Recently, IEEE Spectrum asked a number of artificial intelligence experts: “When will we have computers as capable as the brain?” Most of them (but not all) seem to think that it will happen within a few decades or less. This book suggests a more humble answer. The second book presents an unorthodox view of evolution based on endosymbiosis, the idea that major steps in evolution come from the union of organisms into a new one, rather than from mutations.

For the first time, this issue features a PhD thesis (3), on patch-clamp automation. Indeed, why not select a thesis in a journal? A thesis is a substantial peer-reviewed and published study, often more detailed and useful than articles. This one shows impressive work in robotics, enhancing automated patch clamp with automated pipette changes (tricky!).

Finally, this issue features one article, showing the coordinated expression of different ionic channels in vertebrates (4).

Books

1. Dreyfus HL and Dreyfus SE (1986). Mind over machine.

This book, written in the 1980s, is a classic criticism of expert systems as a model of human cognition. The major trend in artificial intelligence at that time was logical inference systems based on rules designed by interrogating human experts. It may seem a little outdated, but there are a few interesting elements. First, there is the historical perspective. Artificial intelligence had had a few successes, which motivated claims that machines would soon achieve the level of human intelligence. It also triggered huge investments, both public and private. But these goals were never achieved. All these approaches applied to very limited domains of expertise and failed to produce general-purpose intelligence. To me there is a striking parallel with the situation today, with a number of respected leaders announcing exactly the same thing, that soon machines will outperform and perhaps even replace humans. As with expert systems, the new connectionist generation of artificial intelligence has had impressive successes, and in many ways outperforms the previous logic-based systems. But these systems still apply only to the limited domains for which they have been trained, and there is no sign that any machine understands anything. Machine translation, for example, works remarkably well today, based on modern statistical learning techniques and massive data, but none of these algorithms understands what a car or love is; the field still stumbles on the symbol grounding problem. So we should be more humble, because nothing but our wishful imagination lets us presume that these successes of statistical learning will extend to problems of a different kind, namely the design of autonomous intelligent beings.

Second, the authors argue that there are fundamental differences between the way expert systems and the human mind work. In particular, they criticize the computational view of the mind as the processing of symbols, and argue that it rather seems to operate by a holistic, pattern-matching process (following phenomenological philosophy). This might seem like a trivial point today to connectionists, but this view still underlies much of cognitive science, and in fact in my view the criticism is still relevant to connectionism. Indeed, while a typical neural network might take signals as input rather than symbols (e.g. an image), it is still cast in an input-output processing framework, in which the output is a symbol (e.g. the label of a face, some category) and not a signal.

The third interesting point in the book is about the way humans acquire skills, in contrast to machines. In expert systems, knowledge is fed into the system directly in the form of rules, obtained by interrogating human experts. This may match how humans learn from the experience of other humans, trying to apply rules that are taught to them. But as the authors argue, while beginners start by applying rules, they quickly come to rely less and less on rules and more on a holistic perception of situations, which often leads them to break the rules. This pattern diverges from the way learning is conceptualized in connectionism: the corresponding paradigm would be supervised learning, which is rigid and involves no guidance of this kind.

Overall, although the arguments in the book were targeted at expert systems, many of them still apply to current artificial intelligence: there is a big gap between mind (or biology) and machine.

2. Margulis L (1998). Symbiotic planet.

Lynn Margulis was an unorthodox biologist who demonstrated that mitochondria, the power plants of cells, result not from random mutations as neo-Darwinist theory would suggest, but from endosymbiosis. In other words, mitochondria are bacteria that were engulfed by a cell and live in symbiosis with it. In this book, she presents her theory that the most important steps in evolution come from endosymbiosis, not from mutations, in particular the transition from prokaryotes to eukaryotes, for which there is now convincing evidence. It is a very interesting and refreshing counterpoint to the Darwinist dogma (see the May 2017 issue).

Thesis

3. Holst (2016). In vivo serial patch clamp robotics for cell-type identification in the mouse visual cortex.

This thesis takes patch clamp automation (see the March 2017 issue) one step further, by allowing the robot to change the pipette. This means storing the pipettes on a carousel, filling them with intracellular solution using a pressure controller, placing them on a custom electrode holder, and measuring their geometry (this last part has been published in Stockslager et al., 2017). The designs are quite sophisticated. Amazingly, it seems to work! There is also an improved break-in algorithm, which uses electrical feedback to stop the suction as soon as break-in is detected, and overall a lot of interesting content in this thesis.
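To give an idea of what such a feedback loop might look like, here is a minimal sketch in Python. This is my own illustration, not the algorithm described in the thesis: the pressure, timing and threshold values are arbitrary examples, and the hardware interface functions (measure_resistance, set_suction) are hypothetical placeholders for the amplifier and pressure controller.

```python
# Illustrative break-in loop: apply suction while monitoring the resistance
# estimated from small test pulses, and release the suction as soon as the
# resistance drops sharply, indicating rupture of the patch (whole-cell access).
# All numerical values are arbitrary examples, not values from the thesis.

import time

SUCTION_PRESSURE = -150.0   # mbar, example suction pressure
MAX_DURATION = 2.0          # s, safety timeout for one attempt
POLL_INTERVAL = 0.01        # s, resistance monitored at ~100 Hz

def attempt_breakin(measure_resistance, set_suction):
    """Apply suction until break-in is detected from the resistance drop.

    measure_resistance() returns the resistance (in ohms) estimated from a
    test pulse; set_suction(p) sets the pipette pressure (in mbar). Both are
    hypothetical placeholders for the actual hardware interface.
    """
    r_seal = measure_resistance()          # gigaseal resistance before break-in
    set_suction(SUCTION_PRESSURE)
    t_start = time.time()
    try:
        while time.time() - t_start < MAX_DURATION:
            if measure_resistance() < 0.5 * r_seal:   # large resistance drop
                return True                # break-in detected: stop suction now
            time.sleep(POLL_INTERVAL)
        return False                       # timed out without break-in
    finally:
        set_suction(0.0)                   # always release the suction
```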

Articles

4. Tran T, Unal CG, Zaborszky L, Rotstein H, Kirkwood A and Golowasch J (2017). Ionic current correlations are ubiquitous across phyla. (Comment on PubPeer)

This is a short paper showing that in mice, a number of ionic conductances vary across cells in a correlated way. This is shown in particular in hippocampal granule cells, which are very compact (important for interpreting the results, because of space-clamp issues). This phenomenon had previously been demonstrated in invertebrates; other work had shown that the voltage-dependence of different channels is also correlated (McAnelly & Zakon, 2000). Another interesting finding is that conductances vary with the circadian rhythm.

The co-variation of conductances has important consequences in terms of modeling. It means in particular that conductances are not genetically set; they are plastic, like virtually everything in the cell. The fact that they co-vary, rather than vary independently, suggests that this is not random variation, or more precisely that there is some regulation that ensures that the parameters “make sense”, that is, produce a functional cell. For example, in an isopotential cell, the electrophysiological properties vary only moderately if all conductances are scaled by the same number (i.e. you get similar spikes, but possibly a different excitability threshold). This kind of scaling could result from global homeostatic regulation, for example (see e.g. O’Leary et al. (2014) and other work from Marder’s lab). The data in this paper, however, suggest that the regulation of conductances is more complex than a global scaling. Some conductance pairs are not correlated. In other cases, the linear regression has a positive intercept, so the relation is not linear but affine. Generally, there is also a fair amount of variability around the linear regression, which might be noise from various sources, but which might also simply be the signature of a more complex multidimensional dependence (linear or nonlinear).
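To make the scaling argument concrete, here is a minimal sketch using standard Hodgkin-Huxley kinetics in a single isopotential compartment. This is only an illustration of the argument, not the model or data of the paper: all maximal conductances, including the leak, are multiplied by a common factor k, and the script reports the rheobase and spike peak for each k. The spikes remain similar while the threshold current changes with the scaling.

```python
# Illustration of the conductance-scaling argument with a standard
# single-compartment Hodgkin-Huxley model (not the paper's model or data).
# All maximal conductances are multiplied by a common factor k.

import numpy as np

def hh_trace(k=1.0, I_amp=10.0, dt=0.01, T=30.0):
    """Simulate an isopotential HH neuron with all conductances scaled by k.
    I_amp is a step current (uA/cm^2) switched on at t = 5 ms."""
    gNa, gK, gL = 120.0 * k, 36.0 * k, 0.3 * k   # scaled conductances (mS/cm^2)
    ENa, EK, EL, Cm = 50.0, -77.0, -54.4, 1.0    # mV and uF/cm^2
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # initial conditions near rest
    Vs = np.empty(int(T / dt))
    for i in range(len(Vs)):
        # standard HH rate functions (V in mV, rates in 1/ms)
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        # gating variables and membrane equation (forward Euler)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I = I_amp if i * dt > 5.0 else 0.0
        V += dt * (I - gNa * m**3 * h * (V - ENa)
                     - gK * n**4 * (V - EK) - gL * (V - EL)) / Cm
        Vs[i] = V
    return Vs

def rheobase(k, I_max=30.0, dI=0.5):
    """Smallest step current that elicits a spike (V crossing 0 mV)."""
    for I in np.arange(dI, I_max, dI):
        if hh_trace(k=k, I_amp=I).max() > 0:
            return I
    return None

# Similar spike peaks, but a different threshold current when k is doubled.
for k in (1.0, 2.0):
    I_rh = rheobase(k)
    V = hh_trace(k=k, I_amp=1.5 * I_rh)
    print(f"k = {k}: rheobase ~ {I_rh} uA/cm2, spike peak ~ {V.max():.1f} mV")
```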

(By the way, in case the authors read this comment, the caption of Fig. 2 is incomplete on this version.)
