Non-greedy physicalism is at once appropriately reductive, non-epiphenomenal, and preserves ontic qualia without the need for strong emergence.
Three Ways I've Changed my Mind on Mind
In which I make the following points: 1. The Cortical Fallacy is Real, 2. Consciousness is (Essentially) Not Substrate-Free, and 3. Consciousness and Intelligence are Not the Same Thing
Cognition as Maps-of-Maps
[reading time 5 minutes]
It has been said at least twice recently (by philosophers Patrick Grim and Peter Godfrey-Smith) that consciousness is somehow attributable to "loops" in neural processing. In the jargon of signal processing (as in robotics), this would be called "feedback". In computer science, such processing is called "recursion". Yet neither robots nor computers as we know them are conscious. I have previously proposed that insofar as cognitive loops give rise to consciousness, this could only be done by a system which can map in space and time and which then maps itself. Among other things, this is why you must sometimes return to a certain room or area of space in order to recover a thought. From pages 11 and 12 in The Phenomenology of Animal Tracking (link):
After that, what might the next step in conceptual evolution be for such a creature? It could - and this, as I’ll put forth, would be momentous - develop a nervous system to “map” its world - where are the food and predators (at one moment in time - more on this later). I would argue that this would be the most rudimentary brain possible, as - while not conscious - it would lead to the development of consciousness, because once you have a Complex Adaptive System which has invented a way to map, you have one component of a system which can take advantage of folding its processing back in on itself.
I want to briefly distinguish this concept from the more familiar concept of recursion in Computer Science. Recursion is simply the reuse of code or circuitry to either save memory or otherwise make a device or piece of software more compact. It is clever, in a logical way, but it can always be rewritten to accomplish the same goal with non-recursive loops, and it never results in an amplification of processing or abstractive power - in the jargon of Complex Adaptive Systems, it never results in improved schema.
What I’m talking about with maps is, if a system develops an architecture which can keep track of entities in space, then that architecture - itself a collection of entities in space - might further develop into a system which applies its own mapping structures to map itself, thus resulting in a new and more powerful architecture. In the realm of conceptual evolution, then, this is a plausible beginning to how higher-order brains developed from simpler nervous systems. Only a system which is naturally designed to process spatial/temporal data with mapping and memory could possibly take advantage of mapping and memory abstracted internally. No computer as we know it today could do this - could abstract its power by adding more of the same architecture. But I’m getting a little ahead of myself.
Once a creature can map, and make maps with maps of maps, the stage is set for the next major step in conceptual evolution. If you can create a map, then you can modify your map - you can add and delete elements. There is nothing special in this - you go to spatial position Y to retrieve food which you earlier marked at Y. If the food is still there, you eat food at Y, and you erase Y from your map. Or you find the food has already been eaten, so you also erase Y from your map. However, if you begin to use your map to in essence overlay maps in time, you develop the next (and perhaps ultimate as we’ll see soon) ability in the realm of conceptual evolution. You can make predictions. In other words, spatial and temporal processing architecture could be developed in such a way as to take advantage of turning its processing back onto itself, which develops maps into maps of maps, and these into predictive schemata. Now, whereas once such a creature was wholly a Complex Adaptive System by bodily architecture alone, now it is that Complex Adaptive System architecture with another Complex Adaptive System inside of itself - its brain. It is a model of the world, within a model of the world. It is all material. Yet it is mind and body. And that is us.
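The progression in the passage above - a map, edits to the map, and maps overlaid in time yielding prediction - can be caricatured in a few lines of code. This is purely my own illustrative sketch; the data structures and function names are invented for this post, not drawn from any model:

```python
# A "map" as a snapshot of where entities are at one moment in time.
map_t0 = {"food": (3, 4), "predator": (9, 9)}
map_t1 = {"food": (3, 4), "predator": (8, 8)}

# Editing the map: the food at (3, 4) has been eaten, so erase it.
del map_t1["food"]

# Overlaying two snapshots in time yields a crude prediction:
# extrapolate the predator's next position from its apparent motion.
def predict(pos_t0, pos_t1):
    dx, dy = pos_t1[0] - pos_t0[0], pos_t1[1] - pos_t0[1]
    return (pos_t1[0] + dx, pos_t1[1] + dy)

print(predict(map_t0["predator"], map_t1["predator"]))  # (7, 7)
```

Nothing in this toy "maps itself", of course; the hypothesis is that an architecture built to do the above could then be turned back on its own structures.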
What I want to add here are two pieces of independent research which corroborate my cognition as maps-of-maps hypothesis. But first, briefly, a note on the term "cognition". Within our minds and bodies we employ both conscious and unconscious cognition, with realms in between the two, as on a spectrum; and this spectrum occurs, at least in part, elsewhere in the evolutionary chain of life. By "cognition" here I mean anything on that spectrum which produces some type of qualia or subjective experience - that is, any form of cognition which is not wholly unconscious (non-conscious, robotic, etc).
Now, the first piece of corroborating evidence is from Milner and Goodale's seminal paper on the two-streams hypothesis (link). This research lays out the now well-established fact that after visual data reaches the occipital lobe, it is processed twice: once as a "where" stream of information (the dorsal stream), and separately as a "what" stream of information (the ventral stream). The "where" stream is shown in their paper to process map-type information for vision-sensed objects, and this is done non-consciously. The "what" stream - which yields visual recognition of objects - is processed consciously, with full awareness of the observer, or, as John Locke would have it, "perceiving that he does perceive".
I propose that - evolutionarily - the unconscious dorsal stream developed first. This is a robotic mapping of one's environment. Once this can be accomplished, the visual processes themselves can be mapped - made amenable to tracking and predicting. This became the ventral stream**, and this is exactly the neural-level definition of conscious processing. In other words, this is what consciousness is - neural mapping architecture applied to the architecture signals themselves.
The second piece of corroborating evidence comes from Jeff Hawkins' intelligence research outfit, Numenta, and their work on the role of grid cells in cognitive processing among cortical regions (link). In brief, their research shows that objects are recognized in allocentric (object-centered) coordinates, with grid cells performing coordinate transforms between object and observer coordinate frames. That object mapping is so central to object recognition is extremely telling in the story of how evolution yielded consciousness, as should be obvious by now from the rest of this post.
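For readers unfamiliar with the terminology, the coordinate transform at issue is an ordinary change of reference frame. Here is a minimal 2D sketch (my own illustration - Numenta's grid-cell model is far richer than this):

```python
import math

def allocentric_to_egocentric(point, observer_pos, observer_heading):
    """Transform a world-frame (allocentric) point into the observer's
    frame (egocentric): translate to the observer, then rotate so the
    egocentric x-axis points along the observer's heading."""
    dx = point[0] - observer_pos[0]
    dy = point[1] - observer_pos[1]
    c, s = math.cos(-observer_heading), math.sin(-observer_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# An object at (5, 5), seen by an observer at (5, 3) facing along +y
# (heading pi/2), lies straight ahead at distance 2: roughly (2.0, 0.0).
x, y = allocentric_to_egocentric((5, 5), (5, 3), math.pi / 2)
print(round(x, 6), round(y, 6))
```

Grid cells, on Numenta's account, implement something functionally like this transform between object-centered and observer-centered frames, though distributed across populations of cells rather than computed in one formula.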
The case here is by no means closed. Yet my cognition as maps-of-maps hypothesis is very amenable to testing, both in biological systems as well as more artificial ones. In your own map of technologies and scientific ideas to watch out for, keep this idea pinned.
** - note: work by Dehaene et al. suggests the conscious/unconscious split between the ventral and dorsal streams is not so clear-cut, but in a way that suggests only that the recursive mapping mechanisms, while capable of producing consciousness, may also operate in or contribute to unconscious modes of cognition; see Dehaene, Consciousness and the Brain, pages 58-59.
Sleep Paralysis, the Mind-Body Problem, and the Non-Self
[reading time 5 minutes]
Last night I had my first full episode of sleep paralysis.
It started with this weird-ass dream (see supplementary post). I half-noticed a sound in real life which ended the weird dream about eating penguins. This was the beginning of the real strangeness. The sound (which I now believe was a high-pitched moment of music in a YouTube video my daughter was watching) became immeasurably loud in my mind - out of control - as if amplified in a feedback loop. Eyes still closed, half in and half out of sleep, I had the urge to scream, partly to get whoever was making the sound to shut up, and partly so I could hear my own voice and in that way take a sounding of the depth of noise I was at sea upon.
Details of the Episode
My body lurched in real space and I opened my mouth. I had the sensation of screaming but was realizing the sound was completely in my mind - or was it? Confused and scared, I opened my eyes, and at that moment my mind and senses were fully conscious of reality but my body was still locked in sleep. I had stopped the internal pressure - the will and command to scream - and was now looking at my cat about a foot away from my face, just ignoring my ridiculous situation in his cat way. My body was canted over in some unknown fashion, seemingly tilted at an impossible, gravity-defying angle, but also not. I think I tried to move my arms and legs and experienced no movement, as if I had at once applied the force both to move them and to hold them in place. I could now hear the veridical music of my daughter's YouTube video and knew she was alright (when I first heard the runaway noise I had also felt that she was in trouble), so I called for her help. No sound came from my mouth. My mouth could not even form the shape of words, despite desperate effort. I could almost push air - summon a pressure with my breathing muscles - but still no sound came out. I became incredibly fearful, so much so that I lay there savoring it - I had never felt such a way in my life and recognized the singular experience. I was at once scared, amused, relaxed, and in awe of myself.
Still unable to move except for automatic breathing and the beating of my heart, I was finally able to produce a sound. I tried to say "help" or yell my daughter's nickname, which came out as a pathetic "ugh-UGH". I had heard myself make this sound only two other times in my life. The first was in post-op after a tonsillectomy: I noticed someone yelling, and as I wished they would stop I realized it was I who was making the ludicrous noises. The second was in waking from a random nightmare.
By now my daughter had heard my sounds and was asking what was going on. I wanted to explain but couldn't. Somehow confident that voluntary movement would soon return to my body, I was at last freed, and able to tell my daughter I was having a bad dream. Later that morning I explained the paralysis in full.
Philosophy of the Episode
This strange happening pokes its fingers in the eyes of many philosophical topics. First of all, when I say "voluntary movement", I do not mean a return of my free will - for there is no free will, only determinism. I only mean the causal connection between my relevant neural circuitry and subsequent motor responses.
But the more profound insight here - the more tenacious eye poke - is in the face of the mind-body problem. That is, do we have a soul, or is our consciousness wholly the result of an assemblage of material parts?
If we have a soul - a non-material driver of consciousness - does it depart during sleep? If so, where does it go? If not, how can the soul effect a half-sleep, half-awake state such as sleep paralysis?
As usual, it is much simpler - and just as profound - to do away with such a thing as a soul. So how, then, does a material consciousness express itself liminally between sleep and wakefulness? There are proposals for the neurological mechanics of sleep paralysis, such as this paper, which discusses an abnormal interplay between cholinergic and serotonergic neural pathways.
This interplay reminds us that although we as living beings are ultimately constructed from one basic material "stuff" (call it, say, quantum particles), at the relatively low-entropy levels which allow for life, this stuff has organized and differentiated such that each of us is not really one singular "self". This is true even at levels of cognitive organization well above the simple neurotransmitter level (whether cholinergic, serotonergic, or what-have-you). Consider, for example, Jeff Hawkins' (and his team's) Thousand Brains Model of Intelligence. This paradox of the multiple self is so pervasive in the tree of life that it is present even pre-cognition, evolutionarily speaking - for example, in the quorum-sensing behavior of bacteria.
Anyone familiar with Buddhism will find much to ruminate on here. Or not :-)
If you do choose to further reflect here, think about this: if the brain is really more like a thousand mini-brains, or merely chemicals which get to "vote" to establish thought in the overall brain and behavior in the overall organism - what, truly, is the self? If self is the result of a vote, then self is a concept, not a thing. To be not-a-thing is very profound.
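To make the voting picture concrete, here is a toy caricature (entirely my own, not Hawkins' actual model): many small, partly unreliable modules each cast a guess, and the organism-level "percept" is just the majority.

```python
import random
from collections import Counter

random.seed(0)  # for a repeatable toy run

def mini_brain(true_object, error_rate=0.2):
    """One column-like module: usually right, occasionally confused."""
    if random.random() < error_rate:
        return random.choice(["cup", "cat", "cap"])
    return true_object

# A thousand modules vote; no single module *is* the percept.
votes = Counter(mini_brain("cup") for _ in range(1000))
consensus = votes.most_common(1)[0][0]
print(consensus)  # the winner of the vote - a concept, not a thing
```

The "self" in this sketch is the tally, not any of the voters - which is the whole point of the paragraph above.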
Simulation, Simulation, Concepts, and Qualia
[reading time 10 minutes]
Qualia (Simulation a la Barrett?)
I want to talk about qualia - the way things seem to us, like the taste of wine, or the orangeness of an orange. Some people think qualia is something of a causal dangler, that is to say, the orangeness of an orange is just the way an orange is to us and there really is nothing more to say about it. I disagree. Qualia serves a purpose. I'm going to conclude that qualia is a veneer that simplifies the world for us, because at the smallest scales of reality there is just too much information for us to digest. So let's get there.
I've already talked about how the external world is modeled within us using theories such as Jeff Hawkins' Hierarchical Temporal Memory (link). What I want to add is that "the world" also includes our internal bodily selves - we need to model ourselves as an object in this world as well, particularly our internal energy and status signals (our interoceptive network). This is the insight of Lisa Feldman Barrett in her book How Emotions are Made: The Secret Life of the Brain. In fact, she concludes, emotions are the qualia of our internal world, in direct analogy with the visual/auditory/etc qualia derived from the external world.
Only she doesn't use the word qualia. This seems to be a bridge too far in her book. For example, her constructionist view of visualizing a bee begins with the full qualia of bright/dark blobs, and ends in the full qualia of a bee. I insist on taking this one step further: visualizing a bee begins in fully non-conscious, raw, utterly non-qualia'd light frequencies, perhaps proceeds to bright/dark blobs or other intermediate steps, and finally ends in the full experience of a bee, all via the same constructionist principles invoked by Dr Barrett. In this way, the philosophical "problem" of qualia is solved. Qualia equate to what Dr Barrett calls simulation, or concepts, aka prediction. The way the world seems to us is our low-fidelity prediction of the world (which includes our internal selves), such that we can process an incredibly complicated reality.
Let me pause to emphasize how awesome Dr Barrett and her research are. The insight that we model our internal world like an external world (or rather, that we model them with the same methods) is crucial to understanding consciousness itself. Consider Artificial Intelligence. Modern AI is trained using Machine Learning techniques, and machine learning is at bottom a mathematical optimization problem - one must provide a ground-truth dataset or simulation on which to train the AI. But what is truth to a living being which must bootstrap itself in the world? At the level of our interoceptive network, "truth" is simply whatever optimizes our internal states, from an evolutionary fitness standpoint (see this post on the conceptual evolution of consciousness). This network, being evolutionarily first, provides an optimization framework for our higher-order (evolutionarily more advanced) senses and qualia, such as vision and hearing.
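The optimization point can be made concrete in a few lines: fit a model to a supplied "truth" dataset by minimizing error. This is a generic sketch of supervised learning, not tied to any particular system discussed here; the numbers are invented for illustration.

```python
# Fit y = w * x to a tiny "truth" dataset by gradient descent on
# mean squared error - the optimization target IS the supplied truth.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w, lr = 0.0, 0.01
for _ in range(500):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges near 2.0
```

A living system has no such hand-delivered dataset; on the view above, the interoceptive network supplies the error signal instead.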
Another amazing insight from Dr Barrett is humanity's use of words to spread concepts amongst ourselves. That is, rather than having to model every piece of the world from scratch, we can (and inevitably do) copy the way those around us have modeled the world as a starting point within ourselves. This explains the centrality of language in what makes us human, and the ability of words to store cultural energy, as I've said here (link).
Locke as Constructionist
Having rightfully praised Dr Barrett, let me register one more criticism. In at least three whitepapers, Dr Barrett lumps philosopher John Locke into the essentialist camp; but as I'll show, Locke is squarely a constructionist, especially when it comes to Dr Barrett's particular field of study, emotions. That Locke has been mistaken for an essentialist is a misunderstanding of Locke's comments on Aristotle (as Nigel Leary argues in How Essentialists Misunderstand Locke). This is ironic, since Dr Barrett corrects similarly egregious misunderstandings made in the names of Charles Darwin and William James.
So what does Locke himself say about emotions? Here is Locke in Book II, chapter 20, paragraph 14 of An Essay Concerning Human Understanding,
These two last, envy and anger, not being caused by pain and pleasure simply in themselves, but having in them some mixed considerations of ourselves and others, are not therefore to be found in all men, because those other parts, of valuing their merits, or intending revenge, is wanting in them.
Thus emotions are not universal or based in essences, but are rather constructed by our societal concepts. Here, "pain and pleasure" for Locke are his version of affect, which he further refines as "uneasiness" and "desire". From Book II, chapter 21, paragraph 31,
All pain of the body, of what sort soever, and disquiet of the mind, is uneasiness: and with this is always joined desire, equal to the pain or uneasiness felt; and is scarce distinguishable from it.
These are even specifically associated with body budgeting concerns (Book II, chapter 21, paragraph 34),
And thus we see our all-wise Maker, suitably to our constitution and frame, and knowing what it is that determines the will, has put into man the uneasiness of hunger and thirst, and other natural desires, that return at their seasons, to move and determine their wills, for the preservation of themselves, and the continuation of their species.
Further on the construction of emotions ("passions") from affect ("uneasiness/desire") - Book II, chapter 21, paragraph 40,
But yet we are not to look upon the uneasiness which makes up, or at least accompanies, most of the other passions, as wholly excluded in the case. Aversion, fear, anger, envy, shame, etc. have each their uneasinesses too, and thereby influence the will. … Nay, there is, I think, scarce any of the passions to be found without desire joined with it.
Here is Locke, in 1689, fully anticipating constructionism, even psycho-physiological affect; and we are not to discredit Locke for calling these by different names, as he did not have the benefit of the wealth of cultural knowledge stored in 20th-century words (itself an amazing example of the power of words-as-tool).
Once more, from Book II, chapter 21, paragraph 46,
The ordinary necessities of our lives fill a great part of them with the uneasinesses of hunger, thirst, heat, cold, weariness, with labor, and sleepiness, in their constant returns, etc. To which, if, besides accidental harms, we add the fantastical uneasiness (as itch after honor, power, or riches, etc.) which acquired habits, by fashion, example, and education, have settled in us, and a thousand other irregular desires, which custom has made natural to us, we shall find that a very little part of our life is so vacant from these uneasinesses, as to leave us free to the attraction of remoter absent good.
It's time to give Locke credit where credit is due.
So there it is. Qualia is the way in which a consciousness - a tiny subset of a world - can know things about that world, including itself. Qualia is constructed within us for this purpose. Locke knew as much. Only he didn't have all the right words for it ;-)
Simulation a la Bostrom (post-script)
One final note on simulation... "Simulation" as used above refers to one of the ways Dr Barrett describes the prediction mechanisms of our brain. These days in Philosophy, however, the word "simulation" is likely to evoke the paper Are You Living in a Computer Simulation? by Nick Bostrom. That is not what is meant above. But now that I've brought up Bostrom's paper, let me answer the question posed by its title: no, we are not living in a computer simulation. My counterargument is as follows. Using Bostrom's own template, either we are the most advanced intelligence in the universe or we are not. Call the probability of each 50/50, since we have no evidence either way to moderate that starting probability. It follows that there is an enormous chance we are as ants living under the foot of an advanced intelligence that is not our own. But it cannot be both likely that we are a simulation of our progeny's making and likely that our intelligence is subordinate to an alien superintelligence - unless we are in a simulation of alien design, or would have that intelligence intervene in our lives in some other way (or not, but that is only one branch of the larger probability). This paradox of possibilities stems from the fact that Bostrom's template begins in fantasy and is not sufficiently mediated by fact. If we extrapolate our future from unmitigated imagination, then the limiting factor is not fact but just that - imagination.
The Phenomenology of Animal Tracking
[reading time 20 minutes]
What is the relevance of Phenomenology in elucidating mind in the 21st century, when elucidating mind is increasingly accomplished via neuroscience and artificial intelligence? I will compare Merleau-Ponty's World War II-era work to a modern attempt at modeling the mind (Hierarchical Temporal Memory) and show that Merleau-Ponty anticipates, albeit roughly, some of its key findings. In doing so, I will also contribute to Philosophy of Mind by offering a plausible evolutionary pathway and mechanism for the production of consciousness (the maps-of-maps hypothesis). An additional thread running throughout will be animal tracking as a type of Phenomenological exercise with a surprising impact on cognitive evolution.