MIND: Identity rides this horse (3)

Inside and Underside

Inside of a person it’s too dark to read. — after Mark Twain

There is another realm of perception — that of internal (bodily) perception.  On our mind map (Figure 1) internal perception (in purple) is a gateway connecting the conscious Ego Tunnel with bodily events in the unconscious realm.  Philosophers are interested in these internal events because they have long believed that, compared to our perception of the outside world, internal perceptions are special and thus contribute to our gaining and retaining a sense of self.


Figure 1: Map of the Embedded Mind

There’s a paradox here.  Even if you were a “philosophical zombie” (behaving like a person but with no sense of self), you would still have a rich inner life because of all the bodily sensations that you have.  But what is special about them is that, to you, they are so clearly yours and yours alone.  Nobody else has the itch that you are currently feeling.  Nobody else feels that particular stomach grumble, heart beat, knee pain, nausea, or dizziness.  Nobody else can use your muscles to guide your finger to your nose with your eyes closed.

This knowledge is an instance of the philosophical notion that there are some first-person things we cannot be wrong about.  The philosophers say that we can’t be wrong about who we refer to when we say “I believe”, “I am” or “I feel”.  We might be wrong about the truth of a belief itself (such as “Climate change is a Chinese hoax”), or be deluded about some quality that we think we have (”Trust me, I’m like a smart person.”).  But we can’t be wrong about the fact that it is ourselves we are talking about when we say that we believe a thing.  Philosophers call this principle “immunity to error through misidentification”.  That is a truly awkward phrasing.  Even they are glad to shorten it to “IEM”, because they like to talk about it a lot.

Some things that our body knows and deals with may never rise to consciousness: that we need more blood flow, more glycogen dumped from the liver, more digestive acids, and other stuff that just keeps the human animal running.  If you went to school in the twentieth century, this was called maintaining homeostasis, a steady state. The idea that life needed a steady state was, in a way, a reflection of the needs for stability that we had in that turbulent century.  Now in the twenty-first we are all about finding and pushing our limits. We only have some inkling of these unconscious processes if we are exceeding our operating specifications.  Such excess can happen, for example, because of disease or because of some cheerleading (”Go! Do something unusual or hard to prove how good you are.”) from the conscious mind.  So a big dessert after a plenty big meal might have you tasting excess stomach acid.  A marathon makes you feel, rightly, that all the glycogen is gone, and that your heart couldn’t possibly pump any faster.  We push beyond homeostasis, and automatic processes can rise to consciousness.

There are some other internal things that we just know, as the philosophers say, transparently, without thinking about them.  These include such things as the position of our limbs, whether we are tired or hungry, where we hurt.  These may pop into consciousness or not, depending on our current needs.  If they do, they are IEM, that is, we know they are happening to us and not someone else. We probably don’t start out life knowing this IEM stuff, because for a while it is hard for us to discriminate between what is true of ourselves and what is true of others, particularly our caregivers.  We presumably learn the difference ultimately by observing others, that their reactions to things are different from our own, which leads to us acquiring the above-mentioned theory of mind.  Theorists have noted that knowledge of others might very well contribute to the development of our own sense of self.

In the long run we get quite sophisticated about inferring (or knowing) things about our selves.  These things divide quite nicely, from a philosophical point of view, into agency and ownership.  Agency is the sense that you and you alone have caused something to happen, such as taking a walking step, or heartburn after over-eating.  Ownership is the belief that some attribute (like your having two legs) is, without a doubt, yours and yours alone. These aspects of self knowledge are not always IEM (immune to error about whether they refer to our self), but we tend to think they are because we are so familiar with other aspects, as explained above, that are IEM.  Philosophers currently love to talk about agency and ownership because (1) they are important to our sense of self or personal identity, and (2) the situations where we can get them wrong are ripe for arguments over what they mean.

What would be some examples of getting them wrong?  In terms of errors about agency, people with schizophrenia often say that someone else put a thought in their own mind, or that someone else caused them to believe or even do something. A more everyday occurrence is when those of us who are not psychotic still find a way to blame someone else for our actions.  Doubt about self agency can be more subtle, as when one says or thinks: “Did I just say that (awkward or regrettable utterance) out loud?”

The most commonly known example of error about ownership is the rubber hand illusion.  This illusion, even though it is really quite mind-blowing, can be produced by anyone; you don’t have to be an experimental psychologist.  The illusion pairs the stroking of a subject’s concealed hand with a view of a rubber hand being touched in the same way.  Eventually the subject will feel his own hand being touched when he sees the rubber hand being touched, even though his own hand is NO LONGER being touched.  In effect the subject takes the rubber hand to be his own.  We could also say that some things that happen during hypnotism disconnect us from ownership, e.g., of a physical symptom, or an emotional connection to a traumatic memory.

I leave it to the reader to imagine ways in which drunkenness could mess up agency or ownership.  In general, though, agency and ownership tend to work pretty well nearly all the time.  I can tell whether I moved my arm or whether the doctor did it while examining me.  So we know what’s us and what’s not.  Some argue that such knowledge is foundational, that without it you would not have the kind of sense of self that you take for granted.  Even the Buddhists, when speaking about[1] the illusory Self, note that one may identify with some mental events as “me” (= agency) or “mine” (= ownership).  What I want to emphasize here is that these everyday things we know about ourselves, and that give us the feeling that “this is me,” are generally either directly retrieved from the unconscious or else built on top of knowledge from the unconscious.

Note that if you are not psychotic, drunk or brain-damaged (which is hopefully most of the time), the self knowledge described above is also, according to Quassim Cassam[2], “trivial” because it is so easy to come by.  Cassam notes that philosophers nearly always ignore the kind of self knowledge we really care about, “substantial self knowledge”: that is, a more or less accurate understanding of our own character, limitations and aspirations.  Such knowledge requires mental sophistication and use of information from other people.  We can be very deceived about this.  A person might not know that he is a narcissistic bastard, while others, perhaps many others, do know.   Nearly all of us choose to believe only feedback about our imagined good qualities, throwing anything else into a mental trash can.

Which brings us to the fact that some deficits in our substantial self knowledge are also traceable to the unconscious.  We often feel things or take actions that seem to come from nowhere, because unconscious parts of us suddenly seize control without yielding any awareness about why.  Some people give the Jungian name, the Shadow, to this part (or parts) of the mind responsible for perplexing bursts of temper or tears.  These parts may have been actively repressed earlier in life.

Another mode of unconscious expression, according to anthropologist Gregory Bateson[3], is our nonverbal communications.  He said that we express through body language and tone of voice things that are not really translatable into words.  While some nonverbals can be feigned by actors or other speech makers, they normally happen without a conscious decision or even awareness.  Thus some expressions of our personal style and feelings, which might otherwise give us self insight, can be obscured from us, and often are perceived only subliminally by others.

We are only hitting the highlights here, so it is time to talk about how things get to be conscious.

Fishing in the Unconscious Reservoir.

“Consciousness … does not appear to itself chopped up in bits.  It is nothing jointed; it flows.  A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described.” — William James, 1890

William James’s metaphor of a conscious stream is meant to emphasize continuity, how one thing flows into another.  He says that we have transitions from one focus of thought to another, but each transition partakes somehow of both the previous and following thought, the net result being absolute subjective continuity.  We noted before that, with modern research tools, we know there are tiny gaps in perceiving and mental processing, but we aren’t aware of those in consciousness.  Therefore James is, once again, correct from his perspective of more than a hundred years ago.

Of course a real stream might have different currents going alongside one another.  Sticking with the stream metaphor, that would allow that we could have two or more thoughts running in parallel.  Can you think about more than one thing at a time?  Not at a conscious level.  You might do more than one thing at a time, but you are doing it by rapidly switching back and forth.  So our prized “multi-tasking” is just an illusion, and, research shows, is neither efficient nor harmless.  Trying to do chores and manage young children “at the same time” is incredibly wearing.  Trying to drive and mess with your phone is a major cause of “accidents.”

Now there’s a paradox here.  The brain has often been described as an enormous parallel processor.  And we know that it does fantastic numbers of things at once, like an octopus with a million arms.  But, consider that we have this commonsense concept about “paying attention”.  Attention is also a longstanding concept in psychology.  What both concepts mean is that at any given time the stream of consciousness has a single directed focus.  Our attention is on one thing at a time, such as: the appearance of the person in front of us, or what they are saying, or how what they are saying relates to what they said before, or how having to listen to them is annoying because we need to get a drink instead.  Note that in the first two examples attention is focused on the Ego Tunnel surface facing the external world.  In the other examples attention is focused on the internally facing surface, the border between consciousness and the unconscious.

“Attention” then is our everyday term for changes in the contents of our single stream of consciousness.   For a while I thought that we could blame the I* for changing attention, like it was some sort of channel changer remote control.  But that gives full control over consciousness to the part that we have described as pure awareness, i.e., an observer only.  Changes of attention are the same thing as changes to the contents of our conscious stream.  If we now sort out how those changes happen, then this long-winded “basic” overview of our simple mind map can come, mercifully, to a conclusion.

The stream of consciousness arises from five main mechanisms.

  • Unconscious mental processes have to take turns getting access to consciousness.
  • Subjectively we see this as shifting of our attention. Sometimes the shift seems volitional and sometimes it seems to be imposed upon us.
  • Memories come into consciousness, sometimes being pushed up from the unconscious and sometimes after a mental effort to find them.
  • We explore the future by imagining it.
  • We annotate our own perceptions and thoughts, adding or extracting meaning that also serves to help future extraction from memory.

So we only have a single stream of thought, more like a water pipe, really, but there’s a lot going on, both in the world around us and in the many parts of our society of mind.  How is it determined which things get into consciousness?  My 2 1/2 year old daughter once said to me, “Daddy, I know everything now.”  That would be us if we knew how consciousness gets filled up.  One natural model would be to suppose that thoughts and perceptions have to compete for access to the stream, based on some criteria.  It feels like a competition at some times (when driving, lost, in fast traffic in the rain with a baby crying in the back seat) more than others (reclining on white sand with a mild breeze and gentle waves rolling in).  But what kinds of mental events would compete?

First of all there are immediate, deliberate decisions.  Following our train of thought, we just naturally decide: to look at something, to listen for something, to say something, to recall a name or what someone said last time we saw them, to memorize a phone number, that it’s time to stop and look for birds, to take a selfie, to … OMG did I leave the water running?

I threw that last one in even though it feels different.  It might be an unconscious warning that somehow finally got access to consciousness.  It instead might be the result of thinking about what you have to do when you return home, which causes some kind of mental spark to jump from the idea of home to the memory of turning on the garden hose.  That spark is not necessarily intentional, but is a bit more than a happy accident, because you are a person who tries to avoid goof-ups.

The unconscious is a huge reservoir of stuff that sometimes becomes conscious.  As Galen Strawson said, “The conscious/non-conscious border is both murky and porous.”  We symbolize this by a double-headed arrow in our Map.  Some unconscious material emerges (from our conscious perspective) apparently spontaneously.  But it is probably the case that we have a large number of unconscious watchers, each with permission to interrupt consciousness when certain things happen.  This could be a long list, including internal and external perceptual events: sudden loss of balance; the urge to sneeze; large object approaching fast; hearing your spouse’s voice; bell on the shop door rings.  Basically our conscious bell gets rung because of unplanned events related to a whole spectrum of biological drives, emotions, motives and preplanned goals or intentions, all of which live in the unconscious part of the mind.
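
To make the watcher idea concrete, here is a toy sketch in Python.  It is a cartoon, not anyone’s actual theory: the watchers, their trigger conditions, and their priorities are all invented for illustration.

    # Toy model of unconscious "watchers": each monitors for a condition and
    # may interrupt the single conscious stream. All names, conditions, and
    # priorities are invented; no claim the mind literally runs this loop.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Watcher:
        name: str
        condition: Callable[[dict], bool]  # fires on some perceptual event
        priority: int                      # urgency; highest wins the interrupt

    watchers = [
        Watcher("loss_of_balance", lambda e: e.get("tilt_degrees", 0) > 30, 10),
        Watcher("looming_object",  lambda e: e.get("approach_speed", 0) > 5, 9),
        Watcher("spouse_voice",    lambda e: e.get("voice") == "spouse", 4),
        Watcher("shop_doorbell",   lambda e: e.get("sound") == "doorbell", 2),
    ]

    def conscious_stream(current_focus: str, event: dict) -> str:
        """Only the highest-priority triggered watcher gets to seize the stream."""
        triggered = [w for w in watchers if w.condition(event)]
        if triggered:
            return max(triggered, key=lambda w: w.priority).name
        return current_focus  # no interrupt: the train of thought continues

    print(conscious_stream("daydreaming", {"sound": "doorbell"}))  # shop_doorbell
    print(conscious_stream("daydreaming", {"tilt_degrees": 45, "sound": "doorbell"}))  # loss_of_balance

The point of the sketch is only that many monitors can run at once while a single winner, chosen by urgency, reaches the one conscious stream.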

Other changes to the conscious stream are deliberate, activating dormant parts of the mind to do specific jobs.  You might need to do some arithmetic, or estimate your chances of getting away with something, or make a pizza crust, or type on a keyboard. These mental parts have been variously characterized.  For Minsky they were the utility parts of the society of mind.  Metzinger calls them “virtual organs of consciousness.“ The psychologist Robert Ornstein said they are “small minds” of different types, made up of major and minor abilities and biases.

All of your memory is unconscious until you actually recall something from it.  The act of trying to recall can provide a simple demonstration that conscious thinking is not necessarily done in words.  When you try to recall a name or other word you cannot say the word mentally because you don’t at that time know it.  But in some sense you know about it, as a hole or gap in your knowledge.

Suppose you have an intention of telling someone that the new English teacher’s given name is Myrtle.  You might want to ask Dan if he’s met Myrtle.  Mentally you start to formulate a sentence like, “Have you met <what is her name?> the new teacher?”  But you don’t even finish the plan for saying the sentence out loud because you can’t supply her name.  So you try to remember her name, a mental effort that doesn’t use words, but instead is sort of a mental reaching that is associated with thoughts about the missing name.  Maybe you think that the name had an “r” sound, or you can “see” (i.e., recall) her face, or you know where you first met her.  Your conscious mind is doing something like a search engine query, and it somehow knows when the right answer is recalled.  This is because other memories come along with it that confirm the episode in which you learned the name.
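
Pushing the search engine metaphor one step further, here is a minimal sketch of cue-based recall.  The memory store, the cues, and the matching rule are all invented for illustration.

    # Toy cue-based recall: a query of partial cues (an "r" sound, English
    # teacher) retrieves the best-matching episode, and the episode that
    # comes along with the name is what confirms the answer.
    from typing import Optional

    episodic_memory = [
        {"name": "Myrtle", "cues": {"r_sound", "english_teacher", "staff_room"},
         "episode": "introduced in the staff room on the first day"},
        {"name": "Marvin", "cues": {"r_sound", "math_teacher"},
         "episode": "argued about parking spaces"},
    ]

    def recall(query_cues: set) -> Optional[dict]:
        """Return the stored memory whose cues best overlap the query."""
        best = max(episodic_memory, key=lambda m: len(m["cues"] & query_cues))
        return best if best["cues"] & query_cues else None

    hit = recall({"r_sound", "english_teacher"})
    print(hit["name"], "-", hit["episode"])  # Myrtle - introduced in the staff room...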

Very often we begin with recall of an episodic memory, and then we consciously wander through other memories, exploring the past.  The conscious stream can also be taken over by explorations of the future.  We start to imagine plans and fantasies: how the future might be shaped by actions or events.  We compare this future with the past or present.  Conscious wandering in the past and future is, we think, one of our big adaptive advantages compared to other animals.  But we need our unconscious watchdogs to also take care of the present.  Early humans who were too lost in thought also got “lost” to predation, or walked off a cliff.

There is another category of conscious activity that seems obvious to me, although I have not yet seen it described.  At times when we are experiencing or thinking, we seem to mentally annotate those experiences or thoughts.  We do this by thinking or feeling more deeply about those mental events, perhaps extracting meaningfulness or assigning importance, either of which might change how accessible they later are to the “remembering self”.

It is probably obvious that any overview of the mind’s structure and function, and certainly this overview, has to be shallow, ruthlessly pruning away sometimes famous material.   For instance, if you were taught Freud or Jung back in the day, you might miss them here. Some important concepts didn’t make the cut yet, but will be developed later if they help with our theme of personal identity.


[1] Waking, Dreaming, Being: Self and Consciousness in Neuroscience, Meditation, and Philosophy. Evan Thompson, 2015.

[2] Self Knowledge for Humans – Beginner’s Guide. Quassim Cassam.  http://www.self-knowledgeforhumans.com/beginners-guide.html

[3] Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. Gregory Bateson. University of Chicago Press, 1972.

MIND: Identity rides this horse (2)

The BodyMind

These days the mind is rarely considered in isolation from the body.  Even in science fiction, with its mind transplants, uploads, and teleportation, writers dwell on how mind and body are entwined, so that new minds must adjust to new bodies, or miss the old ones.  It’s also true that when people speak about the brain and the mind, it’s a shorthand, because what matters is really the whole nervous system.  Over time, research has expanded the physical basis of the mind.  We now know that bacteria in your gut can do things like make you want to eat sugar, apparently by affecting the large part of the nervous system that resides in the gut.

So, in Figure 1 we don’t hold body and mind that much apart.  The map has one big beehive shape, the BodyMind, that is their combination.  Its orange outline is more or less the border of the body.  The rest of the diagram is a square-topped shape representing the Outside World.

Within the BodyMind there is the Ego Tunnel, and inside it, all that we are aware of: what the philosophers call the contents of consciousness.  The rest of the BodyMind, outside of the Ego Tunnel, represents the unconscious mind: everything from the sensory organs and their output, to our past stored as memories, to all the other work of the BodyMind of which we are not consciously aware.


Figure 1: Map of the Embedded Mind

The Ego Tunnel’s surface (the blue, wavy line) looks like a border between conscious and unconscious territory, but it’s not just a border.  The Ego Tunnel is also a computational process, our own personal virtual reality generator that consumes some huge fraction of the processing power of our nervous system.  Also, in our visual metaphor, the inside of the Ego Tunnel is part of it, the reason for its existence — the Jamesian Self.  We have talked about the function of its parts before.  Let’s now look into how they fit into the overall holon called the embedded mind.

Metaphors for Me*

Figure 1 shows the I* immersed in/surrounded by the yellow space of the Me*.  The I* can be focused on things outside of the Self, via the senses, and it can be focused on itself, the Me*.  Some information, such as raw bodily sensations, no doubt flows directly to the I* from the unconscious, but many things observed by the I* are mediated by the self model, the Me*.  Anything we say about the Me* is of course just a metaphor.  The Me* is a thing unlike any other, so we struggle to classify or describe it.

The narrative metaphor discussed earlier is helpful, but it’s doubtful that we could push that any further and think of the Me* as a book, even a book that continues to write itself.  People just don’t keep their whole life available to re-examine at any time. At its most reduced the metaphor would be something like Dennett’s “center of narrative gravity.”  But Dennett is a contrarian.  This is just one of his ways of saying that consciousness doesn’t really exist as a thing, that there is nothing about it that we need to explain, that there’s no Hard Problem of Consciousness.

Another powerful metaphor for the Me* is that of a model or simulation, as proposed by Metzinger.  Either term has the sense of something that stands for something else by virtue of simplifying it.  Prior to the digital age, use of either “model” or “simulation” would have meant that the simplification was rather extreme.  Even a mathematical simulation would lack dynamism and detail.  These days, however, computation allows us to simulate something to an astonishing degree of likeness, and our thinkers routinely imagine simulations of alternate realities as being possible.  So a simulation implies the dynamism that is lacking in a narrative.

But wait a minute — if you simulate part of the Self, what is the thing being simulated?  It can’t be some Self that is more real.  Philosophy’s usual answer to that question is that the simulation is of “what it is like (remember Nagel’s conscious bat from the previous chapter?) to be you.”  This still sounds circular: the Self is a simulation of the Self?  That’s why it always seems so helpful to say that there is an I*, which at least gives us something that is aware of the simulation.

To understand the functioning of the Ego Tunnel further, we need to consider what is outside the Tunnel and how it gets into consciousness.  Let’s turn to what lies outside the Ego Tunnel’s border, a border that is, according to Galen Strawson, “both murky and porous.”  As our diagram shows, there are two realms outside the Ego Tunnel: the outside world and the internal, unconscious mind.

Perception and Sociality.

Figure 1 shows perceptual processing, largely automatic and unconscious, as the gateway through which the simulation of the outside world arrives into the Ego Tunnel.  As we have seen, crisp and live as that world seems, it is only a model, poor in detail, with invisible gaps.  I have been hearing that refrain since my college psych courses, but always I (actually, my Me*) thought, ‘So what, it works well enough and any goofs (illusions) that it makes are just curiosities.’  The reality behind this unreal simulation is something else, however.  First of all, quantum theory completely destroys the solidity of the outside world.  All those particles that make up our seemingly solid environment are, on the many-worlds interpretation, continually forking into different branches of the multiverse (on other interpretations, their wave functions collapse).

Suppose you shrug off the idea of quantum theory as being reality’s clown suit, and assume that the body of reality somehow stands naked and exposed to us.  There are still gobs of evidence that we see and hear and feel things that aren’t there, while missing some things that are there.  Our abilities are designed to maintain the illusion of a stable outside world, even though the raw data we start with represent only bits and pieces of that world.  We all know about the visual blind spot, but we never see it.  Our eyes continually make tiny movements, but we are never aware of them.  When we scan across a scene our minds fill in the gaps between the images seen at the beginning and end of a scan.  The fine detail of the world’s visual appearance is only there for focal vision.  Peripheral vision is an entirely different, lower resolution simulation, but we believe that it and the focal world are continuous, one and the same.  We finely discriminate colors when they are side by side, but can’t identify them later in a line-up.  And the same color perception can be caused by different combinations of light wavelengths.

There is a theory, called critical realism, that evolution has forced us to evolve senses that accurately model the outside world; how else, the reasoning goes, would we find our food and mates, and escape danger?  This theory is being replaced by one based on a mathematical proof[1] that what matters is identifying something more quickly while expending less energy to do it.  That would mean that we would evolve perception that matches reality only if reality were already structured in a way that meets our needs.  Most of us do not think that the universe was designed in some way that favors human beings over trees or rocks.  Therefore it’s most likely that we are perceiving the world in our species’ own quirky ways.  This is nature’s way.  For my dog, half of the world is food and the rest is a parking lot.

The model in Figure 1 does suggest that, for our minds, other people are as important, or more so, than is the rest of the outside world. Like other social species, our senses are tuned to immediately pick out and identify our conspecifics from an early age.  When we are touched by another person it lights up a different part of the brain than other touches do.  Robin Dunbar, a primatologist, popularized the notion[2] that we have big brains because they are needed to deal with the complex soap opera called other people.  He also looked at neocortex size in primates and found that it correlated with the size of their social networks.  For humans the number is 150, which pops up all over the place, from hunter-gatherer clans to army companies, to the number of people with whom we can maintain personal relationships.  Dunbar and others then looked at levels of social intimacy and found that, starting with an inner clique of 5 best friends, the sizes of our expanding social circles increase by a factor of 3 as levels of intimacy decrease.  This is shown on our Map as concentric arcs in the social environment.
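
For the arithmetically inclined, the layer structure is a rough geometric series.  A minimal sketch follows; note that the conventional layer sizes of 5, 15, 50, and 150 quoted from Dunbar are rounded, while a strict factor of 3 gives the numbers below.

    # Dunbar's social circles as a rough x3 geometric series.
    inner_clique = 5
    for level in range(4):
        print(f"layer {level}: ~{inner_clique * 3 ** level}")
    # layer 0: ~5    closest friends
    # layer 1: ~15   good friends
    # layer 2: ~45   friends (usually quoted as ~50)
    # layer 3: ~135  meaningful contacts (the famous ~150)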

As a onetime primatologist myself, I am warmed by the fact that Dunbar (who did his thesis on the Gelada baboon at the same time that I was studying rhesus monkeys), using primate data, contributed perhaps the most widely known quantitative theory in all of social and psychological science.  But he went further than that, all the way out to philosophy, when incorporating the concept called Levels of Intentionality.  We know that a child first realizes at about age 4 that other people have their own mental states.  We say that the child now has a Theory of Mind (ToM).

Philosophers such as John Searle say that some subjective (mental) states are directed to the outside world, creating a relationship between us and that world called intentionality.  There are levels of intentionality, denoted by how many such relationships hold simultaneously.  A child may have a theory of mind about her doll with whom she is having tea.  This is level two: she thinks (level one) that her doll likes (level two) tea.  Dunbar thinks that many animals have level one (they believe something, or intend something); some might have level two.  So how far can this levels business go?  Listen to Dunbar as he describes the magic of storytelling.  Count the mental-state verbs to follow the levels.

  “… the audience must understand that Iago intends that Othello believes that his wife Desdemona wants to run off with Cassio (which would probably not be much more than idle fantasy by Desdemona were Iago not able to convince Othello that Cassio himself also wanted the same outcome) … if they also have to factor Cassio’s complicity into the equation to make the deception convincing for Othello, the audience has to be able to work at fifth order intentionality.  But to do this, Shakespeare himself must operate at one level higher: he must intend that the audience understands … etc.  Shakespeare was having to work comfortably at sixth order intentionality, and this is now one level beyond the normal limits for most adult humans.”
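
For the computationally minded, the level counting amounts to nesting depth.  Here is a toy rendering of the Othello fragment; the data structure is my illustration, not Dunbar’s formalism.

    # Levels of intentionality as nested mental-state attitudes; the order
    # is simply how deeply the attitudes nest.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Attitude:
        agent: str
        verb: str                        # believes, intends, wants, understands...
        content: Union["Attitude", str]  # another attitude, or a plain proposition

    def order(a: Union[Attitude, str]) -> int:
        """Each embedded mental-state verb adds one level."""
        return 0 if isinstance(a, str) else 1 + order(a.content)

    fragment = Attitude("audience", "understands",
                Attitude("Iago", "intends",
                 Attitude("Othello", "believes",
                  Attitude("Desdemona", "wants", "to run off with Cassio"))))

    print(order(fragment))  # 4; factoring in Cassio's complicity makes it 5,
                            # and Shakespeare's intent about the audience makes 6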

So now we know one more way in which Shakespeare was a brainiac.  Dunbar and others emphasize the cognitive processing power, hence bigger brains, that is needed for higher order intentionality.  In primates, relative neocortex size correlates with social group size and other social behavior measures.  But is all that work being done consciously, i.e., by the Me*?  Perhaps not.  Metzinger and others have noted that many things we perceive about other people happen immediately, like other, seemingly simpler, perceptions that don’t require conscious thought.  The standard example of simple perception comes from philosophers, who nearly always cite seeing and touching a book in front of them.  One can only wonder why they pick that example.  On the other hand your brain can heat to the smoke point when figuring out the maddeningly subtle and indirect intentionality of the characters in a John le Carré novel.

So in our Map, “sensory processing” belongs to the unconscious part of the mind.  We recognize birds, books, smiles and eye contact without thinking.  Indeed it’s likely that most body language and other nonverbal communication stays below the level of conscious notice entirely.  Higher level social impressions, such as any exchange involving conversation, require some conscious reflection to be understood.  Much to do with our personal identity involves higher order intentionality, as in the sociologist Charles H. Cooley’s concept of the Looking Glass Self.  There are 4 levels (just count the verbs) in his prototypical statement about social influence:

“I am what I think that you think I am.”  Charles Horton Cooley, Human Nature and the Social Order, 1902.


[1] Natural Selection and Veridical Perceptions. JT Mark, BB Marion, DD Hoffman. Journal of Theoretical Biology, 2010.

[2] The Social Brain Hypothesis and its Relevance to Social Psychology. RIM Dunbar, Annals of Human Biology, 2009.

MIND: Identity rides this horse (1)

The Mind Map

Personal identity rides the mind.  The mind rides the brain.  The brain rides the body (surprised you there?).  The body rides whatever conveyance (horse, skateboard, car) that the identity chooses.  So it’s not riding “all the way down”, as the old joke goes.  However, the joke is about the concept of nested  levels of organization.  That concept is a powerful aid to understanding, especially when we think of holons.  A holon is a whole, made of parts whose integration into the whole creates an entirely different thing from the parts.  And of course every part is also a holon made of other parts. So it really is holons “all the way down”. What makes this so useful is what Ken Wilber claims: anything that exists or that you can name is a holon.   We are interested in personal identity.  I can summarize a lot of esoterica by stating that a personal identity is a holon made up of a human organism and a mind, embedded in a multi-layered social system.

But what’s a mind made of?  What are mind parts?  The previous chapter on the durability of personal identity made a start in naming some key parts of the mind.  But if we want to understand identity, whose most mysterious and intriguing part, indeed the very horse it rides, is the mind, then we need to know what’s included in a mind and how it all fits together.  Any solid concepts that we find could help later when talking about current and future changes to identities, and to what extent a person might have more than one.

Actually attempting to map the parts of the mind in public like this is risky because it is contested territory; the authorities, experts all, agree to disagree.  The risks might not be like walking in the proverbial mine field, but they are like walking in a dog park where the canines themselves are in charge of cleaning up: messy and likely to be embarrassing.  Still, we are engaged in chimerealism, so let’s get to it.


Figure 1: Map of the Embedded Mind

Behold Figure 1, a diagram of the holon called embedded mind.  It’s jam packed with concepts that we are going to need.  To some people it will be scary looking, having too much going on at once.  But consider it to be a map.  We can all read maps.

Our map is a necessarily simplistic guide to the territory called mind.  It’s also a way to anchor a vocabulary that will allow us to explore that territory in detail without having to deal with so much of the technical language used by the sciences and philosophy of the mind.   The map refers to models made by different experts with various purposes, so some things that might be implied by it will doubtless be wrong, or in expert eyes, incompatible.  And, as us nerds sometimes say, wrong for multiple values of wrong.  However, like any good map, it is replete with interesting places to stop off and visit.

What’s in Your Ego Tunnel?

Let’s start in the middle. There’s a circle made of a wavy blue line.  This is the surface of Thomas Metzinger’s Ego Tunnel[1].  Each one of us is enclosed inside of our own simulation of reality, which Metzinger likens to a tunnel, because all that we experience is confined to that tunnel. It’s as if we are moving through the tunnel while over time our experience changes. He might have picked another metaphor: a jail cell (too restrictive) or a cave, but that’s Plato’s metaphor and so might be confusing.  Metzinger implies that the walls of the tunnel (the “surface” in the diagram) are like a movie screen upon which our model of reality appears.  But to whom does it appear?  In the last chapter we called it the “I”, a term that goes back to William James.  But something like it has been called other things, including the ego, the illusory self, and pure awareness.  Let’s call it the I* (think “eye-star”) just to keep it distinct. The I* in Figure 1 is a gray oval, but we think that it doesn’t have any real physical boundaries.  Research says that it is not a fixed part of the brain, but some kind of fluid, ever-changing process that provides us the illusion of a point of view.

Thinking about that “point of view” can lead us back to misunderstanding.  The error goes all the way back to René Descartes, who said that someone must be perceiving what our senses bring into the mind.  He described the perceiver as a homunculus, a little man inside your head.  Now generations of college students have been told that old René was dead wrong, because how could the homunculus perceive anything unless it had another homunculus inside it?  And then what about the third little person inside the second one, and so on “all the way down”?  The problem is that once we walk out of the classroom our experience says that someone is home inside of us, that we have, or are, a Self.  So all the philosophy and research disproving the existence of a self is hard to grasp, to put it mildly.

Some who hold to the illusory self theory still find the I*/Me* distinction useful, where the I* is the pure, moment-to-moment awareness and the Me* is the observed, ongoing content of our internal lives.  Let’s call it the “Jamesian Self” model, since William James stated the modern version of it[2].  There is supporting evidence for it from a number of directions.

Self Talk.

There’s a hint about the Jamesian Self in everyday life. Sam Harris reminds us that we talk to ourselves.  He notes that once children start to gain language, we hear them engaging in long monologues. The function of this might be to practice speaking out loud, but to whom are they speaking?  One possibility is imaginary friends, a subject to which we shall return later.  For now it’s worth asking, what would be the beginning of an imaginary friend except some part of your Self?

Talking to yourself, out loud or not, is common enough throughout life.  You might announce that “I found it!”, “That was tasty”, or “I need to get going.”  If in fact someone else hears us, we can be somewhat embarrassed, depending on what was said and whether we know the hearer.  These announcements seem to be part of a private conversation.  Could it be the I* talking to the Me*?  No, we have assumed that the I* is a pure observer, so it is mute.  Maybe “self talk” is the Me* reporting to the I*.  This is all bound up somehow with what we know from people with split brains, that language issues from the left hemisphere.  That also is a story for later.

Meditation Can Isolate the I*.

And every morning we are chased out of bed by our thoughts. — Sam Harris, Waking Up.

Sam Harris has pointed out that the traditional rationales for meditation often come with medium to huge doses of religious ornamentation, but that the practice of meditation, while difficult, gives reproducible results.  Internally its goal, and its result when successful, is the isolation of awareness itself (the I*) from the thoughts and perceptions (the Me*) of which we are aware.  Meditation has externally measurable effects on brain function and body physiology.  Meditators have been in great demand by researchers for years now, as serious (”respectable”) interest in consciousness swelled like some kind of economic bubble.

Research has found that meditators produce strong oscillations in their gamma band brain waves.  These waves apparently reflect how the brain synchronizes data coming in from different senses with slightly different delays.  In that way it can meld sensations together, so that, for example, you would both hear someone talking and see their mouth moving at the same apparent time.  The intensity of these waves is higher when meditators think that their meditation is deeper.  So perhaps gamma band synchrony is a big part of the moment to moment perceiving I*.  Studying meditators allows scientists to see the synchrony.  Outside of meditation, the brain activity of the ever busy, chattering Me* probably obscures our ability to externally measure the simpler process of the I*.

The lore of meditation says that there are levels at which the practitioner starts to perceive the universe directly, as a kind of ultimate reality.  This goes beyond our scientific understanding at this time.  Nothing we know from external study would suggest that the brain, no matter how synchronized its internal workings might get, would be able to sense anything except through the sensory equipment (eyes, ears and so on) that we all agree that we have.  Thus current mainstream theory sees no way for us to bypass those senses and “get under the skin” of reality to perceive it more like it “really is.”  The same unanswered question applies to some experiences from psychedelic drugs.

There are of course plenty of thinkers who already offer explanations, and perhaps some of these will provide a spark for a new consensus when mainstream science and philosophy become ready to tackle the next level.  On the other hand, maybe the final understanding will somehow arrive without destroying the current consensus.  The current Dalai Lama, who might be said to speak for meditators in the same way that the Pope speaks for pray-ers, has said out loud, in public, that maybe even the highest forms of consciousness must depend on the physical brain.

The Experiencing and Remembering Selves.

Research from a Nobel prize winner in Economics is a surprising fit to the theory of a Jamesian Self.  Daniel Kahneman, a cognitive psychologist and not an economist, won the prize in 2002 for work showing that people do not make decisions in the rational, self-interested way that had been assumed by economists.  These days Kahneman gives TED talks[3] about a much bigger topic: how a division between an experiencing self and a remembering self affects how we evaluate all parts of our lives.  This division helps explain many puzzling cognitive biases (thinking and judgement errors) that we all have.  Few researchers have earned the right to have a theory this broad be widely accepted.

Kahneman starts his story with a classic description of the “present moment” aspect of conscious experience.  We know that our sense of what is “now” lasts about 3 seconds.  He dramatically points out that we have about 600 million of these moments in a lifetime and yet there’s a sense in which they all flow away, lost to us like a tear dropped in a river.  (The arithmetic roughly checks out: 600 million moments of 3 seconds each is about 1.8 billion seconds, or some 57 years.)  As they are happening these moments are apprehended by what he calls the experiencing self.  We may naively think that we remember important ones of these moments as they were at the time, but many experiments show this is not so.

Earlier I wrote about Julian Jaynes showing us graduate students that our simplest memories, such as when we went swimming, were not recalled in a form that reproduced what we actually experienced when we swam.  Kahneman’s theory also says that memory is handled by a remembering self that summarizes what happened in ways that are definitely not the literal truth.

One of his classic studies involves the memory of pain experienced by (voluntary) immersion of a subject’s hand into cold water.  In a typical version, subjects got a fixed period, say 60 seconds, at a painful temperature of 57 degrees Fahrenheit.  In a second trial they got the same 60 seconds, continuous with another 30 seconds during which (unknown to them) the water had been warmed slightly, to 59 degrees.  Only 7 minutes later, all were asked which trial they would be more willing to repeat.  The idea was that they would be more willing to repeat the one that was remembered as less painful.  Eighty per cent of them preferred to repeat the longer trial, the one that ended with warmer, less painful water.  This is even though the longer trial contained, moment to moment, all the cold of the shorter one plus 30 more seconds of still-unpleasant water.

Kahneman did this and other studies, including ones of a naturally painful medical procedure, to show that, basically, memory is biased to emphasize the more recent experience in an event.  If a painful episode ends with a decrease in discomfort, or even some reward (as when we give a treat to a child or pet who has had to endure an unpleasantness), then the episode will be recalled as less painful than it actually was.  Note that in the cold water study this happened only 7 minutes or 140 “now moments” after the painful experience.
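
For the record, here is the peak-end logic in miniature.  Only the rule itself, remembered pain as roughly the average of the worst and final moments, is Kahneman’s; the pain scale and numbers are invented.

    # Peak-end sketch of the cold-water result.
    def remembered_pain(samples):
        return (max(samples) + samples[-1]) / 2  # peak-end rule

    def total_pain(samples):
        return sum(samples)  # what a "rational" pain integrator would minimize

    short_trial = [8.0] * 60               # 60 s of cold water, pain ~8
    long_trial  = [8.0] * 60 + [6.0] * 30  # same, plus 30 s slightly warmer

    print(total_pain(short_trial), total_pain(long_trial))            # 480.0 660.0
    print(remembered_pain(short_trial), remembered_pain(long_trial))  # 8.0 7.0

The longer trial has strictly more total pain but the milder memory, which is why subjects preferred to repeat it.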

In study after study, the separation between memory and experience seemed to be very strong.  Yuval Harari[4], in recounting Kahneman’s theory, prefers to say “narrating self” instead of remembering self, apparently because memory is often likened to story creation.  Harari points out that in our self narration we often create stories of the future.  These are plans, everything from what to do to get the kids in the car, to New Year’s resolutions.  Plans are made by the narrating self, but for them to get executed they have to engage the experiencing self.  As Harari puts it, just as the narrating self cares not for what really happens to the experiencing self, the experiencing self is not bound to the plans of the narrating self.  Thus, many times, immediate experience overrules those plans.  We don’t eat the right things or we don’t maintain our cool under stress.

Kahneman’s theory of the experiencing versus remembering selves is based on findings of cognitive biases and failures of plans.  There are plenty of other psychological explanations of these, but Kahneman’s theory of the two types of self fits them into a very broad picture.  How much alike are our Jamesian (I*/Me*) Self theory and Kahneman’s theory?  Certainly the experiencing self seems very much like the I* of pure awareness.  Note however that to say the experiencing self cares for something, or “must be engaged” for something to happen, goes beyond an I* of pure awareness.  Still there are philosophers who identify Kahneman’s experiencing self with the I* that is the subjective side of the Jamesian Self.

Before going on to the Me* there is one other tidbit about the I*.  Metzinger actually gave us an evolutionary adaptive reason why the I* needs to be so focused on the present moment.  He says that it’s the present moment that carries information about immediate risk (such as an approaching predator, falling rock, falling stock price).  Therefore we have to be aware that we are in the present moment, and that the moment is more real than “our memories and fantasies” being entertained by the Me*.  So it’s no wonder that Kahneman found that, in a manner of speaking, the I* (experiencing self) does not give a cr*p about anything but right now.

We next have to ask: how does the remembering/narrating self compare to the Me*?  To answer this we have to be clear about what aspect of memory is involved.  Our concept of memory includes everything from piano solos to phone numbers to faces and stories of the past.  The Me* is all about episodic memory, which is our ability to recall personal experiences of an autobiographical type; essentially, what happened to us or what we did.

We have been talking about episodic memory when trying to understand Self persistence (previous chapter) and now, self narration.  The Me* is the part of consciousness responsible for the continuing story of our lives.  Therefore the Me* would have to involve the processes of retrieving, using and forming episodic memories.  This is a big part of what Kahneman’s remembering self would have to do. So Kahneman’s theory seems to support the Jamesian Self that we are using in Figure 1.  The I* is his experiencing self, and the Me* has as part of its job what the remembering self does.

(A self disclosure: I just stood up and said out loud, without premeditation: “This is just what I wanted to do.”  Nobody else here but me and the dogs.  To whom or what was my utterance directed?  By whom?)

Split Brains and Narration.

Another strong research program fits the Jamesian Self idea:  five decades of work on people whose brains have been surgically divided in half, the so-called split brain studies.  Those and related studies have shown that on the left side of the brain there is a “left-brain interpreter[5]” that produces narrative interpretations of why new information fits with what a person already knows.  This theory is from Michael Gazzaniga, who helped originate the split-brain research and remains its most important figure.  The interpreter is always on duty when we are awake because new things are always happening.  This sounds like the Me*, and indeed people have identified[6] the interpreter with the increasingly common idea that there is a narrative self[7] that gives our identity continuity by narrating a sort of never-ending story of our lives.

Gazzaniga and others find that the right side of the brain, on the other hand, helps to ensure that such interpretations conform more closely to facts than to beliefs, and is necessary for making morally sound interpretations.  Most important for the I*/Me* distinction, however, is this, from Gazzaniga[8]:

“Our right hemisphere behaves more like the rat’s. It does not try to interpret its experience to find the deeper meaning; it lives only in the thin moment of the present.” (italics added)

This quote suggests that at least a good chunk of the experiencing self occurs in the right hemisphere.  We can’t say that the experiencing self is confined only to the right side because plenty of split brain experiments show the left hemisphere able to report on its experience of sensory events.

Narration as a mechanism of the self seems pretty widely accepted.  For example, the highly regarded and popular philosopher of the mind Daniel Dennett wrote about[9] “The Self as a Center of Narrative Gravity”.  He develops a detailed metaphor that the self is a fictional character, like an ongoing autobiographical novel in your mind.  As one of many these days who say that the self is only an illusion, he likens it to another abstract concept, the center of gravity.  The center of gravity of a worldly object is an abstraction, not as real as the object itself, but nonetheless useful in daily life.  You implicitly calculate a center of gravity when you want to set your coffee cup down near the table edge without the cup tipping over. It’s a center of gravity mistake when you swerve your tall truck to avoid something, and the truck rolls over. Dennett calls the self a narrative center of gravity, in that all your internally generated stories revolve around that imaginary self.

Use of the term narrative means we are talking about the realm of language.  In the sciences of the mind it is not totally settled whether you can be conscious without language.  Some of us seem to think mostly in words, while others excel at “visualizing” stories of the past or future.  Steven Pinker says[10], “Consciousness surely does not depend on language. Babies, many animals and patients robbed of speech by brain damage are not insensate robots; they have reactions like ours that indicate that someone’s home.”  However, his examples seem to be about the awareness possessed by the I*.  The Me* is the story-maker.  Pre-verbal infants and animals might be able to create a self narrative without using words, maybe analogous to a stick-figure cartoon with only non-verbal sounds.  Dreaming dogs look and sound like they are doing this.  Our own dreams are often eerily lacking dialog.  For now let’s say that the “language” of consciousness might be in some proportions verbal, pre-verbal or non-verbal.

However, consciousness science has not clearly decided whether the I* is only an observer, as it is when one is meditating, or whether it is also responsible for decisions to act.  I hinted at this when talking about Kahneman’s experiencing self, and how it can cause actions contrary to the plans of the remembering self.  Experts often cite mindfulness meditation as isolating the pure I*.  But when they talk about controlling action (which the philosophers, as we shall see later, call “agency”) they cite the I* as its source, apparently because action always occurs in some present moment.  But this doesn’t make any sense.  Everything in the conscious self occurs in the present moment; that’s one of the defining characteristics of consciousness.  It makes more sense to stick with the proven existence (via studies on meditation) of the pure awareness that we call the I*.  This leaves action/agency to the thinking, narrating, remembering Me*, which processes the information needed to decide about actions.  There are fMRI studies that are starting to localize all these functions in particular parts of the brain.  Hopefully such studies can eventually resolve the question.

It seems clear enough that what we have called the Jamesian Self of the I* and the Me* is pretty well accepted.  Before moving on we have to carefully interpret this narrative thing.  Ever since the postmodernist social critics, the word narrative has been applied to many things.  Among those who study the self there are those who think that the self narrative is a life story that we try to keep consistent.  Some say that a self narrative is the only way to live a life of which you would be proud: the old “unexamined life is not worth living” idea.  Skeptics say that neither of these things is true, that, for example, some people live more in the moment, so that their past experiences are only implicit (usually not recalled to memory) in how they affect conduct and thinking.  According to the philosopher Galen Strawson[11], people just differ in how much they think about the past.  Some critics think that the Me*’s narratives are really short, trivial, and disconnected: on the order of, I’m hungry, so I should decide what to eat.

For our purposes I think we should be flexible about the scope and continuity of what we think is self narrative.  For different people, or a person at different times, the extent to which they try to connect the present to their overall life story is going to vary.  The hungry person might decide to eat just because, or because there won’t be time later, or because the mood changes from being too hungry have led to regrettable social interactions, or because Dad always told the child you once were to “clean your plate”, or because food should not be wasted when some people in the world are going hungry.  Our stories may be short and disconnected, or thoughtfully grounded in who we are, or anything in between.  None of these differences seem pertinent to our map of the mind.

In this and the previous chapter we have seen a number of theorists and theories that map well to the Jamesian Self of the I* and the Me*.  Ken Wilber calls an idea that is generally accepted an “orienting generalization”: something that is true enough to use in understanding other things.  Now that we accept the Jamesian Self as an orienting generalization we can keep on filling out our map around that center.


[1] The Ego Tunnel: The Science of the Mind and the Myth of the Self. Thomas Metzinger, Basic Books, 2009.

[2] Principles of Psychology, William James, 1890.

[3] https://www.ted.com/talks/daniel_kahneman_the_riddle_of_experience_vs_memory

[4] Homo Deus: A Brief History of Tomorrow. Yuval Harari, Harper, 2017.

[5] The Interpreter within: the Glue of Conscious Experience. Michael S. Gazzaniga. http://www.dana.org/Cerebrum/Default.aspx?id=39343

[6] Philosophical conceptions of the self: implications for cognitive science.  Shaun Gallagher. http://ummoss.org/gallagherTICS00.pdf

[7] The Self as a Center of Narrative Gravity. Daniel Dennett. In Self and Consciousness: Multiple Perspectives. Hillsdale, NJ: Erlbaum, 1992.

[8] The Interpreter within: the Glue of Conscious Experience. Michael S. Gazzaniga. http://www.dana.org/Cerebrum/Default.aspx?id=39343

[9] The Self as a Center of Narrative Gravity.  Daniel Dennett.  In Self and Consciousness: Multiple Perspectives, F. Kessel, P. Cole and D. Johnson, eds, Erlbaum, 1992.

[10] The Brain: the Mystery of Consciousness. Steven Pinker. Time, 1/29/2007.

[11] Against Narrativity. Galen Strawson. Ratio (new series) XVII, 4, December 2004.

 

Artificial Harm


There’s been a lot of talk lately, from big names like Musk, Hawking, and Gates, that humanity might face some future threat from the intelligent software systems, aka artificial intelligences, that we are likely to build.  Kevin Kelly, a longtime pundit on things digital since co-founding Wired magazine, just published an essay saying that these fears are way overblown.  He listed 5 common assumptions that people make about the growth of superhuman AI, claiming that there is no evidence supporting any of them.  Therefore he thinks that we might be waiting superstitiously for super AI like the 20th century Melanesian cargo cults waited fruitlessly for the WW II cargo planes to return with trade goods.  His article is worth reading, but in case you don’t read it, the 5 unsupported assumptions are these:

  1. Artificial intelligence is already getting smarter than us, at an exponential rate.
  2. We’ll make AIs into a general purpose intelligence, like our own.
  3. We can make human intelligence in silicon.
  4. Intelligence can be expanded without limit.
  5. Once we have exploding superintelligence it can solve most of our problems.

People jumped all over this, many of them taking the position that Kelly’s arguments were straw men.  I was shocked to realize that my recent studies for this blog’s book project actually gave me an informed opinion.  I posted same, and repeat the opinion here below.

[Kelly’s post is …]   Right in many ways, but wrong on the risk of the superhuman.   Here are two risky scenarios where AI exceeds the abilities of either single humans or groups, without ever needing to be the superhuman artificial general intelligence straw man.  Either scenario, or both, could be imminent. [ I meant imminent in a historical sense, but probably not within the next decade. ]

[ My first point below relates to Kelly’s argument that intelligence alone can’t increase knowledge very much without having a way to do research and engineering in the real world.  ]

(1)  Yes, new knowledge often requires real world experiments.  However, models and simulations can and do help zero in on which experiments to do.  A better, faster facility for gathering and integrating existing knowledge will do better at picking the simulations to try.  Sims are already an existing strength of current silicon systems.  Give such a system effectors for doing experiments (there are many ways to do this, including help from cooperative or coerced humans), and it learns more about the real world.  Because it would probably not have human cognitive or emotional biases, the peer review that we use to eliminate those errors would not be needed.  It could do this faster than any team of humans, and with a focused agenda that might be kept secret from us.  The resulting knowledge is power, which might be wielded to our detriment by any system whose goals don’t align well with ours.

(2)  Yes, the strongly established society of mind concept means that our ability to solve problems partakes of a variety of knowledge-extracting abilities that we lump under “intelligence”.  We understand very poorly how this gets coordinated in a single human mind.  But people are working on it.  And we likely understand better how problem-solving coordination gets done in groups of human minds.  For an AI there is no difference between the group and individual situations [ an AI can easily be a “group” ], so principles derived from either or both will help it.  If we give strong coordinating power to an AI with any fruitful set of intelligent abilities, then what it can do will have emergent properties above and beyond the mere sum of its component abilities.   Such a system could emerge at any time and, lacking the functional equivalent of moral reasoning and a conscience, could do us damage.   It would not need new knowledge, just an understanding of Machiavelli and access to the internet.

UNITY: Coherence of the manifold self (5)

Third Person Narrative

Experts now don’t deny the importance of memory in maintaining our personal identity over time, but they do not find memory to be any more sufficient for the purpose than is the mere persistence of the body, the human organism itself. Philosophers and neuroscientists alike find it necessary to fill the gap by joining the social scientists, who have long asserted, while pounding their lecterns, that identity is a social construct. We can skip their theories about the social origin of identity. It’s enough just to look at the external facts.

Take the above-mentioned interruptions in memory. If we get any help at all in filling the gaps, it will come from other people (“You should have seen what you did just before you passed out.” “I remember when you were just two and a half and you looked up at me and said …”).

Culture surrounds us with reminders, talismans, and even enforcement of our identity. This starts very early. We all know the bitter fight about when, in the period prior to birth, a nascent human becomes a person. The usual pattern is that a family prepares the way for a baby, both by setting up material possessions for its care and by announcing to their social circle that the new person is coming, and possibly its sex and name. Then birth certificates nail down who we are, who our parents are, and where we entered the world. We become a recognized person with legal rights. This document is drawn on throughout life to validate our identity in new contexts. It is also common for hospitals to store a part of a newborn, in the form of cells from a cheek swab or blood from a heel prick. DNA findings at this time can reveal health conditions, knowledge of which might need to be retained for life. Parents might decide to store umbilical cord blood for stem cells that can be used to repair the body of this new person indefinitely into the future.

The end of life is interesting because it is not the end of identity, although for most people, at least up until now, identity gradually gets unwound, making a smaller and smaller cultural footprint. Other people may memorialize us shortly after death, and collect memories and artifacts that demonstrate the continuity of our identity over time. A few more accomplished or notorious people have their lives and deeds more or less immortalized. An open question today is whether digital culture might grant a longer post-mortality to the non-famous, particularly people who are active on social media. Certainly those media are starting to move in that direction as more of their patrons die.

As we play our different roles in life — parent, student, customer, worker, boss, citizen — each of the corresponding constituencies wants to mold us, often pulling us in conflicting directions. But none of them wants our continuity as a person to change, and indeed they reinforce it over and over. We are always showing up, wearing the badge, signing our work. If we go away on vacation, those identities will be waiting for us, eagerly wagging their tails in greeting, or cracking the whip to catch up and meet deadlines. When he was inventing the modern theory of identity, John Locke said that it was “a forensic term, appropriating actions and their merit” as well as “all the right and justice of reward and punishment”. In other words, our identity grounds our accountability to society. Social science now would add that it also goes the other way: our accountability to others is a big contributor to our identity.

Society (at its whim, of course) punishes falseness of identity. Mistaken and stolen identities were a big thing in Elizabethan times, amid a historical rise in individualism. Think of all the mistaken identities in Shakespeare’s plays. Only ten years before Elizabeth I became queen, the culture of the time was rocked by an archetypal case of identity theft.

Martin Guerre disappeared from his Pyrenees home in 1548. Years later another man showed up and took Martin’s place, living with his wife and family as if he were Martin. Eventually suspicions mounted, and the impostor’s identity was challenged in the courts. Suddenly the real Martin Guerre showed up. The impostor, who was actually from a neighboring village (it was a small world back then), was hanged. The story intrigues so much that it has been rehashed many times, as fact and in fiction. We wonder — how could his wife and kids not know? This is a puzzle because we know, down to our bones, how embedded our continuing identity is in the minds of those close to us.

Society still enforces its interest in our identity with occasional harshness. Some impersonations are felonies; others, if they fail, just prevent you from buying booze. Every time you are arrested, the cops do their best to nail down who you really are. These days it’s woe to you if your faked passport is detected.

Turning back to the inward Self, what circumstances of social isolation would cause drift that is significant enough to erase identity? We may not know enough to predict this, but we are fascinated by stories of hermits in the woods, being washed up on a desert island, chained in a dungeon, and the like. The common belief is that people have to exert extreme mental discipline to come out the same person at the other end. This at least reflects our conviction, often implicit, that our identity is maintained by contact with others.

Unlike the Self, which is internal by definition, identity is two-faced. There’s your social identity, visible to the outside world and tagged by various markers (debit cards, diplomas), artifacts (your clothes, your money), and your narratives, spoken and written. Other people see you as friend, mate, rival, voter, and you internalize this, owning it or resisting it as you struggle to build and harmonize your internal identity. The Self reflects identity back to the outside as you attempt to reinforce the identity that you want others to believe. This was a big emphasis in twentieth century social science: people had “identity crises”. Then the postmodernists said that it was all out of control, that the pressures were too great, the influences too pervasive, so that identity was “fractured.”

Technology has given us new channels through which we can project our image. For all too many, the channel flows inward as “celebrities” fight to capture their attention. Celebrity worship now has a new name, parasocial relationships, used by the marketers to normalize the practice and its cynical manipulation. The rest of us are encouraged to seize the same social media channels and promote ourselves. It’s an antidote to twentieth century dependence on one’s employer/job for identity. But these days not only is there no bad publicity, there is also no bad attention, so drivel and shock multiply like maggots in meat. The best teachers to counter this trend will be those who show how to use the medium to present yourself with authenticity, which allows genuine reinvigoration and reinforcement of identity.

UNITY: Coherence of the manifold self (4)

First Person Singular

What do you see when you turn out the light? I can’t tell you, but I know it’s mine.
— The Beatles

You strive and sweat to maintain your physical body. It in turn protects and feeds a mind that is like a whole ecology inside your head, with myriad actors, organisms major and minor. Is there anyone in charge? If not, what accounts for our feeling that we are distinct, durable entities?

Many ancient mystical traditions say that our essence is a non-material soul. Plato made this a concept for all future philosophy when he said that our immaterial soul was indivisible, a whole without any parts. Anything that has no parts cannot decay, so the soul was eternal, without beginning or end. Hundreds of years later this idea was still so popular that it was adopted by third century Christian authorities as dogma. The Church, as well as many other mystics and thinkers, still believes in a soul.

Not, however, the Buddhists. They say that the existence of a first person, perceiving self is an illusion, and there is also no unchanging, permanent thing, material or not, that could be called a soul in humans or other living beings. Indeed they believe that our constant embrace of this illusion is the source of all suffering. The sciences of the mind have at least come to largely agree with the illusory aspect.

There is no discrete self or ego living like a Minotaur in the labyrinth of the brain. And the feeling that there is — the sense of being perched somewhere behind your eyes, looking out at a world that is separate from yourself — can be altered or entirely extinguished.
[Waking Up: A guide to spirituality without religion. Sam Harris, Simon and Schuster, 2014.]

Some scientists also embrace the Buddhist view. Nearly all scientific work on meditation, for example, uses the ancient Buddhist technique of vipassana, usually translated as mindfulness. The latter term is also widely used and misused (watered down) in pop culture. Sam Harris, the atheist cultural gadfly, is a lifelong vipassana meditator and advocate of the practice. Theoretical biologist Francisco Varela (mentioned in part three of this chapter) co-founded The Mind and Life Institute with the 14th Dalai Lama to foster dialog and research between scientists and contemplative practitioners.

Speaking of illusions, the soul concept does morph to reappear in some current accounts that say the brain is not the substrate of the mind, that there must be something else, some other mysterious thing involved. Overwhelmingly, scientists don’t buy that. There is too much evidence that specific mental functions correlate with measured brain activity, and that mental functions change with brain lesions. Ditto for mental changes due to direct artificial stimulation of the brain, either with electrical current or psychoactive chemicals. This sort of evidence might have been on the Dalai Lama’s mind when he allowed that even the highest form of meditative awareness is dependent on (i.e., has as a material cause) the activity of the brain. For a religious leader this is a shocking break with deep tradition.

What’s new about the new sciences of the mind is that it is not as common as it once was to be a “reductionist”: to claim that the mind is “nothing but” brain activity.  More and more, the lab people and the theory people are writing about the contents of consciousness and how they can be studied as mental phenomena. These two types of boffins now often work together. There seems to be a broad understanding that the correlation between mental activity and brain activity is a special realm, where each side can inform the other’s work.

Even forty years ago the mentalists and the physicalists were not speaking. Of course many still aren’t. The change might have happened, in part, due to a sort of hedge philosopher named Ken Wilber (for a readable intro, see A Brief History of Everything [A Brief History of Everything, Ken Wilber, Shambhala Publications, Inc., 2000.]), who started writing in the mid seventies. One of Wilber’s main themes as an integrative philosopher/psychologist is that mental experience is just as real as the tangible concrete things that we perceive outside of ourselves, objective and measurable, the traditional food of science. Unlike some spiritual guides, who eschew the tangible as uninteresting or as a useless illusion, Wilber believes that we should study the mental and the physical as equally valid sources of knowledge for personal and social development. Citation of Wilber in popular books on the mind is hard to find. He is barely mentioned (one line) in the Internet Encyclopedia of Philosophy. He apparently is ignored by mainstream scholars, yet his 25 books have sold well enough to be translated into 30 languages. It’s hard for me to believe that he had no influence, direct or indirect, on the respectability of mental phenomena in current science.

Near death experiences and round trips to heaven remain a bone of contention with those inclined to believe that “there must be something else” to the mind, but their examples do not require that conclusion. Suppose someone has a flat EEG (“brain wave”) and their heart has stopped. Are they dead yet? Initially no, although they might be soon. But until they are dead, some metabolic energy is still available inside cells, which will continue to try to function. That includes brain cells, which one might reasonably assume will have enough energy for a stunted level of functioning. Such a low level of activity might not add up to a measurable electrical signal (the EEG) at the outside of the head. As for anyone who has been in a vegetative state and lived to say they visited another realm, well — they actually lived, didn’t they?  Therefore their brain was keeping the automatic body processes going.  It was there, it was busy, and it could have hallucinated as well.  There is also hard evidence.  Steven Pinker reported back in 2007 that “… a team of Swiss neuroscientists reported that they could turn out-of-body experiences on and off by stimulating the part of the brain in which vision and bodily sensations converge.”

Pinker’s article in a popular magazine is a concise and accessible review for anyone wanting to catch up on the sciences of mind and the attempts to understand consciousness. The territory of these studies is tripartite.

All Gaul is divided into three parts.
– Julius Caesar, The Gallic Wars

First of all, the mind is not all conscious. There’s a huge amount of computational work that is unconscious, therefore often called automatic, but that can be shown to exist logically or in experiments. The unconscious mind is constantly and quickly piecing together data about what we see, hear and touch, coordinating the contraction of our muscles and assessing our position in space. There’s even well-developed research showing that many decisions we make actually occur neurologically before we are aware of consciously deciding them. In other words, conscious decisions are rationalizations after the fact, not deliberate causes of action.

Stuff like this is what makes the idea of free will seem like a farce. Inside each of us there is a big unconscious machine, chugging away based on inputs from our memories and the world around us, coming up with what we must inevitably do or think. What would widespread understanding of this mean to a world that needs, desperately, equally widespread moral guidance of behavior?  Pinker quotes Tom Wolfe on the consequences of science killing the soul: “the lurid carnival that will ensue may make the phrase ‘the total eclipse of all values’ seem tame.”  But Pinker thinks that the flip side is that widespread understanding of consciousness will increase empathy, reducing our ability to demonize, dehumanize or ignore other people. My guess is that it will depend on how new knowledge of consciousness is taught, and whether the knowledge itself gets vilified like other inconvenient truths have been of late.

The conscious part of the mind, these days usually called “the Self” (capitalization is part of the term), has been understood in a number of different ways. Typically it’s considered to have two aspects (our other two “parts” of the mind), but experts do not all agree on what they are. However, the modern origin of all such divisions of the conscious mind seems to come from an analysis by the nineteenth century American philosopher William James. He was an early practitioner of the direct introspective analysis of mental contents (“stream of consciousness” is his term), a practice that the behaviorists eclipsed in America for much of the twentieth century.  James said that on the one hand there’s the subject, the thinking part, the knowing “I.”  On the other hand there’s the “Me”, which is an object, the known part: what the “I” knows about itself.

For Bruce Hood [The Self Illusion: How the Social Brain Creates Identity, Oxford University Press, 2012] and others, the “Me” is a narrative, the running story of a person’s life by which the “I” maintains the continuity of its identity. Sam Harris describes the difference this way. When you wake up from sleep, you initially are just aware of sensations: groggy, bad taste in your mouth, need to pee, etc. That’s the “I”. But the “I” quickly starts thinking about the “Me” which just had a stupid dream, has to get ready for work soon, remembers a social conflict to resolve, and wants to eat more healthy food today, but not for breakfast.

You could try to say that the “I” is just a perceiving machine, what Thomas Metzinger and others think of as the moment to moment apprehender of conscious reality, whose point of view is only a second or two wide. But during those moments it is also thinking about the “Me”, spinning those tales of the Self, dredging up memories, and making decisions. Clearly the subject/object distinction is at least a little muddy.  In defense of the difference, note that the “Me” addresses the question of the continuity of personal identity, while the “I” seems to reflect the uniqueness of identity. This is because nobody — but nobody — else has access to your “I.”  It’s unique and it’s yours alone.

Consciousness may be one of the most challenging topics of all time. One positive aspect is that the thing itself is the very definition of easily accessible. It’s with you virtually all the time. You are not always conscious, but note that even dreams are conscious. The serious obstacles to study are: your consciousness is not accessible to me, and none of us can step outside of our consciousness to examine it as a process. We can only examine what the philosophers call the contents of consciousness, the stuff that happens in that private world.

In 1994 a young denim-clad, decidedly non-stuffy Australian philosopher, David Chalmers, blew the minds of a gathering of consciousness luminaries in Tucson.   He said that research and analysis would let us eventually understand much of the mind and the brain, but there was one and only one hard problem: how each of us has that singular, first person perception of our own world.  [Why can’t the world’s greatest minds solve the mystery of consciousness? Oliver Burkeman, The Guardian, 2015.]  This issue had been posed twenty years earlier by another philosopher, Thomas Nagel, who noted that his peers often ignored the question of why it is like (i.e., it feels like) something to be conscious — why does subjectivity exist? [What is it like to be a bat? Thomas Nagel, Philosophical Review, 1974.]  But in ’94 the time was right.  Soon every expert interested in consciousness was slapping themselves on the forehead, realizing that this was indeed (capitals not optional) the Hard Problem of Consciousness.  Hardly a learned or popular discussion of the mind since then has failed to make note of the Hard Problem.

The most lucid and engaging account I have found of the Hard Problem and the related concept of the Self is Thomas Metzinger’s The Ego Tunnel.  [The Ego Tunnel: the Science of Mind and the myth of the self. Thomas Metzinger, Basic Books, 2009.]  He starts by noting that reality is incomparably richer and more complex than we perceive. This is not necessarily a spiritual or drug-induced idea, but is just what we have been able to infer from the study of physics. In our environment there are no colors or objects as such, only differing activities and densities of innumerable particles, sparking in and out of existence at unbelievable speeds. That we see, hear and touch things is because our senses, indeed any animal’s senses, make a very simplified model of our immediate environment. This method of knowing the world is not just an option, a choice made by evolution from among other methods.  No, Metzinger points out that for philosophers knowledge itself just is representation. What evolution did choose for our model of reality is still extraordinary in its apparent detail and in how we are able to use it.

Metzinger calls it the phenomenal self model, or PSM. The content of the human PSM also includes a model of our Self as a representational system (i.e., as a “knowing” entity). The PSM contains not only our moment to moment perceptions but access to memories and a sense of location within our body. There is a focusing mechanism to reflect on particular memories or perceptions, so that we can run the model backwards, thinking about the past, and forwards, thinking about the future. The scope of the PSM thus encompasses both James’s “I” and “Me”. The trick of consciousness is that it is “transparent”, which is philosopher-speak for the fact that we see right through it, without any access to the fact that it is a model. This transparency creates our first person point of view, which Metzinger and others call the “Ego”. When we open our eyes, the world is just there, immediately, without any mental access to the underlying truth that it is a continuous, real-time construction.
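For readers who think in code, the transparency idea can be held onto with a toy program. This is strictly my own sketch of the analogy, not Metzinger’s formalism; every name in it (PhenomenalSelfModel, Ego, render) is invented for the illustration. The one point is that the consumer of the model receives finished contents, with no handle on the construction process:

```python
class PhenomenalSelfModel:
    """Builds a simplified 'world' from raw physics-like inputs."""
    def __init__(self):
        # Hidden reality: wavelengths in nanometers, never seen directly.
        self._raw_world = {"wavelengths_nm": [620, 530]}

    def render(self):
        # The construction step: raw measurements become named colors.
        names = {620: "red", 530: "green"}
        return {"colors": [names[w] for w in self._raw_world["wavelengths_nm"]]}

class Ego:
    """The first-person view: it receives only render()'s output, and
    nothing in that output marks it as the product of a model."""
    def __init__(self, psm):
        self.experience = psm.render()  # arrives simply as "the world"

me = Ego(PhenomenalSelfModel())
print(me.experience)  # {'colors': ['red', 'green']}, no trace of the modeling
```

The transparency lives in that last line: the Ego can inspect its experience endlessly without ever finding the render step that produced it.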

This Ego is the central mystery of the sciences of mind: how do we explain that patterns of neuronal firing create experience (the Hard Problem from a science point of view)?  We can find neural correlates of experience, but all agree that there is an “explanatory gap” about how the experiencing Ego can be produced from physical events in the nervous system. Quite a few experts subscribe to what is, perhaps jocularly, called “mysterianism”, the possibility that we will never know. That’s heady stuff from scientists. No wonder others are willing to step in and bring back the soul as an explanatory prop.

With this background let’s return to our concern about what creates the durability of personal identity. The beginning of modern thought on the subject is often traced to John Locke, the great British empiricist whose wide influence included the views on liberty held by the founders of the United States. Locke had the idea, revolutionary for the time (1690), that what sustains a person’s identity is psychological continuity, which he said was based on personal memories.

Psychological characteristics as the source of our continuity seem to folks today to be the obvious, common sense answer. Studies usually present subjects with some kind of story about brains being transplanted into another body, or some variation on the Star Trek transporter. Subjects are then asked whether, after transfer of psychological characteristics to some new body, the person has moved to the new body, and what this implies. People generally think that the person’s location will follow their memories to a new body. However, they may waffle if the story includes something about bad consequences to the body that was left behind. Common sense is not consistent, especially when presented with stories about mind or brain transfer that really are currently impossible. However, it’s easy to prove to someone that their body and brain are not sufficient for survival of their Self.  Just ask them if their Self would survive permanent coma or senile dementia.

As a philosopher Locke was looking for certainty beyond common sense. A man of his times, he still believed in the soul, but as the earliest Empiricist he wanted to ground his philosophy in facts of human experience. He claimed that, whatever the soul was, its experience of continuity was based somehow on remembering previous aspects of the person’s life.

Locke, who was English, did not live to receive the big smackdown of this idea, which came a few decades later at the hands of a truly dour-looking Scotsman, one Thomas Reid. The essence of his contradiction of Locke came in a simple parable. He imagined an elderly general who remembers a courageous charge he made as a young cavalry officer (this was getting personal, for Locke’s father was a captain of cavalry!). The young officer remembers himself as a lad, getting beaten for stealing apples. The general, however, has no recollection of the apple incident. By Locke’s criterion the general is the officer and the officer is the lad, so by transitivity the general must be the lad; yet, having no memory of the beating, by that same criterion he is not. Memory, Reid concluded, is not sufficient to sustain the enduring Self.

Reid’s example is just one of many, since we know numerous ways in which memory just cannot be the whole story. Most of us can hardly remember anything at all from before we were three years old. At the other end of life, memories fade or else are made into hash by dementia. Our psychological continuity is interrupted by dreamless sleep, by drunkenness, by being knocked out, by anesthesia, by coma, by disease, by brain damage, by psychological trauma, and by sudden drops into, and back out of, amnesia for no apparent reason.

Furthermore, research shows that memory itself is all too often a tissue of lies, a second-order construction only loosely based on our first-order model, the PSM. A common view is that memory is a narrative whose details are invented to summarize and make sense of things, not to record them in veridical detail.  I first ran into this at a seminar by Julian Jaynes, the psychologist famed for promoting a wild theory [The Origin of Consciousness in the Breakdown of the Bicameral Mind, 1976] that humans have only been conscious for about the last couple of thousand years. Jaynes said to us, “I want you to imagine, as best you can, what it was like when you last went swimming.” He then asked us whether we remembered (A) our conscious point of view, with things like eyes at water level, being wet, the feeling of moving and breathing in the water, or (B) an image from a point of view looking down at ourselves, a body in the water in the setting where we swam. For most of us it was, you guessed it, (B): an experience that we never had, but instead a fictitious mental photograph, a third person perspective “edited for clarity.”

UNITY: Coherence of the manifold self (3)

Material Causes

Aristotle said that explanations should include a material cause. By this he meant whatever substances are needed for a particular event or thing to happen. Suppose I ask the question, ‘What makes you a unitary personality and allows you to persist as a distinct individual?’  Those who are a little science-minded, or a little fond of the obvious, might say: “Well I’m enclosed in this body. It has a boundary between me and the world around it.” Good for you. That is indeed your material cause.

These days we know that our material uniqueness starts with our DNA, which is also becoming the ultimate ID badge. DNA is a necessary cause of individuality. It is not, however, a sufficient cause, because each of us also has a unique environment that affects how we develop. Note that regardless of how many writers try to dumb down the concept, DNA is not a “blueprint”. The closest analogy might be a computer program optimized over and over so that each piece of it is used in numerous and interacting ways. Discarded sections are left in but tagged as obsolete. Random parts are repeated and randomly mutated. Then the code is run through a compression algorithm to pack it down, and then it’s encrypted. Placed in a sack of nutrients, it somehow unpacks itself, creates its own interpreter, and runs its program, with constant feedback from the environment (the cell, the body, the encompassing society) affecting the execution and indeed modifying, through epigenetics, the program code itself. There are different outcomes for different tissues as well, while random changes to the working interpreters inside our zillions of cells continue throughout life. That’s how not simple DNA is.
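Since this account leans on a programming analogy, a toy program may help fix it in mind. What follows is a deliberately crude sketch of the analogy only, not of biology: zlib compression stands in for the genome’s packed density, and a set of “silenced” instructions stands in for epigenetic regulation. All names are invented for the illustration.

```python
import zlib

# The packed "program" (compression standing in for the genome's density).
PACKED_GENOME = zlib.compress(b"grow;grow;pigment;repair;grow;pigment")

def express(genome, environment):
    """Unpack the genome and 'run' it, letting the environment silence
    some instructions (a crude stand-in for epigenetic regulation)."""
    instructions = zlib.decompress(genome).decode().split(";")
    silenced = environment.get("silenced", set())
    return [op for op in instructions if op not in silenced]

# Identical genomes, different environments: different outcomes per tissue.
print(express(PACKED_GENOME, {"silenced": set()}))        # a skin-like cell
print(express(PACKED_GENOME, {"silenced": {"pigment"}}))  # a liver-like cell
```

Same packed code, two different expressed programs: that is the single point of the sketch.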

So how do we extract our unique identity from the DNA program? The theoretical biologist and Buddhist mystic Francisco Varela has perhaps the most poetic account. His concept is called autopoiesis: living systems self-organize so that the mechanisms of the system are encapsulated in a boundary that is created by those same mechanisms, and the boundary is what enables the system to function. You see the circularity, almost a self-referencing definition, there?  Well, it might go down easier with some concrete examples. Varela says that each person has three distinct but coexistent autopoietic systems: cellular structure, the immune system, and the nervous system. Cells of course have a membrane defining their extent, but they also get together and form the largest organ in the body, the skin. The skin is the bag that holds us together physically and acts as both barrier and transceiver to the outside world. So it clearly is one material cause of our physical integrity and persistence over time.

The second autopoietic system is the immune system. Its very nature is to define the difference between self and non-self, and to enforce that as necessary. While it uses the body as its territory, it also maintains a virtual boundary, repelling or disposing of invaders. It’s like Gandalf telling the Balrog, “You shall not pass!” (If only there were a non-copyrighted image to stick here!)

The third autopoietic system, the nervous system, has the brain as its extremely high-value, high-function core. Your brain is bounded by the cup of the skull as its primary physical protection. It also has a triple outer layer of membranes, as well as a complex of tightly fused cells around most of its blood vessels, the so-called blood-brain barrier. Penetrating these boundaries is necessary for direct computer-to-brain interfaces. This is why, even though such interfaces have great promise, and indeed are helping people already, most folks of any sense would find them creepy and even frightening.

And the brain does so much more for our integrity and continued existence. It guides our behavior, attempting (usually, we hope) to ensure that we survive and thrive. It also actually constructs the first-person Self — your very “I” that is your phenomenological essence — as part of a brain-made internal model of the perceived and understood world. This model sets us off (yet another boundary) from whatever ineffable cosmic reality it is that actually surrounds us.

So we have these boundaries, and uniquely programmed biological machinery, that preserve some aspects of an enormously complicated physical pattern over our lifetimes. Is this sufficient to take care of our self continuity? Not really. Any vertebrate animal has all of the above boundaries. Some might even have the rudiments of a Self. When we think of ourselves as a person having an identity we are talking about much more, including a self concept (the answer to Who am I?) and the above-mentioned first-person conscious point of view. Philosophers have really sunk their teeth into the persistence of the Self.

UNITY: Coherence of the manifold self (2)

Mental Multiplicity

Minds are simply what brains do.
— Marvin Minsky, The Society of Mind

The human brain has a lot to do, and so, therefore, does the mind. Most of it we are not aware of. Oddly, common sense and science disagree on the meaning of these simple statements.

In science we think that the brain, or at least the nervous system, heads up the body’s “automatic” functions like breathing and metabolism, makes the muscles move the body, decides what’s for dinner, has emotions, does math, and everything in between. Most of these numerous functions are only loosely connected to one another. Science also thinks that only some functions are conscious, that there is no real central control, that the conscious self is largely an illusion, and that all of its so-called decisions are actually determined by numerous non-conscious factors, even before we are aware of making a decision.

On the other hand, people in general, including children and the blind, think that their essence is a disembodied point of existence behind their eyes and between their ears. This starts in early childhood at about the time (age 4) that we start to believe that other people have mental points of view, too. And even long after we have first been taught about the brain (age 9), we believe that the brain is a sort of mental multi-tool, there just to help out the real, feeling self, which is, metaphorically if not in fact, located in the heart.[What do you think you are?] We say that if you want to make the right decision, you make it with your heart. But regardless, your decisions are your own, in the sense both of your responsibility for them and of your conscious self being their primal cause. Bottom line: folks are native Cartesian dualists, but they get that the brain is useful in some vague way.

These days the scientific approach is where we find new understanding of identity. Let’s start breaking the science part down by addressing the “brain has a lot to do” thing.

In 1985 the AI scientist Marvin Minsky wrote a book, The Society of Mind, which said that there was no big mystery in what a mind does, because it emerged from a host of simpler parts that he called agents. Each agent accomplished a particular task, but in a simple, mindless way. A mind was the “society” of these agents. Mental activity emerged from their interactions just like life emerges from myriad chemical interactions and structures that are, by themselves, not alive. Thus Minsky proposed a solution to how a mind can emerge from non-mental stuff.
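A tiny sketch can make the “society” point concrete. This is my own illustration, not Minsky’s code or notation; the agents (see, grasp, place) and the toy block world are invented for it. Each function is trivial and mindless on its own, yet wiring them together yields a capability, building a tower, that none of them has:

```python
def see(world):
    """Agent 1: report the loose blocks, smallest to largest."""
    return sorted(world["blocks"])

def grasp(blocks):
    """Agent 2: pick up the largest available block."""
    return blocks[-1]

def place(block, world):
    """Agent 3: move one block onto the tower."""
    world["blocks"].remove(block)
    world["tower"].append(block)

def builder(world):
    """No single agent 'builds towers'; the society of them does."""
    while world["blocks"]:
        place(grasp(see(world)), world)

world = {"blocks": [3, 1, 2], "tower": []}
builder(world)
print(world["tower"])  # [3, 2, 1]: largest block on the bottom, a stable tower
```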

The ancients did not see how a mind could be born out of matter, so they invented the soul. Because a soul was not matter and did not contain any parts, it was not subject to the inevitable decay of matter, and it therefore was eternal. Plato’s formulation of this was so influential that centuries later the Christian church adopted it as a core belief. The idea has never died, although it is out of favor with virtually all of science and much of philosophy.

Minsky’s formulation was an early example of the modern systems-oriented thinking in which complexity emerges from the behavior of very many parts. This is apropos, since the brain/mind complex is widely considered to be the most complicated thing we know of. The Society of Mind was one of the cracks that released a flood of new work on mind and brain that is still with us today.

Just two years before Minsky’s book, his philosophical colleague Jerry Fodor, also at MIT, wrote a book, Modularity of Mind. In it Fodor listed nine properties that would be evidence that a function of the mind was “modular”, meaning that, like Minsky’s agents, it was single-purpose and somewhat independent of other modules. The criteria seem to derive from a computational point of view, because seven of them sound like well-known principles of good software design.
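To see why the criteria read like software design, here is a hedged sketch in code. It is my gloss on the analogy, not Fodor’s own list; the EdgeDetector and its threshold are invented. The class is domain-specific (it accepts only one kind of input), mandatory (it fires automatically on any input), and informationally encapsulated (it cannot consult the rest of the system’s beliefs):

```python
class EdgeDetector:
    """A 'module' in roughly Fodor's sense, dressed up as software."""

    def __call__(self, image_row):
        # Domain-specific and mandatory: given a row of brightness values,
        # it always flags big jumps as edges; no caller can turn that off.
        return [abs(a - b) > 10 for a, b in zip(image_row, image_row[1:])]

detector = EdgeDetector()

# Informational encapsulation: even if the rest of the system "believes"
# the scene is edgeless, that belief cannot reach inside the module.
# This is one reason visual illusions persist after they are explained.
print(detector([0, 0, 50, 50, 0]))  # [False, True, False, True]
```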

Since these two pioneering efforts the explosion of efforts to understand the mind has put a lot of emphasis on identifying its functional parts. Within that work there is intense debate about: How many parts? What are they? How big are they? How connected or independent are they?

On one end is “massive modularity” with lots of special purpose, quite autonomous parts that are presumed to be evolutionary adaptations. An example would be that very new infants have an inbuilt compulsion to look at patterns that resemble a human face. The modularity criteria here are (as in most evolutionary psychology examples) restriction of a function to specific inputs and rigid timing of occurrence in development.

On the other end of a rather fuzzy spectrum might be cognitivists, who emphasize far fewer, higher-level mechanisms such as attention, learning, and memory. These lead to the creation of conceptual knowledge and the mental construction of beliefs and cognitive biases to guide actions.

There are many nuanced positions in between these two, but there appears to be a general belief in something like partial functional independence of many parts of the mind, based on a variety of observations and experiments. For example, some illusions continue to happen even after the illusion has been explained to someone. So the perceiving part is independent of the understanding part, fitting Fodor’s criterion that a module is not guided by information at higher levels.

Other evidence relates to Fodor’s criterion of localization of modules to dedicated neural architecture. We have all heard the stories of bizarre effects from damage to the brain, even though these are rare and sometimes, but not always, one of a kind. Nearly every day there is a new story of some psychological concept being verified by a consistent pattern in the imaging of neural activity. Even though the concept of functional localization still has its critics, clearly we can no longer think of the brain as just one big tangled, plastic mess.

Psychologists have discovered many cognitive biases that actually reduce the accuracy with which we understand the world. For example, we are unduly influenced to value more those things that we already have. We can be “primed” to make a certain choice, judgment or perception just by prior passive exposure to emotion-laden stimuli. These and many other biases fit Fodor’s criteria that (a) a module is “mandatory”, i.e., operates automatically, and (b) is independent of other processes, in this case reasoning and even reference to previous experiences and beliefs.

In this time of high growth in the sciences of the mind, there are few things that most researchers would agree on. However, if you pick up any popular book about the mind, they nearly all will describe it in ways that mean: there are many interacting parts, most of which function without our needing to think about them, and often without us knowing about them. Given that, we have to ask, what can make such a thing the essence of a person? What holds all those parts together?

UNITY: Coherence of the manifold self (1)

Introduction

You asked, “How does the self learn to relate to the world?” But actually the self starts by dividing itself off from the world. Some parts, some sources, of stimulation are always here, others are not. That difference is one basis or beginning of selfhood. Then we elaborate it more, adding a social identity and relationships, and other things.
— Roy Baumeister,  Quid pro quo: the ecology of the self

Why in fact is there just one of you instead of a probability smear across a hive mind of some kind? The understanding of personal identity has been a puzzle for centuries. To grasp all that identity means now, we need to know what personal identity is at its core. Identity has continuity, cohesion and uniqueness, but each of these qualities can be somewhat hard to pin down.

All the evidence is that our minds consist of many interacting pieces, many of which are not even conscious. Nevertheless, the vast majority of us feel like, and present as, a single cohesive person, continuing over a lifetime. Much scientific and philosophical effort has been spent of late on the nature of the self and its conscious mind. It turns out that there is no single explanation for what holds you together. Instead, it is a coordinated meshing of multiple levels of reality: physical, personal, and social.

The physical level starts with DNA, but it’s really about how biological processes create encompassing boundaries within which are built the foundations of our uniqueness: a cellular structure, a nervous system and an immune system. We think that the physical body continues through our lifetime. In fact our component matter and structure change continually, so that there is not really any specific physical thing we can claim to be us throughout life. Furthermore, quantum theory doesn’t even allow that any one of your atoms is the same from time to time. There essentially is no “same” to an atom. The prevailing theory, then, is that what persists physically is a pattern, not specific matter.

But there’s another level. Using our physical framework we develop a mental one. This is the reality interpreter called the Self. Thereby, we know things. The peculiar aspect of the human Self is that we self-reflect: we know that we know. That Self intuitively believes in its own continuity over time but, as in the physical case, change is continual in the mind. This leads to interruptions and paradoxes that belie the continuity that we imagine ourselves to have.

What’s left to (finally!) hold us together? It’s our social environment. It’s our roles, our relationships, our tribes and institutions. We are the same person as last month because others who know us say that we are. We are the same person as an infant decades ago because our family knows it to be true. Our culture gives us an identity even before we are born, and it preserves it in some important ways even after death.

Throughout life these three levels of reality mesh to hold us together. The result is not only continuity, but uniqueness among our fellow humans. That uniqueness, as opposed to our place within family and society, is becoming more and more a focus of our conscious Selves. In the second decade of the 21st century, our “true self” has devolved into a personal brand, an attention-seeking marketing missile aimed at other people. It’s getting a little frenetic.

Chimerealism? Really?

This blog will be a home for a book-sized project.  Other topics may appear from time to time.

Some deep background to kick it off.

Future histories will be written about the effect of the information revolution on us and our planet.  They will say that nearly everything changed, and because of that, most writers will focus on parts, not the gargantuan whole.  We can’t know how they will carve it up, or what concepts will come to dominate opinion about this early period through which we are currently being dragged.  If the Singularity actually happens then history books, like so many artifacts of our world, might become only quaint curiosities.  Points of view on the past might be the encyclopedic interpretations of god-like beings, or they might be held by unfathomable hive minds.

Meanwhile here we are, curious primates, picking tweets from each other’s fur and gossiping.  We can’t astral project ourselves outside of this culture to get a clear overview, but we have to try for some perspective.  This blog uses the concept of personal identity as a lens to look for changes in how we understand ourselves and deal with each other.  The goal is to connect observations and concepts that don’t go together (make a chimera) into new ideas that might better reflect our reality. ==> Chimerealism.