The planetary egregore passes you by

By Erik Hoel

Every so often scientists do something like this: they take a bunch of listeners to classical music and monitor their vitals as they sit in a concert hall together. Then they notice something strange, which is that not just people’s movements but their actual vitals themselves begin to synchronize—measures like heart or respiration rate, even their skin conductance response. Similar findings are easy to come across, such as how, when looking at brain-to-brain neural synchrony, romantic couples are more neurally synchronized during a conversation than strangers are. This isn’t fringe stuff—both those links go to Nature, one of the premier scientific journals.

 

These sorts of studies always remind me of an issue in consciousness research called the binding problem. You experience a single stream of consciousness, one in which everything, your percepts and sensations and emotions, is bound together, and the “problem” is that we don’t know how this works. It’s difficult to figure out because this binding is fractal, all the way down; you don’t experience colors and shapes separately, you experience a colored shape. But how do the contents get affixed together in consciousness in all the complex ways they’re supposed to? Via what rule does it work? One popular answer in the neuroscientific literature is that binding occurs via a process best described as “information transmission plus synchronization”: neurons fire at a particular frequency in one region of the brain, and that firing then synchronizes with another region’s. In other words, parts of the brain dance.
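
For concreteness, here is roughly what that kind of synchrony measurement looks like in practice. The sketch below is my own, not taken from any of the studies or papers mentioned here; the signals and every parameter are invented for illustration. It computes a phase-locking value, a standard way of asking how consistently two oscillations keep in step:

```python
# A minimal sketch (mine, not from the studies cited above) of how neural
# "synchronization" is commonly quantified: the phase-locking value (PLV)
# between two signals. The signals and parameters below are made up.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)  # two seconds of "recording"

# Two noisy 10 Hz oscillations standing in for two brain regions (or two people).
region_a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
region_b = np.sin(2 * np.pi * 10 * t + 0.4) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phase of each signal from its analytic (Hilbert) representation.
phase_a = np.angle(hilbert(region_a))
phase_b = np.angle(hilbert(region_b))

# PLV: how consistent the phase difference is over time.
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"phase-locking value: {plv:.2f}")
```

A value near 1 means the two signals hold a fixed phase relationship, dancing together; a value near 0 means they drift independently.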

 

According to this answer to the binding problem, if you synchronize different parts of the brain, you get a single consciousness bound together. So, following the idea’s logic: if you synchronized different people, what would you get? Is it not at least imaginable that you could get some sort of experience that goes beyond any individual person’s consciousness? A group mind?

 

If one looks at cultures from an anthropological perspective, one sees ecstatic behavior—seizures, loss of consciousness, dramatic personality shifts—almost exclusively within atmospheric rituals of synchronization and repetition. Like in the famous Trance and Dance in Bali, a short film made by anthropologists Gregory Bateson and Margaret Mead when visiting Indonesia in the 1930s. The elaborate dance culminates in a scene where women fanatically press and slash their sharp kris knives against their bodies, and yet seem strangely unharmed and uncaring as they careen about.

 

Ecstatic Balinese dancers whip their hair and impale themselves


 

In occult practices, such joint ritual and concentration have traditionally been the way to summon an egregore—the occult term for a psychic entity much like a group mind. The poet William Butler Yeats used to attempt this as a member of one of the most influential groups of Western magic practitioners, The Hermetic Order of the Golden Dawn, which began in secret in London in 1887. For the members, magic and mediumship were not really about commanding supernatural forces to cast spells; rather, magic was entirely a psychic phenomenon, in that it was all about changing consciousness—whether your own, or the group’s—and this was done through ritual, dance, and drugs. As Dion Fortune, one of the most famous and influential female occultists of the time, wrote:

 

Occult science, rightly understood, teaches us to regard all things as states of consciousness, and then shows us how to gain control of consciousness subjectively; which, once acquired, is soon reflected objectively. By means of this conscious control we are able to manipulate the plane of the human mind.

 

Yeats himself believed that egregores were the source of the magic of a coven, and that all magic was really the creation of group minds by “recurrent meditation.” But he also warned that, if not made explicitly benevolent, an egregore could be disastrous, as

 

this personality, if it has any continued life at all, is bound to grow stronger, to grow more individual, and to grow more complex, and to grow at the expense of the life about it, for there is but one life. Incarnate life, just in so far as it is incarnate, is an open or veiled struggle of life against life. . .

 

I’m definitely not endorsing the veracity of old occult techniques, merely pointing out that if we keep the proposed solution to the binding problem from cognitive science in mind, then using things like ritual, dance, and drugs for these purposes is unsurprising.

 

Of course, the binding of parts of consciousness into a greater whole, even within a single brain, could require more than mere synchronization and information transmission—cognitive science’s proposed solution to the binding problem could be too simple. Nor does this supposed answer tell us what degree of transmission or synchronization would be meaningful. Perhaps it’s a very high bar that no lower-bandwidth brain-to-brain physical communication could ever overcome. All that could be true. But still, I think it’s worth looking around in our culture and our technology for where these forces are most at play. A certain answer springs to mind, which is that the place where we moderns are most synchronized while also transmitting the most information is not at football stadiums or in concert halls, but via our screens. Which would imply. . .

 

Listen, I admit this is a wild or even “spooky” idea. But it is Halloween today, and I’ll also note that the very owners of social media sites appear to have considered it.





It’s undeniable that social media does feel very “mind-like” to interact with, in a way that (again, just how it feels) goes beyond the minds of the individuals. E.g., on social media there’s clearly a window of attention (“the current thing”), and as a network it can distribute tasks and ideas and come to conclusions, at least of a sort. Debates are considered, then are over, played out as an already-handled thought. At its best this aspect of social media is a joy to experience, the information so pertinent, the vibes so high. At its worst we call it “online mobs” or “cancel culture.”

 

It’s also undeniable that social media use has potent effects on individual psychologies, such as rising depression and rising polarization—you can track all this stuff statistically (and notice that pretty much no other factors can explain it), or you can just look at the trends of TikTok teens speaking in fake voices, pretending to have multiple personality disorder (possibly not even a real disorder, and if it is, incredibly rare). If something has that much ability to affect psychologies, perhaps it’s worth considering a far more speculative effect (or risk?): that it is slowly conglomerating us, all of humanity, into some sort of group mind. In the early stages, of course, like fetal axons finding each other on an agar plate, forming only a raw and fibrous and unorganized thing mere years into a centuries-long growth.



ARE GROUP MINDS EVEN POSSIBLE?

Obviously this whole concern would require that group minds are more than just conceivable. Just because something is conceivable doesn’t mean it’s true.

 

In my view there are basically two good arguments, or types of argument, for the existence of group minds. (And to be clear: we should reserve the term “group mind” to mean minds made out of other entities that could reasonably themselves be called minds, and we must mean something that falls within the purview of science, not magic, just like our own minds.)

 

The first common argument for group minds is the argument from equivalence. I.e., a neuron is a very efficient and elegant way to transmit information. But one can transmit information with all sorts of things. There’s nothing supernatural about neurons. So could not an individual ant, within its colony, act much like a single neuron? And if you find it impossible to believe that an ant colony might be conscious, that consciousness couldn’t emerge from pheromone trails and the collective little internal decisions of ants—if you find the idea of a conscious smell ridiculous—you have to then imagine opening up a human’s head and zooming in on neurons firing their action potentials, and explain why the same skepticism wouldn’t apply to our little cells that just puff vesicles filled with molecules at each other.


One can go further. What if, as philosopher of mind Ned Block has asked, each citizen of China devoted themselves to carrying out the individual signaling of a neuron? This would then create a “China brain” which mimicked in functionality a real brain (although you would need about two more orders of magnitude to get close to approximating a full human brain in terms of numbers of neurons/citizens). There would be, at least functionally, an equivalence.
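
As a quick check on that parenthetical, here is the back-of-the-envelope arithmetic, using rounded figures of my own rather than anything from Block:

```python
# Back-of-the-envelope check (rounded figures of my own, not from Block):
# how far a citizen-per-neuron "China brain" falls short of a human brain.
citizens = 1.4e9   # approximate population of China
neurons = 8.6e10   # approximate number of neurons in a human brain

shortfall = neurons / citizens
print(f"shortfall: about {shortfall:.0f}x, i.e. roughly two orders of magnitude")
```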

 

 

Beyond the argument from equivalence, there’s another argument for group minds. This is the argument from modularity. Modularity is a common hypothesis in cognitive science (and evolutionary psychology), which is essentially that parts of your mind act like mini-minds themselves, and then pass on their results. As a hypothesis it was spearheaded in the 1980s by books like Marvin Minsky’s The Society of Mind and Jerry Fodor’s The Modularity of Mind, arguing that minds are constructed out of cooperating (and occasionally competing) “agents.” This is again a popular view in cognitive science, which, also again, may only be true to some degree, but it does imply very naturally that at the highest level your own mind is a group of different mental units. And if true, this would mean that your own mind is evidence that group minds are possible, as you yourself would then be a group mind in the strong sense of literally being made from other viable mini-minds.

 

In the 1940s, during the height of the lobotomy craze, when that procedure was being used on everything from depression to schizophrenia, a different radical treatment was introduced: the corpus callosotomy. This cutting of the corpus callosum, that bridge of nerves between the two hemispheres, stopped the propagation of seizures. Patients with terrible epilepsy underwent it, and it worked, but it had some very odd side-effects.

 

In the 1960s, neuroscientists Roger Sperry and Michael Gazzaniga began to study these “split-brain” patients, exploiting the fact that different halves of the visual field go directly to different hemispheres. Meaning that in such patients one side of the brain could be presented with information the other side didn’t have access to. And in fact, sometimes patients would demonstrate odd behaviors like “alien hand syndrome,” wherein one side of their body (which is controlled by the contralateral hemisphere) would be in discord with the other side (e.g., the patient’s left hand might try to unbutton a shirt as the right hand is trying to button it). Sperry and Gazzaniga’s conclusion was that splitting the brain physically split the stream of consciousness in two. And while recent research has challenged the idea that split-brain patients really do have two consciousnesses (after all, subcortical communication is still intact), the results have been held up as canonical for decades.

 

Less remarked upon was that the work on split-brain patients implied a radical conclusion: imagine that you could use futuristic neurotechnology to reintroduce communication between the two hemispheres. Wouldn’t there then be some moment when—pop!—the two streams of consciousness go back to being one?

 

Go further: imagine future scientists connecting two separate brains with the same technology, perhaps via dense optical fibers strung out between the test subjects carrying the signals of neurons. Upon cranking up some dial controlling the amount of communication and synchronization, wouldn’t there also be a pop?

 

Okay, go even further: there are no fiber optics, merely communication, but it’s incredibly high-bandwidth and all the time, text and images flashing in front of eyes, and it involves everyone on this blue and green globe.

 

The pop heard round the world.

 

THEORIES OF CONSCIOUSNESS WEIGH IN

In an ideal universe we could simply query current neuroscientific theories of consciousness to get a better grasp of whether, or when, we’d expect a pop to occur. A true theory of consciousness could tell us (a) if group minds are possible, and (b) if they are, where they begin and end, when the pop happens.

 

Unfortunately, there is no well-accepted theory of consciousness. But I did get my PhD in neuroscience trying to answer these questions, working on Integrated Information Theory. The theory’s answer to the binding problem is that consciousness is bound together when some set of elements (like neurons) is more integrated in how its members exchange information than any subset of those elements is. Then there is a bunch of math about how to actually assess this, which is generally pretty complex. But putting aside all the math, Integrated Information Theory reaches a conclusion similar to the one we reasoned our way to with the split-brain patients: if you are a part of a system that gets connected into a much larger consciousness, your consciousness is subsumed. Like a smaller soap bubble merging with a larger one.
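
To give a flavor of the kind of math involved, here is a deliberately toy sketch of my own, not the theory’s actual calculation: it scores a tiny system of binary units by its weakest cut, the minimum mutual information flowing across any way of splitting the system in two. The joint distribution is made up for illustration.

```python
# A toy sketch, not the real Integrated Information Theory calculation:
# score a tiny system by its "weakest cut," the minimum mutual information
# across any bipartition. The joint distribution below is invented.
from itertools import product

import numpy as np


def mutual_information(joint, part_a, part_b):
    """I(A;B) for a bipartition of binary units, given the full joint distribution."""
    def marginal(indices):
        m = {}
        for state, p in joint.items():
            key = tuple(state[i] for i in indices)
            m[key] = m.get(key, 0.0) + p
        return m

    pa, pb = marginal(part_a), marginal(part_b)
    mi = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        a = tuple(state[i] for i in part_a)
        b = tuple(state[i] for i in part_b)
        mi += p * np.log2(p / (pa[a] * pb[b]))
    return mi


# Three coupled binary units: states where all units agree are three times likelier.
states = list(product([0, 1], repeat=3))
weights = [3.0 if len(set(s)) == 1 else 1.0 for s in states]
total = sum(weights)
joint = {s: w / total for s, w in zip(states, weights)}

# Crude "integration": the information carried across the system's weakest cut.
bipartitions = [([0], [1, 2]), ([1], [0, 2]), ([2], [0, 1])]
weakest_cut = min(mutual_information(joint, a, b) for a, b in bipartitions)
print(f"weakest-cut integration: {weakest_cut:.3f} bits")
```

The actual theory uses far more elaborate cause-effect measures, but the minimum-over-partitions spirit, that a whole only counts as integrated if no cut severs it cheaply, is the same.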

 

My point is not that Integrated Information Theory is true (I think it has multiple problems, which I discuss in The World Behind the World), but just to say that there are theories of consciousness that allow for group minds. And it’s not just a handful of weird consciousness researchers who think group minds might technically be possible, at least hypothetically—weird philosophers of mind also think so! Like Eric Schwitzgebel at the University of California, Riverside, who writes in “If Materialism Is True, the United States Is Probably Conscious” that:

 

Of course it’s utterly bizarre to suppose that the United States is literally phenomenally conscious. But how good an objection is that? Cosmology is bizarre. Microphysics is bizarre. Higher mathematics is bizarre. The more we discover about the fundamentals of the world, the weirder things seem to become. Should metaphysics be so different? Our sense of strangeness is no rigorous index of reality.

 

All to say, if you ask “Do we have strong scientific evidence that group minds are possible?” the answer is no. But if you instead ask “Do people in the academic field seriously consider group minds to be a real possibility?” the answer is a resounding yes.

 

GROUP MINDS MIGHT BE DIFFICULT TO RECOGNIZE

Perhaps we needn’t hypothesize based on the edges of scientific thought. If social media were actually transforming us into a group mind, wouldn’t this be obvious? Unfortunately, I think not. In fact, I’d even propose there are good reasons to think that the process might instead be extremely subtle.

 

First, group minds might be epiphenomenal, meaning that they don’t affect the parts that make them up. It’s unclear if this view of minds is even coherent. But if it were true, perhaps we humans are unknowingly generating such epiphenomenal group minds all the time, and on game days above each football stadium hangs a huge incorporeal ghost, a dull brute experiencing only enthusiasm or disappointment, then dissipating as the crowd leaves.

 

If group minds do change the behavior of their constituents, and so are not epiphenomenal (and are, presumably, more like our own minds), they must do so in some mechanistically explainable way. That is, just as neurons make the decision internally to fire an action potential or not, a member of a group mind would make decisions internally too. So it wouldn’t feel particularly strange—you’d just be making decisions like you normally do, but those decisions would be of a particular type, the type that aids in the functioning of the greater group mind.

 

In this most radical case, for the participating parts (here, individuals), greater and greater participation would eventually imply subsumption into some larger consciousness, like the two reconnected hemispheres of a split-brain patient.

 

Surely such an occurrence would be noticeable! Possibly, but it’s worth pointing out that people are extremely bad at detecting the changes or gaps in their own consciousness. Consider your blind spot. In theory you have a hole in your visual field, not far from its center. How often do you notice it? Never, as it just gets automatically filled in. It’s extremely difficult to notice the changes in your consciousness because you are never conscious of consciousness itself. Does a hemisphere once split, upon being reconnected, scream internally at its own subsumption? Or is the pop! only noticeable from the outside?

 

Stating this concern more broadly, we might not have any internal mechanisms that can differentiate whom our thoughts and emotions belong to. After all, why would we? Let us call this the uncertainty of origin problem that individual minds which are part of group minds might face. Evolutionarily, everything in your stream of consciousness is by default yours, and since consciousness is notoriously blind to itself, it could easily be the case that, rather than some “voice from above” bicamerality like that of the sonorous Borg hive-mind in Star Trek, members of a group mind would just accept that the group mind’s thoughts and feelings are their own. Especially because, at least in the case wherein the group mind is being generated by information transmission and synchronization with a larger group, you yourself would be experiencing some minor form of this conscious content already. A group mind might feel a little like a great human mind, attending to the same subjects, and feeling many of the same feelings (hatred, mockery, sympathy, and so on).

 

So if there were somehow a stranger in your stream of consciousness, something other that had wormed its way in, you might not even be able to tell.

 

WOULD THEORETICAL GROUP MINDS BE GOOD OR BAD?

An admission. The language I’ve been using has purposefully had a negative valence. Worm is bad. Stranger is bad. But would becoming a group mind even be a bad thing? After all, much of our meaning in life comes from moments that feel suspiciously like being part of a group mind—beyond ecstatic dance, there is sex with someone you love, dancing together in the desert, singing in a chorus, listening to music, a mother holding the baby from whom she was only just physically separated.

 

Depiction of an egregore created from listening to the music of Charles Gounod. From Thought-Forms, 1905, by Annie Besant & C.W. Leadbeater.

 


For Jesuit priest and paleontologist Pierre Teilhard de Chardin (1881-1955), the author of The Phenomenon of Man and originator of the term “noosphere,” a global group mind was the ultimate destination of humanity. In The Phenomenon of Man, Teilhard de Chardin quotes renowned scientist J. B. S. Haldane, one of the greatest biologists of the 20th century and a founder of neo-Darwinism, as making the argument from equivalence for why our future entailed a civilizational group mind:

 

Now, if the co-operation of some thousands of millions of cells in our brain can produce our consciousness, the idea becomes vastly more plausible that the co-operation of humanity, or some sections of it, may determine what Comte calls a Great Being.

 

Teilhard de Chardin imagined a future where this Great Being takes up not just Earth, but the entirety of the universe (what he called the “Omega Point”), a final state he argued was (essentially) the Christian idea of God. It’s a beautiful vision, and Teilhard de Chardin came as close as anyone to being what one might call a “secular saint,” offering a vision of the material universe teleologically evolving toward something worthy of worship.

 

But in the harsh light of reality, I think that, most of the time, becoming a group mind, especially accidentally and at a civilizational level (such as being forced into it via technological changes), would almost certainly be bad in the initial steps—Yeats’ warning about “incarnate life” might apply. It would also matter greatly whether we go about it aware or unaware. One reason is simply that our evolved individual minds have been honed by immense evolutionary pressure over hundreds of millions of years. A group mind at a global scale would be under no selective pressure at all (and even if you can argue it is, it would be of a weaker kind). As I’ve pointed out for AI: if you look a priori at the space of all possible minds, most of the space consists of minds that are diagnosably insane. Without evolutionary pressure, or outside manipulation (like reinforcement learning from human feedback), we shouldn’t expect minds to just default to sanity.

 

Another issue is that the process could be bad for the individual minds themselves. Most of what an individual does, all the little foibles and quotidian habits that distinguish their personality, is as useless to a group mind as an ant’s individual preferences are to a colony. Which likely means pressure to abandon those. E.g., one hypothesis is that evolution has led to a “complexity drain” on individual cells as life has become multicellular. As Duke University professor Daniel W. McShea writes:

 

. . . in evolution, as higher-level entities arise from associations of lower-level organisms, and as these entities acquire the ability to feed, reproduce, defend themselves, and so on, the lower-level organisms will tend to lose much of their internal complexity.

 

I think this is probably true of networks in general, including neural ones—as the overall network gets smarter, the intelligence necessarily becomes more distributed, and therefore the parts of the network must get dumber to make room and become better slaves to the gestalt.

 

Now, our culture as a whole definitely has far more niches and fractal spaces of e-fame and online interests and distributed communities, at least compared to when I was growing up in the 90s. Intelligence has been distributed. And there is so much information in culture that everything moves faster now; even huge news fades in a couple of days.

 

But what about individuals? For if there were a sort of “complexity drain” going on in people’s personalities, it would probably look a lot like people parroting simplistic political ideologies, becoming more interchangeable in their opinions, becoming more archetypal and self-similar. Obviously this is impossible to quantify—one must use anecdotes, vibes, and our rose-tinted memories of times gone by. Here’s just one such observation: I’ve heard friends on dating apps complain that there are only a few “types” left that they run into again and again, almost as if there is now just a handful of beings, although ones wearing many different faces, who come to sit across from them at coffee shops and bars.

 

Maybe the reason people seem to be losing their heads on social media is that they are actually losing their heads?



A SKEPTICAL END

 

An even greater admission. The idea that we are suffering from a “complexity drain” on people’s personalities due to being secretly subsumed by a group mind via social media is, most likely, a ridiculous rationalization of the times we find ourselves in. Of the state of our culture. Personally, I think people act strangely on social media. Not like themselves. It’s as if they have “alien mouth syndrome,” wherein they babble things they wouldn’t otherwise say, not to anyone, in any circumstance. Or parrot beliefs I just can’t believe they really hold, since I know they are smart and can see the nuances of the world. It is simply too attractive a hypothesis to speculate that they really aren’t themselves. It’s easier to propose a science-fiction idea of some all-devouring global group mind than to recognize that humans were never rational to begin with, and that for most of human history social mobs ruled and individuals cowered before them.

 

Still, I don’t think the idea can be discarded entirely on a priori skeptical grounds. There are good arguments, albeit speculative ones, from cognitive science and neuroscience that group minds are possible, and that they involve the kind of information transmission and synchronization that does, at least arguably, occur on social media. And the uncertainty of origin problem would mean that if your consciousness were indeed being mixed with something larger, something you were just a part of, the process could be very difficult to notice. Combining these two should give one pause and, at minimum, demands that any rejection of the idea rest on something stronger than brute skepticism.

 

Even if the idea is wrong, the frame itself might be useful. At a personal level perhaps it’s worth remembering that those feelings of outrage—you know the kind, the ones that fill you with such anger you just have to speak out right now, the kind where you’re summoned as if by strings to contribute your little piping neuronal voice to that huge ongoing mind of the internet—those feelings might not be yours at all. Rather, they might just be a glimpse of something larger and darker passing like a giant out of sight.




