Mulling: class, mythologies, performance

Slightly looser theme than usual. I got sick this weekend and botched up my usual reading/writing rhythm. It may be a day or two before another post.


I. Downton Abbey

I finished the new season of Downton Abbey with my girlfriend this week (it premieres in the US next month, but it’s finished in Britain except for the customary Christmas Special). It’s kitschy, but I’m not afraid to say it was entertaining.

The only thing I can’t really look past is that every protagonist, even the most reactionary one, holds an absolutely 21st-century stance on the many controversial issues. Oftentimes it’s as though their opinions on social issues are contemporary, but some invisible system constrains them to consider acting against their conscience.

It’s too obviously wrong. Spoilers ahead.

It’s like a weird historical rewriting where the oppressors reveal that they’ve been on the right side of history all along; they just had to tangle with such heavy burdens as tradition and what will our conservative society think of us now. These characters converse with (and marry!) their underlings, don’t freak out about interracial relationships, and have attitudes toward homosexuality, rape, and abortion that would be too liberal to get them elected in many states in current-day America. I guess it wouldn’t be terribly palatable television if we really did have to deal with early 20th-century political and social norms. I suppose it’s something to be expected, just as we should expect that the Grantham family seems to have a connection to every major historical event that high school students might recognize from the period.

One other odd note: they seem to be aware of which side of history is the “right” one, in real time. The most common phrase in the whole series is something to the effect of “the times are changing” or “the old ways just won’t do anymore” or “this is a new world now”. I’m no historian, but I would almost have expected phrases like these, in the post-World War I era, to be sighs of resignation more than breathless anticipation.

I’m not demanding purity of portrayal for every show, movie, and book. It just struck me especially hard with this series and its drama, perhaps because I don’t watch many shows like this.


II. 101 Objects


A few poor souls trod for an instant on this rock, and it has become famous, it is prized by a great nation; fragments are venerated, and tiny pieces distributed far and wide. What has become of the doorsteps of a thousand palaces? Who cares for them?

Alexis de Tocqueville, on Plymouth Rock (he visited in 1830-31)


I listened to one of the Long Now Foundation’s most recent SALT seminars, Richard Kurin on the Smithsonian’s “101 Objects that Made America“. Great stories. Plymouth Rock was first mentioned in the historical record in 1775, over a century after the actual landing of the separatist pilgrims (1620). It’s an odd and arbitrary sacred object in the American founding myth.

The story of Benjamin Franklin’s walking stick was also humanizing and interesting.

My favorite story (that I heard, anyway) was of a use of the early telegraph, in what could be called the world’s introduction to the power of modern crowdsourcing/”citizen science”. Apparently, early telegraphs printed out their results; it wasn’t until later that the telegraph was primarily used to transmit the familiar auditory signals, which were unintentional at first.

Joseph Henry, the first Secretary of the Smithsonian Institution, began to construct what would become the national weather service, sending telegraphs and instruments across the agrarian nation to map out weather patterns:

When Henry came to the Smithsonian, one of his first priorities was to set up a meteorological program. In 1847, while outlining his plan for the new institution, Henry called for “a system of extended meteorological observations for solving the problem of American storms.” By 1849, he had budgeted $1,000 for the Smithsonian meteorological project and established a network of some 150 volunteer weather observers. A decade later, the project had more than 600 volunteer observers, including people in Canada, Mexico, Latin America, and the Caribbean. Its cost in 1860 was $4,400, or thirty percent of the Smithsonian’s research and publication budget.

The Smithsonian supplied volunteers with instructions, standardized forms, and, in some cases, with instruments. They submitted monthly reports that included several observations per day of temperature, barometric pressure, humidity, wind and cloud conditions, and precipitation amounts. They also were asked to comment on “casual phenomena,” such as thunderstorms, hurricanes, tornadoes, earthquakes, meteors, and auroras.

I’m not sure I’ll find the time to read the actual book anytime soon, unfortunately, but the lecture was fun.


III. “Spraching”

In college, my roommates and I (the same party behind the “narrative escape velocity” concept) had this odd idea for a verb for something we witnessed a lot but couldn’t quite name. We’d call it spraching, bastardizing the German word for speech. Spraching overlaps somewhat with Frankfurt’s definition of bullshit (a disregard for truth, without particular concern for truth or falsity, as opposed to the knowing and deliberate subversion that is the lie). But the connotation/angle is a little different.

But to have “thus sprach” is a big deal. It’s not just bullshitting to advance oneself. It’s already assuming the authority, speaking as if your words accomplish something merely by being spoken, as if you fully expect your words to go down in history as the pivotal moment in… something. Yahweh says “let there be light” and we don’t hear about the mechanism that lets light be there. His words are all we need to know about. In a literal sense He’s the ultimate spracher.

A manager who has decided that something is already done (without any particular regard to whether that’s true), in order to end a discussion, is spraching. He is using his words to openly assert a reality that may not even be so (and this fact might be readily apparent to the listener!), and he doesn’t care. Anyone who has ever been overheard expecting something to simply be, because he just said so, is spraching. The most impressive spraching, though, is done by those who don’t actually have the authority to make things so. Pharaohs can be forgiven for believing that if it is said, it shall be done.

The bullshitter is deliberately disregarding truth to impress others. The troll brazenly disregards truth to disrupt others. The spracher brazenly disregards truth to impress his words upon others, usually with a comical, unwarranted authority.

If you don’t know what I’m talking about, I’m not sure I can help you understand this concept much further.


IV. “Smarm”


“What is smarm, exactly? Smarm is a kind of performance—an assumption of the forms of seriousness, of virtue, of constructiveness, without the substance. Smarm is concerned with appropriateness and with tone. Smarm disapproves.

Smarm would rather talk about anything other than smarm. Why, smarm asks, can’t everyone just be nicer?”


“The plutocrats are haunted, as all smarmers are haunted, by the lack of respect. Nothing is stopping anyone—any nobody—from going on a blog or on Twitter and expressing their opinion of you, no matter who you think you are. New media and social media have an immense and cruel leveling power, for people accustomed to old systems of status and prestige. On Twitter, the only answer to “Do you know who I am?” is “One more person with 140 characters to use.””


“Anger is upsetting to smarm—real anger, not umbrage. But so is humor and confidence. Smarm, with its fixation on respect and respectability, has trouble handling it when the snarkers start clowning around. Are you serious? the commenters write. Is this serious? On Twitter, the right-thinking commenters pass the links around: Seriously?”


Report: 7 December

I have a few projects I’m working on.

Apologies for the secrecy- for most of my personal projects it’s unwarranted, but I don’t think I really like my effort being too apparent in the things that I do. It’s an odd behavioral tic that was rewarded in college.

Anyway, in the immediate future I’ll be posting about the following projects:

  • My girlfriend and I just started developing a sort of strategy game. We have put a larger cinematic platformer (think: old Prince of Persia) on hiatus because I thought this strategy game could produce some quick wins sooner, being easier to animate for and having more recyclable assets.
  • I’m working with some colleagues to assemble materials and do a test run for a podcast/video series on Game Studies. We came up with this idea for a few reasons: For one, there isn’t much accessible stuff out there on serious game studies. There are a few series about game criticism or game design, and even they tend to be more about the craft and the culture. There’s a lot of academic literature out there, though, and it’s mostly poorly publicized and understood. Second, it seems like a great niche to dig into and develop some expertise (and ideally attract some attention to that expertise)- we’re all interested in that side of things. Third, for me anyway, a deeper exploration into the field could inform my work and reading in my usual blog topics- the intersection between people behavior and system behavior.
  • I will probably not talk much about my actual job, or the healthcare startup I’m driving. I don’t think that’s terribly appropriate.
  • I have many friends also doing interesting work. With their permission I might scribble on lessons I learned from their adventures.
  • Maybe I’ll be more granular about my progress on my Five Week Plans that I set out dictatorially at the end of every month.

I think this part of the site will be distinct in tone and theme from my other blog, although my daily goings-on will obviously feed into both. I’ll see how this goes as I do it, and later I’ll evaluate how I’m doing, whether it’s necessary, or whether it should feed back into the main blog.

Mulling: Governing Systems

Curated quotes/notes, mostly spun out of yesterday’s new posts from Ribbonfarm and The Last Psychiatrist.


I. Voight Against the Machine

First: a story told by Sebastian Deterding, one of my favorite no-bullshit games researchers and Gamification experts (and whom I will eventually discuss at length). Most of Gamification is bullshit, so I’ve been careful not to throw words around on it without a lot of hedging and explaining. Back to that someday.

In 1906, Friedrich Wilhelm Voigt, a con man, was released from prison.

Reformed, he actually wanted to become a good citizen. But he quickly ran into a problem: to get an apartment, he needed documentation that he had a job. To get a job, he needed a work permit. But to get a work permit, he needed to document that he had an apartment. And the Prussian bureaucrats wouldn’t make an exception for him. They stuck to the rules- a bit like a computer, really. So Voigt was caught in a loop.

So, on October 16, 1906, Voigt puts on a Captain’s uniform, grabs a group of soldiers from the street, marches over to the town hall of Köpenick, and occupies it- and in the process, he has his work permit signed and stamped. This stunt immortalized Voigt in German folklore as the “Captain of Köpenick”. [lifted from the spoken accompaniment to the “Ruling the World” presentation]

The next step Deterding makes is an obvious one: any exception is an allowance, and thus all exceptions are part of the rules. He then talks about his own adventures stuck in computerized systems that couldn’t foresee and properly address his unique situation, leaving him stranded until he could fight his way to an exception by reaching a human somewhere in the system.
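Voigt’s predicament is recognizable to any programmer as a circular dependency. Here’s a toy sketch of it (the rule table and names are my own illustration, not from Deterding’s talk): a rules engine that follows prerequisites literally, with no exception path, can only go in circles, and the best it can do is report the loop.

```python
# Toy sketch of Voigt's bureaucratic loop: each document requires another,
# and a rule-follower with no power to make exceptions can only cycle.
# The rule table below is illustrative, not an exact quote of the story.

PREREQUISITES = {
    "apartment": "job",         # to rent, show proof of employment
    "job": "work_permit",       # to be hired, show a work permit
    "work_permit": "apartment", # to get a permit, show a registered address
}

def try_to_obtain(goal):
    """Follow the rules literally; report the cycle instead of looping forever."""
    seen = []
    current = goal
    while current not in seen:
        seen.append(current)
        current = PREREQUISITES[current]
    # We came back to something already required: that's the loop.
    cycle = seen[seen.index(current):] + [current]
    return " -> ".join(cycle)

print(try_to_obtain("work_permit"))
# A human clerk empowered to grant an exception breaks the cycle;
# the pure rule-follower never can.
```

The only way out is an actor who stands outside the rule table, which is exactly what Voigt made himself into with the uniform.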


II. “Last Psychiatrist” Author Does Her (His? Their?) Thing Re: Catching Fire

In fairness, “Hunger Games: Catching Fire” is a movie I haven’t seen, based on a book I’ll likely never read, but I did see the trailer. And c’mon, it’s not a German film; I’m sure the plot is familiar enough to assume. (I haven’t decided if this attitude is actually “okay”.)

Number of people killed: 15
Number of people Katniss kills: 1
Number of times she is saved by someone else: 6
Number of times she saves someone else: 0


An insightful, even optimistic retort is that at least she’s not killing, at least she’s made the ethical choice to not kill anyone.

But this insight is exactly what you are supposed to think, it is an illusion, and it is why my tally above is also a lie.  She kills one person, but she is responsible for all of their deaths.  From the very beginning of the Game it was immediately true that everyone but one got killed.  From the very beginning, before anyone dies, you are guilty of  everyone’s death.

That’s the Game. It’s not like they went in there thinking, “I’m not going to kill anyone because I am planning to escape this Game.” No one backed up their pacifism with suicide. Katniss’s thinking is basically, “I’m not a butcher, but I am going to try and survive.” The movie elevates her passivity into a moral act, which it isn’t, that’s the trick. This is a closed system. Whether she shoots them down herself or waits for the psychopath in the group to do it for her, it’s the same. […]

The true criticism of the movie isn’t that it is too violent, but that it is not violent enough– it is Disney violence, and whenever you see the word Disney you should instead see “100% in the service of the existing social structure.”  The movie presents “not murdering anyone” as if it were a moral option, as if it were true; so that you are not revolted by the fact that you did kill 15 people; so that you do not fight to change the system that forces you to kill 15 people. […]

But in totalitarianism, there are no individual acts– that’s the whole point of the totalitarian structure, that’s what it wants, what it wants you to become. Your acts appear personal and individualized but conform beautifully, they are no threat. [source]


III. “Algorithmic Governance” by Sam Bhagwat, on Ribbonfarm

This new post introduces different flavors of the Network form of human organization [summary] as a solution to kludge and the organizational death-by-entropy problem, from a slightly unfamiliar angle. (“One prominent example, McDonald’s, began development in the 1950s. Sometimes confused with a restaurant, this is actually a piece of licensed software 600 pages long, with a QA department, currently running around 14,000 instances in America.”)

 Moore’s Law has granted to 21st-century organizations two new methods for governing complexity:  locally powerful god-algorithms we’ll call Athenas [hedgehogs] and omniscient but bureaucratic god-algorithms we’ll call Adjustment Bureaus [foxes]

… the problem of “keeping the paper-pushers in line” is timeworn, complex, and vital. Rulers tend to deploy any and all tools available in a frantic battle against entropy.

“Bureaucracies ate Obama,” proclaimed one article, noting that crises blamed on the current president actually happened in the civil service five or six levels below. […]

In 2009, the president appointed prominent economist Cass Sunstein [author of Nudge– the book that brought “Libertarian Paternalism” into the public lexicon, and Simpler– which I read this year] to head the Office of Information and Regulatory Affairs. The OIRA is in charge of “regulatory review,” ensuring regulations work as intended; essentially, the government’s QA department.

OIRA has 50 full-time personnel. It is outspent by other regulatory agencies by a factor of 7000:1.

Software may be designed to hide complexity, but can’t make it magically go away. There is no free lunch.


IV. Libertarian Paternalism

People live inside of complexes of designed and un-designed systems. Even without any single tyrant, day-to-day choices are already framed for us:

  • by our own biases and ignorance before beginning a negotiation or decision
  • by the interfaces and artifacts that present options to us [again with the functional and rhetorical affordances]
  • by our understanding of the choices available
  • by cultural expectations about what is desirable, what is acceptable, and what isn’t
  • by the availability heuristic
  • by our perception of the significance of the decision
  • by time and energy pressures that might push us to change our decision pattern.

Is it a moral imperative that people must deliberately and personally come to conclusions about what goods and services would best benefit them? Is it a weakening of our resolve or a threat to our freedom to be given non-imperative (transparent and explainable) nudges towards certain actions?

If you answered yes:

Is advertising tyrannical? Is advisement? Is peer pressure? Do there exist un-designed interfaces, or choices without defaults and arguments embedded in their representation? Are some other entities currently designing decisions unilaterally, and is that okay? Are “nudges” really categorically similar to “shoves”?

There are other, maybe more interesting concerns:

  • Is the Choice Architect so super-rational that we know his nudges are well-informed? (No, of course not.)
  • Will the Choice Architect attempt to design decisions that will especially benefit the Choice Architect? (Sure, expect it.)

These are reasonable concerns, and ones that can be measured and monitored. A promoted decision that turns out not to be beneficial ought to be 1) identifiable as the promoted option and 2) traceable to its sub-optimal outcome.

To that end, smart Choice Architecture is:

  • Clear/Concise: Eliminate as much noisy, complicated rule-making as possible. Otherwise things will never work how you think they will.
  • Documented: Instances of Choice Architecture should be recorded and perhaps subject to review by other groups.
  • Incremental: Sweeping changes make it hard to see which part of a change has worked.
  • Suggestive but not dictatorial: Bloomberg’s soda ban is not soft paternalism. It’s old fashioned, hard stuff. This is a constant misunderstanding by critics.
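To make the “Documented” property concrete, here is a minimal sketch (entirely my own illustration; the function and log names are hypothetical, not from Sunstein or anyone else) of a nudge that always records which option it promoted, so that a sub-optimal outcome can later be traced back to the promotion:

```python
# Minimal illustrative sketch of an auditable nudge: every presented choice
# records which option was promoted as the default, so reviewers can later
# trace outcomes back to the choice architect's decision.

audit_log = []

def present_choice(options, promoted):
    """Offer every option, pre-select one as the default, and log the nudge."""
    assert promoted in options, "the promoted default must be a real option"
    audit_log.append({"options": list(options), "promoted": promoted})
    # A nudge, not a shove: all other options remain available;
    # only the default changes.
    return promoted

default = present_choice(["opt-in", "opt-out"], promoted="opt-in")
print(default)        # the pre-selected default shown to the user
print(audit_log[-1])  # the reviewable record of what was promoted
```

The point of the log is exactly the review-by-other-groups requirement above: the nudge is transparent because the record of it exists independently of the interface that delivered it.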


Another minor point about the politics of libertarian paternalism:

This way of thinking really is useful. But to be worthy of its name, “libertarian paternalism” must go farther than Brooks does in insisting on a bright line between enlightened defaults and paternalistic mandates with no opt-out.

As well, a true adherent of “libertarian paternalism” wouldn’t just focus, as Brooks does, on ways that government can increase its paternalism. The appeal of the philosophy is partly rooted in the respect it has for individual autonomy — the libertarian half of the name — and if we’re going to embrace the philosophy, various paternalistic policies ought to be made more libertarian.

I ought to be permitted to buy the light bulb of my choice if I seek it out and pay for its externalities; to smoke marijuana if I am fully aware of the health risks; to purchase only catastrophic medical insurance if my considered preference is paying out of pocket for routine doctor’s visits and minor procedures; and to buy raw milk and unpasteurized cheeses if that’s my thing.

In theory, libertarian paternalism would seem to present libertarian-minded Americans and paternalism-minded Americans with compromises that would leave both groups better off overall, and advance the common good. In practice, it often seems like advocates of libertarian paternalism want libertarians to countenance more paternalism without paternalists granting more liberties.

That’s no way to build a coalition.


V. Misc.

I was looking back through my post “That Vision Thing“, which I wrote before I read any Jaynes but which now has a weird vibe under a Jaynesian reading.

I realized that I let November pass by without my college tradition of marathoning conspiracy theory movies. I feel horrible for turning my back on that activity. If anyone has some suggestions, by all means ping me. I prefer very high production quality, but I’ll take a huge range of batshit-ness of content.

Another thing I’m letting slip: Global Game Jam entrance date is upon us. I won’t be going back to Pittsburgh to win the old local one (it’s expensive to get there, the ends don’t justify it anymore really), and I’m not interested in doing it in NYC with strangers to be honest. Maybe I’ll jam out a little game with my girlfriend on the same weekend, instead.

2014 Book to look forward to: “The Gameful World“, by Steffen P. Walz and Sebastian Deterding (author of the part I quotes). In the meanwhile, “Manifesto for the Ludic Century” by Zimmerman and, curated by Chaplin, responses to the Manifesto from the industry/academics. (Note: Zimmerman and Katie Salen’s “Rules of Play” is one of the few probably-actually-canon books of “Game Design Fundamentals”.)

“MIT Media Lab Tool Lets Anyone Visualize Unwieldy Government Data”

The Bicameral Mind

Note: I was structurally changing the blog a bit. Nothing seems to be broken, but who really knows anything. Apologies in advance.



The speculative thesis which I shall try to explain in this chapter- and it is very speculative- is simply an obvious corollary from what has gone before. The bicameral mind is a form of social control and it is that form of social control which allowed mankind to move from small hunter-gatherer groups to large agricultural communities. The bicameral mind with its controlling gods was evolved as a final stage of the evolution of language. And in this development lies the origin of civilization. 

This is the rest of my summary of Book One (of three) of The Origin of Consciousness. Here continues Jaynes’ case for gods talking to ancient men, and for consciousness (as he defines it) being a new phenomenon. After this, I’m going to switch tracks for a little while.


The Brain (briefly)

Most parts of the brain are generally bilaterally symmetrical (redundancy is great for robustness)- with the exception of the higher, more recent layers. In fact, although both hemispheres of the brain are independently capable of understanding language, only the left hemisphere (generally) houses the common language centers (Wernicke’s area, Broca’s area, etc.). Further, Jaynes contends that stimulation of the right-hemisphere areas mirroring the left’s language centers can create (in some people) verbal hallucinations (although, as a caveat, stimulating other places may also create this effect). Jaynes argues that these higher, non-symmetric layers of the brain are highly plastic, and thus the environment can allow for a wide variety of plausible structural configurations.

Psychotic minds are also used as an example of a kind of mind possibly closer to the minds of antiquity. People who hear voices in their heads feel a closeness to, and a compulsion from, those voices. These voices may feel realer than anything else. As far as Jaynes is concerned, such voices may have been common in the human population not terribly long ago. Perhaps this kind of mind was stimulated in parts of the brain now dormant or re-purposed in the holistic, relational “right brain”, and instructed back to the ancient man’s awareness through verbal communication to the “left brain” when stressed.


The Iliad

Now, as I’ve said before, Kevin Simler at Melting Asphalt is currently covering Jaynes much more elegantly than my notes here. His series isn’t done yet, but at this point we’ve hit an overlap. I’ll instead quote his take on the Iliad:

[…] Most translators assume that the ancient Greeks had the same mentality that we have today, with roughly the same concepts for introspecting and expressing mental concepts. But Jaynes says we can’t take this for granted, and I think he’s right. So what happens when we abandon that assumption?

[…] As he sees it, the Greeks only developed a modern mentality around 600 B.C. Before that time, e.g. in the Iliad (~1200 to ~900 B.C.), they had very different mental concepts. The earlier Greeks may have used the same words as the later Greeks, but the earlier meanings were much more literal.

The Iliad, as seen by Jaynes, portrays a folk-theory-of-mind that’s fragmented into a variety of different cognitive ‘organs’ and localized in different parts of the body.

Ancient Greek “Organs”, paraphrased from Melting Asphalt:

The phren (or plural, phrenes), which referred originally to the lungs or the breath. What cognitive process took place ‘in’ the phrenes? A: Being surprised by something, or having a surprise realization. […] Greeks might have considered surprise to take place ‘in’ the breath.

The noos (or nous), referring originally to sight or the eyes. ‘Seeing things’ obviously makes sense as a function of the noos, but by metaphorical extension, so does perception (a kind of abstract ‘seeing’), and perhaps memory and imagination as well. We still speak of an organ like this, on occasion, as “the mind’s eye.” But although the Greek’s noos was localized in the head, it was not a general-purpose cognitive organ. Very important functions like judgment and decision-making, for example, can’t take place in the noos, even metaphorically, just as today your mind’s eye can’t ‘decide’ to do something.

The thumos, localized sometimes in the chest, which was a decision-making organ (of sorts) capable of initiating action, especially on the basis of emotions. Gods would sometimes “cast strength” in a person’s thumos, which would rouse him to action. Often a warrior would consult his thumos to see if he was ready to fight.

The psyche, the word that would later come to mean ‘soul’ or ‘mind’ as we think of it today, and from which we derive “psychology,” the study of mind. But in the Iliad, psyche referred not to a cognitive structure at all, but rather to a “life substance” akin to blood or breath. A warrior, for example, bleeds his psyche out onto the ground.

What these examples illustrate is that, during the Iliadic period, all the Greek words for mental concepts had concrete meanings, and most were associated with specific parts of the body. But by the time of Socrates in ~500 B.C., many of them (phren, noos, and psyche in particular) had coalesced in meaning, so that they all came to refer to the modern concept of the ‘soul’ or ‘mind.’ Thus did the Greeks slowly and gradually develop their notion of a Unified Private Headspace, a notion that’s still with us to this day, part of Western civilization’s classical inheritance.


[…] According to Jaynes, the ancient Greeks had no word for ‘body.’

Yes, the more modern Greeks had soma, which we still use today, e.g., when referring to ‘somatic’ cell lines or ‘psychosomatic’ illnesses. But in the Iliad, soma referred only to a dead body, that is, to a corpse. Of course the Iliadic Greeks had words for all the different parts of the body, but no word to refer to the entire thing.

This is very illuminating. Without the notion of a single unified ‘mind’, there’s simply no demand in the language for a word that refers to the body. A man simply was his body. It would be redundant to say that Achilles’ body was tired. Achilles and his body are synonymous, and he is tired — period, end of story. I don’t need to say “My body is covered in paint,” when I could simply say, “I’m covered in paint.”

It’s the same way we almost never talk about the ‘bodies’ of our pets. It sounds strange to say, “Fido’s body is tired,” because we don’t typically think about a dog as having a mind separate from and independent of its body. Fido is just one big, continuous, organic process.


Language and Civilization

Jaynes doesn’t seem to think that sophisticated human language is that old. Climate change may have acted as a selective pressure on language development. Each new wave of words creates new perceptions and attentions, resulting in new communication and cultural shifts that ought to show up in the archaeological record.

Communication among primates is mostly postural/visual, but newer, darker climates forced a shift into auditory signaling among early humans as well. “Speech” first began as “the final sounds of intentional calls differentiating on the basis of intensity.” Higher intensity sounds would indicate a nearer (immediate) threat, for example, but lower intensity sounds might indicate far-ness. Higher-intensity and low-intensity vowel-sounds are early modifiers, modifiers without nouns. This age ends around 40,000 BCE, when the age of retouched tools/weapons began.

It wasn’t until after the evolution of modifiers to the calling system that modifiers could be applied to other, less intense communication. Colder climates and increased need to hunt drove the need for communication of hand-axe specialty. “Sharper” and other commands (modifiers of some call signifying action) could have come into being. Tools exploded from 40,000 to 25,000 BCE.

Animal drawings came into existence from 25,000 to 15,000 BCE, in an age when nouns were invented: nouns for animals early on, and later nouns for things: pottery, pendants, etc., which proliferated as we gained words to describe them and to command and teach how to make them.

Verbal hallucinations may have become “a side effect of language comprehension which evolved by natural selection as a method of behavior control.” Without being able to narratize the situation after being issued a command by his chief, early man instead imposes self-control through a hallucinatory auditory loop re-issuing the command. “Instinctive”, common behavior needs no language, but new, complex learned activities do require a cognitive innovation to maintain (temporal priming).

Around 10,000 to 8,000 BCE, proper names first occurred, during a period of global warming, when populations were more stable (and increasingly sedentary) and more people needed to be interacted with than ever before. Giving a person a name also molds the idea of a mental recreation of that person in her absence. Ceremonial graves became common (although they had existed before, on occasion). Further, the hallucination is now considered a social interaction.

The Natufian culture, Jaynes argues, was not conscious- its people were still signal-bound, stimulus-response creatures without an ability to narratize. Hallucinatory voices, whether perceived to be the leader’s or the person’s own, were capable of creative problem solving. Stress-inducing events may have triggered these hallucinations: the death of loved ones, for instance. Double-burials might follow the death and then the voice-death (perhaps, anyway). The death of the local leader, whose voice kept the social group together, was a social blow that afflicted everyone involved. Dead kings might still act as living gods.

Over at Ribbonfarm, Simler starts to touch upon Book Two of Origins in the excellent post “Projected Presence“:

How could a basalt statue, or a small wooden figurine, command such power and attention that it would come to be worshipped? What kind of human would “bow down” to such an artifact, or attempt to “serve” it? And how could the practice become so common, in the early-historic Levant, as to require a special injunction against it — in no less privileged a location than the Ten Commandments?

In his mind-rending epic The Origin of Consciousness, Julian Jaynes offers an answer to all of these questions. It’s an answer that’s hard to take seriously, but worth examining — if for no other purpose than to expand our hypothesis-space.

According to Jaynes, idols were worshipped in the early Biblical period because they were hallucinogens, i.e., triggers for inducing hallucinations of the gods they represented. When a worshipper concentrated on an idol, his god would often literally appear before him — sometimes in visual form, but more often as an auditory hallucination. Moreover, if we take the theory in full, these hallucinations were the means by which the right hemisphere of the brain communicated to the left hemisphere.

Certainly, this is crazy. And my intention is not to argue that it’s true. Instead, I merely wish to point out, following Daniel Dennett, that it’s a theory worth taking seriously. It’s a legitimate explanation, put forward in earnest by a serious scholar, in full light of the archaeological record. And the fact that it’s not dismissible out-of-hand tells us something important: that the mentality of early-historic humans was profoundly different from ours — inscrutable and perhaps unknowable.


But that’s it for me on this, for a bit.

Julian Jaynes On What Consciousness Is(n’t)

Again, I’m reading Jaynes’ Origin of Consciousness in the Breakdown of the Bicameral Mind because it’s an odd and influential work, described as genius and madness by the same people. I’ve found it a very accessible and interesting read, regardless of the literal truth claims.


Metaphors and Understanding

 Generations ago we would understand thunderstorms perhaps as the roaring and rumbling about in battle of superhuman gods. We would have reduced the racket that follows the streak of lightning to familiar battle sounds, for example. Similarly today, we reduce the storm to various supposed experiences with friction, sparks, vacuums, and the imagination of bulgeous banks of burly air smashing together to make the noise. None of these really exist as we picture them. Our images of these events of physics are as far from the actuality as fighting gods. Yet they act as the metaphor and they feel familiar and so we say we understand the thunderstorm.

So, in other areas of science, we say we understand an aspect of nature when we say it is similar to some familiar theoretical model.

Theory: relationship of the model to the things the model is supposed to represent. “Bohr’s theory was that all atoms were similar to his solar system-like model.” The theory is not true. But a model is never true or false. “A theory is thus a metaphor between a model and data. And understanding in science is the feeling of similarity between complicated data and a familiar model.”

Consciousness has often been poorly defined by metaphorical thinking: geological metaphors (with the unconscious lying beneath), chemical metaphors thereafter (James Mill, Wundt, Titchener), mechanical/steam-powered metaphors, and so on.

Finally, an analog is a model, but a special kind of model wherein every point is generated by a corresponding point in the thing it is an analog of. Example: a map, which is not a hypothetical metaphor for an unknown, the way Bohr’s model is.


What Consciousness Isn’t

Obviously not:

  • Consciousness is not “Reactivity” (the thing you also lose when you get knocked on the head and “lose consciousness”)
  • Consciousness is not “The sum total of mental processes occurring now” (Titchener’s definition). As Jaynes later says, “Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of.”
  • It is not behind all of the operations we perform (we don’t want acute finger awareness while playing the piano, or acute letter awareness while reading: these are supposed to be lower-level operations than where our focus is).
  • Probably not a property of matter, or protoplasm, or a metaphysical imposition. Not convincingly a helpless spectator.

Less obviously:

  • Consciousness is not a copy of experience: can you really recall extremely familiar rooms with eidetic detail? Are your memories even in the first person, looking out of your eyes? Or do you imagine some vague version of yourself as others presumably see you?
  • Consciousness is not necessary for concepts. You may have a concept of a “tree”, but so, evidently, does a dog or a squirrel. “Every bee has a concept of a flower” as a generalizable category of things.
  • Consciousness is not necessary for learning. Pavlov.
  • Consciousness is not necessary for thinking. “One does one’s thinking before one knows what one is thinking about.”
  • Consciousness is not necessary for reason. Incubation, for example.

If our reasonings have been correct, it is perfectly possible that there could have existed a race of men who spoke, judged, reasoned, solved problems, indeed do most of the things that we do, but who were not conscious at all.

The question Jaynes suggests is: if consciousness isn’t any of these things, what is it, and does it even exist? Jaynes believes it does. The next chapter goes into Metaphors, which I started on Monday.


The Features of Consciousness

  1. Spatialization: “The first and most primitive aspect of consciousness is what we already have had occasion to refer to, the paraphrand of almost every mental metaphor we can make, the mental space which we take over as the very habitat of it all. […] When we introspect (a metaphor of seeing into something), it is upon this metaphorical mind-space which we are constantly renewing and ‘enlarging’ with each new thing or relation consciousized.”
  2. Excerption: “In consciousness, we are never ‘seeing’ anything in its entirety. […] We excerpt from the collection of possible attentions to a thing which comprises our knowledge of it.”
  3. The Analog ‘I’: “A most important ‘feature’ of this metaphor ‘world’ is the metaphor we have of ourselves, the analog ‘I’, which can ‘move about’ vicarially in our ‘imagination’, ‘doing’ things that we are not actually doing.”
  4. The Metaphor ‘Me’: Our third person vision of ourselves.
  5. Narratization: We build stories out of our own analog “I”. A stray fact is narratized to fit with some other stray fact.
  6. Conciliation: “What I am designating by conciliation is essentially doing in mind-space what narratization does in mind-time or spatialized time. It brings things together as conscious objects just as narratization brings together things as a story. […] In conciliation we are making excerpts or narratizations compatible with each other, just as in external perception the new stimulus and the internal conception are made to agree.”
    [example, because this one is odd]: “If I ask you to think of a mountain meadow and a tower at the same time, you automatically conciliate them by having the tower rising from the meadow. But if I ask you to think of a mountain meadow and an ocean at the same time, conciliation tends not to occur and you are likely to think of one and then the other. You can only bring them together by a narratization. Thus there are principles of compatibility that govern this process, and such principles are learned and are based on the structure of the world.”

Anatomy of a Metaphor

This week’s posts are likely to be mainly about my recent book reading.

Surfaces and Essences argues that analogy is the basis of all thinking. This was an idea I already entertained, which may be why I found the book a bit too repetitive, although some of the examples were fun (e.g. the zeugma: “You are free to execute your laws, and your citizens, as you see fit.”). I will probably still finish the book, but it’s really my #2 reading priority, usurped by the aggressive (and interesting!) histrionics of The Origin of Consciousness.

The Origin of Consciousness in the Breakdown of the Bicameral Mind argues that analogical thinking, enabled by complex language, is the basis of consciousness [which has a chapter devoted to what it is not, coming soon], and that while our ancestors were often capable of a kind of logic, they were not conscious, not capable of narratizing their inner lives, until roughly three thousand years ago, after a certain linguistic threshold (and I will run through how this is not exactly the Whorfian hypothesis at another time). Before this consciousness was possible, there was a two-tiered proto-consciousness, wherein verbal hallucinations (“Gods”) were the key experience in complex executive decision-making. The breakdown of this bicameral mind occurred around 1200 BC, due to “chaotic social disorganizations, to population, and probably to the success in writing in replacing the auditory mode of command.” Needless to say, it’s a tall order.


Parts of a Metaphor

Metaphor was an important concept to both books. Origins spends some ink breaking down the idea of metaphor into parts.

A metaphor is composed of a well-understood metaphier and a less understood metaphrand. The metaphor intends to give the target metaphrand some of the attributes of the metaphier (the vocabulary is modeled on multiplier/multiplicand). Deeper still, metaphiers have paraphiers (connotations and attributes), and metaphrands have paraphrands (the connotations they receive).

Consider the metaphor that the snow blankets the ground. The metaphrand is something about the completeness and even thickness with which the ground is covered with snow. The metaphier is a blanket on a bed. But the pleasing nuances of this metaphor are in the paraphiers of the metaphier, blanket. These are something about warmth, protection, and slumber until some period of awakening. These associations of blanket then automatically become the associations or paraphrands of the original metaphrand, the way the snow covers the ground. And we thus have created by this metaphor the idea of the earth sleeping and protected by the snow cover until its awakening in spring. All this is packed into the simple use of the word ‘blanket’ to pertain to the way snow covers the ground.
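Purely as an illustrative aid (my own, not anything from Jaynes), the four terms can be arranged as a toy data structure; the class, field names, and strings below are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Metaphor:
    metaphrand: str   # the less-understood target of the metaphor
    metaphier: str    # the better-understood source it borrows from
    paraphiers: list = field(default_factory=list)  # connotations the metaphier carries

    def paraphrands(self):
        """The paraphiers, projected back onto the metaphrand."""
        return [f"{p} -> now felt of: {self.metaphrand}" for p in self.paraphiers]

# The snow/blanket example from the text:
snow = Metaphor(
    metaphrand="the way snow covers the ground",
    metaphier="a blanket on a bed",
    paraphiers=["warmth", "protection", "slumber until awakening"],
)
for p in snow.paraphrands():
    print(p)
```

The point of the sketch is only that the paraphrands are derived, not chosen: they fall out automatically once the metaphier (with its connotations) is picked.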

A cardinal property of an analog is that the way it is generated is not the way it is used — obviously. The map-maker and map-user are doing two different things. For a map-maker, the metaphrand is the blank piece of paper on which he operates with the metaphier of the land he knows and has surveyed. But for the map-user, it is just the other way around. The land is unknown; it is the land that is the metaphrand, while the metaphier is the map which he is using.

And so with consciousness. Consciousness is the metaphrand when it is being generated by the paraphrands of our verbal expressions. But the functioning of consciousness is, as it were, the return journey. Consciousness becomes the metaphier full of our past experience, constantly and selectively operating on such unknowns as future actions, decisions, and partly remembered pasts, on what we are and yet may be. And it is by this generated structure of consciousness that we understand the world.


Let me summarize as a way of ‘seeing’ where we are and the direction in which our discussion is going. We have said that consciousness is an operation rather than a thing, a repository or a function. It operates by way of analogy, by way of constructing an analog space with an analog ‘I’ that can observe that space, and move metaphorically in it. It operates on any reactivity, excerpts relevant aspects, narratizes and conciliates them together in a metaphorical space. Conscious mind is a spatial analog of the world and mental acts are analogs of bodily acts. Consciousness operates only on objectively observable things. Or, to say it another way with echoes of John Locke, there is nothing in consciousness that is not an analog of something that was in behavior first.

Tomorrow: Jaynes’ delineation of what consciousness is/isn’t.