I had a great year last year, although I didn’t return to publishing much of anything here (3 pretty short posts in 2015).

The previous year (2014), I published 74 posts, some of which I was even pretty proud of. I felt that writing helped me to crystallize my thinking, even when I didn’t publish. When I did publish, the feedback was useful. I want that back this year. I’ll try for at least one post per week, on the usual topics. Whatever those are.



We consistently overestimate the power of our own judgment. There is no reason that you or I would have looked at the nascent automobile, airplane, radio, penicillin, or internet and understood any of the second-order implications that seem obvious now that they’ve happened. Few at the time had any real idea, even for years after the idea’s conception. We have records of unimpressed, confused, or even contemptuous responses by authority figures to these technologies as they developed. Many crackpot websites share quotes of these misjudgments to suggest that their particular wares are in the same category as other revolutionary, once-ridiculed technologies or ideas.

We also have many records of weird wishlists and wayward projections of where the “inevitable” and “totally not path dependent” technology tree will grow and blossom. Various tomorrowlands are forced to close by economic/technical constraints or changing priorities. (Vaguely related, and worth checking out: “Web Design: The First 100 Years”, and observations on longing for closed technological frontiers.) Jetpacks and flying cars, et cetera.

In any of the common threads of “progress” (economic, technological, or ethical) there is no particular reason that a given person would be on what we today might consider to be the right side of history. I certainly have preferences. There is no law of nature that carries my preferences into posterity.



Conflated schools of thought against dogmatism

There are a variety of philosophical responses to dogmatism and overconfidence in one’s beliefs. Here are a few that are often conflated with one another, laid out so that distinctions can be drawn later.

The relativist claims that there is no absolute truth at all, that each person can have some proposition p be true “only for them”. A lot of casual lip service is given to relativism but I think it is not usually espoused in good faith.

The epistemological skeptic holds that regardless of whether there is an absolute truth, our access to it is uncertain, and therefore no beliefs are justified. This is a pit many smart people have tried hard to navigate around, often with some form of Foundationalism (anchoring onto something that they insist we can actually know for certain and building from there).

There are other positions that sometimes get conflated, especially when stumbling through ethics. “Stumbling through”, by the way, is a kind way of portraying my relationship with ethics as a body of knowledge. The ethical nihilist believes that there are no moral facts, that nothing has moral properties; the emotivist claims that ethical statements are actually more like utterances to express attitudes (things are not “wrong”, but I might have really bad feelings about them); the cultural relativist has a more intersubjective understanding of the matter, where things are true in particular times and places, like the weather.

The fallibilist believes that “all beliefs are only, at best, fallibly justified”. This is different from skepticism in that there is some concept of satisficing: there is no final review and closing of the case, but there is a judgment of “provisionally true”. Descartes’ apparent search for a foundation of knowledge that is “firm and constant in the sciences”, avoiding that which is “not entirely certain and indubitable”, is admirable but too high a bar for actionable knowledge. I sympathize with the fallibilist.



Better Living Through the Sith Way

Fallibilism has implications. We can find a string of reasoning justifiable or provisionally true, so we don’t have to abandon knowledge or anything drastic like that. One of the implications of fallibilism is a tighter embrace of ‘bounded rationality’.

When discovering cognitive biases for the first time, I and many people I know believed that understanding these pitfalls would help us to transcend them. It’s a Jedi-like view of the world, sometimes calling itself “rationalism”: System 1 thinking should be suppressed, and System 2 thinking should be strengthened at all times. I eventually settled into a decidedly more Sith point of view (sometimes called postrationality): embrace System 1 for what it is, and work with it instead of against it. Realize that not only is the world insane but that I am probably insane, too. I should focus my efforts on harm reduction (e.g. intelligent environments, trusting experts, saving time/energy) instead of quixotically “becoming sane” at great cost with indiscernible benefits.

In my life last year this has led to:

I have reason to believe that these behaviors tend to contribute to circumstances I prefer. To my mind they are also an embrace of limitation, asking me to do less instead of more: less time and effort allocating investments (poorly), less time and effort idly reading the news, more automatic behaviors that contribute to my wellbeing.

To a happy 2016. Until next week.