I’ve been overworking a bit. I still wrote a little, but nothing too coherent. (What else is new.)
Part of my problem was that I temporarily stopped flying, and those three-hour sessions of confinement (plus the free drinks) were a big part of my usual writing ritual for this blog. I’m in the air again, so here we are.
I’ve been reading Piketty’s Capital in the Twenty-First Century, like everyone else on the planet. I’m interested in finally reading Perez’s Technological Revolutions and Financial Capital (there are plenty of notes on it around the internet, so I may not read the source material directly). I’ve purchased Sandy Pentland’s Social Physics, which I might put notes up on if it’s sufficiently different and interesting.
(The next post that looks ready to finish is dipping back into some counterculture stuff, but this time about games.)
Broadly speaking, there are two types of intelligences I expect to wrangle with regularly on a given project. They can speak to each other, albeit a little clunkily.
The first kind is not very industrious but is very adaptable. We like to put them to work on fuzzier, more poorly-defined tasks.
The second kind is very literal-minded but incredibly industrious. If a process can be crystallized enough, this kind of intelligence will blow the first kind right out of the water.
Communication errors abound, naturally. The onus of outlining project requirements and communicating effectively is still on the first intelligence, the humans.
The second kind is evolving to be able to emulate the first kind in a limited but growing subset of tasks.
II. “Knowledge but no theories”
Taleb claims that “theory” is fragile (despite being a hedgehog himself, he is explicitly against totalizing schemes). “Phenomenology” is more robust, because it makes fuller contact with reality. “Heuristics” and small fox-sized bits of practical knowledge are antifragile, since each little piece can be shaped by its contact with reality, and whole support beams of metaphysics and conjecture are unnecessary and can even be harmful. Heuristics can benefit from errors, phenomenology can withstand errors, but theories are generally damaged by errors.
We often build computers to produce theories that we can understand, and to relay those ideas back to us in the language we provide for them.
It’s interesting to consider computers with alien knowledge, never to be communicated to us.
But a lot of HFTs simply don’t know what their strategy really is. They hunt for patterns in prices or orders, find a pattern that seems to work, and trade on it until it stops making money. They don’t have any idea why the pattern exists. Sometimes it only exists for a few seconds. In fact, if they stop to gather enough information about the pattern to figure out why it’s there, it often disappears! Actually, there are deep mathematical (information-theoretical) reasons to suspect that lots of HFT opportunities can only be exploited by those who are willing to remain forever ignorant about the reason those opportunities exist. It’s mind-bending (and incredibly interesting).
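The loop described above (find a pattern, trade it blindly, abandon it the moment it decays, never ask why it existed) can be sketched as a toy. This is entirely illustrative and has nothing to do with real HFT infrastructure; the “pattern” here is a made-up one-step momentum signal, and the thresholds are arbitrary:

```python
def find_pattern(prices, lag=1):
    """Score a trivial 'pattern': does the sign of the last move
    predict the sign of the next move? Returns the hit rate."""
    hits = total = 0
    for i in range(lag, len(prices) - 1):
        last_move = prices[i] - prices[i - lag]
        next_move = prices[i + 1] - prices[i]
        if last_move == 0 or next_move == 0:
            continue  # skip flat ticks
        total += 1
        if (last_move > 0) == (next_move > 0):
            hits += 1
    return hits / total if total else 0.5


def trade_until_it_stops(price_stream, threshold=0.55, window=50):
    """Trade the momentum 'pattern' while its recent hit rate stays
    above threshold; quit the moment it decays, forever ignorant
    of why it worked. Toy accounting: unit positions, no costs."""
    history, position, pnl = [], 0, 0.0
    for price in price_stream:
        if history:
            # realize P&L on the position taken at the previous tick
            pnl += position * (price - history[-1])
        history.append(price)
        if len(history) < window:
            continue  # not enough data to score the pattern yet
        if find_pattern(history[-window:]) < threshold:
            break  # the edge is gone; move on, no questions asked
        move = history[-1] - history[-2]
        position = 1 if move > 0 else (-1 if move < 0 else 0)
    return pnl
```

On a trending series the loop keeps trading; on an alternating series the hit rate collapses and it quits immediately. The point of the toy is the last branch: the strategy never builds a model of why the pattern pays, it only monitors whether it still does.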
High Frequency Trading is an interesting avenue, and one that I know next to nothing about. David Brin has long been speculating aloud about the possibility that the money poured into HFT might accidentally drive the birth of emergent AI, and not the Friendly kind.
For perspective: Mike Travers also long ago wrote this bit on “Hostile AI” that exists here, today.
I was listening to a series on World War II.
Theory (Ideal, 1930s): Strategic bombing would destroy the enemy’s war-making capacity with precision, ending the war faster. Thus, bombing is a morally superior military option.
In Practice: Bombers cannot hit anything without hitting everything. Might as well area-bomb.
Rationalization: Fine. “Morale Bombing” will destroy the enemy’s will to fight and end the war faster through destruction and civilian outrage.
Practice: But “Terror Bombing” (that’s German for “Morale Bombing”) inflicted on us will not destroy our resolve, and in fact only makes us hungrier for retaliation.
Problem: Bombers can be attacked by fighters during the day.
Solution: Bomb at night.
Implied: “We will now target at night what we could not hit during the day”.
A more nearby example here.
Continuous communication and highly overlapping worldviews are conditions for attempts at increased informational efficiency: jargon, mutual trust, epistemic closure (and likely increased distrust of challengers to the bubble). The same systems that give us camaraderie among police officers or soldiers or members of the Intelligence community also give us institutionalized corruption, good ol’ boy networks, etc. These systems crop up because they work and they benefit their constituent actors. Explaining bad behavior doesn’t excuse it, of course.
In 2008, [Elizabeth] Warren joined a five-person congressional-oversight panel whose creation was mandated by the seven-hundred-billion-dollar bailout. She found that thrilling and maddening, too. In the spring of 2009, after the panel issued its third report, critical of the bailout, Larry Summers took Warren out to dinner in Washington and, she recalls, told her that she had a choice to make. She could be an insider or an outsider, but if she was going to be an insider she needed to understand one unbreakable rule about insiders: “They don’t criticize other insiders.” That’s about when Warren went on the Jon Stewart show, and you get the sense that, over that dinner, she decided to run for office.