
The Rust Age

ribbonfarm.com, 2007-2012

Venkatesh Rao

Ribbonfarm Inc.
2014

Copyright 2014 by Venkatesh Rao


All rights reserved. This book or any portion thereof may not be reproduced or used in
any manner whatsoever without the express written permission of the publisher except
for the use of brief quotations in a book review or scholarly journal.
First Printing: 2014
www.ribbonfarm.com

Contents

Part 0: Legibility
  A Big Little Idea Called Legibility

Part 1: The Art of Refactored Perception
  The Art of Refactored Perception
  The Parrot
  Amy Lin and the Ancient Eye
  The Scientific Sensibility
  Diamonds versus Gold
  How to Define Concepts
  Concepts and Prototypes
  How to Name Things
  How to Think Like Hercule Poirot
  Boundary Condition Thinking
  Learning From One Data Point
  Lawyer Mind, Judge Mind
  Just Add Water
  The Rhetoric of the Hyperlink
  Seeking Density in the Gonzo Theater
  Rediscovering Literacy

Part 2: Towards an Appreciative View of Technology
  Towards an Appreciative View of Technology
  An Infrastructure Pilgrimage
  Meditation on Disequilibrium in Nature
  Glimpses of a Cryptic God
  The Epic Story of Container Shipping
  The World of Garbage
  The Disruption of Bronze
  Bay's Conjecture
  Hall's Law: The Nineteenth Century Prequel to Moore's Law
  Hacking the Non-Disposable Planet
  Welcome to the Future Nauseous
  Technology and the Baroque Unconscious
  The Bloody-Minded Pleasures of Engineering
  Towards a Philosophy of Destruction
  Creative Destruction: Portrait of an Idea

Part 3: Getting Ahead, Getting Along, Getting Away
  Getting Ahead, Getting Along, Getting Away
  The Crucible Effect and the Scarcity of Collective Attention
  The Calculus of Grit
  Tinker, Tailor, Soldier, Sailor
  The Turpentine Effect
  The World is Small and Life is Long
  My Experiments with Introductions
  Extroverts, Introverts, Aspies and Codies
  Impro by Keith Johnstone
  Your Evil Twins and How to Find Them
  Bargaining with your Right Brain
  The Tragedy of Wiio's Law
  The Allegory of the Stage
  The Missing Folkways of Globalization
  On Going Feral
  On Seeing Like a Cat
  How to Take a Walk
  The Blue Tunnel
  How Do You Run Away from Home?
  On Being an Illegible Person
  The Outlaw Sea by William Langewiesche
  The Stream Map of the World

Part 4: The Mysteries of Money
  The Mysteries of Money
  Ancient Rivers of Money
  Fools and their Money Metaphors
  Time and Money: Separated at Birth?
  The Eight Metaphors of Organization
  The Lords of Strategy by Walter Kiechel
  A Brief History of the Corporation: 1600 to 2100
  Marketing, Innovation and the Creation of Customers
  The Milo Criterion
  Ubiquity Illusions and the Chicken-Egg Problem
  The Seven Dimensions of Positioning
  Coloring the Whole Egg: Fixing Integrated Marketing
  How to Draw and Judge Quadrant Diagrams
  The Gollum Effect
  Peak Attention and the Colonization of Subcultures
  Acting Dead, Trading Up and Leaving the Middle Class
  Can Hydras Eat Unknown-Unknowns for Lunch?
  The Return of the Barbarian

Glossary

Part 0:
Legibility

This is an edited collection of the first five years of ribbonfarm (2007-2012), retroactively
named the Rust Age.
The Rust Age also generated a book, Tempo, and two ebooks: The Gervais Principle and
Be Slightly Evil.

A Big Little Idea Called Legibility


July 26, 2010
James C. Scott's fascinating and seminal book, Seeing Like a State:
How Certain Schemes to Improve the Human Condition Have Failed,
examines how, across dozens of domains, ranging from agriculture and
forestry, to urban planning and census-taking, a very predictable failure
pattern keeps recurring. The pictures below, from the book (used with
permission from the author) graphically and literally illustrate the central
concept in this failure pattern, an idea called legibility.

States and large organizations exhibit this pattern of behavior most dramatically, but individuals frequently exhibit it in their private lives as well.
Along with books like Gareth Morgan's Images of Organization, Lakoff and Johnson's Metaphors We Live By, William Whyte's The Organization Man and Keith Johnstone's Impro, this book is one of the anchor texts for this blog. If I ever teach a course on Ribbonfarmesque Thinking, all these books would be required reading. Continuing my series on complex and dense books that I cite often but find too difficult to review or summarize, here is a quick introduction to the main idea.
The Authoritarian High-Modernist Recipe for Failure
Scott calls the thinking style behind the failure mode "authoritarian high modernism," but as we'll see, the failure mode is not limited to the brief intellectual reign of high modernism (roughly, the first half of the twentieth century).
Here is the recipe:

1. Look at a complex and confusing reality, such as the social dynamics of an old city
2. Fail to understand all the subtleties of how the complex reality works
3. Attribute that failure to the irrationality of what you are looking at, rather than your own limitations
4. Come up with an idealized blank-slate vision of what that reality ought to look like
5. Argue that the relative simplicity and platonic orderliness of the vision represents rationality
6. Use authoritarian power to impose that vision, by demolishing the old reality if necessary
7. Watch your rational Utopia fail horribly

The big mistake in this pattern of failure is projecting your subjective lack of comprehension onto the object you are looking at, as "irrationality." We make this mistake because we are tempted by a desire for legibility.

Legibility and Control


Central to Scott's thesis is the idea of legibility. He explains how he stumbled across the idea while researching efforts by nation states to settle or "sedentarize" nomads, pastoralists, gypsies and other peoples living non-mainstream lives:
The more I examined these efforts at sedentarization,
the more I came to see them as a state's attempt to make a
society legible, to arrange the population in ways that
simplified the classic state functions of taxation,
conscription, and prevention of rebellion. Having begun to
think in these terms, I began to see legibility as a central
problem in statecraft. The pre-modern state was, in many
crucial respects, particularly blind; it knew precious little
about its subjects, their wealth, their landholdings and
yields, their location, their very identity. It lacked anything
like a detailed map of its terrain and its people.
The book is about the 2-3 century long process by which modern states reorganized the societies they governed, to make them more legible to the apparatus of governance. The state is not actually interested in the rich functional structure and complex behavior of the very organic entities that it governs (and indeed, is part of, rather than above). It merely views them as resources that must be organized in order to yield optimal returns according to a centralized, narrow, and strictly utilitarian logic. The attempt to maximize returns need not arise from the grasping greed of a predatory state. In fact, the dynamic is most often driven by a genuine desire to improve the lot of the people, on the part of governments with a popular, left-of-center mandate. Hence the subtitle (don't jump to the conclusion that this is a simplistic anti-big-government conservative/libertarian view though; this failure mode is ideology-neutral, since it arises from a flawed pattern of reasoning rather than values).
The book begins with an early example, "scientific forestry" (illustrated in the picture above). The early modern state, Germany in this case, was only interested in maximizing tax revenues from forestry. This meant that the acreage, yield and market value of a forest had to be measured, and only these obviously relevant variables were comprehended by the statist mental model. Traditional wild and unruly forests were literally illegible to the state surveyor's eyes, and this gave birth to "scientific" forestry: the gradual transformation of forests with a rich diversity of species growing wildly and randomly into orderly stands of the highest-yielding varieties. The resulting catastrophes, better recognized these days as the problems of monoculture, were inevitable.
The picture is not an exception, and the word "legibility" is not a metaphor; the actual visual/textual sense of the word (as in readability) is what is meant. The book is full of thought-provoking pictures like this: farmland neatly divided up into squares versus farmland that is confusing to the eye, but conforms to the constraints of local topography, soil quality, and hydrological patterns; rational and unlivable grid-cities like Brasília, versus chaotic and alive cities like São Paulo. This might explain, by the way, why I resonated so strongly with the book. The name "ribbonfarm" is inspired by the history of the geography of Detroit and its roots in ribbon farms (see my About page and the historic picture of Detroit ribbon farms below).

High-modernist (think Bauhaus and Le Corbusier) aesthetics necessarily lead to simplification, since a reality that serves many purposes presents itself as illegible to a vision informed by a singular purpose. Any elements that are non-functional with respect to the singular purpose tend to confuse, and are therefore eliminated during the attempt to "rationalize." The deep failure in thinking lies in the mistaken assumption that thriving, successful and functional realities must necessarily be legible. Or at least more legible to the all-seeing statist eye in the sky (many of the pictures in the book are literally aerial views) than to the local, embedded eye on the ground.
Complex realities turn this logic on its head; it is easier to
comprehend the whole by walking among the trees, absorbing the gestalt,
and becoming a holographic/fractal part of the forest, than by hovering
above it.
This imposed simplification, in service of legibility to the state's eye,
makes the rich reality brittle, and failure follows. The imagined
improvements are not realized. The metaphors of killing the golden goose,
and the Procrustean bed come to mind.
The Psychology of Legibility
I suspect that what tempts us into this failure is that legibility quells
the anxieties evoked by apparent chaos. There is more than mere stupidity
at work.
In Mind Wide Open, Steven Johnson's entertaining story of his experiences subjecting himself to all sorts of medical scanning technologies, he describes his experience with getting an fMRI scan. Johnson tells the researcher that perhaps they should start by examining his brain's baseline reaction to meaningless stimuli. He naively suggests a white-noise pattern as the right starter image. The researcher patiently informs him that subjects' brains tend to go crazy when a white noise (high Shannon entropy) pattern is presented. The brain goes nuts trying to find order in the chaos. Instead, the researcher says, they usually start with something like a black-and-white checkerboard pattern.

If my conjecture is correct, then the High Modernist failure-through-legibility-seeking formula is a large-scale effect of the rationalization of the fear of (apparent) chaos.
[Techie aside: Complex realities look like Shannon white noise, but in terms of deeper structure, their Kolmogorov-Chaitin complexity is low relative to their Shannon entropy; they are like pseudo-random numbers rather than real random numbers; I wrote a two-part series on this long ago, that I meant to continue, but never did.]
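To make that aside concrete, here is a minimal sketch (mine, not from the original post; the generator, its constants, and the entropy estimate are illustrative assumptions) of how a short deterministic program can emit a bit stream whose measured Shannon entropy looks nearly maximal, even though its Kolmogorov-Chaitin complexity is bounded by a few lines of code plus a seed:

```python
import math
from collections import Counter

def lcg_bits(seed, n):
    """A tiny linear congruential generator: a few lines of code (low
    Kolmogorov complexity) that emit a statistically random-looking bit stream."""
    state = seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % (2 ** 31)
        yield (state >> 15) & 1  # take one middle bit of the state per step

def empirical_entropy(symbols):
    """First-order Shannon entropy (bits per symbol) estimated from frequencies."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

stream = list(lcg_bits(seed=42, n=100_000))
print(f"empirical entropy ≈ {empirical_entropy(stream):.4f} bits/symbol")
# Prints a value close to 1.0 (the maximum for a binary alphabet), yet the whole
# stream can be regenerated from these few lines plus one seed: low
# Kolmogorov-Chaitin complexity hiding behind high apparent Shannon entropy.
```

The design point is simply that a frequency-based measurement sees "noise" while the generating rule underneath is tiny; the same gap is what the aside gestures at.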
The Fertility of the Idea
The idea may seem simple (though it is surprisingly hard to find words to express it succinctly), but it is an extraordinarily fertile one, and helps explain all sorts of things. One of my favorite unexpected examples from the book is the rationalization of people's names in the Philippines under Spanish rule (I won't spoil it for you; read the book). In general, any aspect of a complex folkway, in the sense of David Hackett Fischer's Albion's Seed, can be made a victim of the high-modernist authoritarian failure formula.
The process doesn't always lead to unmitigated disaster. In some of the more redeeming examples, there is merely a shift in a balance of power between more global and more local interests. For example, we owe to this high-modernist formula the creation of a systematic, global scheme for measuring time, with sensible time zones. The bewilderingly illegible geography of time in the 18th century, while it served a lot of local purposes very well (and much better than even the best atomic clocks of today), would have made modern global infrastructure, ranging from the railroads (the original driver for temporal discipline in the United States) to airlines and the Internet, impossible. The Napoleonic era saw the spread of the metric system; again an idea that is highly rational from a centralized bird's-eye view, but often stupid with respect to the subtle local adaptations of the systems it displaced. Again this displaced a good deal of local power and value, and created many injustices and local irrationalities, but the shift brought with it the benefits of improved communication and wide-area commerce.
In all these cases, you could argue that the formula merely replaced a set of locally optimal modes of social organization with a globally optimal one. But that would be missing the point. The reason the formula is generally dangerous, and a formula for failure, is that it does not operate by a thoughtful consideration of local/global tradeoffs, but through the imposition of a singular view as "best for all" in a pseudo-scientific sense. The high-modernist reformer does not acknowledge (and often genuinely does not understand) that he/she is engineering a shift in optima and power, with costs as well as benefits. Instead, the process is driven by a naive "best for everybody" paternalism that genuinely intends to improve the lives of the people it affects. The high-modernist reformer is driven by a naive-scientific Utopian vision that does not tolerate dissent, because it believes it is dealing in scientific truths.
The failure pattern is perhaps most evident in urban planning, a domain which seems to attract the worst of these reformers. A generation of planners, inspired by the crazed visions of Le Corbusier, created unlivable urban infrastructure around the world, from Brasília to Chandigarh. These cities end up with deserted empty centers populated only by the government workers forced to live there in misery (there is even a condition known as "Brasilitis," apparently), with slums and shanty towns emerging on the periphery of the planned center; ad hoc, bottom-up, re-humanizing damage control as it were. The book summarizes a very elegant critique of this approach to urban planning, and the true richness of what it displaces, due to Jane Jacobs.

Applying the Idea


Going beyond the book's own examples, the ideas shed a whole new light on other stories/ideas. Two examples from my own reading should suffice.
The first is a book I read several years back, by Nicholas Dirks, Castes of Mind: Colonialism and the Making of Modern India, which made the argument (originally proposed by the orientalist Bernard Cohn) that caste in the sense of the highly rigid and oppressive 4-varna scheme was the result of the British failing to understand a complex social reality, and imposing on it their own simplistic understanding of it (the British Raj is sometimes called the "anthropological state" due to the obsessive care it took to document, codify and re-impose as a simplified, rigidified, Procrustean prescription, the social structure of pre-colonial India). The argument of the book, obviously one that appeals to Indians (we like to blame the British or Islam when we can), is that the original reality was a complex, functional social scheme, which the British turned into a rigid and oppressive machine by attempting to make it legible and governable. While I still don't know whether the argument is justified, and whether the caste system before the British was as benevolent as the most ardent champions of this view make it out to be, the point here is that if it is true, Scott's failure model would describe it perfectly.
The second example is Gibbon's Decline and Fall of the Roman Empire, which I am slowly reading right now (I think it is going to be my personal Mount Everest; I expect to summit in 2013). Perhaps no other civilization, either in antiquity or today, was so fond of legible and governable social realities. I haven't yet made up my mind, but reading the history through the lens of Scott's ideas, I think there is a strong case to be made that the fall of the Roman empire was a large-scale instance of the legibility-failure pattern. Like the British 1700 years later, the Romans did try to understand the illegible societies they encountered, but their failure in this effort ultimately led to the fall of the empire.
Aside: if you decide to attempt Mount Everest along with me, take some time to explore the different editions of Gibbon available; I am reading a $0.99 19th century edition on my Kindle: all six volumes, with annotations and comments from a decidedly pious and critical Christian editor. Sometimes I don't know why I commit these acts of large-scale intellectual masochism. The link is to a modern, abridged Penguin edition.
Is the Model Relevant Today?
The phrase "high-modernist authoritarianism" might suggest that the views in this book only apply to those laughably optimistic, high-on-science-and-engineering high modernists of the 1930s. Surely we don't fail in these dumb ways in our enlightened postmodern times?
Sadly, we do, for four reasons:
1. There is a decades-long time lag between the intellectual high-water mark of an ideology and the last of its effects.
2. There are large parts of the world, China in particular, where authoritarian high-modernism gets a visa, but postmodernism does not.
3. Perhaps most important: though this failure mode is easiest to describe in terms of high-modernist ideology, it is actually a basic failure mode for human thought that is time- and ideology-neutral. If it is true that the Romans and British managed to fail in these ways, so can the most postmodern Obama types. The language will be different, that's all.
4. And no, the currently popular "pave the cowpaths" and behavioral-economic "choice architecture" design philosophies do not provide immunity against these failure modes. In fact paving the cowpaths in naive ways is an instance of this failure mode (the way to avoid it would be to choose to not pave certain cowpaths). Choice architecture (described as "Libertarian Paternalism" by its advocates) seems to merely dress up authoritarian high-modernism with a thin coat of caution and empirical experimentation. The basic and dangerous "I am more scientific/rational than thou" paternalism is still the central dogma.

[Another Techie aside: For the technologists among you, a quick (and very crude) calibration point should help: we are talking about the big brother of waterfall planning here. The psychology is very similar to the urge to throw legacy software away. In fact Joel Spolsky's post on the subject, "Things You Should Never Do, Part I," reads like a narrower version of Scott's arguments. But Scott's model is much deeper, more robust, more subtly argued, and more broadly applicable. I haven't yet thought it through, but I don't think lean/agile software development can actually mitigate this failure mode any more than choice architecture can mitigate it in public policy.]
So do yourself a favor and read the book, even if it takes you months
to get through. You will elevate your thinking about big questions.
High-Modernist Authoritarianism in Corporate and Personal Life
The application of these ideas in the personal/corporate domains actually interests me the most. Though Scott's book is set within the context of public policy and governance, you can find exactly the same pattern in individual and corporate behavior. Individuals lacking the capacity for rich introspection apply dumb 12-step formulas to their lives and fail. Corporations: well, read the Gervais Principle series and Images of Organization. As a point of historical interest, Scott notes that the Soviet planning model, responsible for many spectacular legibility failures, was derived from corporate Taylorist precedents, which Lenin initially criticized, but later modified and embraced.
Final postscript: these ideas have strongly influenced my book project, and apparently, I've been thinking about them for a long time without realizing it. A very early post on this blog (I think only a handful of you were around when I posted it), on the Harry Potter series and its relation to my own work in robotics, contains some of these ideas. If I'd read this book before, that post would have been much better.

Part 1:
The Art of Refactored Perception

The Art of Refactored Perception


May 31, 2012
When I made up the tagline, "experiments in refactored perception," back in 2007, I had no idea how deeply that line would come to define the essence of ribbonfarm. So in this first post in my planned month-long retrospective on five years in the game, I decided to look back on the evolution and gradual deepening of the idea of refactoring perceptions.
I've never attempted an overt characterization of what the phrase means, but over the years, I've explored it fairly systematically. This sequence of posts should help you appreciate what I mean by the phrase. I've arranged the sequence as a set of fairly natural stages:
Perceiving
1. The Parrot
2. Amy Lin and the Ancient Eye
3. The Scientific Sensibility
Preparing to Think
1. Diamonds versus Gold
2. How to Define Concepts
3. Concepts and Prototypes
4. How to Name Things
Thinking
1. How to Think Like Hercule Poirot
2. Boundary Condition Thinking
3. Learning from One Data Point
4. Lawyer Mind, Judge Mind
Writing
1. Just Add Water
2. The Rhetoric of the Hyperlink
3. Seeking Density in the Gonzo Theater
4. Rediscovering Literacy

Much of the refactoring happens in the second stage.


OODA for Thinking-by-Writing
I don't know if this is an accident, a case of context-specific rediscovery, or some unconscious channeling, but this is pretty much an OODA loop for thinking by writing. The "preparing to think" stage corresponds to the crucial orientation piece of OODA. As with OODA, this sequence is actually a ridiculously interconnected set of thought processes. Each of the four stages feeds back to each of the others.
Staring at this sequence, I begin to understand why I've never seriously considered attempting to teach others this writing-to-think model. Besides the obvious problem that I've been figuring it out myself, I don't think this is very teachable. It's just a crap-load of practice, to drive certain patterns deep into mental muscle memory.
I suppose some of these pieces are amenable to translation into how-to presentations, but I suspect the market for this comprises exactly 3.5 starving bloggers with perverse instincts. This is practically black-belt level training in How Not to Make Money Blogging. I suspect I manage to survive financially despite this model, not because of it.
But if somebody wants to PowerPoint-ize this material into a teaching
resource, you have my blessing. I only ask that you make the thing
publicly available.
I will probably slap a preface onto this sequence and Kindle-ize it into
a cheap e-book when I get some time.
The Retrospective Process
For those of you interested in how I am doing this retrospective, here's the brief description of the process so far. I started with a first-cut selection. Of over 300 posts, just around 70 made the first cut. I cut out everything that was badly written, off-voice or not part of a broader exploration theme. I also cut out stuff that I revisited in more solid ways later. I did not consider popularity at all, but most of the popular posts made the cut.
So it was definitely a very personal and autocratic selection.
The yield rate was depressingly low, at less than 25%. But the good
news is that it has been steadily increasing. As you will see from this and
upcoming posts, the lists are dominated by later posts. In the first couple
of years, I wrote an awful lot of posts I would now consider terrible.
After the selection, I sorted the set into 5-6 clusters, and forced myself
to completely uncouple the clusters (i.e., each post can belong in only one
cluster). I then sequenced each in some meaningful way. I will be doing
one post on each sequence.
It was surprisingly (and depressingly) easy to do the pruning. I
expected to spend many agonizing hours figuring out what to include and
what to exclude, but it took me about 15 minutes to do the cutting, and
another 15 minutes to do a first, basic sorting/clustering. The hardest part
is developing a narrative arc through the material to sequence each cluster.
On Voice
Yesterday, I posted a beta aphorism on Facebook that many people seemed to like: "integrity is an aesthetic, not a value."
A blogging voice is not just an expression of a coherent aesthetic perspective; it is also an expression of a certain moral stance. Developing a high-integrity blogging voice is about learning to recognize, in a moral sense, on-voice/off-voice drafts, and developing the discipline to systematically say no to off-voice material, no matter how tempting it is to post it, based on expedient considerations like topicality or virality. As your filters develop, you write fewer off-voice drafts to begin with. Eventually, you don't even think off-voice.
One of the hardest challenges for me in selecting posts for this month
of retrospectives was posts that were partly on-voice and partly off-voice.

I erred on the side of integrity and dropped most such posts, except for a
few that were logically indispensable in some sequence.
Learning to recognize off-voice stuff (especially while your voice is
still developing) is more like learning to be a tea taster than studying to be
a priest at a seminary.
Though I suppose, practiced at sophisticated levels, "what would Jesus do?" is an integrity aesthetic rather than a 0/1 litmus test. Few religious types seem to transcend the bumper-sticker value though.

The Parrot
August 13, 2007
This piece was written in Ithaca, in 2005, and is as accurate a phenomenological report of an actual mental response to real events as I am capable of. At the time I thought, and still do, that a very careful observation of your own thoughts as you react to sensory input is a very useful thing. Not quite meditation. Call it meditative observation. Stylistically, it is inspired by Camus.
- 1 -
From my window table on the second floor of the coffee shop, looking down at the Commons (the determinedly medieval, pedestrians-only town square of Ithaca) I saw the parrot arrive. It was large and a slightly dirty white. Its owner carefully set a chair on top of a table and the parrot hopped from his finger onto the back of the chair and perched there comfortably. I suppose the owner wanted to keep it out of the reach of any dogs. He gave it a quick second glance, and stepped inside a restaurant. The parrot ruffled its feathers a bit, looked around, preened a little (showing off some unexpected pink plumage on the back of its neck, hidden in the dirty white), and then settled down.
- 2 -
The Ithaca Commons is a ring of shops and restaurants around an open courtyard, occupying the city block between Green and Seneca streets. The shops are an artfully arranged sequence of mildly unexpected experiences. Tacky used clothing and dollar stores sit next to upscale kitchen stores, craft shops, art galleries and expensive restaurants. The central promise of the Commons is that of the Spectacle. Street musicians, hippies meditatively kicking hackeysacks, the occasional juggler: they all make their appearance in the Commons. A visibly political Tibetan store and experiential restaurants such as the Moosewood and Just a Taste complete the tableau. The Commons is crafted for the American liberal, a cocoon that gently reinforces her self-image as a more evolved, aware, and thoughtful creature than her parochial suburban, beer-guzzling, football-fan cousin.


But in any world, the presence of a large, dirty-white parrot is a definite non sequitur. Wall Street, Hollywood, sell-out Suburbia (and Exurbia), Southern Baptist congregations and the liberal Ithaca Commons are all equally at a loss to accommodate the parrot. The grab-bag of varied oppressed Others that mill about university towns, I suspect, would also be at a loss to handle the parrot. Those of us who claim to be governed by eclectic, deeply considered and original world views (and I count myself among these) are also forced to admit that for all our treasured iconoclasm, we cannot accommodate the parrot. We are therefore forced, out of sheer necessity, to look at it.
- 3 -
I am no deep observer of real life. When I work in public areas, it is for the steady supply of low-intensity human contact. The mass of unremarkable humanity does not register, except as a pleasant backdrop. Pretty girls, babies, dogs and notably ugly people do register, leaving a gentle and piquant trail of unexamined visual flavor. I am not a true people watcher.
I didn't quite know what to do with a parrot though, so I was forced to look at it. It triggered no runaway train of thought, so for a while it was just me and the parrot, separated by a pane of glass, and about fifty yards. The impression of parrot did not fade, get filtered away or get overwhelmed by free association. It lingered long enough that I began to watch. The parrot seemed happy. It sat there, awake, but not alert or wary. It looked straight ahead. Presumably it did not find the scene interesting enough to strain its neck.
- 4 -
I wonder how Hegel would have reacted to the parrot. Would it have triggered, through some improbable sequence of dominoes, a fresh insight concerning the Self and the Other? Would he have gazed inattentively at the parrot and chased gleefully after some new thought (bird… freedom…)? Would it be just another little nudge powering the inexorable progress of his snowballing philosophy of everything? Would it occur to him that whatever lofty abstractions it triggered, the parrot qua parrot would not make an appearance in the edifice he was building? Sadly, I must suspect that the thought would not have occurred to him.
To be fair, I must also suspect that the existentialists would have done no better, despite their protestations to the contrary. I must conclude that Camus would have looked at the parrot and instantly exulted, "There it is, the Absurd manifest!" The parrot would again have been lost, subsumed here by the Absurd. As far as the parrot is concerned, Camus and Hegel differ little.
- 5 -
The parrot, without its owner, was sitting there, qua parrot, indifferent to its impact on passersby. Most people looked at it. Some did a double take. One man stopped, turned to face it squarely and stared at it for a minute, as if waiting for it to acquire some significance. A decrepit old man in a wheelchair rolled by, glancing at it with a painfully slow motion, before letting his head sink again to his chest, weighed down, I suppose, by illness and unseen burdens. A black mother, pushing a stroller, walked by, glancing at the parrot without interest. I wonder why "black" registered.
A pretty girl in faded red pants stepped out of a shop, talking on a cell phone. She took in the parrot absent-mindedly, absorbed several network hops away. She exited my field of vision, stage right, but returned a few minutes later. This time she stopped and genuinely stared at the parrot before heading back into a shop.
A hippie, dread-locked and tie-dyed, stopped and grinned delightedly at it. There was no discernible transition from "see" to "grin," and something about that bothered me. There was something scripted about the response; her engagement of the parrot was not authentic.

- 6 -
You know you are a slave to the life of the mind if a phrase like "her engagement of the parrot was not authentic" crosses your mind quite naturally, and it takes you more than a minute to laugh.
But consider what it means if your response to the parrot is measured, seemingly scripted, or otherwise deliberate in any way. A mind with parrot on it should not look like anything recognizable. A frown might mean you are trying to rapidly assimilate the parrot, but in that case, the process of assimilation, rather than the parrot itself, must be occupying your mind. You cannot, at the same time, think parrot and engage in the task of wrapping up the parrot in a bundle of associations and channeling it to the right areas of long-term memory. The hippie's grin is equally symptomatic of a non-parrot awareness. The hippie is probably self-indulgently enjoying a validated feeling of "one must be one with nature" or something along those lines.
So an authentic engagement of the parrot must have an element of the unscripted in it. It can neither be deliberative, nor reactive. Furious and active thinking will not do. Nor the "Awww!" you might direct at a puppy. A puppy is a punch you can roll with.
- 7 -
Two moms with three babies wandered onto the scene. It being a nice day, the babies were visible, one squirming in the arms of its mother and the others poking their snouts out of the stroller. The mom carrying the baby stopped immediately upon spotting the parrot and approached it (she was the first to do so). As is the wont of moms, she immediately began trying to direct her infant's attention to the parrot, shoving its face within a foot of the parrot. Mothers are too engaged in scripting the experiences of their babies to experience anything other than the baby themselves. The parrot obliged with a display of orange (I suspect it was stretching, disturbed from its contemplative reverie). The baby, however, seemed entirely uninterested in the parrot. Perhaps the parrot was unclear to its myopic eyes, or perhaps it was simply no more worthy of note than any of the other exciting blobs of visual experience all around. At any rate, the mom stopped trying after a few moments, and the five of them rolled on.
The pretty girl in faded red pants was back. This time, she had two waitress friends along, and took a picture of the parrot with her cell phone. The three girls (the other two were rather dumpy looking, but I suppose it was the aprons) chattered for a bit and then stared at the parrot some more.
Two more pretty girls walked past, and though the parrot clearly registered, walked past without a perceptible turning of their heads. Something about that worried me. They were of the indistinguishable dressed-in-season species of young college girl that swarms all over American university towns. These could have been either Ithaca College or Cornell; I can't tell them apart. Two more of the breed walked by, again with the same non-reaction.
A black-guy-white-girl couple walked by. The girl turned to look at the bird as they walked past, while the guy looked at it very briefly. Shortly after, an absorbed black teenager walked by. She looked at it as she walked past, with no change in her expression. The parrot was clearly on Track Two. Track One continued thinking about whatever it was she was thinking about. I suppose parrot might have consciously registered with her a few minutes later, but she did not walk by again. Something about black responses to the parrot was sticking in my mind. The owner came back out of the store, carrying a cup of coffee.
- 8 -
Now, a parrot is not an arresting sort of bird. It does not have the ostentation of the peacock, the imposing presence of the ostrich or the latent lethality of a falcon or hawk. Even in context, at a zoo, a typical white parrot is not remarkable in the company of its more gaudy relatives. Any of these more dramatic creatures would, I suppose, instantly draw a big gawking crowd, perhaps even calls to the police. Undivided attention, active curiosity and action would certainly be merited ("try to feed him some of your bagel").
The parrot though, had neither the domesticated presence of a dog, nor the demanding presence of a truly unexpected creature. A dog elicits smiles, pats or studied avoidance, while an ostrich would certainly call for a cascade of conversation into activity, culminating in the arrival of a legitimate authority (though, I suppose, most communities would be hard pressed to generate a legitimate response to an ostrich. Cornell though, is an agricultural university, so I suppose eventually one of the many animal experts would arrive on the scene).
So a dog elicits a conventional ripple of cognitive activity as it progresses through the town square, soon displaced by other preoccupations. An ostrich presumably triggers a flurry of deliberation, followed by actual activity. So what does the parrot cause, living as it does in the twilight zone between conventionally expected and actionably unexpected? You cannot have the comfort of either action or practiced thoughts, with a parrot in your field of view. Yet, the parrot is not a threat, so you clearly cannot panic or be overwhelmed. The parrot, I think, lives in the realm of pure contemplation. The parrot is rare in adult life. For the child, everything is a parrot.
- 9 -
The return of the owner annoyed me briefly. With his return, the non sequitur instantly became an instance of the signature of the Commons: a spectacle. The owner was clearly used to handling his parrot. He had it hop on his hand again and swung it up and down. The parrot spread its wings and did various interesting things with its feathers which I do not have the vocabulary to describe. With the owner, the context of a small bubble-zoo had arrived. The owner chatted with the girl in faded red pants, who had come out again. Fewer adults stared. The ensemble was now clearly within the realm of the expected. Most people walked on without a glance, while some, emboldened by the new legitimacy of the situation, stopped and watched with interest. The owner tired of active display and set the parrot back on its perch, and turned his attention to the girl.
For a minute, I was sorry, but then a girl, about six years old, walked by with her mother. It was a classic little girl, in orange pants, with an ice cream cone. She stopped and stared at the bird very carefully. It was not a curious probing look, or the purposeful look that kids sometimes get when they are looking about for a way to play with a new object. This little girl did not look like she would be going home and looking up parrots on the Discovery Channel website. She did not look like she was gathering up courage to pet it or imagining it in the role of a chase-able dog or cat. She was just looking at it. Clearly her powers of abstraction had yet to mature to the point where she could see the bubble circus.
A pair of middle-aged women stopped by the parrot. After an initial look at the parrot, they turned and started chatting with the owner. I expect the conversation began, "Does he talk?" or "Doesn't he fly away?" Shortly after, I saw them wander off a little to the side, where there was a fountain. One woman took a picture of the other, standing next to the fountain, with a disposable camera. Local resident showing visiting Cousin Amy the town, I guessed. All is legitimate on a vacation, including a parrot.
- 10 -
I don't think children are necessarily curious when presented with a new experience. The little girl presented a clearer display of authentic engagement of the parrot than all the adults. It was what I have been describing all along as a stare. But "stare" doesn't quite cover it. "Stare" does not have the implicit cognitive content of the hippie's grin. Happy, bemused, smiling, frowning, eager curiosity: these are visible manifestations of minds occupied by the workings of deliberative or reactive responses to the parrot. Parrot flits too quickly across the face to be noticed, and is replaced by more normal cognitions.
So, here is a question: what is the expression on the face of a person who has authentically engaged a parrot? I must propose, in all seriousness, the ridiculous answer: it looks like the face of a person who has seen a parrot.
- 11 -
The people talking to the owner had left. He now sat reading a book, while the parrot ate seeds of some sort off the table. Three teenage skateboarders wandered to a spot about a dozen yards away. One of them nudged the others and pointed to the parrot. They looked at it in appreciation. It wasn't quite clear what they were appreciating, but they clearly approved of the parrot. That made me happy.
Now, a large brood of little black children came by, herded by two young women who might have been nannies, I suppose. The black kids all stopped and stared intently at the parrot. The nannies chatted with the owner, who looked on approvingly at the children while he talked. The conversation looked left-brained from fifty feet away. Some tentative petting ensued. As the nannies led the children away, after allowing them a decent amount of time to engage the parrot, one little boy had to be dragged away; he managed to turn his head full circle, Exorcist style, to look at the bird.
Now, five young black men, perhaps eighteen to twenty, walked by. Theirs was clearly a presence to rival that of the parrot-owner duo as a spectacle. Their carefully layered oversized sports clothes and reversed baseball hats demanded attention. I suppose spectacles, be they man-parrots or a group of swaggering young black men, do not supply attention, but demand it. But you cannot really compete with a parrot. The parrot is entirely unaware that it is competing. The black group almost rolled past, but suddenly one of them stopped and turned around to look at the parrot. He looked like he'd suddenly reconsidered the studied indifference that I suppose was his response to competing spectacles. A visible recalibration of response played across his face, and suddenly, he was authentically engaging the parrot in a demanding, direct way. The others stopped and looked too. The first man then pulled out his cell phone, still staring at the parrot, and took a picture. He then briefly interrogated the owner about the parrot, and the group rolled on.
- 12 -
I wonder now, why are black responses to the parrot more noteworthy than generic white responses? And while I mull that, why have the responses of one other group (pretty young girls) stuck in my mind (besides the fact that I notice them more)?
Now, for an authentic engagement of the parrot, there must be parrot on your mind. Your face must look like the face of a person who has seen a parrot. This is not an ambiguous face, or a face marked visibly by the presence of other thoughts or a subtext. A parrot-mind may wrestle briefly with cell-phone mind or preoccupied-with-race-and-oppression mind, but the outcome is all or nothing. There is no useful way a constantly active subtext of race can inform your engagement of a parrot.
I suppose I was looking for evidence that there is room in the black mind for at least a small period of unambiguous engagement with the parrot. If your preoccupation with race and injustice occupies you so completely that even the parrot cannot dislodge it, then it must be a sad life. In a very real sense, your mind is not free, and therefore neither are you, if there is not even temporary room for the parrot. The parrot can only occupy a free mind. To my list of profundities, I will add the following: a free mind is one which the parrot can occupy easily, and stay in as long as it chooses.
Now, the little black children engaged the parrot as completely as the little white girl. So if the little kids are born free and demonstrably remain free until at least age six, as demonstrated by the parrot, why and when do they choose to give away their freedom to a preoccupation with the subtext of race, which makes those happy six-year-old faces sad? Or is it that the mist of preoccupation descends on them, whether they want it or not?
- 13 -
I suppose enough actual watching eventually teaches you to observe better. It suddenly occurred to me that the neck-language of parrot-engagement said a lot.
The clearest response is the snap, or double-take. It signals computation. A slight glance, on the other hand, no different from the casual scanning of everyday scenery, with no special attention, must mean filtering. I refuse to believe that everybody has a nontrivial scripted response to parrot, so it must mean that the scripted response simply treats the parrot as noise to be filtered. In the casual glance, there is no parrot on the mind.
Now, a more complex response, one signaled by a snap, is one where there is a perceptible pause or break in stride, followed by a turning away. That is a response that is looking for an explanation. The sort of response that might be hooked by a lone parrot, but would ignore the contextually appropriate owned parrot. Most of the time, when we look for an explanation, we can only see an explanation. Sometimes, when the mind hiccups on the path to the explanation, we see the parrot.
Viktor Frankl said, "Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom." Self-improvement gurus like to use that quote to preach, but to me, it seems that this space is primarily interesting because the parrot can live there for a bit, so your mind can be parrot for a bit.
You might hesitate and never visit that space. You might react so fast you leave the space before it registers on your awareness. Or you might dwell there awhile.

Amy Lin and the Ancient Eye


March 23, 2010
Last weekend, I went to see Amy Lin's new show, Kinetics, at the Addison-Ripley gallery in DC (the show runs till April 24; go). Since I last wrote about her [May 5, 2008: Art for Thought], she has started exploring patterns that go beyond her trademark dots. Swirls, lines and other patterns are starting to appear. Amy's art represents the death of both art and science as simple-minded categories, and the rediscovery of a much older way of seeing the world, which I'll call the Ancient Eye. Yes, she nominally functions in the social skin of a modern artist, and is also a chemical engineer by day, but really, her art represents a way of seeing the world that is more basic than either artistic or scientific ways of seeing. Take this piece from the Kinetics collection for instance, my favorite, titled "Cellular."

Is it inspired by diffraction patterns? (Image from gatan.com; this one is an electron diffraction pattern from a Ti2Nb10O29 crystal recorded by a CCD camera.)
Or is it pure art in some sense? The question is actually deeply silly (though she seems to have been asked it multiple times), since it assumes that a superficial and recent social divide between art and science is a deep feature of the universe.
The Ancient Eye is a precursor to both the scientific type of imagination that invented diffraction patterns, and a specific kind of artistic eye that can see this way without having ever encountered the idea of diffraction. Possibly it emerges from the very structure of our minds (I once watched a documentary about a math savant who could instantly tell if a number was prime; he apparently saw numbers in his head as a sort of landscape, within which primes appeared in some special way).
It is tempting to call this the Renaissance Eye (and Amy a Renaissance Woman), but that would be a bad mistake, since the Renaissance is what nearly killed it, by introducing the great art/science schism. Da Vinci was the last possessor of the Ancient Eye before its recent rediscovery, not the first possessor of the Renaissance Eye(s). If you look in the period before Da Vinci, in the so-called Dark Ages, you'll see a lot more of this way of seeing than after. It strikes me that we admire Da Vinci for the wrong reasons, for being what seems in our time to be a multi-talented mind. No; the divisions that blind us didn't exist in his age. He was just a seer, and what he saw is more impressive than the fact that his seeing spanned a multiplicity of our 20th century categories.
C. P. Snow and the Two Cultures
It is sad that writers like C. P. Snow (of The Two Cultures fame) in the last century ended up widening and institutionalizing the chasm between the humanities and the sciences while attempting to bridge it. To be fair to them though, the humanists started it, by attempting to take scientists, mathematicians and engineers down a social peg or two. C. P. Snow quotes number theorist G. H. Hardy: "Have you noticed how the word 'intellectual' is used nowadays? There seems to be a new definition which certainly doesn't include Rutherford or Eddington or Dirac or Adrian or me? It does seem rather odd, don't y'know."
The natural anxieties and suspicions of humanist literary intellectuals are old and deep-rooted (Coleridge: "the souls of 500 Newtons would go to the making up of a Shakespeare or Milton"), and cannot be wished away by lecturing (see my post The Bloody-Minded Pleasures of Engineering [September 1, 2008]). Humanism is a retreat to a secularized notion of humans being "spiritually special," as a way of combating a sense of insignificance within our huge, mysterious universe. But perhaps the way to bridge the gap and bring humanists back to this universe, that we share with other atom-sets, is to show that the eye of science and the eye of art are both descended from the Ancient Eye.
Before the Great Divide
Okay, C. P. Snow is yesterday's news; we need to dig further in the archives to understand the Ancient Eye. The post-Reformation notions of both art and science were distortions of the Ancient Eye way of seeing (and connecting to) everything from atoms to galaxies. Possibly what created the disconnect was the rise of late enlightenment-era Christianity (post Martin Luther, 1483-1546) and its disdain of the profane material plane as a sort of waiting room in front of a doorway into a spiritual plane. Or perhaps it was a result of the thoroughly meaningless idea of "scientific objectivity" that was partly the fault of Descartes (1596-1650).
Either way, the result was an anomaly that caused a great divide. Let's pick up the story just before the Great Blinding of the Ancient Eye, with Da Vinci. My favorite Da Vinci piece is neither the Mona Lisa, nor his amazing engineering sketches, but his iconic image, The Vitruvian Man (public domain), dating from 1485, or two years before the birth of Martin Luther. Da Vinci's image is a representation of anatomical proportions and their relation to the classical orders proposed by the Roman architect, Vitruvius. This is the Ancient Eye pondering anatomy and seeing architecture.

This way of seeing the human body is reminiscent of another iconic image: the image of Shiva in the Chola Nataraja (Lord of Dance) bronzes. The Chola bronzes, which began evolving in the 8th and 9th centuries, and stabilized into their modern iconic form by the 12th century, were an attempt to see a creative-destructive cosmological metaphysics in the dynamic human form. This is the Ancient Eye seeing cosmic order in frozen human dance. When I was a kid, an art teacher taught me the Nataraja formula (it starts with the inscription of a hexagon inside a circle; Shiva's navel is the center). You can create very stylized and abstract Natarajas once you learn the basic geometry. (This image is from the New York Metropolitan Museum, Creative Commons.)

This Nataraja-Vitruvian Man story actually continues in interesting ways with Marcel Duchamp (Nude Descending a Staircase) and another local DC artist, Larry Morris. I wrote about this in The Solemn Whimsies of Larry Morris [February 21, 2009]. You can think more about that rabbit trail if you like, but let's go from the Vitruvian Man and the Nataraja towards more abstract stuff.
Another Ancient Eye inscribed-circle image, which emerged across the Himalayas from the Chola Nataraja, is the Yin-Yang symbol. It represents roughly the same idea, creative-destruction. The white fish and black fish chase each other. Their eyes contain their duals, and the seeds of their own destruction, and the creation of the other.

I like to think that in some lost prehistoric time, the distant ancestors of Da Vinci and the unknown creators of the Nataraja and Yin-Yang symbols got drunk together after a boar hunt, and talked about transience and transformation, while pondering the fact that the death of the boar had sustained their life.
The Ancient Eye truly comes into its own at a somewhat greater remove from representation of reality or even metaphysical ideas like Yin-Yang. One of my pilgrimage dreams is to visit the Alhambra in Spain, reputed to contain depictions of all the major mathematical symmetries. It will be the atheist hajj of an unapologetic kafir. The Alhambra (14th century) provides proof that we could see the symmetries of the universe within ourselves, long before Galois (1811-1832) and Sophus Lie (1842-1899) gave us the mathematical language of group theory, and the ability to see the same symmetries in electrons, muons and superstrings.

This particular story of Ancient Eye seeing evolved through classical tessellation, to the familiar art of Escher, to the weird non-repeating Penrose tilings of the twentieth century. Here is a picture (Creative Commons) of Penrose standing on a Penrose-tiled floor at Texas A&M University:

This particular story had its grand finale of profound Ancient Eye seeing only a few years ago, when the E8 symmetry group (the last beast, an exceptional Lie group, in a complete classification of symmetries in mathematics) was visualized (Creative Commons):

The language invented by Galois and Lie helped launch the program of cataloging all the universe's symmetries, a program of breathtaking mathematical cartography that finally drew to a close with the mapping of E8.
And in case you have a naive view of symmetry and dissonance in how we see, and disdain such symmetries as not artistic, consider the enormously dissonant and messy beauty of an object called the Mandelbulb, found along the way in the holy-grail search among mathematicians for a 3D Mandelbrot set.
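As a purely illustrative aside (not from the original post), Mandelbulb images of this kind are typically generated by an escape-time iteration analogous to the 2D Mandelbrot rule z → z² + c, but with a spherical-coordinate "power n" step in three dimensions; the sketch below uses the commonly cited power-8 formulation, and the specific constants and example points are my assumptions:

```python
import math

def mandelbulb_escape_time(cx, cy, cz, power=8, max_iter=32, bailout=2.0):
    """Escape-time test for a point c = (cx, cy, cz), iterating the
    spherical-coordinate 'triplex' power map z -> z^power + c."""
    x, y, z = 0.0, 0.0, 0.0
    for i in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return i  # escaped: the point is outside the set
        theta = math.acos(max(-1.0, min(1.0, z / r))) if r > 0 else 0.0
        phi = math.atan2(y, x)
        rn = r ** power
        # raise to the given power in spherical coordinates, then add c
        x = rn * math.sin(power * theta) * math.cos(power * phi) + cx
        y = rn * math.sin(power * theta) * math.sin(power * phi) + cy
        z = rn * math.cos(power * theta) + cz
    return max_iter  # never escaped: likely inside the Mandelbulb

# Example: the origin never escapes; a far-away point escapes almost immediately.
print(mandelbulb_escape_time(0.0, 0.0, 0.0))  # prints 32
print(mandelbulb_escape_time(2.0, 2.0, 2.0))  # prints a small iteration count
```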

No, this isn't a photograph of a cave in Antarctica. This is a Mandelbulb detail that has been titled "Hell Froze Over." And somehow this thing must emerge from the more visually obvious symmetries of things like the E8 group.
The Ancient Eye and the Ancient Hand
But it is perhaps in engineering that the Ancient Eye has been best
preserved, waiting to be rediscovered by Amy Lin's generation of artists.
Engineering is so strongly associated with human doing that we
sometimes forget that it too begins with human seeing. Before there were
engineering schools, there was still this Ancient Eye seeing and an Ancient
Hand building. It was in the Dark Ages, not in ancient Greece or Rome,
that modern engineering was born, as Joel Mokyr demonstrates in The
Lever of Riches. Today, the Ancient Hand has become engineering and it
creates vastly more powerful things. But the Ancient Eye and Ancient
Hand still lurk in the background. I talked about this schematic of the
world's largest railroad classification yard, Bailey Yard, in my recent
piece, An Infrastructure Pilgrimage [March 7, 2010].

But you don't have to go to Nebraska to appreciate the workings of
the Ancient Hand. Next time you're on a plane, pick up your in-flight
magazine, skip to the end, and ponder the airport terminal layout
drawings. It helps to turn the magazine upside down. Here's one of my
favorites, Miami International Airport. Turned upside down so you aren't
distracted by the words.

But perhaps it would be good to finish this retrospective with Da
Vinci. I have heard no more heartwarming bridge-the-gap tale than this:
Da Vinci's brainchild, the helicopter, was finally made real by Igor
Sikorsky, who was funded by the composer Sergei Rachmaninoff, who
supported Sikorsky's research with a $5000 check. So much for facile
ideas that art is some kind of precious flower that must at once be
protected from, and funded by, the rapacious endeavor of engineering.
That killing machine of Vietnam and life-saving machine of emergency
rescues might not exist if a rich artist had not decided to support a starving
engineer. Perhaps that is why the enduring symbol of the Ancient Eye is
neither the test-tube, nor the paintbrush, but the notebook. Here is Da
Vinci's notebook helicopter sketch from the 15th century (Public Domain).

And here's an Igor Sikorsky sketch (Library of Congress) from his
1930 notebook. Makes you think, doesn't it?

Maybe the gap between science/engineering and art isn't the vast gulf
C. P. Snow imagined it to be. Maybe it is merely the distance between the
2H pencil used in engineering drafting and the 2B pencil, the mainstay of
line art. It is a gap that can easily be bridged by something as simple as a
notebook. Doesn't seem that far, does it?

The Ancient Eye in the Age of TED


I feel deeply ambivalent about the current trend in information
visualization that somehow treats it as a mind-candy production discipline
designed to persuade, manipulate and titillate, to sell pretty illusions of
understanding. A great deal has been written about TED, the elitism it
represents and how it encourages a deep-rooted television-science
mentality among the best thinkers, by tempting them to pander. But it is
perhaps this, an elevation of a way of showing above a way of seeing, that
sometimes makes me uncomfortable about TED. Once more, we are
limiting the grandeur of the universal to the expediencies of the merely
human. Once again we are saying, "Look at me!" instead of saying "Look
at that!" We won't get to "You ARE that!" anytime soon. And yes, I am
aware of the irony of this sentiment being expressed in a very TEDesque
blog post.
Like anybody else fascinated by ways of seeing, I have my unread
Edward Tufte books reverentially placed in my bookshelf, but something
about the whole discipline he has spawned (and the "ideas worth
spreading" ethos it has spawned in the glossy technology-entertainment-design bridge-building project that is TED) bothers me at a deep level.
Perhaps it is this. To me, the most soul-stirring direction in which to
turn the Ancient Eye is towards the unknown, towards what we don't
know, towards doubt. Stuff that we can't even explain to ourselves, let
alone teach others or spread. Here are two such images, whose
significance I don't yet fully understand, that have had me pondering a lot
lately. One is a screenshot from Michael Ogawa's Code Swarm
visualization of the evolution of the Eclipse software project.

And the other is another Amy Lin piece, Hydrolysis

I have no idea whether these are Ideas Worth Spreading, but I like
looking at them. It is my substitute for prayer. Blake's Tyger, with its
immortal symmetries, also helps.

The Scientific Sensibility


August 26, 2011
I don't like or use the term "scientific method." Instead, I prefer the
phrase "scientific sensibility." The idea of a scientific method suggests that
a certain subtle approach to engaging the world can be reduced to a
codified behavior. It confuses a model of justification for a model of
discovery. It attempts to locate the reliability of a certain subjective
approach to discovery in a specific technique.
It is sometimes useful to cast things you discover in a certain form to
verify them, or to allow others to verify them. That is the essence of the
scientific method. This form looks like the description of a sequential
process, but is essentially an origin myth. Discovery itself is an anarchic
process. Like the philosopher Paul Feyerabend, I believe in
methodological anarchy: there is no privileged method for discovering
truths. Dreaming of snakes biting their tails by night is as valid as pursuing
a formal hypothesis-proof process by day. Reading tea leaves is valid too.
Not all forms of justification are equally valid though, but that's a different
thing.
But methodological anarchy does not mean, at least not to me,
that there is no commonality at all to processes of discovery. The
sensibility that informs reliable processes of discovery has a characteristic
feature: it is unsentimental.
An unsentimental perspective is at the heart of the scientific
sensibility. But first, why sensibility?
Susan Sontag's description of a sensibility in her classic essay, Notes
on Camp, gets it exactly right:
Taste has no system and no proofs. But there is
something like a logic of taste: the consistent sensibility
which underlies and gives rise to a certain taste... Any
sensibility which can be crammed into the mold of a
system, or handled with the rough tools of proof, is no
longer a sensibility at all. It has hardened into an idea... [t]o
snare a sensibility in words, especially one that is alive and
powerful, one must be tentative and nimble.
The scientific method is a sensibility crammed into the mold of a
system. It is an attempt to externalize something subtle and internal into
something legible and external. The only reason to do this is to scale it into
an industrial mode of knowledge production, which can be powered by
participants who actually lack the sensibility entirely. Such knowledge
production has been characteristic of the bulk of twentieth century science
(in terms of number of practitioners, not in terms of value). Hence the
Hollywood stereotype of the scientist as a methodological bureaucrat;
someone who worships at the altar of a specific method. Sadly, Hollywood
gets it right. The typical scientist is a caricature of a human.
When we objectify discovery into a legible system and a specific
method, the subjective attitude with respect to that system and method
becomes impoverished in proportion to the poverty of the system and
method itself.
So to characterize our subhuman scientist, we use words like
objective, emotionless and disinterested. The first is a reductive
characterization: the unsentimental scientific sensibility can turn its gaze
onto purely subjective realities and discover riches. To limit it to
objectivity is to limit it to the narrow realm of the experimental method.
Similarly, lack of emotion turns into a virtue instead of a crippling
blindness. And finally when we say that to do science is to adopt a
disinterested stance, we institutionalize it. The scientist becomes an
impersonal judge in a courtroom of evidence, free from any conflicts of
interest. It is no wonder that when film-makers attempt to humanize
scientist characters, they have them succumb to personal motivations.
The scientific sensibility, however, is both broader and more fertile
than this combination of an impoverished system and a sub-human
caricature: objective, emotionless and disinterested. To look at the
world with the scientific sensibility is to be more human, not less.

The word unsentimental is central here. To be unsentimental is to be
self-aware. To be unsentimental, you must first deal with your inner
realities at the level of sentiments rather than emotions. You do so by
creating mental room for emotions to drift out of your subconscious,
recognizing the desires that generate them and labeling the results. If you
can go beyond that and bracket the sentiments for further contemplation,
you can be unsentimental. The sentiments that accompany you on a
journey of discovery are part of the phenomenology that you must process
on that journey.
To have a perfectly unsentimental sensibility is to be free to look at
reality without expectations about what you will see.
You can be trained in the scientific method. In fact the method, in all
its impoverished glory, can actually be programmed into a computer for
certain problems. You cannot, however, achieve the scientific sensibility
through a training process or program it into a computer. At least not yet.
You cannot achieve this sensibility via a mechanical process of
identifying and neutralizing a laundry list of cognitive biases. Nor can you
get there through an effort of will or by struggling to suppress emotions.
To be unsentimental is not about suppressing your humanity, it is about
making your humanity irrelevant so you are reduced to the pure act of
seeing.
The only way to get there is by making a sacrifice: you must give up
the pleasures of a sentimental engagement with life. The unsentimental
eye, once opened, cannot be closed. The adoption of the scientific
sensibility is an irreversible step. Your experience of love, friendship and
fun will change. Expect your passions to be tragic passions. If you are
religious, expect a troubled existence. The scientific method is not
incompatible with religion, but the scientific sensibility is, because
religion presupposes a sentimental engagement of life.
There is one consolation though. The scientific sensibility makes
humor and irony your constant companions for life.

Diamonds versus Gold


July 14, 2011
I divide my writing into two kinds: gold versus diamonds. Sometimes
I knowingly palm cubic zirconia or pyrite onto you guys, but mostly I
make an honest attempt to produce diamonds or gold. On the blog, I
mainly attempt to hawk rough diamonds and gold ore. Tempo was more of
an attempt at creating a necklace: polished, artistically cut diamonds set in
purified gold.
I find the gold/diamond distinction useful in most types of creative
information work.
What do I mean here? Both are very precious materials. Both are
materials that are already precious in their natural state, as rough diamonds
or gold ore. Refinement only adds limited amounts of additional value.
Both are mostly useless, but do have some uses: gold in conducting
electricity, diamonds for polishing other materials. But there the
similarities end.
Gold is almost infinitely adaptable. It is malleable and ductile. It can
be worked very easily and finely using very little energy and tools made of
nearly any other metal. It is a nearly perfectly fungible commodity.
Financially, it is practically a liquid rather than a solid. It plays very well
with other materials and adapts to them. Its purity can be measured with
near-perfect objective precision, and its value is entirely market-driven. It
has no identity. Its value is entirely intrinsic and based on the rarity of the
metal itself.
Gold can be melted, drained of history, and reshaped into new
artifacts. When you add gold to gold, the whole is equal to the sum of the
parts. When you subtract gold from gold, the pieces retain all the value of
the whole. You can work gold in reversible ways.

Diamonds are not adaptable at all. They are the hardest things around,
and the only thing that can work a diamond is another diamond. They are
nearly perfectly non-fungible. The more precious ones are so non-fungible, they have names, personalities and histories that are nearly
impossible to erase. As economic goods, they transcend mere brandhood
and aspire to sentience: we speak of cursed or lucky diamonds. Diamonds
do not play well with other materials. Other materials, gold in particular,
must adapt to them. Purity and refinement are not very useful concepts
to apply to a diamond. In fact, a diamond is defined by its impurities. The
famous Hope diamond is blue because of trace quantities of boron. Color,
clarity and flaws can be assessed, but ultimately working a diamond is
about revealing its personality rather than molding it. Diamonds that win
personality contests go on to become famous. Those that fail to impress
the contest judges are murdered: broken up into smaller pieces or
degraded to industrial status.
The value of a diamond is in the eye of the beholder. At birth, rough
diamonds are assessed by expert sightholders, and at every subsequent
transaction, human judges assess value. A diamond is born as a brand. An
extreme, immutable brand that can only be destroyed by destroying the
diamond itself. And finally, and perhaps most importantly, a
diamond's value has nothing to do with its material constitution. Carbon is
among the commonest elements on earth. A diamond's value is entirely
based on the immense amounts of energy required to fuel the process that
creates it. They are found in places where deep, high-energy violence has
occurred, such as the insides of volcanic pipes.
Diamonds are forever. They cannot be drained of history. When you
break up a diamond (you cannot add diamonds) the pieces have less
value than the whole. Diamonds can only be worked in irreversible,
destructive ways.
Diamonds represent a becoming kind of value; the products of
creative destruction. If you've read my Be Slightly Evil newsletter issue,
Be Somebody or Do Something, you know the symbolism I am getting at
here. You also know where my sympathies lie.
I find it particularly amusing that the value of gold is measured in
purity carats, while the value of diamonds is measured in weight carats.

Purity and weight are what are known as intensive and extensive
measures. The size of a diamond is a measure of the quantity of tectonic
violence that created it.
I prefer diamonds to gold, perhaps because I am not an original
thinker, but a creative-destructive one. I am not very good at discovering
rare things. I am better at applying intense pressure to commonplace
things, in the hopes of producing a diamond. Sometimes I stumble upon
natural rough diamonds, but more often, I attempt to manufacture artificial
ones from coal. They are not as pretty, and it is very hard to manufacture
large ones, but when I succeed, I produce legitimate diamonds, born under
pressure.
Gold, I rarely mine myself (and earth-bound humans cannot
manufacture it; only dying suns can). I buy gold in the form of second-hand jewelry at the bookstore, melt it down, and rework it into other things.
Most often, into settings for diamonds.

How to Define Concepts


June 21, 2007
Let us say you are the sort of thoughtful (or idle) person who
occasionally wonders about the meaning of everyday concepts. So there
you are, at the fair, laughing at yourself in a concave mirror, when
suddenly it hits you. You don't really know what concave means. You
just recall vague ideas of concave and convex lenses and mirrors from
high school and using the term in general conversation to describe certain
shapes. So you decide to figure out a definition.
What do you do? How do you make up a definition? Let's get you into
some trouble.
So the first thought you have is: concavity has something to do with
indentations or inward curvature of shapes. You quickly abandon open-ended curves, of the sort you see in graphs of population growth and the
like. Being smart, you realize that the concavity there is not fundamental:
you could turn a concave graph upside down, and make it convex with
respect to your preferred visual orientation of "up is against gravity."
So you decide that the notion of concavity probably only makes sense
for closed curves: things with an inside and an outside (congrats, you just
found a use for the Jordan curve theorem!). You draw yourself a
prototypical closed curve, like the one on the left below, and stare at it:

You think, hmm... really what is going on with concavity is that I
can sort of take a short cut across some parts by going outside it. That
leads to your first stab at a definition: a figure is concave if there exists a
pair of points in it such that the straight line between them is not contained
entirely within it. You draw a couple of lines and convince yourself, like
on the right. At this point, if you like to rush to math, you might even write
down an equation like this one:

and go, "Aha! a closed curve is concave if and only if you can find a
pair of points like so, and for some theta, the point on the line given by my
clever equation isn't inside the figure!"
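The equation itself is an image in the original and does not survive here; a plausible reconstruction, assuming the standard parametrization of the segment between the two points (my notation: p_1 and p_2 for the points, theta for the parameter), is

    p(\theta) = \theta\, p_1 + (1 - \theta)\, p_2, \qquad \theta \in [0, 1],

with the figure flagged concave if, for some choice of p_1, p_2 inside it and some theta, the point p(\theta) falls outside it.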
You've just found an attribute of convexity that you think is necessary
and sufficient to define it. But then, suddenly a thought occurs to you. You
sketch:

and go, Uh Oh!


What just happened here? Why does this bother you? You've found a
way to draw a line across two closed figures that don't look concave to
you, and satisfied a formal notion of concavity. You really want to say that
concavity is a notion that only applies to single figures. So what do you
mean by single? Is a Figure 8 single? Is it concave?

You are in trouble. At this point, if you really cared enough, you'd go
on to reinvent a good deal of topology, invent the notion of simply
connected, figure out that you need the notion of closed and open sets
and interiors and boundaries (to handle the Figure 8 case) and so forth. But
let's not go down that road. Let's ask the more interesting question: why
didn't you just define concavity to be anything that satisfies your original
straight line test? (For many purposes in math, that is in fact exactly
what you do: use the definition without worrying about connectedness.
That's the impatient, technical, "let's get on with it" aspect of mathematics,
but you and I like to fuss over what we mean instead of getting
somewhere.)
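For the curious, here is a rough sketch (mine, not from the original post) of what the naive straight-line test looks like when applied literally to figures built out of polygons. Fed a single concave shape it answers yes; fed a single square it answers no; fed two disjoint squares it cheerfully answers yes, which is exactly the "Uh Oh" above:

```python
# A rough sketch of the naive straight-line test: a figure is flagged "concave"
# if some chord between two of its points leaves it. The figure is given as a
# union of polygons, so we can feed it two disjoint shapes. Point-in-polygon is
# done by ray casting; candidate points are vertices nudged toward each
# polygon's vertex centroid (good enough for these examples, not a general
# routine).

def inside(p, poly):
    """Ray-casting point-in-polygon test."""
    x, y = p
    hits = 0
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hits += 1
    return hits % 2 == 1

def interior_points(poly, shrink=0.9):
    """Vertices pulled toward the vertex centroid, to get points inside."""
    cx = sum(x for x, _ in poly) / len(poly)
    cy = sum(y for _, y in poly) / len(poly)
    return [(cx + shrink * (x - cx), cy + shrink * (y - cy)) for x, y in poly]

def looks_concave(polygons, samples=50):
    """Apply the straight-line test literally to a union of polygons."""
    pts = [p for poly in polygons for p in interior_points(poly)]
    in_figure = lambda q: any(inside(q, poly) for poly in polygons)
    for i, (ax, ay) in enumerate(pts):
        for bx, by in pts[i + 1:]:
            for k in range(1, samples):
                t = k / samples
                if not in_figure((t * ax + (1 - t) * bx, t * ay + (1 - t) * by)):
                    return True   # found a chord that escapes the figure
    return False

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
far_square = [(10, 0), (14, 0), (14, 4), (10, 4)]
dart = [(0, 0), (4, 1), (0, 2), (1, 1)]              # genuinely concave

print(looks_concave([dart]))                # True, as hoped
print(looks_concave([square]))              # False, as hoped
print(looks_concave([square, far_square]))  # True -- the "Uh Oh" case
```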
The interesting thing about the way our minds work is that math and
formalism are subservient to a fuzzier notion of "what I want to get at." As
we refine technical definitions (or natural language definitions of entities
like culture) we tend to move the definition to get at an understood but
inexpressible concept. We practically never reduce the concept itself to the
definition we are working with. This sort of thing is an example of the
operation of what philosophers like to call intension (with an s).
Intension, roughly speaking, is the true meaning of a concept we are
after. The difference between definition and meaning is what philosophers
like to characterize as primary (or a priori) and secondary (or a posteriori)
intension. The primary intension of water is "watery stuff." That is why
a sentence like "Ammonia is the water of Titan" makes sense to us: we
imagine ammonia oceans. By contrast, "Water is H2O" is a secondary
intension. David Chalmers has a beautiful discussion of intension in The
Conscious Mind: In Search of a Fundamental Theory.
Does this apply to this example? Concavity, unlike water, is an
abstraction of real-world things like inkblots, bays, dents, holes and so
forth. It references too many things in the real world for us to usefully say
something like "concavity is sort of like bays or dents." Despite this,
however, our brains seem to work with a primary intension of concavity
that draws efforts at definition spiraling towards itself. We grope towards
what we mean through attribute-based tests expressed in terms of simpler
concepts (like straight line in our case).
What makes math special is that starting with a few prototypes that
suggest a useful notion, we can often converge in a finite number of steps
to a watertight characterization of an abstract concept within a useful
closed domain. Brouwer was perhaps the only major mathematician who
tried to articulate this fundamental aspect of the structure of mathematical
thought: that technicalities follow from trying to capture intuition.
Leaky abstractions like "culture" and "war" though, are another
matter. I don't yet have a good handle on how to think about the process of
achieving clarity with such concepts. Until then, all I can offer is my own
rule of thumb: Seek to capture the intension!

Concepts and Prototypes


June 14, 2007
We think about abstract concepts in terms of prototypical instances.
These prototypical instances inform how we construct arguments using
these concepts. At a more basic level, they determine how we go about
constructing definitions themselves. Prototypes pop up in all sorts of
conceptual domains, ranging from war to airplane to bird. So how
do prototypes work in our thinking? Let's start with an apparently simple
example (the concept of triangle) that can get tricky really quickly.
If I asked you to draw a triangle, you would probably draw one that
looked something like the one below, a scalene triangle, almost certainly
drawn with the longest side as base and obtuse angle, if there is one, on
top. Call this a prototypical triangle, understood as the sort of instance
most people would draw. Why we draw such instances is the question of
interest here. Let's exercise this instance in a simple argument to see what
role prototypicity plays in thinking. We will convince ourselves of the
validity of the formula for the area of a triangle (half base times height)
through mental visual manipulations.

Imagine a line dropping from the top vertex vertically down to the
base. This line enables you to visualize two right triangles. Now imagine a
copy of the triangle on the left being rotated clockwise 180 degrees.
Position this imaginary triangle so that you now have a complete rectangle
on the left. Repeat the process for the right. The two imaginary rectangles
now form a larger rectangle. The area of this rectangle is the product of the
base and height of the original triangle. Since you constructed this
rectangle by copying, rotating and pasting two triangles that exactly
covered the original triangle, the original triangle must have an area given
by half the product of the base and height.
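In symbols, the visual argument compresses to a one-line calculation (my notation, not the original's: b for the base, h for the height, and b_1, b_2 for the two pieces into which the foot of the imagined vertical line divides the base, so that b_1 + b_2 = b). Each right triangle is half of its completed rectangle, so

    A = \tfrac{1}{2} b_1 h + \tfrac{1}{2} b_2 h = \tfrac{1}{2}(b_1 + b_2)\, h = \tfrac{1}{2} b h.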

You used imaginary visual manipulations to convince yourself of the


formula for the area of a triangle. Now ask yourself, why did you not start
with any of the following set of perfectly legal triangles:

We have here an isosceles right triangle, an isosceles triangle, an
equilateral triangle, a long, skinny triangle, a straight line segment and a
point. The first 3 cases possess symmetries and the last two contain
degeneracies.
Now work through the visual proof of the area of the triangle for each
of these cases. The last two are the easiest: the answer is zero by
examination, so the formula is trivially correct. More importantly, the
answer does not tell us much: any constant instead of 1/2 would work,
so we have validated a non-unique candidate. The first three, I assert, are
also degenerate. Not in terms of their explicit geometric structure, but
because the visual proof of their satisfying a particular asserted property
(the area formula in our example) collapses to a simpler case than in
the scalene triangle case. Recall your mental manipulations for the scalene
triangle. Can you see how there are fewer steps in convincing yourself of
the truth of the area formula? In each case, a single cut-and-paste
visualization suffices. The scalene triangle proof visualization works (if
inefficiently) for the symmetric cases, but the reverse is not true.
So one answer to "why do we choose as prototypes the instances we
do?" is an information-theoretic one. A prototypical instance of a concept
is one that contains the maximum information that an entity satisfying the
definition possibly could. We are naturally inclined to work with the
richest-information-structure case. In the case of the explicitly degenerate
cases, the information poverty showed up in the non-uniqueness of the

formula that was satisfiable. In the more subtle cases with symmetries, it
showed up in terms of the degeneracy in the proof construction which
would not work in the general case.
That doesn't explain it all. What about our long skinny triangle,
which is close to degenerate, but not strictly so? Why didn't we draw
something like that? I suspect this has to do with the precision of
comparisons we need to make when we mentally manipulate geometric
figures: we want enough asymmetry and non-degeneracy to clearly
illustrate the information capacity of our concept, but not so much that the
precision required of the representation is too high. I am not completely
happy with this hand-wavy account, so I'll revisit this when I come up
with something better. If you have a better account right now, post a
comment.
A final point: why did we choose to draw our original scalene triangle
with the longest side as a visual base? One proximal reason is that the
necessary manipulations require more effort if we were to draw it with,
say, the obtuse angle at the base (try it). A less obvious reason is that we
operate with orientational metaphors that determine notions like up and
down when dealing with abstractions. These metaphors inform both our
language (base and height) and probably explain why non-standard
orientations are mentally harder to work with, even though the explicit
visual-proof steps are orientation agnostic. These conceptual framing
metaphors will come up later when I talk about George Lakoff and his
work on metaphor, so I'll defer discussion of this aspect of prototypicity to
later.
When we move from instantiations of abstractions to sets of entities
(real or imagined) that we want to define, we run into problems with other
methods of picking out elements, such as archetypes and stereotypes, that
get in the way. We also run into issues of intension, with an s. That's for
later.
I first encountered this notion of prototypicity in a biology class when
I was about 13. The teacher asked the class clown to come up to the
blackboard and draw an amoeba. He drew a neat block L shape, and the
class burst out laughing. The teacher got mad and told him to stop
clowning around and draw a proper amoeba. He countered that since we'd
been taught that amoeba proteus could take on "any shape," a regular L
was as much an "any shape" as an irregular blob.

How to Name Things


February 2, 2012
1
Naming and counting are the two most basic behaviors in our divided
brains. Naming is the atomic act of association, recognition,
contextualization and synthesis. Counting is the atomic act of separation,
abstraction, arrangement and analysis. Each behavior contains the seed of
the other.
To name a thing is to invite it to ensnare itself in your mind; to distill
and compress the essence of a gestalt into a single evocative motif, from
which it can be regenerated at will. Just add attention and stir.
Here are three very different American gestalts that I bet many of you
will recognize without clicking: Babbitt, Bobbitt, Rabbit.
We name and count babies, products, species, theorems, countries,
asteroids, ships, drugs, essays, wars, gods, dogs, foods, alcohols, pieces of
legislation, judicial pronouncements, wars, subcultures, ocean currents and
seasonal winds.
We try to name and number every little transient vortex, in William
James' "blooming, buzzing confusion," that persists long enough for us to
form a thought about it.
As with plans, so with names. Names are nothing; naming is
everything. To name a thing is to truly know it. As Ursula Le Guin said,
"for magic consists in this, the true naming of a thing."
It is the process of naming that is important. The actual name that you
settle on at the end is secondary.

2
Vanity and pragmatism wrestle for control of the act of naming. We
bend one ear towards history and the other towards posterity. We parse for
unfortunate rhymes and garbled pronunciations. We attempt at once to
situate and differentiate. We count syllables and look for domain names.
We walk around the name, viewing it as parent, lover, friend, bully,
journalist, lexicographer and historian. We embed it in imaginary
headlines and taunting rhymes.
In Bali, to name is to number. It is an unsatisfying synthesis that only
works in limited contexts.
The firstborn is Wokalayan (or Yan, for short),
second is Made, third is Nyoman or Komang (Man or
Mang for short), and fourth is Ketut (often elided to Tut).
I am not sure what happens if Wokalayan dies young. Does Made
replace his older sibling and become the new Wokalayan?
In cryptography, the first named character in an example scenario is
Alice. The second one is Bob. And so on down an alphabetic cast of
characters. This is not the world of interchangeable John and Jane Doe
figures. The order matters.
When birth order is more important than individual personality, you get a
social order in naming that inhabitants of individualistic modernity
struggle to understand.
3
Counting is both ordinal and cardinal. It takes a while to appreciate
the difference between one, two, three and first, second, third.
To truly count is to know both processes intimately. In naming,
ordinality has to do with succession and replacement. Cardinality has to do

with interchangeability. You cannot master naming without mastering


counting.
The ordinal, cardinal and nominal serve to situate and uniquely
identify, but do not necessarily indicate the presence of something real.
Hence the query: name, rank and number?
There was once a substance with rank 0, number 0. It was named
ether. It did not actually exist. Substances 1-1 through 1-4 though, earth,
fire, water and wind, were real enough, and became the founding fathers
and mothers of the modern discipline of chemistry.
It is in fact useful to think of naming as an interrogative act that creates
what it questions. Demand insistently enough to know the name, rank and
number of a thing, and you will eventually find out. Even if your mind has
to manufacture an answer.
When you understand both kinds of counting, you can count and
name in both ways, without using actual numbers.
That gives you iMac, iPod, iPhone and iPad on the one hand, and
Kodiak, Cheetah, Puma, Jaguar, Panther, Tiger, Leopard, Snow Leopard
and Lion, on the other. I'll leave you to guess why the first-born is a bear
here, while the rest are cats. Don't give up and click too soon.
Not many languages can efficiently express questions of ordinality. In
English for instance, the question, what is your birth-order ordinality
among your siblings? sounds downright weird, but I cannot find a simpler,
grammatical way to express it.
It is much easier to ask the related cardinality question: how many
siblings do you have?
Curiously, the ordinal question is very easy to ask in my nominal
native language of Kannada. It would translate to something like "How
many-eth son are you of your father?" if such constructs were allowed in
English. At least that was the best I could come up with when my father
challenged me to translate the line as a kid.

It would be a useful construct to have in English. We could ask,
"What-ieth major version of Mac OS X is Lion?"
The naming practices in Bali and the Ursula Le Guin quote made me
think of a rather clever idea for a short story about a culture where the
young start out with ordinal names as in Bali, but are given true names if
and when wise elders first spot the child in an act that expresses a unique
individuality.
At this point, a coming-of-age naming ceremony is conducted, and
the child is declared an adult with special privileges over the un-named.
Rather complicated things happened to the hero's name in the story,
having to do with self-referential paradoxes. I've forgotten the plot, but I
remember that at the time I had to diagram the events in the story.
I never wrote the story because coming up with names for the
characters was too hard.
4
We name to liberate, and we name to imprison. We name to flatter,
and we name to insult. We name to own, and we name to be owned. We
name to subsume, and have subsumed. We name to frame, and we name to
reframe.
Google bought Urchin on Demand and turned it into Google
Analytics. It bought YouTube and left the name alone.
The Left calls it Right to Choose. The Right calls it Right to Life. The
debate itself is partly about naming: at what point does something deserve
the name human?
The British and the French built a plane together and fought over the
name. The French won. It became the Concorde rather than the Concord.
Gandhi attempted to rename the untouchables Harijans. God's
people. They resented being patronized, and chose for themselves the
name Dalit. The oppressed.

Priests weigh in on the numerological significance of names and
marketing mavens opine about syllable counts.
States step in with Procrustean templates to tax and conscript: last
name, first name, middle initial. Under Spanish rule, the entire Philippines
became a geographic-lexicographic state.
Philosophers ponder the metaphysics of naming and Greek scholars
hunt for their linguistic roots.
As one anthropologist said (I have never managed to find the source),
naming is never a culturally insignificant act.
5
To name is to appreciate the crucial distinction, due to urban theorist
John Friedmann, between appreciative knowledge and manipulative
knowledge. The one allows us to construct satisfying images of the
world. The other allows us to gain mastery over it.
To either number or name is to both appreciate and manipulate. To
number is to appreciate timeless order; to name is to appreciate
transformative chaos.
You number to extend and preserve. Archival is the ultimate act of
numbering.
You name to create, destroy, fragment and churn. You name a product
and launch it. You give a dog a bad name and hang it.
In a break with family tradition, I was not named after my paternal
grandfather. The timeless sequence, ABABAB was broken.
6
Agent 007, James Bond, was named after an ornithologist.

In his numbered world, he is part of a greater order. A world of
conversations between 007 and M, where technology comes from Q and
even the secretary is a very countable Moneypenny. It is a timeless world
where the Ms and Qs are replaceable and 00s are both replaceable and
interchangeable.
In his named world, first he situates, then he differentiates.
My name is Bond. James Bond.
A tough, hard and unusual name, for a tough, hard guy, who allows
glimpses of a dark past to shine through the veneer of shaken-not-stirred
cocktails and social polish. He blends in, but makes his presence felt. It is
a name that is at once a trust and a threat. Bank of England to friends,
gunboats to foes.
"Is that a threat?" "No, it's a promise."
Commander Bond was once a naval reserve officer. It was in the
maritime world that the line, "my name is my bond," gained currency.
It is a name of narrative belonging. It situates the man strongly as
British, but differentiates him not at all among Britishers. In Bond is the
veiled threat of a still-potent dying empire. In James lies identification
with, and anonymity within, that dying Empire.
Fleming once wrote to the real Bond's wife: "It struck me that this
brief, unromantic, Anglo-Saxon and yet very masculine name was just
what I needed, and so a second James Bond was born."
7
The story of Windows is the story of a wild tree of apparently
domesticated numbers seeking its way in the world, rather than an orderly
parade of tamed wild cats.
1.0, 2.0, 3.0, NT, 3.1, 95, 98, ME, 2000, XP, Vista, 7, 8.

This is no accident. Microsoft has always been a company that has
sought its way in the existing world, rather than inviting the world into a
fabricated universe of non sequiturs like Apple, Macintosh and Lisa.
The original portmanteau, MICRO-computer SOFT-ware, was a
seeking of a place in a world defined by others. The micro-computer was
ordinally a lesser thing than the mini-computer. Soft-ware was one of three
wares: hard, soft and firm. An element in a set of cardinality three. It was a
shy, retiring and polite name, that knew its place in the scheme of things.
But the personality worked, and Microsoft quietly took over the
universe it entered so politely. Windows was a literal-minded appropriation
of the name of a key element of the desktop metaphor. Office seeks to
belong in the workplace rather than redefine it. Internet Explorer remains
the only browser that presumes to name itself after the thing it explores.
How a company names itself, its products and services, and its
organizational parts, tells you a great deal about it.
To number something, implicitly or explicitly, cardinally or
ordinally, is the first step in a grander project to order, tag and classify a
part of reality; to prepare it for timeless forms of manipulation:
replacement and interchange. To number is to subsume the particular
within the general.
But to really name something, in the sense of Le Guin, is to disrupt
that project at every turn by discovering new magic that confounds the
creeping logic of a rigidly ontological enterprise.
To really name is to find leaks as quickly as the number-givers find
water-tight categories. To break connections thought secure and make new
ones, previously considered impossible. To create difference
(irreplaceability and non-interchangeability) as fast as numbering creates
homogeneity.
This is perhaps why I still trust Microsoft more than I trust Apple. In
the mess that is the Windows sequence-numbering, I find reassurance.

8
To position is to number and name at the same time, and create
something that is both a being and a becoming. Something rooted, that
seeks to connect and get along, and something restless that seeks to get
ahead and away.
To position a thing is to teach it to get ahead, get along, and get away.
We project onto the memetic world of names our own fundamental
genetically-ordained proclivities. Evolutionary biology tells us that getting
ahead and getting along are the basic drives that govern life for a social
species. To this, as a species that invented individualism sometime in the
10th century AD, we must add getting away. The drive to become more
than a rank and number. To become a name, even if the only available one,
alpha, is taken.
The Microsoft version soup is Darwin manifest.
Getting ahead, getting along and getting away. Ordinal numbering,
cardinal numbering and naming. Name, rank and number.
Perhaps it is naming and numbering that are fundamental, not biology.
To number well is to comprehend symmetries and anticipate as-yet-unnamed realities; holes in schemata, to be filled in the future. And so we
name new elements before discovering them, imagine antimatter when we
only know of matter. To categorize well is to create timeless order.
Mendeleev's bold leap advanced both chemistry and the art and science of
naming.
To number poorly is to squeeze, stuff and snip. To constrain reality to
our fearful and limited conception of it.
To name well is to challenge and court numbers.
To name poorly is to kill or be killed by numbers.

Naming without numbering creates a chaotic unraveling. Numbering
without naming creates orderly emptiness.
It takes discipline to couple the two forces together. And sometimes,
numbers and names dance together beautifully to create magic, as when
Murray Gell-Mann found inspiration in James Joyce's line, "three quarks
for Muster Mark."
9
To name is also to hide and cloak. To switch stories and manufacture
realities. This is the world of Don Draper. He dons a mask, and drapes
new realities over old ones. Starting with his own life.
And so Operation Infinite Justice became Operation Enduring
Freedom.
I was supposed to be named after my grandfather, in keeping with the
timeless ABABAB rhythm. I would have been Rama Rao. But then
they broke with tradition.
My mother wanted to name me Rahul, but my grandmother objected:
it is a name with deep significance for Buddhists: the name of the
Buddha's son.
Fortunately, in the (cardinal and ordinal) universe of a thousand
names that is Vishnu (there is actually a long hymn known as the Vishnu
Sahasranama, Vishnu of the Thousand Names), a close cousin of
Rama was found.
And so I came into the world as Venkatesh. A break from tradition, but
not quite a complete break. Certainly not a defection to a competing
tradition. That would have upset my grandmother.
I once wanted to name an algorithm I'd developed Mixing Bandits,
since it used mechanisms inspired by bandit processes. I gave a draft of
my paper to a distinguished professor in the field. He liked my work, but
objected to the name. My allusive overloading of a precise term did not sit
well with him. Mathematically, my algorithm was not related enough to
bandit processes.
So this grandmother rejected the baby, refusing to absorb it into the
family tradition. It wanders the world today as an illegitimate orphan of
the noble clan that has disavowed it, under the clumsy and undistinguished
name MixTeam scheduling.
10
In the genealogy of a single name you can trace entire grand
narratives.
Once upon a time, there was a company in Rochester called Haloid. It
made photographic paper and lived in the giant shadow of a company
across town called Kodak.
Haloid wanted to grow up. So it acquired a technology called
xerography: a name coined by a Greek scholar to situate the idea of dry
writing within the illegible history of that long intellectual tradition within
which the West seeks to situate everything it does.
Ironically, the technology was not the result of a long, gradually
evolving tradition that can be traced back to the Greeks. Not only did the
Greeks have nothing to do with it, as the biographer of the technology,
David Owen, notes, There was no one in Russia or France who was
working on the same thing. The Chinese did not invent it in the 11th
century BC.
Xerography sprang almost fully-formed from the mind of one man,
Chester Carlson. He systematically set about the project of inventing and
patenting something truly new. He managed to do so by putting an obscure
property of the element Selenium to a completely unexpected use.
So Haloid became Haloid Xerox, and eventually just Xerox. It is a
powerful name. So powerful that it subsumed the name of the man who
created it, Joe Wilson. During my time at Xerox, the Wilson Center for
Research and Technology (WCRT) became the Xerox Research Center,

Webster (XRCW). Across the world you will find XRCE (Europe), XRCC
(Canada) and XRCI (India). To earn its right to a unique name within this
orderly namespace, the sole rebel, PARC, had to unleash planet-disrupting
forces.
Xerography eventually became electrophotography, in the hands of
envious competitors who appeared after the trust-busters had done their
work. The name that had gotten ahead and away now had to get along. My
name is photography. Electro-photography.
They still call it xerography at Xerox though.
11
And across town, Kodak slowly declined and began to die. There is
irony here as well.
Photography does have a long history. The ancient Greeks did have
something to do with it. The ancient Chinese did know about pinhole
cameras. The French did play a role.
But Kodak is one of those rare names that was born through an act of
pure invention. George Eastman is quoted as saying about the letter k: "it
seems a strong, incisive sort of letter." Yes, incisive like a knife.
The story goes that Eastman and his mother created the name from an
anagrams set. Wikipedia says about the process:
Eastman said that there were three principal concepts
he used in creating the name: it should be short; one cannot
mispronounce it, and it could not resemble anything or be
associated with anything but Kodak.
The first two principles are still adhered to by marketers when
possible. The last has been abandoned since the 1970s, when the
positioning era began.
As with Wilson, the child soon eclipsed the father. Eastman Kodak
became just Kodak to the rest of the world. In proving the soundness of his

principles of memetic stability, Eastman ceded his own place in the history
of naming to a greater name.
Haloid, incidentally, is a reference to the binary halogen compounds of
Silver used in photography. The word halogen was coined by Berzelius
from the words hals (sea or salt) and gen (come to be). "Coming to
be" of the sea. It may be the most perfect name, suggesting the being and
becoming that is the essence of both naming and chemistry.
Jöns Jacob Berzelius is a founding father of chemistry in large part
due to his prolific naming. He came up with protein as well. He was also
responsible for naming Selenium. From the Greek Selene, for Moon.
It was no small achievement. Chemistry is a science of variety and
difference. It deals in so many different things that a narrowly taxonomic
mind will fail to appreciate its broader patterns.
In declaring that "Physics is the only real science, all the rest are just
stamp collecting," Rutherford failed to appreciate chemistry the way
Berzelius did. As an ongoing grand narrative with lesser and greater
patterns.
Some things deserve names, like protein, and others merely abstract,
categorical formulas, like CnH2n+2, or names that just fall short of
cohering into semantic atoms, like "completely saturated hydrocarbon."
12
Counting and naming are at once trivial and profound activities.
Toddlers learn to count starting with One, Two, Three...
Terence Tao has won a Fields Medal and lives numbers like nobody
else alive today. And he is still basically learning to count. At levels you
and I would consider magic, but it is counting nevertheless.
Toddlers learn to name, starting with me, mama and dada.

Ursula Le Guin has won five Hugo and six Nebula awards, but is
fundamentally still a name-giver.
Names are born of universes, be they small ones that contain only
Kodak or large ones that contain all of Western civilization between alpha
and omega.
It is very hard to make up universes. It is easier to borrow and
disguise them, as Tolkien and Frank Herbert did.
And it is very hard to do so without accidentally causing collisions
between large, old namespaces that might not like each other, as my mom
found out with Rahul.
Lazy novelists are laziest with names, and the work falls apart. When
you have named every character in your novel perfectly, your novel is
finished. Plot and character converge towards perfection as names do.
Names in turn create universes. Carnegie Hall, Carnegie Foundation,
Carnegie-Mellon University.
To name is to choose one universe to draw from and another to create.
Rockefeller gave his name to few things. He preferred bland names like
Standard Oil and The University of Chicago.
And so it is that the Carnegie Universe is very visible, while the much
larger Rockefeller Universe is more hidden from sight.
13
Rockefeller chose to create, and hide much of what he created. But
you can go further. Beyond hiding lies un-naming. To un-name is to deny
identity.
To un-name and un-number is to anonymize completely.
It is useful for the name-giver to ponder the complementary problem
of un-naming. If to position is to name and number, to de-position is to un-name and un-number.

You must seek randomness to disrupt the timeless order imposed by
numbering, disconnection to counter the narrative order created by
naming. Like Dorian Taylor, you must seek cryptonyms.
Cryptonym itself is from the Greek words for hidden and name.
Randomness is hard.
To un-name is to fight the natural. Given enough time, even a set of
cryptonyms will fail to arrest a cohering identity. To truly arrest a name,
even changing the cryptonym at a random frequency is not enough. The
underlying cohering realities must be disrupted.
14
Names demand to be born, and hijack numbers if no worthy ones
appear. And so we have 9-11 and Chapter 11.
At other times, names strain to hang on to life, with no stories to tell.
In the arid, random desert that is bingo, where numbers rule, names
struggle.
Only to a Bingo player is 22 two little ducks.
Few numbers truly rise to the level of human meaning, and they are
all small: 13, 42, 867-5309.
The largest number in my life that is also a name with permanent
narrative significance is 1174831686.
When I was nine or ten, our local newspaper, The Telegraph, launched
a club for kids in its Sunday edition, called the Wiz Biz Club. I signed up
excitedly, to belong and to make new friends. That was my membership
number.
I received a badge, some stickers and an ID card with that number.

So Venkatesh Rao became 1174831686. That cryptonym was
probably the start of my struggle to own my name instead of being owned
by it.
I am glad to report that despite it being an extremely common Indian
name, I now own venkateshrao.com (it redirects to this site) and almost
the entire first page of Google results. Vishnu can have the other 999
names, but I plan to pwn this one, at least for one lifetime.
15
We dimly recognize, even without the aid of mathematicians who
study such things, that numbers win this decidedly unequal contest of
appreciation and manipulation in the long-term.
In the beginning, we generously allowed our businesses, products and
services to share the older namespaces of people and geographies. East
India Company, Jardine Matheson, Carnegie Steel, Johnson & Johnson.
That strategy quickly exhausted itself, and so we energetically began
manufacturing Xeroxes, Kodaks, Microsofts and Apples.
The first really-big-numbers company decided to name itself after a
number, Google. Its home became an even bigger number, Googleplex.
After Google, the Internet began throwing up naming needs faster
than humans could manufacture them, and the orderly taxonomy
unexpectedly imposed on the world by the Internet Domain Name system
suddenly made life very difficult indeed.
So far, we've kept up by inventing quasi-algorithmic models: flickr,
dopplr, e-widget, i-doodad.
But eventually naming as a way to understand and construct reality
will fail. Technology creates complexity that creeps inexorably towards
the unnameable-but-significant.
When semantic genealogies in naming give way to syntactic and
lexicographic genealogies, you are halfway to the world of pure numbers
(there is a cute scene in Neal Stephenson's Cryptonomicon, where
members of an online group decide to abandon names and stick to purely
numbering and ranking the world; the split occurs between those who seek
cryptonyms and those who seek a fundamental order within which, for
instance, Earth might be numbered 1).
The march that begins with Aachen and Aardvark cannot keep up
with a universe that throws countable, but not-nameable, variety at us. We
count on, long after we can no longer name. And eventually we cannot
count, either, and must stare at an unnameable, uncountable void and
wonder, as some mathematicians do, whether it even exists, given
how it eludes characterization.
Yet we persist with both naming and numbering, finding solace in
imposing a partial lexicographic order on reality, even as the struggle gets
harder.
16
I have not used the word brand even once in this post, until just now.
Over the years, I have lost confidence in the utility of the concept.
It is appropriate only for the cardinal-ordinal world of mass
manufacturing, where everything has a rank and number, but very few
things have real names. Most brands are McBrands. Billions upon billions
have been served up by marketers and fond parents. Most represent no
deeper reality than the first answer to the question, name, rank and
number.
It is not surprising. After all, the very word originates in processes that
evolved to superficially distinguish the essentially interchangeable. In the
world of cows, and pottery before that, to brand was to mark for
identification and counting, and little else.
Brand is an abstraction that adds very little to the more fundamental
concepts of naming and numbering, and the key derivative concept of
positioning. In fact, it is distracting. The word makes it far too easy to lose
yourself in abstractions. Naming and numbering keep you honest and

focused on the gestalt you are trying to distill, with repeated tests. The
story of these attempts is what we know as PR, and with each proposed
naming and positioning test you can ask, do I understand this story yet?
Without such test-driven naming, branding is an exercise in waterfall
marketing.
To the extent that it is a useful word at all, it describes a consequence
rather than an action. Away from the concrete world of cows being
tortured with red-hot irons, there is no actual action that you can call
branding.
You name, number and position. You then make up non-verbal
correlates (colors and logos) that derive from these basic elements.
These are things you do.
Brand happens.

How to Think Like Hercule Poirot


August 31, 2009
Last fall, I spent a long weekend in the Outer Banks region, a few
hours south of Washington, DC, reading a collection of Agatha Christie
pastiches called Malice Domestic, Volume 1 (now the title of an annual
mystery conference). The summer tourist season was over, and the hordes
had moved on to Maine and Vermont to chase the Fall colors. The days
were gray, windy, rainy and chilly. The beach front properties had mostly
emptied out, and most of the summer attractions were closed. We had a
large three-level beach front house to ourselves, with a porch facing the
troubled, ominous sea.

The ocean view from our hotel at Cape Hatteras, Outer Banks

Perfect conditions for bundling up in a blanket with a cup of hot
cocoa and a mystery. Reading Malice Domestic was a revelation. None of
the included writers even came close to creating Christie-like magic.
Which led me to wonder: does Poirot endure because he represents certain
truths about how to think effectively, which lesser fictional detectives
lack? I think so.

The Poirot Doctrine


I learned from the varied failures in Malice Domestic that period
settings, isolated cozy contexts (such as locked libraries) and quirky
detective personalities are not necessary, let alone sufficient, for an
effective mystery story. Neither is parlor-trick deductive rationality of the
Holmes variety.
What makes Poirot endure is his capacity for what I call narrative
rationality (the title of a chapter of the book I am writing): the ability to
understand and influence a situation through stories. What saves the quirks
of his character (such as his penchant for merely arranging facts,
borderline obsessive-compulsive fastidiousness and sybarite comfort-seeking) from seeming arbitrary is that they integrate seamlessly and
logically into his thinking style.
One element of narrative rationality is particularly important in
Poirot's style, the fact that it is strongly driven by a doctrine, a set of
beliefs about how the world works and should work. Poirot's doctrine
constrains and defines his narrative imagination, which helps drive the
plot.
Poirot's doctrine comprises several sorts of right-brained, left-brained
and moral beliefs that allow him to quickly get beyond a myopic
Holmesian preoccupation with footprints and cigarette ash. He can
therefore think more effectively at higher levels of abstraction and
ambiguity. Sure, as a literary creation, Poirot is rather crude, and yes, the
contrived nature of his cases can make his thinking style itself seem
contrived. Still, his thought processes, unlike those of Sherlock Holmes
say, are surprisingly useful as a model for us non-fictional humans in the
real world.
Poirot's psychological doctrine, in particular, is a robustly intelligent
one, based on subtle ideas about human behavior and skepticism of
jargon-happy Freudian-technical theorizing. An example is the assertion
he offers (I forget in which novel): "women are sometimes tender, but they
are never kind." I forget how Poirot uses the idea in his reasoning, but I
remember immediately feeling a great sense of clarity and relief when I
read it. It is a personality heuristic, one that I find to be true, that
requires the vocabulary of a storyteller rather than that of the theorist or
experimentalist, and proves powerful in reasoning about human (in this
case, female) behavior.
This is a right-brained sort of doctrinal element, one that enables him
to recognize patterns. But Poirot can go left-brained as well. For instance,
at one point he explains his bachelorhood to Captain Hastings as follows:
"In my experience, I know of five cases of wives being murdered by their devoted husbands. And twenty-two husbands being murdered by their devoted wives. So thank you, no. Marriage, it is not for me." Poirot is a
Bayesian rationalist: he applies the spouse-as-prime-suspect principle
frequently in stories. In fact it is so likely that a husband or wife will turn
out to be the murderer in a Christie novel that she has to expend much of
her ingenuity in muddying marital equations.
Poirot's Moral-Philosophical Universe
But even right- and left-brained tendencies do not add up to whole-brained narrative rationality. This is where Poirot truly rises above other
fictional detectives: there is a moral-philosophical dimension to his
thinking that is at once fatalistic (people do not change) and normative.
Though he is Catholic, his views are actually closer to the Protestant
doctrine of predestination, and the Poirot plots are, as a consequence, often
Greek-tragic in their inevitability (Death on the Nile is a good example).
His most frequent normative doctrinal utterance is probably "I do not approve of murder." The line usually appears after Poirot has provided a
nuanced and sympathetic exposition of the motives and actions of all
concerned, and it seems like he has practically justified the murderer's
actions. But once he presents his compelling theory of the case, he draws
his line in the sand. Unlike the non-fictional francophone Madame de Staël, who is credited with the quote "to understand all is to forgive all" ("Tout comprendre rend très-indulgent"), Poirot never allows the
murkiness of psychology to cloud his moral vision, thereby saving the
Poirot stories from the tedious and self-absorbed agonies of many modern
fictional detectives.

Poirot's moral philosophy mostly seems to be inherited from Christie herself: Poirot, like Christie, is a religious conservative who is deeply
suspicious of socialist save-the-world tendencies. Curiously, some of his
moral strengths seem to arise from Christie's subconscious awareness of,
and overcompensation for, her own moral flaws. Christie herself is
blatantly xenophobic and racist (see Hickory Dickory Dock for instance).
Poirot began his career in The Mysterious Affair at Styles like any other
xenophobia-inspired Christie caricature, full of ridiculous, unreconstructed
Latin pomposity. But he evolves through later novels into an ironically
self-aware egoist. By the time of his death in Curtain, he has evolved in
ways that the English, with their misguided sense of modesty and self-deprecation, never can.
To the extent that the moral elements of Poirot's doctrine represent
philosophical truths, they simplify his detective work and allow him to
drive events towards decisive outcomes. This again, is an element of his
thinking style that I find useful in the real world: keep your psychology
complex, but your morality simple. Otherwise you'll never get anything
done.
There is one last element in Poirots doctrine: the recognition and
exploitation of the flaws of others' doctrines. The best known exploit, of
course, is his tendency to exaggerate his foreignness and play on the
xenophobic prejudices and assumptions of civilizational superiority on the
part of the English characters (who always seem to describe him with
archaic words like "mountebank" and "jackanapes"). The key moment of redemption in a Poirot novel, the one that anchors the reader's
identification with him, is when a shrewd English character calls Poirot
out on his charade, at which point he can assume his fully-realized
character. But this is not just a recurring motif of exposition and
identification in the Poirot canon. The very point of a Poirot novel is to
validate and reinforce the superiority of Poirot's doctrine over lesser
doctrines. The moment of truth is not really the revelation of the murderer,
but the point in the story at which it becomes clear that Poirot's world
view provides the best perspective with which to make moral sense of the
plot. The solution to the murder validates the doctrine.
The whole-brained Poirot doctrine (right-brained, left-brained and moral) allows him to reason around more ambiguous situations than any other fictional detective. The integrated unit of thought in Poirot-style thinking is the story. He urges witnesses to talk freely, speculate, and tell
their story as they please, correctly understanding that people think,
remember and talk (whether they are lying or telling the truth) through
narratives. His own theories in turn, take the form of evolving stories,
which he continually tests for both psychological and empirical
plausibility. Though he has the dramatic imagination of a playwright, he
never loses sight of the distinction between bald facts and the accounts of
those facts; he never hesitates to kill beautiful theories if they fail to
account for
even a single trivial observation or psychological
implausibility. And of course, like any good fictional detective, the
significance he assigns to specific facts in his stories is often very different
from the significance attributed to them by his witnesses in their stories.
Poirot stories are really stories about stories.
Christie frequently highlights the complexities of Poirot's thought
processes by juxtaposing them against those of other characters, who
operate by simpler doctrines. Compared to the lurid and sensationalist
imagination of Captain Hastings and the damn-the-facts fantasies of
Ariadne Oliver, Poirot's own theories of the case can appear very prosaic.
On the other hand, the lack of imagination of Inspector Japp and Miss
Lemon can make Poirot seem like Shakespeare. Again, this is not to say
that Poirot is not capable of fantastic imagination when the situation
warrants it, as it does in Murder on the Orient Express. When the facts
justify bold leaps of faith, Poirot leaps.
Beyond Poirot
Perhaps I am backward-looking, but to my mind, Poirot has never
been topped in the annals of fictional detection. Christies other creations
can mostly be dismissed. Tommy and Tuppence are the worst secret agent
characters ever, Parker Pyne is a bore and Superintendent Battle rarely
does anything except look enigmatic while others solve the crime. Even
Miss Marple is pretty much a one-trick right-brained pony. Her stock-in-trade is identifying similarities in personality patterns across widely disparate social situations (an urbane Duke in London might remind her of Tommy the Butcher's Boy). The entire holographic Marple universe is based on the dubious one-element doctrine, "people are much the same everywhere," which allows for specious extrapolations of the social psychology of St. Mary Mead to the rest of the world.
Within the Christie universe, only the mysterious Mr. Quin is
something of a match for Poirot, when it comes to doctrine-driven
detection. In many ways, thanks to being partly a supernatural-allegorical
construct, Mr. Quin is often more sublime than Poirot. If you haven't read
the Mr. Quin books (there are only a few), you should.
Among fictional detectives who have appeared since Poirot (at least
the ones I've read/watched on TV), only Dr. House, solver of medical
mysteries, comes close. Though nominally a Holmes-inspired character
(the show is full of insider Holmes references), the character of House is
much closer to that of Poirot, once you discard the superficial Holmes
connections. Like Poirot, House is an ironic-doctrinaire mix of right-brained intuition, left-brained statistical skepticism, and a complex-but-black-and-white moral compass. The fact that most of us understand
absolutely nothing of the medical jargon in the show underlines the fact
that House's appeal lies at a doctrinal level.
The Short Version
Trust your right-brained pattern-spotting. Be a skeptical, data-driven
empiricist. Add a moral compass. Tie it all together with storytelling. Be
aware of, and exploit, the flawed doctrines of others. Do not be concerned
about the morality of this: doctrinal flaws provide the moral justification
for their own exploitation.

Boundary Condition Thinking


January 19, 2011
It is always interesting to recognize a simple pattern in your own
thinking. Recently, I was wondering why I am so attracted to thinking
about the margins of civilization, ranging from life on the ocean (for
example, my review of The Outlaw Sea [August 27, 2009]) to garbage,
graffiti, extreme poverty and marginal lifestyles that I would never want to
live myself, like being in a motorcycle gang. Lately, for instance, I have
gotten insatiably curious about the various ways one can be non-mainstream. In response to a question I asked on Quora about words that mean "non-mainstream," I got a bunch of interesting responses, which I turned into this Wordle graphic.

Then it struck me: even in my qualitative thinking, I merely follow the basic principles of mathematical modeling, my primary hands-on techie skill. This interest of mine in "non-mainstream" is more than a
romantic attraction to dramatic things far from everyday life. My broader,
more clinical interest is simply a case of instinctively paying attention to
what are known as boundary conditions in mathematical modeling.

Mathematical Thought
To build mathematical models, you start by observing and brain-dumping everything you know about the problem, including key unknowns, onto paper. This brain-dump is basically an unstructured take on what's going on. There's a big word for it: phenomenology. When I do
a phenomenology-dumping brainstorm, I use a mix of qualitative notes,
quotes, questions, little pictures, mind maps, fragments of equations,
fragments of pseudo-code, made-up graphs, and so forth.
You then sort out three types of model building blocks in the
phenomenology: dynamics, constraints and boundary conditions
(technically all three are varieties of constraints, but never mind that).
Dynamics refers to how things change, and the laws that govern those
changes. Dynamics are front and center in mathematical thought. Insights
come relatively easily when you are thinking about dynamics, and sudden
changes in dynamics are usually very visible. Dynamics is about things
like the swinging behavior of pendulums.
Constraints are a little harder. It takes some practice and technical
peripheral vision to learn to work elegantly with constraints. When
constraints are created, destroyed, loosened or tightened, the changes are
usually harder to notice, and the effects are often delayed or obscured. If I
were to suddenly pinch the middle of the string of a swinging string-and-weight pendulum, it would start oscillating faster. But if you are paying
attention only to the swinging dynamics, you may not notice that the
actual noteworthy event is the introduction of a new constraint. You might
start thinking, "there must be a new force that is pushing things along faster," and go hunting for that mysterious force.
This is a trivial example, but in more complex cases, you can waste a
lot of time thinking unproductively about dynamics (even building whole
separate dynamic models) when you should just be watching for changes
in the pattern of constraints.
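Here is a minimal Python sketch of the pinched-pendulum example, using nothing beyond the textbook small-angle period formula T = 2*pi*sqrt(L/g); the specific length is made up, and the point is only that the governing law never changes, just the constraint:

    import math

    def period(length_m, g=9.81):
        # Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)
        return 2 * math.pi * math.sqrt(length_m / g)

    L = 1.0                      # original string length (made-up number)
    before = period(L)           # free swing
    after = period(L / 2)        # string pinched at its midpoint: same law, new constraint

    print(f"period before pinch: {before:.2f} s, after: {after:.2f} s")
    print(f"apparent speed-up factor: {before / after:.2f}")

The swing speeds up by a factor of about 1.41 (the square root of two), and nothing about the dynamics changed; the "mysterious force" is just a shortened constraint.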
Inexperienced modelers are often bored by constraints because they
are usually painful and dull to deal with. Unlike dynamics, which dance around in exciting ways, constraints just sit there, usually messing up the dancing. Constraints involve tedious-to-model facts like "if the pendulum swings too widely, it will bounce off that wall." Constraints are
ugly when you first start dealing with them, but you learn to appreciate
their beauty as you build more complex models.
Boundary conditions, though, are the hardest of all. Most of the raw,
primitive, numerical data in a mathematical modeling problem lives in the
description of boundary conditions. The initial kick you might give a
pendulum is an example. The fact that the rim of a vibrating drum skin
cannot move is a boundary condition. When boundary conditions change,
the effects can be extremely weird, and hard to sort out, if you aren't
looking at the right boundaries.
The effects can also be very beautiful. I used to play the Tabla, and
once you get past the basics, advanced skills involve manipulating the
boundary conditions of the two drums. That's where much of the beauty of
Tabla drumming comes from. Beginners play in dull, metronomic ways.
Virtuosos create their dizzy effects by messing with the boundary
conditions.
In mathematical modeling, if you want to cheat and get to an illusion
of understanding, you do so most often by simplifying the boundary
conditions. A circular drum is easy to analyze; a drum with a rim shaped
like Lake Erie is a special kind of torture that takes computer modeling to
analyze.
A little tangential kick to a pendulum, which makes it swing mildly in
a plane, is a simple physics homework problem. An off-tangent kick that
causes the pendulum bob to jump up, making the string slacken, before
bungeeing to tautness again, and starting to swing in an unpleasant conic,
is an unholy mess to analyze.
But boundary conditions are where actual (as opposed to textbook)
behaviors are born. And the more complex the boundary of a system, the
less insight you can get out of a dynamics-and-constraints model that
simplifies the boundary too much. Often, if you simplify boundary

conditions too much, the behaviors that got you interested in the first place
will vanish.
Dynamics, Constraints and Boundaries in Qualitative Thinking
Without realizing it, many smart people without mathematical training
also gravitate towards thinking in terms of these three basic building
blocks of models. In fact, it is likely that the non-mathematical
approach is the older one, with the mathematical kind being a codified and
derivative kind of thinking.
Historians are a great example. The best historians tend to have an
intuitive grasp of this approach to building models using these three
building blocks. Here is how you can sort these three kinds of pieces out
in your own thinking. It involves asking a set of questions when you begin
to think about a complicated problem.
1. What are the patterns of change here? What happens when I do
various things? What's the simplest explanation here? (dynamics)
2. What can I not change, where are the limits? What can break if
things get extreme? (constraints)
3. What are the raw numbers and facts that I need to actually do
some detective work to get at, and cannot simply infer from what I
already know? (boundary conditions).
Besides historians, trend analysts and fashionistas also seem to think
this way. Notice something? Most of the action is in the third question.
That's why historians spend so much time organizing their facts and
numbers.
This is also why mathematicians are disappointed when they look at
the dynamics and constraints in models built by historians. Toynbee's monumental work seems, to a dynamics-focused mathematical thinker, much ado about an approximate 2nd-order under-damped oscillator (the cycle of Golden and Dark ages typical in history). Hegel's historicism and
End of History model appears to be a dull observation about an
asymptotic state.

How the World Works


In a way, the big problem that interests me, which I try to think about
through this blog, is simply "how does the world work?"
At this kind of scale, the hardest part of building good models is
actually in wrestling with the enormous amount of boundary-condition data. That's where you either get up off the armchair, or turn to Google or Amazon. Thinking about boundary conditions (organizing the facts and numbers in elegant ways) becomes an art form in its own right, and you
have to work with stories, metaphors and various other crutches to get at
the right set of raw data to inform your problem. Only after you've done
that do dynamics and constraints get both tractable and interesting.
Abstractions and generalizations, if they can be built at all, live in the
middle. Stories live on the periphery.
This is part of the reason I don't like traditional mathematical models at "how the world works" scale, like System Dynamics. They ignore or
oversimplify what I think is the main raw material of interest: boundary
conditions. A theory of unemployment, slum growth and housing
development cycles in big cities that ignores distinctions among
vandalism, beggary and back-alley crime is, in my opinion, not a theory
worth much. If you could explain elegantly why some cities in decline
turn to crime, while others turn to vandalism or beggary, then you'd have
interesting, high-leverage insights to work with.
It's not surprising, therefore, that one of the most seductive ideas in abstract thinking about history, the deceptively simple center-periphery
idea (basically, the idea that change and new historical trends emerge on
the peripheries and in the interstices of centers) is extremely hard to
analyze mathematically, since it involves a weird switcheroo between
boundary conditions and center conditions. Some day, I'll blog about center-periphery stuff. I have a huge, unprocessed phenomenology brain-dump on the subject somewhere.
So in a way, thinking about things like the words in the graphic is my
way of wrapping my mind around the boundary conditions of the problem, "how does the world work?" If I just made up a theory of the mainstream
world based on mainstream dynamics, it would be very impoverished. It
would offer an illusion of insight and zero predictive power. A theory of
the middle that completely breaks down at the boundaries and doesn't explain the most interesting stories around us is deeply unsatisfying.
I have proof that this approach is useful. Some of my most popular
posts have come out of boundary conditions thinking. The Gervais
Principle series was initially inspired by the question, how is Office
funny different from Dilbert funny? That led me to thinking about
marginal slackers inside organizations, who always live on the brink of
being laid off. My post from last week, The Gollum Effect [January 6,
2011], came from pondering extreme couponers and hoarders at the edge
of the mainstream.
So I operate by the vague heuristic that if I pay attention to things on
the edge of the mainstream, ranging from motorcycle gangs to extreme
couponers and hoarders, perhaps I can make more credible progress on big
and difficult problems.
Or at least, that's the leap of faith I make in most of my thinking.

Learning From One Data Point


September 28, 2010
Sometimes I get annoyed by all the pious statistician-types I find all
around me. They aren't all statisticians, but there are a lot of people who raise "analytics" and "data-driven" to the level of a holy activity. It isn't that I don't like analytics. I use statistics whenever it is a cost of doing business. You'd be dumb not to take advantage of ideas like A/B testing
for messy questions.
What bothers me is that there are a lot of people who use statistics as
an excuse to avoid thinking. Why think about what ONE case means,
when you can create 25 cases using brute force, and code, classify, cluster,
correlate and regress your way to apparent insight?
This kind of thinking is tempting, but is dangerous. I constantly
remind myself of the value of the other approach to dealing with data:
hard, break-out-in-a-sweat thinking about what ONE case means. No
rules, no formulas. Just thinking. I call this "learning from one data point."
It is a crucially important skill because by the time a statistically
significant amount of data is in, the relevant window of opportunity might
be gone.
Randomness and Determinism
The world is not a random place. Causality exists. Patterns exist. In
grad school, I learned that there are two types of machine learning models in AI: models based on reasoning, and models based on statistics and
probability. This applies to both humans and machines. Both are driven by
feedback, but one kind is driven mainly by statistical formulas, while the
other kind is driven by thinking about the new information.
The probability models, like reinforcement or Bayesian learning, are
very easy to understand. They involve a few variables and a lot of clever
math, mostly already done by smart dead people from three centuries ago,
and programmed into software packages.

The reasoning models, on the other hand, are complex, but largely qualitative, and most of the thinking is up to you, not Thomas Bayes. Explanation-Based Learning (EBL) is one type. A slightly looser form is Case-Based Reasoning (CBR). Both rely on what are known as rich domain theories.
Most of the hard thinking in EBL and CBR is in the qualitative thinking
involved in building good domain theories, not in the programming or the
math.
The statistical kind requires lots of data involving a few variables. Do
people buy more beer on Fridays? Easy. Collect beer sales data, and you
get a correlation between time t and sales s. Gauss did most of the
necessary thinking a couple of hundred years ago. You just need to push a
button.
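A minimal Python sketch of that push-a-button kind of analysis, with made-up daily sales numbers (the data and the Friday effect are invented purely for illustration):

    # Made-up (sales, is_friday) pairs for two weeks of daily beer sales.
    sales = [(31, False), (30, False), (33, False), (35, False), (52, True), (40, False), (29, False),
             (32, False), (28, False), (34, False), (36, False), (55, True), (41, False), (30, False)]

    fridays = [s for s, is_friday in sales if is_friday]
    other_days = [s for s, is_friday in sales if not is_friday]

    print(sum(fridays) / len(fridays))          # average Friday sales
    print(sum(other_days) / len(other_days))    # average non-Friday sales

The formulas and averages do all the work; the only thinking is in deciding to ask the question.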
EBL, CBR and other similar models are different. A textbook example
is learning endgames in chess. If I show you an endgame checkmate
position involving a couple of castles and a king, you can think for a bit
and figure out the general explanation of why the situation is a checkmate.
You will be able to construct a correct theory of several other checkmate
patterns that work by the same logic. One case has given you an
explanation that covers many other cases. The cost: you need a rich
domain theory in this case a knowledge of the rules of chess. The
benefit: you didnt waste time doing statistical analyses of dozens of
games to discover what a bit of simple reasoning revealed.
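Here is a minimal sketch of the endgame example, assuming the third-party python-chess package is available; the two positions are made up, but both are instances of the same one-case explanation (one rook confines the king to the edge rank while the other delivers check along it):

    import chess  # assumes the python-chess package is installed

    # Two different boards, one shared explanation: the king is pinned to the
    # back rank by one rook and checked along that rank by the other.
    positions = [
        "R6k/1R6/8/8/8/8/8/7K b - - 0 1",   # rooks on a8 and b7, black king on h8
        "1k5R/6R1/8/8/8/8/8/7K b - - 0 1",  # rooks on h8 and g7, black king on b8
    ]

    for fen in positions:
        board = chess.Board(fen)
        print(fen, "->", board.is_checkmate())   # True for both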
Looser case-based reasoning involves stories rather than 100%
watertight logic. Military and business strategy is taught this way. Where
the explanation of a chess endgame could potentially be extended
perfectly to all applicable situations, it is harder to capture what might
happen if a game starts with a Sicilian defense. You can still apply a lot
of logic and figure out the patterns and types of game stories that might
emerge, but unlike the 2-castles-and-king situation, you are working in too
big a space to figure it all out with 100% certainty. But even this looser
kind of thinking is vastly more efficient than pure brute-force statistics-based thinking.
There's a lot of data in the qualitative model-based kinds of learning as well, except it's not two columns of x and y data. The data is a fuzzy set of hard and soft rules that interact in complex ways, and lots of
information about the classes of objects in a domain. All of it deployed in
the service of an analysis of ONE data point. ONE case.
Think about people for instance. Could you figure out, from talking to
one hippie, how most hippies might respond to a question about drilling
for oil in Alaska? Do you really need to ask hundreds of them at Burning
Man? It is worth noting that random samples of people are
extraordinarily hard to construct. And this is a good thing. It gives people
willing to actually think a significant advantage over the unthinking data-driven types.
The more data you have about the structure of a domain, the more you
can figure out from just one data point. In our examples, one chess
position explains dozens. One hippie explains hundreds.
People often forget this elementary idea these days. I've met idiots (who shall remain unnamed) who run off energetically to do data collection
and statistical analysis to answer questions that take me 5 minutes of
careful qualitative thought with pen and paper, and no math. And yes, I
can do and understand quite a bit of the math. I just think 90% of the
applications are completely pointless. The statistics jocks come back and
are surprised that I figured it out while sitting in my armchair.
The Real World
Forget toy AI problems. Think about a real-world question: A/B
testing to determine which subject lines get the best open rates in an email
campaign. Without realizing it, you apply a lot of model-based logic and
eliminate a lot of crud. You end up using statistical methods only for the
uncertainties you cannot resolve through reasoning. That's the key: statistics-based methods are the last-resort, brute-force tool for resolving
questions you cannot resolve through analysis of a single prototypical
case.
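As a minimal sketch of that last-resort step, here is the standard two-proportion z-statistic in Python; the subject lines and open counts are invented:

    import math

    # Hypothetical A/B test: opens out of sends for two subject lines.
    a_opens, a_sends = 212, 1000
    b_opens, b_sends = 258, 1000

    p_a, p_b = a_opens / a_sends, b_opens / b_sends
    p_pool = (a_opens + b_opens) / (a_sends + b_sends)           # pooled open rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_sends + 1 / b_sends))
    z = (p_b - p_a) / se                                          # two-proportion z-statistic

    print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")   # |z| > 1.96 is the usual 5% threshold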
Think about customer conversations. Should you talk to 25 customers
about whether your product is good or bad? Or will one deep conversation
yield more dividends?

Depends. If there is a lot of discoverable structure and causality in the


domain, one in-depth customer conversation can reveal vastly more than
25 responses to a 3-question survey. You might find out enough to make
the decision you need to make, and avoid 24 other conversations.
But it takes work. A different kind of work. You can go have lunch
with just ONE well-informed person in an organization and figure out
everything important about it, by asking about the right stories, assessing
that person's personality, factoring out his/her biases, applying everything
you know about management theory and human psychology, and spending
a few hours putting your analysis together. You won't produce pretty
graphs and hard evidence of the sort certain idiots demand, but you will
know. Through your stories and notes, you will know. And nine times out
of ten, you'll be right.
That's the power of one data point. If you care to look, a single data
point or case is an incredibly rich story. Just listen to the story, tease out
the logic within it, and you'll learn more than by attempting to listen to
fifty stories and fitting them all into the same 10-variable codification
scheme. Examples of statistical insights that I found incredibly stupid
include:
1. Beyond a point, more money doesn't make people happier
2. Religious people self-report higher levels of happiness than
atheists
Duh. These and other insights are accessible much more easily if you just bother to think. Usually the thinking path gets you more than
the statistics path in such cases. I cite such results to people who look for
that kind of verification, but I personally don't bother analyzing such
statistical results deeply.
Sure, it is good to be humble and recognize when you don't have
enough modeling information from one case. Sure, data can prove you
wrong. It doesn't mean you stop thinking and start relying on statistics for everything. Look at the record of statistics-based thinking. How often
are you actually surprised by a data-driven insight? I bet you are like

me. Nine out of ten times you ask, "they needed a study to figure THAT out?"
And the 1/10 times you get actual insight? Well, consider the beer and
diapers story. I don't tell that story. Statistics-types do.
This means going with your gut-driven deep qualitative analysis of
one anecdotal case will be fine 9 out of 10 times.
The Real Reason "Data-Driven" is Valued
So why this huge emphasis on quants and "data-driven" and "analytics"? Could a good storyteller have figured out and explained (in
an EBL/CBR sense) the subprime mortgage crisis created by the quants? I
believe so (and I suspect several did and got out in time).
I think the emphasis is due to a few reasons.
First, if you can do stats, you can avoid thinking. You can plug and
chug a lot of formulas and show off how smart you are because you can
run a logistic regression and the Black-Scholes derivative pricing formula
(sorry to disappoint you; no, you are not that smart. The people who
discovered those formulas are the smart ones).
Second, numbers provide safety. If you tell a one-data-point story and
you turn out to be wrong, you will get beaten up a LOT more badly than if
your statistical model turns out to be based on an idiotic assumption.
Running those numbers looks more like real work than spinning a
qualitative just-so story. People resent it when you get to insights through
armchair thinking. They think the honest way to get to those insights is
through data collection and statistics.
Third: runaway behavioral economics thinking by people without
the taste and competence to actually do statistics well. I'll rant about that
another day.
Don't be brute-force statistics-driven. Be feedback-driven. Be
prepared to dive into one case with ethnographic fervor, and keep those

analytics programs handy as well. Judge which tool is most appropriate


given the richness of your domain model. Blend the two together.
Qualitative storytelling and reasoning, and statistics.
And if I were forced to choose, I'd go with the former any day.
Human beings survived and achieved amazing things for thousands of
years before statistics ever existed. Their secret was thinking.

Lawyer Mind, Judge Mind


March 29, 2012
Several recent discussions on a variety of unrelated topics with
different people have gotten me thinking about two different attitudes
towards dialectical processes. They are generalized versions of the
professional attitudes required of lawyers and judges, so I'll refer to them as "lawyer mind" and "judge mind."
In the specialized context of the law, the dialectical process is
structurally constrained and the required attitudes are codified and legally
mandated to a certain extent. Lawyers must act as though they were
operating from a lawyer-mindset, even if internally they are operating with
a judge-mind. And vice-versa. Outside of the law, the distinction acquires
more philosophical overtones.
I want to start with the law, but get to a broader philosophical,
psychological and political distinction that applies to all of us in all
contexts.
The Two Minds in Law
The lawyer mind allows you to make up the best possible defense or
prosecution strategy with the available evidence. Within limits, even if the
defense lawyer is convinced his client is guilty, s/he is duty-bound to make
the best possible case and is not required to share evidence that
incriminates the defendant or weakens the case. I asked several questions
about this sort of thing on Quora and got some very interesting answers
from lawyers. If you are a lawyer or judge and have opinions on these
basic questions, you may want to add them as answers to the questions
rather than as comments here.
The legal system is designed so that lawyers are under an ethical and
legal obligation to try and win, rather than get at the truth in any sense.
So a defense lawyer with a flimsy case, who is convinced of his client's

guilt, but who wins anyway because the prosecution is incompetent, is


doing his job. S/he should not pull his/her punches.
What's more, there is a philosophy behind the attitude. It is not letter
over spirit. It is letter in service of the spirit. If things are working well, the
lawyer should not suffer agonies to see justice not being served in the
specific case, but find solace in the fact of the dialectic being vital and
evolving as it should.
The lawyer, by pulling out all stops for a legal win, regardless of the
merits of the case, is philosophically trusting the search for truth to the
dialectic itself, and where the dialectic fails in a particular instance, s/he (I
expect) views it as necessary inefficiency in the interests of the longer-term evolution of the legal system. It's the difference between "not in my job description" small-mindedness and "trusting the system" awareness of one's own role and its limitations.
The judge's nominal role is to act as a steward of the dialectic itself
and make sure it is as fair as can be at any given time, without attempting
to push its limits outside of certain codified mechanisms. The judge is
charged with explicitly driving towards the truth in the particular case,
and also improving the system's potential (its dialectical vitality) so
that it discovers the truth better in the future (hence the importance of
writing judgments with an eye on the evolution of case law, which is
supposed to run a few steps ahead of legislation as a vanguard, and
discover new areas that require legislative attention).
When Does This Work?
Now, if you think about it, this scheme of things works well when the
system is actually getting wiser and smarter over time. If the system is
getting dumber and more subverted over time, it becomes harder and
harder for either the lawyer or the judge to morally justify their
participation in and perpetuation of the system (assuming they care about
such things).
A challenge for a judge might be, for instance, an increasing influence
of money in the system, with public defenders getting worse over time,

and rich people being able to buy better and better lawyers over time. If
this is happening, the whole dialectic is falling apart, and trust in the
system erodes. Dialectical vitality drains away and the only way to operate
within the system is to become good at gaming it without any thought to
larger issues. This is the purely predatory vulture attitude. If a legal system
is full of vulture-lawyers and vulture-judges, it is a carcass.
A moral challenge for a lawyer might be, for instance, deciding
whether or not to use race to his/her advantage in the jury selection
process, effectively using legal processes to get racial discrimination
working in his client's favor. Should the lawyer use such tactics, morally
speaking? It depends on whether the dialectic is slowly evolving towards
managing race more thoughtfully or whether it is making racial
polarization and discrimination worse.
This constant presence of the process itself in peripheral vision means
that both lawyers and judges must have attitudes towards both the specific case and the legal system in general. So an activist judge, for instance, might be judge-minded with respect to the case, but lawyer-minded with respect to the dialectic (i.e., being visibly partisan in their
philosophy about if and how the system should evolve, and either being
energetic or conservative in setting new precedents). You could call such a
person a judge-lawyer.
A lawyer who writes legal thrillers on the side, with a dispassionate,
apolitical eye on process evolution, might be called a lawyer-judge. A
lawyer with political ambitions might be a lawyer-lawyer. I can't think of
a good archetype label for judge-judge, but I can imagine the type: an
apolitical judge who is fair in individual cases and doesn't try too hard to
set precedents, but does so when necessary.
The x-(x)-X-(X) Template
Because of the existence of an evolving dialectic framing things, you really have four possible types of legal professionals: lawyer-lawyers,
judge-judges, lawyer-judges and judge-lawyers, where the first attitude is
the (legally mandated and formal-role based) attitude towards a specific

case, and the second is the (unregulated) political attitude towards the
dialectic.
When the system is getting better all the time, all four roles are
justifiable. But when it is gradually worsening beyond the point of no
return, none of them is. When things head permanently south, a mismatch
between held and demonstrated beliefs is a case of bad faith. Since all
hope for reform is lost, the only rational responses are to abandon the
system or be corrupt within it.
To get at the varieties of bad faith possible in a collapsing dialectic,
you need to distinguish between held and demonstrated beliefs at both
case and dialectic levels to identify the specific pattern.
So you might have constructs like lawyer-(judge)-lawyer-(lawyer).
This allows you to slice and dice various moral positions in a very fine-grained way. For example, I think a legalist, in the sense that the term has been used in history, is somebody who adopts a lawyer-like role in a specific case within a dialectic that's decaying and losing vitality, while
knowing full well that it is decaying. Legalists help perpetuate a dying
dialectic. You could represent this as lawyer-(judge)-judge-(lawyer). I'll
let you parse that.
This is getting too meta even for me, so I'll leave it to people who are better at abstractions to make sense of the possibilities here. I'll just leave it at the abstract template expression I've made up: x-(x)-X-(X).
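Just to make the combinatorics concrete, here is a minimal Python sketch that enumerates the template; reading the four slots as demonstrated and (held) attitudes at the case level, then at the dialectic level, is my interpretation, since the parsing is left to the reader:

    from itertools import product

    minds = ("lawyer", "judge")

    # Four binary slots: demonstrated-(held) attitude towards the case,
    # then demonstrated-(held) attitude towards the dialectic.
    positions = [f"{a}-({b})-{c}-({d})" for a, b, c, d in product(minds, repeat=4)]

    print(len(positions))   # 16 fine-grained positions
    print(positions[:2])    # lawyer-(lawyer)-lawyer-(lawyer), lawyer-(lawyer)-lawyer-(judge)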
The special case of the law illuminates a broader divide in any sort of
dialectical process. Some are full of judge-mind types. Others are full of
lawyer-mind types.
The net behavior of a dialectic depends not just on the type of people
within it, but on its boundary conditions: at the highest level of appeal, do
judge-minds rule or lawyer-minds?
Within the judiciary, even though there are more lawyer minds, the
boundary conditions are at the Supreme Court, where judge minds rule. So
the dialectic overall is judge-minded due to the nature of its highest appeal
process.

In other dialectics, things are different because the boundary conditions are different.
Governance Dialectics
The watershed intellectual difference that separates conservative
(more lawyer-like) and liberal discourses (more judge-like) around a
particular contentious subject is framed by the boundary conditions of the
governance dialectic itself.
Politics exists within the dialectic that in principle subsumes all
others: the governance dialectic. "In principle" because if the governance
dialectic loses vitality, the subsumed dialectics can devour their parents.
You could argue that in a democracy where the legislative branch has
the ability, in principle, to amend the constitution arbitrarily, the overall
governance dialectic is one where the lawyer mind is the ultimate source
of authority, since the top body is a bunch of formally lawyer-mind types.
There are no judge-mind types with any real power, especially in
parliamentary democracies. Nominally judicial roles like the Speaker are
mostly procedural rather than substantive.
The theory of an independent judiciary does not in practice give
judge-mind people equal authority. The check-and-balance powers of the
judiciary are based on seeking to make the law more internally consistent
rather than improving its intentions or governing values. Of course, if the
legislative arm is slow in keeping up with the landscape being carved out
by case law, the judiciary gains more de facto power. That's a subsumed
dialectic devouring its parent.
So in a democracy, lawyer-minds are structurally advantaged, since
the most powerful institution is set up for lawyer minds. Bipartisanship
(judge minds operating in a legislature) takes a special effort to go beyond
the structural default through an act of imagination.
Among the other institutions in a free-market democracy, theoretically
the judiciary, executive and free press are nominally judge-minded at their boundaries, while the market is lawyer-minded (more on that in a bit). So there is structural lawyer-mind bias in the top-level institutions (the
legislature and the market) and a structural judge-mind bias in the
secondary institutions (the judiciary, the press and the executive branch).
Traditional Imperial China was the opposite. The legal system
ultimately derived its authority from a judge-mind figure, the Emperor.
The lawyers were second-class citizens.
Other Dialectics
The notion of free press is currently being radically transformed
due to the fundamental tension between journalism and blogging.
Journalism, at least nominally, is driven by a judge-mind dialectic.
Journalists nominally aspire to a fair-and-balanced (without the Fox News
scare quotes) role in society.
Blogging is driven by a lawyer-mind dialectic. Bloggers trust that the
truth will out in some larger sense, and feel under no moral obligation to
present or even see all sides of an issue. If the opposed side has no
credible people, well, tough luck. The truth will just take a little longer to
out. This gradual transformation of dialectical boundary conditions has
been particularly clear in the various run-ins between Michael Arrington
and newspapers like the Washington Post. This too is a case of a subsumed
dialectic devouring its parent, since the government basically has no idea
what its role in the new media world should be.
Science is another important dialectic. I won't attempt to analyze it
though, since it exists in a feedback loop with the rest of the universe, and
is too complicated to treat here. Religion used to be dialectical in nature,
but isn't any more. But science is unimportant socially because it is very
fragile, and in a world that is socially messy, it is easily killed. It never
rules primarily because it takes a certain minimum amount of talent to
participate in the scientific dialectic, which makes it similar to a minority
dialectic.

Religion used to be a real dialectic. Now it is mostly theater in service of political dialectics.
Capitalism is another dialectic with the capacity to devour
governance, just like the judiciary. But it is lawyer-like, not judge-like.
The idea of a "fiduciary duty to maximize shareholder wealth" in the US is a lawyer-like duty towards society. The trend towards "social businesses" (B-corporations in the US) is an attempt to invent companies
with more judge-like duties towards society. For the former to work, the
market has to be closer to truly competitive, and getting better all the time.
The invisible hand must be guided by an invisible and emergent judicial
mind.
In an environment where pure competition has been greatly
subverted, it is hard to justify this fiduciary duty. The rise of B-corporation philosophy indicates a failure in the governance dialectic,
since emergent judge-mind attitudes that should exist at the legislative
level are being devolved to the corporate level.
In the US, the legislature has abdicated the spirit, if not the letter, of
its responsibilities. Fiduciary duty may be a terrible idea, but the better
solution would be to shift to a different, but still lawyer-mind model. This
is because the market has a far lower capacity to manifest an emergent
judge-mind. Since it is the governance dialectic that controls the nature
and future of money, the principal coordination mechanism for the market,
the market is ultimately subservient in principle, just like the judiciary.
Since the top-level emergent judge-mind requires a culture of
bipartisan legislative imagination to exist, a legislative branch that cannot
define imaginative visions on occasion enables a takeover by the
structurally advantaged lawyer minds that comprise it, which leads to
polarization and a power vacuum, which in turn leads to the devouring by
nominally subsumed dialectics.
This is not an accident. By its very nature, you cannot structurally
advantage judge-minds at the ultimate boundary of a social system. If you
do, you are essentially legitimizing a sort of divine authority. The top level has to be lawyer-minds arguing by default, with an occasional lawyer gaining enough trust across the board to temporarily play judge.
Societies fail when their governance processes fail to demonstrate
enough imagination for sufficiently long periods. We are living through
such a period in the US today, as well as in many other parts of the world.
Governance processes across the world have lost their vitality and there is a lot of devouring by dialectics they are supposed to subsume.
In the past, during periods of such failure, violent adjustments have occurred. War is, after all, the social dialectic of last resort. Both world
wars and the US Civil War represented such adjustments. In each case, the
governance dialectic was revitalized, but at enormous cost in the short
term.
Empathy and Passion
When you approach all reality with an intrinsic lawyer mind, you
fundamentally believe that no matter how powerful your perspective-shifting abilities, you cannot adopt all relevant points of view. Not even all human points of view. With a judge-mind, by contrast, your starting
assumption is that you will eventually be able to appreciate all points of
view in play. It is a somewhat arrogantly visionary perspective in that
sense, and requires exhibition of a sufficient imagination to justify itself.
With a lawyer-mind, for instance, if you are white, you don't presume to understand the black point of view. With a judge-mind, you assume you can. Your emotions can also be lawyer-like (polarized passion) or judge-like (dispassionate).
If you are aware of, and unconflicted about, your role in a given
dialectic, you don't try to either suppress or amplify your emotions. You
try to be mindful about how they influence your intellectual processes and
control that influence if you think it is counter-productive. Up to a point,
passion improves a lawyer mind and lack of passion improves a judge
mind. Too much passion, and a lawyer-mind becomes emotionally
compromised. Too little passion and a judge mind becomes apathetic.
Both pathologies lead to procedural mistakes.

Passion cannot be conjured up out of nothing, nor can it be created or destroyed independently of intellectual reactions. So if you need more or
less passion for your role, you have to either change your role via a true
intellectual shift, or borrow or lend passion. This requires empathy.
Depending on whether the passion is on your side or the opposite
side, empathy can make you more lawyer-minded or more judge-minded.
Empathy for a friend makes you more lawyer-like. Empathy for a rival
makes you more judge-like. This is how dialectics get more or less
polarized. A dialectic with vitality can swing across this range more easily.
One that lacks vitality gets locked into a preferred state.
So there is a sort of law of conservation of passion in a given
situation, with passions of different polarities canceling out via cross-divide empathy, or reinforcing via same-side empathy.
There is a certain irreversibility and asymmetry though. Judge-minds, being fundamentally dispassionate, cannot absorb passion and become
lawyer-like as easily as lawyer-minds can absorb opposed passions and
become more judge-like. This means judge-minds are more stable than
lawyer-minds. To lower polarization, all the minds in a dialectic must mix
more and let passions slosh and cancel out somewhat via empathy. This
means breaking down boundaries and creating more human-to-human
contact. To preserve or increase polarization on the other hand, artificial
barriers must be created and maintained. Or you need a situation where
material dialectics, like war and natural calamities, happen to be highly
active.
This is fundamentally why the labels "conservative" and "progressive"
mean what they do in politics. This is also why conservatives are typically
better organized institutionally. They have walls to maintain to prevent
contamination of their lawyer minds.
And finally, this is also why the governance dialectic is structurally
set up to advantage lawyer-minds at the highest levels: they need the
structure more. It is up to judge-minds to transcend existing structures and
imagine more structure into existence.

Knowing Your Place


With a lawyer mind in improving times, you conclude that your job is
merely to do your absolute best with the perspectives you can access
directly or via empathy, and trust larger processes to head in sane
directions.
The lawyer mind is therefore an open system view that is more robust
to unknown-unknowns. It trusts things it does not actually comprehend. It
is intellectually conservative in that it knowingly limits itself. The judge
mind is a closed system view that is less robust to unknown unknowns. It is intellectually ambitious in that it presumes to adopt a see-all/know-all
stance. It does not trust what it cannot comprehend and is limited by what
it can imagine.
Paradoxically, what makes a judge-mind closed is its capacity for
imagination, while a lawyer-mind is open by virtue of its lack of
imagination.
The ability to adopt many conflicting perspectives
dispassionately fuels imaginative synthesis, but this synthesis then
imprisons the judge mind. The reverse paradox holds for lawyer minds.
These paradoxes suggest that each type of mind contains the seed of
the other, yin-yang style. I'll leave you to figure out how. The fundamental
delusion of a frozen judge-mind is the belief that this yin-yang state can
exist in one mind all the time. The fundamental delusion of a frozen
lawyer-mind is the belief that it never can.
In the Myers-Briggs system, J(udging) and P(erceiving) represent what I've been calling the lawyer and judge mindsets
respectively. Ironic that the labels are somewhat reversed.
Psychologically, I am a P (a fairly strong INTP), but intellectually,
over the years I've become increasingly lawyer-minded rather than judge-minded. Perhaps it is the effect of blogging. Perhaps it is a growing sense
of the limits of my own abilities.

In terms of more artistic archetypes, the fox and hedgehog reflect lawyer and judge minds.

Just Add Water


February 29, 2012
A Bill Gates/Roy Amara quote I encountered last week reminds me
strongly of compound interest.
We always overestimate the change that will occur in
the next two years and underestimate the change that will
occur in the next ten.
I hadn't heard this line before, but based on anecdotal evidence, I
think Amara was right to zeroth order, and it is a very smart comment. The
question is why this happens. I think the answer is that we are naturally
wired for arithmetic, but exponential thinking is unnatural. But I haven't
quite worked it out yet. We probably use some sort of linear prediction
that first over-estimates and then under-estimates the underlying
exponential process, but where does that linear prediction come from?
Anyone want to take a crack at an explanation? I could be wrong.
Compound interest/exponential thinking might have nothing to do with it.
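For what it is worth, here is a minimal Python sketch of one version of that guess, with made-up numbers: an underlying process that doubles every two years, against a hyped forecast that projects a constant two units of change per year.

    # Made-up comparison: exponential reality vs. a linear (hype) forecast.
    def actual(t_years):
        return 2 ** (t_years / 2)     # doubles every two years

    def forecast(t_years):
        return 1 + 2.0 * t_years      # "two more units of change every year, forever"

    for t in (2, 10):
        a, f = actual(t), forecast(t)
        print(f"year {t}: forecast {f:.0f}x vs actual {a:.0f}x "
              f"({'over' if f > a else 'under'}estimate)")

With these particular invented numbers, the two-year prediction overshoots and the ten-year prediction undershoots, which at least reproduces the Amara pattern; whether that is the real mechanism is exactly the open question above.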
***
When I write, I generally start with some sort of interesting motif, like
the Gates quote, that catches my eye, which I then proceed to attempt to
unravel. Sometimes it turns out there's nothing there, and sometimes a
trivial starting point can fuel several thousand words of exploration.
I call this the "just add attention" model of writing. It's like just-add-water concentrates. A rich motif will yield a large volume of mind fuel if
you just dissolve it in a few hours of informed attention.
The previous nugget is an example. If I were to let it simmer for a few
days and then sit down to do something with the Gates quote, I would probably be able to spin a 4000-word post from it. I figured I'd let you
guys take a crack at this one.

My hit rate has been steadily improving. Nowadays, when I suspect that something will sustain exploration to such and such a depth, I am
almost always right.
I prefer the word "motif" to words like "pattern" or "clue," because it is
more general. A motif merely invites attention. By contrast, a pattern
attracts a specific kind of analytical attack, and a clue sets up a specific
kind of dissonance.
***
The nature of just-add-attention writing explains why it is hard for me
to write short posts. If I wrote short posts, they'd just be too-clever
questions with no answers, or worse, cryptic motifs offered with no
explanation.
You cannot really compress just-add-attention writing. You can only
dehydrate it back into a concentrate. Just-add-attention writing has a
generative structure but no clear extensive structure. It is like a tree rather
than a human skeleton.
By this I mean that you can take the concentrate (the motif) and repeatedly apply a particular generative process to it to get to what you might call an extensive form. But this extensive form has no clear structure at the
extensive level. At best, it has some sort of fractal structure. A human
skeleton is a spine with four limbs, a rib cage and a skull attached. A tree
is just repeated tree-iness.
But I hesitate to plunge forward and call all generative-extensive
forms fractal, as you might be tempted to do. Fractal structures have more
going on.
***
Just-add-attention writing is partially described well by Paul Graham's essay about writing essays, which somebody pointed out to me after I posted my dense writing piece a few weeks back. But I don't think it is the same as the Graham model. I think the Graham model involves
more conscious guidance from a separate idea about the aesthetics of
writing, sort of like bonsai.
Just-add-attention writing is driven by its own aesthetic. This can lead
to unpredictable results, but you get a more uncensored sense of whether
an idea is actually beautiful.
Dense writing is related to just-add-attention in a very simple way:
making something dense is a matter of partially dehydrating an extensive
form again, or stopping short of full hydration in the first place. Along
with pruning of bits that are either hard to dilute or have been irreversibly
over-diluted.
Why would you want to do that? Because just-add-attention writing
can sort of sprawl untidily all over the place. Partially dehydrating it again
makes it more readable, at the cost of making it more cryptic.
This add-attention/dehydrate again process can be iterated with some
care and selectivity to create interesting artistic effects. It reminds me of a
one-word answer Xianhang Zhang posted on Quora to the question, "how do you chop broccoli?" Answer: "recursively."
Regular writing can be chopped up like a potato. Just-add-attention
writing must be chopped up like broccoli. It is more time-consuming. That's why I cannot do what some people innocently suggest, simply
serializing my longer pieces as a sequence of arbitrarily delineated parts. I
have never successfully chopped up a long piece into two shorter pieces.
At best, I have been able to chop off a straggling and unfinished tail end
into another draft and then work that separately.
***
Not all generative processes lack extensive structure. The human
skeleton is, after all, also the product of a generative process (ontogeny).
To take a simpler example, the multiplication table for 9 is defined by a
generative rule (9 times n), but also has an extensive structure:

09
18
27
36
45
54
63
72
81
90
In case you didn't learn this trick in grade school, the extensive
structure is that you can generate this table by writing the numerals 0-9
twice in adjacent columns, in ascending and descending order.
If you wanted to blog the multiplication table for 9, and had to keep it to one line, you could use either:

The nine times table is generated by multiplying 1, 2, ..., n by 9, or


Write down 0-9 in ascending order and then in descending order in
the next column

Both are good compressions, though the second is more limited. But
this is rare. In general, a sufficiently complex generative process will
produce an extensive-form output that cannot then be compressed by any
means other than rewinding the process itself.
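A minimal Python sketch of the two compressions side by side, just to make the generative/extensive distinction concrete (both decompress to the same table shown above):

    # Generative rule: multiply 1 through 10 by 9.
    generative = [9 * n for n in range(1, 11)]

    # Extensive trick: tens digit 0-9 ascending, units digit 9-0 descending.
    extensive = [10 * tens + units for tens, units in zip(range(0, 10), range(9, -1, -1))]

    print(generative)                # [9, 18, 27, 36, 45, 54, 63, 72, 81, 90]
    print(generative == extensive)   # True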
***
Just-add-attention writing is easy for those who can do it, but not
everybody can do it. More to the point, of the people who can do it, a
significant majority seem to find it boring to do. It feels a little bit like
folding laundry. It is either a chore, or a relaxing experience.
What sort of people can do it?

On the nature front, I believe you need a certain innate capacity for
free association. Some people cannot free associate at all. Others free
associate wildly and end up with noise. The sweet spot is being able to
free associate with a subconscious sense of the quality of each association
moderating the chain reaction. You then weave a narrative through what
youve generated. The higher the initial quality of the free association, the
easier the narrative weaving becomes.
On the nurture front, this capacity for high-initial-quality free
association cannot operate in a vacuum. It needs data. A lot of data,
usually accumulated over a long period of time. What you take in needs to
age and mature first into stable memories before free association can work
well on this foundation. The layers have to settle. By my estimate, you
have to read a lot for about 10 years before you are ready to do just-add-water writing effectively.
Unfortunately, initial conditions matter a lot in this process, because
our n+1 reading choice tends to depend on choices n and n-1. The reading
path itself is guided by free association. But since item n isn't usable for fertile free association until, say, you've read item n+385, there is a time
lag. So your reading choices are driven by partly digested reading choices
in the immediate past.
So if you make the wrong choices early on, your "fill the hopper"
phase of about 10 years could go horribly wrong and fill your mind with
crap. Then you get messed-up effects rather than interesting ones.
So there is a lot of luck involved initially, but the process becomes a
lot more controlled as your memories age, adding inertia.
***
This idea that just-add-attention writing is driven by aged memories
of around 10 years of reading suggests that the process works as follows.
When you recognize a motif as potentially interesting, it is your
stored memories sort of getting excited about company. "Interesting" is a
lot of existing ideas in your head clamoring to meet a new idea. That's
why you are sometimes captivated by an evocative motif but cannot say
why. You won't know until your old ideas have interviewed the new idea
and hired it. Motif recognition is a screening interview conducted by the
ideas already resident in your brain.
Or to put it in a less overwrought way, old ideas act as a filter for new
ones. Badly tuned filters lead to too-open or too-closed brains. Well-tuned
ones are open just the right amount, and in the right ways.
Recognition must be followed by pursuit. This is the tedious-to-some
laundry-folding process of moderated free association. It is all the ideas in
your head interrogating the new one and forming connections with it.
Finally, the test of whether something interesting has happened is
whether you can extract a narrative out of the whole thing, once the
interviewing dies down.
A good free association phase will both make and break connections.
If your brain only makes connections, it will slowly freeze up because
everything will be connected to everything else. This is as bad as nothing
being connected, because you have no way to assess importance.
The pattern of broken and new connections (including those
formed/broken in distant areas) guides your narrative spinning.

The Rhetoric of the Hyperlink


July 1, 2009
The hyperlink is the most elemental of the bundle of ideas that we call
the Web. If the bit is the quark of information, the hyperlink is the
hydrogen molecule. It shapes the microstructure of information today.
Surprisingly though, it is nearly as mysterious now as it was back in July
1945, when Vannevar Bush first proposed the idea in his Atlantic Monthly
article, "As We May Think." July 4th will mark the second anniversary of
Ribbonfarm (I started on July 4th, 2007), and to celebrate, I am going to
tell you everything I've learned so far about the hyperlink. That is the lens
through which I tend to look at more traditional macro-level blog-introspection
topics, such as "how to make money blogging," and "will
blogs replace newspapers?" So with a "Happy Second Birthday,
Ribbonfarm!" and a "Happy 64th Birthday, Hyperlink," let's go explore
the hyperlink.

Hyper-Grammar and Hyper-Style


The hyperlink is not a glorified electronic citation-and-library-retrieval
mechanism. The electronic library perspective, that the
hyperlink is merely a convenience that comes with the cost of amplifying
distraction, is a myopic one. But lousy though it is, that is our starting
point. The first airports were designed to look like railway stations after
all. As McLuhan said, "We see the world through a rear-view mirror. We
march backwards into the future." Turn around, backward march.
This is the default mental model most people have of hyperlinks, a
model borrowed from academic citation, and made more informal:

• Implicit inline: Nick Carr believes that Google is making us stupid.

• Explicit inline: Nick Carr believes that Google is making us stupid
(see this article in the Atlantic).

Both are simple ports of constructs like "Nick Carr believes [Carr,
2008] that Google is making us stupid." There are a couple of mildly
interesting things to think about here. For instance, the hyper-grammatical
question of whether to link the word "believes," as I have done, or the title.
Similarly, you can ask whether "this" or "article" or "this article in the Atlantic"
should be used as the anchor in the explicit version. There is also the
visual-style question of how long the selected anchor phrase should be: the
more words you include, the more prominent the invitation to click. But
overall, this mental model is self-limiting. If links were only glorified
citations, a Strunk-and-White hyper-grammar/hyper-style guide would
have little new raw material to talk about.
Lets get more sophisticated and look at how hyperlinks break out of
the glorified-citation mould. Turn around, forward march.
Hyperlinking as Form-Content Mixing
Here are two sentences that execute similar intentions:

• There are many kinds of fruit (such as apples, oranges and
bananas).

• There are many kinds of fruit.

I don't remember where I first saw this clever method of linking (the
second one), but I was instantly fascinated, and I use it when I can. This
method is a new kind of grammar. You are mixing form and content, and
blending figure and ground into a fun "open the secret package" game.

From a traffic-grubbing perspective, the first method will leak less,
because if you already know all about apples, the link tells you what it is
about, and you won't click. So if you want to reference a really unusual
take on apples, the second method is more effective.
Real hyperlink artists know that paradoxically, the more people are
tempted to click away from your content, the more they want to keep
coming back. There is a set of tradeoffs involving compactness,
temptation to click, foreshadowing to eliminate surprise (and retain the
reader), and altruism in passing on the reader. But the medium is friendlier
to generosity in yielding the stage. This yielding the stage metaphor is
important and we will come back to it.
But possibly, this and similar tricks seem trivial to you. Let's do a
more serious example.
Hyperlinking as Blending of Figure, Ground and Voice
A while ago, on the Indian site, Sulekha.com, I wrote an article
pondering the interesting difficulties faced by non-European-descent
writers (like me) in developing an authentic voice in English.
Postmodernists, especially those interested in post-colonial literature,
obsess a lot about this sort of thing. Gayatri Spivak, for instance, wrote a
famous article, "Can the Subaltern Speak?" But scarily-impenetrable people
like Spivak are primarily interested in themes of oppression and politics. I
am primarily interested in the purely technical problem of how to write
authentically in English, in cases where my name and identity are
necessary elements of the text. Consider these different non-hyperlinked
ways of writing a sentence, within a hypothetical story set in Bollywood.

• Fishbowl/Slumdog Millionaire method: Amitabh stared grimly
from a tattered old Sholay poster.

• Expository: Amitabh Bachchan, the Bollywood superstar, stared
grimly from a tattered old Sholay poster. Sholay, as everybody
knew, was the blockbuster that truly established Bachchan.

• Global contextual: Amitabh Bachchan, the Clint Eastwood of
India, stared grimly down from a tattered old Sholay poster.
Sholay, that odd mix of Kurosawa and John Wayne that drove
India wild.

• Salman Rushdie method: Amitabh, he-of-boundless-splendor,
stared down, a-flaming, from a tattered old Sholay poster.

Critics and authors alike agonize endlessly about the politics of these
different voices. This particular example, crossing as it does linguistic and
cultural boundaries, in the difficult domain of fiction, is extreme. But the
same sorts of figure/ground/voice dynamics occur when you write in-culture or non-fiction.
The first simply ignores non-Indian readers, who must look in at
Indians constructing meaning within a fishbowl, with no help. It is simple,
but unless the intent is genuinely to write only for Indians (which is
essentially impossible on the Web, in English), not acknowledging the
global context is a significant decision (whether deliberate or unthinking).
The second method is simply technically bad. If you can't solve the
problem of exposition-overload, you shouldnt be writing fiction.
The third method is the sort of thing that keeps literary scholars up at
nights, worrying about themes of oppression. Is acknowledging Clint
Eastwood as the prototypical strong-silent action hero a political act that
legitimizes the cultural hegemony of the West? What if I'd said "Bruce Lee
of India" or "Toshiro Mifune of India"? Would those sentences be acts of
protest?
Rushdie pioneered the last method, the literary equivalent of theater
forms where the actors acknowledge the audience and engage them in
artistic ways. Rushdie finesses the problem by adopting neither simplicity
nor exposition, but a deliberate, audience-aware self-exoticization-with-a-wink.
If you know enough about India, you will recognize "he-of-boundless-splendor"
as one literal meaning of the name Amitabh, while
Sholay means "flames." By putting in cryptic (to outsiders) cultural
references, Rushdie simultaneously establishes an identity for his voice,
and demands of non-Indians that they either work to access constructible
meaning, or live with opacity. At the same time, Indians are forced to look
at the familiar within a disconcerting context.

But Rushdie's solution is far from perfect. In Midnight's Children, for
instance, he translates chand-ka-tukda, an affectionate phrase for a child in
Hindi, literally as "piece-of-the-moon." A more idiomatically appropriate
translation would be something like "sweetie-pie." Depending on the
connotations of "moon" in non-Indian languages, the constructed meaning
could be anywhere from weird to random. That gets you into the whole
business of talking about languages, local and global metaphors,
translation, and the Sapir-Whorf hypothesis. Fine if that rich tapestry of
crap is what you want to write about. Not so good if you actually just want
to write a story about a pampered child.
Here is a solution that was simply not available to writers in the past:

Amitabh stared down grimly from a ratty old Sholay poster.

(a version of this solution, curiously, has been available to comic-book artists. If the sentence above had been the caption of a panel showing
a boy staring at an Amitabh Bachchan Sholay poster, you would have
achieved nearly the same effect).
This is an extraordinarily complex construct, because the sentence is a
magical, shape-shifting monster. It blends figure and ground compactly;
the gestalt has leaky boundaries limited only by your willingness to click.
Note that you can kill the magic by making the links open in new windows
(which reduces the experience to glorified citation, since you are
insistently hogging the stage and forcing context to stay in the frame).
What makes this magical is that you might never finish reading the story
(or this article) at all. You might go down a bunny trail of exploring the
culture and history of Bollywood. Traditionally, writers have understood
that meaning is constructed by the reader, with the text (which includes the
authors projected identity) as the stimulus. But this construction has
historically been a pretty passive act. By writing the sentence this way, I
am making you an extraordinarily active meaning-constructor. In fact, you
will construct your own text through your click-trail. Both reading and
writing are always political and ideological acts, but here I've passed on a
lot more of the burden of constructing political and ideological meaning
onto you.

The reason this scares some people is rather Freudian: when an author
hyperlinks, s/he instantly transforms the author-reader relationship from
parent-child to adult-adult. You must decide how to read. Your mom does
not live on the Web.
Thats not all. The writer, as I said, has always been part of the
constructed meaning, but his/her role has expanded. Literary theorists
have speculated that bloggers write themselves into existence by
constructing their online persona/personas. The back-cover author
biography in traditional writing was a limited, unified and carefully
managed persona, usually designed for marketing rather than as a
consciously-engineered part of the text. Online however, you can click on
my name, and explore how I present myself on LinkedIn, Facebook and
Twitter. How deeply you explore me, and which aspects you choose to
explore, will change how you construct meaning from what I write.
So, in our three examples, weve gone from backward-looking, to
clever, to seriously consequential. But you ain't seen nothing yet. Let's
talk about how the hyperlink systematically dismantles and reconstructs
our understanding of the idea of a text.
Fractured-Ludic Reading
The Kindle is a curiously anachronistic device. Bezos' desire was to
recreate the ludic reading experience of physical books. To be ludic, a
reading experience must be smooth and immersive to the point where the
device vanishes and you lose yourself in the world created by the text. It
is the experience big old-school readers love. Amazon attempted to make
the physical device vanish, which is relatively unproblematic as a goal.
But they also attempted to sharply curtail the possibilities of browsing and
following links.
In light of what weve said about constructing your own text, through
your click-trail, and your meaning from that text, it is clear that Bezos'
notion of ludic is not a harmless cognitive-psychology idea. It is a
political and aesthetic idea, and effectively constitutes an attitude towards
that element we know as dissonance. It represents classicism in reading.

Writers (of both fiction and non-fiction) have been curiously lagging
when it comes to exploring dissonance in their art. Musicians have gone
from regular old dissonance through Philip Glass and Nirvana to todays
experimental musicians who record, mix and play back random street
noises as performance. Visual art has always embraced collages and more
extreme forms of non sequitur juxtaposition. Dance (Stravinsky), film
(David Lynch) and theater (Beckett) too, have evolved towards extreme
dissonance. Writers though, have been largely unsuccessful in pushing
things as far. The amount of dissonance a single writer can create seems to
be limited by a very tight ceiling, beyond which lies incomprehensible
nonsense (Beckett's character Lucky, in Waiting for Godot, beautifully
demonstrates the transition to nonsense).
In short, we do not expect musical or visual arts to be unfragmented
or smooth or allow us to forget context. We can tolerate extreme
closeness to random noise in other media. Most art does not demand that
our experience of it be ludic the way writing does. Our experience can
be disconnected, arms-length and self-conscious, and still constitute a
legitimate reading. Word-art though, has somehow been trapped within its
own boundaries, defined by a limited idea of comprehensibility and an
aesthetic of intimacy and smooth flow.
There are two reasons for this. First, sounds and images are natural,
and since our brains can process purely unscripted stuff of natural origin,
there is always an inescapable broader sensory context within which work
must be situated. The color of the wall matters to the painting in a way that
the chair does not matter to the reading of a book. Words are unnatural
things, and have always lived in isolated, bound bundles within which
they create their own natural logic. The second reason: music and visual
art can be more easily created collaboratively and rely on the diversity of
minds to achieve greater levels of dissonance (an actor and director for
example, both contribute to the experience of the movie). Writing has
historically been a lonely act since the invention of Gutenberg's press. We
are now returning to a world of writing that is collaborative, the way it
was before Gutenberg.
So what does this mean for how you understand click-happy online
reading? You have two choices:

• You could think of click-happy Web browsing as non-ludic
cognition behavior that is destroying the culture of reading. This is
the view adopted by those who bemoan "continuous partial
attention." This is Nick Carr in "Is Google Making us Stupid?"

• Or you could think of browsing as a new kind of ludic: an
unsettling, fragmented experience that is still comprehensible in
the sense that a David Lynch movie is comprehensible. It is a kind
of ludic that can never be created within one brain. Click trails are
texts whose coherence derives from your mind, but whose
elements derive from multiple other minds.

In other words, when you browse and skim, you aren't distracted and
unfocused. You are just reading a very dissonant book you just made up.
Actually, you are reading a part of a single book. The single book. The one
John Donne talked about. I first quoted this in my post "The Deeper
Meaning of Kindle." The second part is well-known, but it is the first part
that interests us.
All mankind is of one author, and is one volume;
when one man dies, one chapter is not torn out of the book,
but translated into a better language; and every chapter
must be so translated... As therefore the bell that rings to a
sermon, calls not upon the preacher only, but upon the
congregation to come: so this bell calls us all: but how
much more me, who am brought so near the door by this
sickness... No man is an island, entire of itself... any man's
death diminishes me, because I am involved in mankind;
and therefore never send to know for whom the bell tolls; it
tolls for thee.
The Hyperlink as the Medium
If you start with McLuhan, as most people do, there are two ways to
view the Web: as a vast meta-medium, or as a regular McLuhanesque
medium, with nothing meta about it. For a long time I adopted the meta-medium view (after all, the Web can play host to every other form: text,
images, video and audio), but I am convinced now that the other view is

equally legitimate, and perhaps more important. The Web is a regular
medium whose language is the hyperlink. The varieties of hyperlinking
constitute the vocabulary of the Web. If I give you an isolated URL to type
into your browser, for a stand-alone web page with a video or a piece of
text, you are not really on the Web. If there is no clickable hyperlink
involved, you are just using the browser as a novel reading device.
But though hyperlinking can weave through any sort of content, it has
a special relation to the written word. The Gutenberg era was one where
writing was largely an individual act. Before movable type, the epics and
religious texts of the ancient world were harmonies of multiple voices. But
what we have today is not a resurrected age of epics. The multi-voiced
nature of todays hyper-writing is different. The difference lies in the fact
that the entire world of human experience has been textualized online.
Remember what I said about walls and paintings? The color of the wall
matters to the painting in a way that the chair does not matter to the
reading of a book. In the Web of hyperlinks, writing has found its wall.

Seeking Density in the Gonzo Theater


January 11, 2012
Consider this thought experiment: what if you were only allowed
2000 words with which to understand the world?
With these 2000 words, youd have to do everything. Youd be
allowed to occasionally retire some words in favor of others, or invent
new words, but youd have to stick to the budget.
Everything would have to be expressible within the budget: everyday
conversations and deep conversations, shallow thoughts and profound
ones, reflections and expectations, scientific propositions and vocational
instruction manuals, poetry and stories, emotions and facts.
How would you use your budget? Would you choose more nouns or
verbs? How many friends would you elevate to a name-remembered
status? How many stars and bird species would you name? Would you
have more concrete words or more reified ones in your selection? How
many of the most commonly used words would you select? Counting
mathematical symbols as words, how many of those would you select?
Would you mimic others selections or make up your own mind?
***
When I read old texts, I am struck by the density of the writing.
Words used to be expensive. You had to make one word do many things.
That last sentence contains a simple example. I originally had "convey
many meanings" in place of "do many things." For some readers, the
substitution will make no difference. To others, it will make a great deal of
difference.
We talk of dense texts as being layered. They lend themselves to
re-reading from many perspectives over a long period of time. Even as late as
the nineteenth century, we find that the average professional writer wrote
with a density that rivals the densest writing today. With the exception of
scientific writing (best understood as a social-industrial process for
increasing the density of words), every other kind of writing today has
become less layered. Most writing admits one reading, if that.
Dense writing is not particularly difficult. Merely time-consuming. As
the word layering suggests, it is something of a mechanical craft, and you
become better with practice. Even mediocre writers in the past, working
with starter material no denser than todays typical Top 10 blog post, could
sometimes achieve sublime results by putting in the time.
If the mediocre can become good by pursuing density, the good can
become great. Robert Louis Stevenson famously wrote gripping action
sequences without using adverbs and adjectives. His prose has a sparse
elegance to it, but is nevertheless dense with meaning and drama. I once
tried the exercise of avoiding adverbs and adjectives. I discovered that it is
not about elimination. The main challenge is to make your nouns and
verbs do more work.
***
In teaching and learning writing today, we focus on the isolated virtue
of brevity. We do not think about density. Traditions of exegesis (the
dilution, usually oral, of dense texts to the point where they are
consumable by many) are confined to dead rather than living texts.
We have forgotten how to teach density. In fact, we've even forgotten
how to think about it. We confuse density with overwrought, baroque
textures, with a hard-to-handle literary style that can easily turn into
tasteless excess in unskilled hands.
The 2000-word thought experiment, if you try it, will likely force you
to consider density of meaning as a selection factor. Some words, like
schadenfreude, are intrinsically dense. Others, like love, are dense because
they are highly adaptable. Depending on context, they can do many
things.
Density is a more fundamental variable than the length of a text. It is
intrinsic to writing, like the density of a fluid; what is known in fluid
dynamics as an intensive property. The length of an arbitrarily delineated
piece of text, on the other hand, is an extensive property, like the volume of
a specified container of fluid.
Choosing words precisely and crafting dense sentences is important.
Choosing small containers is not.
***
Writing used to be a form of making. I sometimes wonder what it
would be like to have to carve your thoughts onto stone tablets. One of
these days I am going to try carving the first draft of a post in stone.
Writing on paper is also an expensive luxury. There was a time when
writers made their own paper and ink. You had to write with
temperamental things like quills. The practice of calligraphy was not a
writerly affectation. It was a necessary skill in the days of temperamental
media.
The scribe was more of an archivist than a writer. The other ancestor
of the writer, the bard-sage, was both composer and performer. The
average person did not read, but relied on the bard or priest to expand
upon and perform the written, archived word. Particularly good
performances would lead to revisions of the written texts.
When fountain pens and cheap factory-made paper made their
appearance, writers were able to waste paper, and as a consequence,
written words. In the history of thought, the invention of the ability to
waste words was probably as important as the invention of the ability,
famously noted by Alan Kay, to waste bits in the history of
programming.
With cheap paper was born that iconic image of the twentieth century
writer: a writer sitting alone in a room, crumpling up a piece of writing
in frustration, and tossing it into an overflowing waste-paper basket.
Unlike the sage-bard, enacting old texts and beta-testing new ones through
public oral performances, or the scribe, committing tested, quality-controlled
and expensive texts to stone, the modern, pre-Internet writer
was a resource-rich creature of profligate excess.
The very idea of a waste paper basket would have been unthinkable
at one time.
***
It is difficult today to get a sense of how expensive writing used to be.
I once watched a traditional temple scribe demonstrate the process of
making the palm-leaf manuscripts that were used in India until Islam
brought paper-making to the subcontinent. That probably happened a few
centuries after the Abbasids defeated the Tang empire at the Battle of Talas
in 751 AD, and extracted the secret of paper-making from Chinese
prisoners of war.
Palm leaves are easily the worst writing technology ever invented by
a major culture. They make leather, papyrus, paper and silk look like
space-age media by comparison. A good deal that seems strange about
India as an idea suddenly makes sense once you get that the civilization
was being enacted through this ridiculous medium (and equally ridiculous
ones like tree bark) until about 1000 AD. Imagine a modern civilization
that had to keep its grand narrative going using only tweets, and you get
some sense of what was going on.
Here's how you make palm-leaf manuscripts. First you cut little
index-card sized rectangles out of palm fronds and dry them flat. Then you
carefully use a needle to scratch out the text, typically a few lines per
leaf. Then you make an ink out of ground charcoal, carefully rub it into
the scratches, and swab away the excess. Finally, you carefully pierce a
hole through the middle (not the edge, since the thing is brittle) and thread
a piece of string through a sheaf of loosely-related leaves.
Congratulations, you have a book.
Since the sheaf is more unstable than individual leaves, you have to
plan for graceful degradation. Expect individual leaves to be lost or
damaged. Expect accidental shuffling and page numbering turning to
garbage. Expect new leaves to be inserted, like viruses. Don't expect
multi-leaf stories to remain stable. Expect narrative trunks to sprout
branches added by later authors.
The palm leaf manuscript was brittle and easily damaged, available in
one unhelpful size, with a lifespan of perhaps a few decades on average
(carefully preserved ones lasted around 150 years I believe). After that you
had to make a copy if you wanted to keep the ideas alive. If you were rich
or powerful, you could get stuff carved onto stone or copper plates by
slaves. If not, your best bet was to go with palm leaves and hope that
people would descend on your home to make copies.
***
When you look at old writing technology, poetry suddenly makes
sense.
It is modular content that comes in fixed-length chunks, with
redundancy and error-correcting codes built in. It is designed to be
transmitted and copied across time and space through unreliable and noisy
channels, one stone tablet, palm leaf or piece of handmade paper at a time.
The technology was still unreliable enough that the oral tradition remained
the primary channel. Writing began as a medium for backups. Scribes
were the first data warehousing experts. They did more than merely
transcribe the spoken word. They compressed, corrected and encrypted as
well, and periodically updated texts to reflect the extant state of the oral
tradition.
That is why verses are so eminently quotable outside the context of
poems. Poems are extensive oral containers of arbitrary length, in some
cases delineated after the fact. Verses are standardized containers designed
to carry intense, dense, archival-quality words around.
Today we view traditional verse epics as single works. The Iliad has
about 9000 verses. The Mahabharata has about 24,000. It makes far more
sense to talk about both as data-warehoused records of extremely long
(in both time and words) convergent conversations. They are closer to
Google's index than to books.

For the ancients, texts had to be little metered packets. But as paper
technology got cheaper and more reliable, poetry, like many other obsolete
technologies before and after, turned into an art form. Critical function
turned into dispensable style. Meter and rhyme ceased to be useful as
error-correcting coding mechanisms and turned into free dimensions for
artistic expression.
Soon, individual verses could be composed under the assumption of
stable, longer embedding contexts. Extensive works could be delineated a
priori, during the composition of the parts. And the parts could be safely
de-containerized. Rhyming verse could be abandoned in favor of blank
verse, and eventually meter became entirely unnecessary. And we ended
up with the bound book of prose.
Technologically, it was something of a backward step, like reverting
to circuit-switched networks after having invented packet switching, or
moving back from digital to analog technology. But it served an important
purpose: allowing the individual writer to emerge. The book could belong
to an individual author in a way a verse from an oral tradition could not.
***
Poetry gets it right: length is irrelevant. You can standardize and
normalize it away using appropriate containerization. It is density that
matters. Evolving your packet size and vocabulary over time helps you
increase density over time.
My posts range between 2000-5000 words, and I post about once a
week here on ribbonfarm. But there are many bloggers who post two or
three 300-word posts a day, five days a week. They also log 2000-5000
words.
So I am not particularly prolific. I merely have a different packet size
compared to other bloggers, optimized for a peculiar purpose: evolving an
idiosyncratic vocabulary. It seems to take several thousand words to
characterize a neologism like gollumize or posturetalk. But once that is
done, I can reuse it as a compact and dense piece of refactored perception.

You could say that what I am really trying to do on this blog is
compose a speculative dictionary of dense words and phrases. Perhaps one
day this blog will collapse under its own gravity into a single super-dense
post written entirely with 2000 hyperlinked neologisms, like a neutron
star.
Poetry (functional ancient poetry, the cultural TCP/IP of the world
before around 1000 AD) is necessarily a social process, involving, at the
very least, a sage-bard, a scribe, an audience and a patron. The oral culture
refines, distills, tests, reworks, debates and judges. Iterative performance is
a necessary component. When oral exegesis of an unstable verse dies
down, and memorization and repetition validate the quality of the finished
verse, the scribe breaks out his chisel.
The prose book can stand apart from broader social processes in
radically individual ways. It can travel from writer to readers largely
unaltered, setting up a hub-spoke pattern of conversational circuits.
***
I've occasionally described my blogging as a sort of performance art.
But something about that self-description has been bothering me. I have
now concluded that if the description applies at all, it applies to a different
kind of blogger, not me.
The Web obscures the crucial and necessary distinction between oral
and written cultures. Some bloggers perform and talk. Others are scribes.
I think I am a scribe, not a performer.
Yet, there is no easy correspondence between pre-Gutenberg bard-sages and scribes and today's bloggers. In the intervening centuries, we
have seen the rise and fall of the individualist writer, working alone, filling
waste-paper baskets.
History does not rewind. It synthesizes. The blogosphere, I am
convinced, synthesizes the collectivist pre-Gutenberg culture of sage-bard
and scribes with the individualist post-Gutenberg culture of paper-crumpling waste-paper-basket fillers.

In the process of synthesis, virtual circuits must ride once more on top
of a revitalized packet-switched network. The oral/written distinction must
be replaced by a more basic one that is medium-agnostic, like the Internet
itself.
***
According to legend, the sage Vyasa needed a scribe to write down
the Mahabharata as he composed it. Ganesha accepted the challenge, but
demanded that the sage compose as fast as he could write. Wary of the
trickster god, Vyasa in turn set his own condition: Ganesha would have to
understand every verse before writing it down. And so, the legend
continues, they began, with Vyasa throwing curveball verses at Ganesha
whenever he needed a break.
The figure of Vyasa the composer is best understood as a literary
device to represent a personified oral tradition (that perhaps included a
single real Vyasa or family of Vyasas).
But the legend gets at something interesting about the role of a scribe
in a dominantly oral culture. A second-class citizen like a minute-taker or
official record-keeper, the scribe must nevertheless synthesize and
interpret an ongoing cacophony in order to produce something coherent to
write down. When the spoken word is cheap and the written word is
expensive, the scribe must add value. The oral tradition may be the
default, but the written one is the court of final appeal in case of conflict
among two authoritative individuals.
There is a brilliant passage in Yes, Prime Minister, where the Cabinet
Secretary Humphrey Appleby helps the Prime Minister, Jim Hacker, cook
the minutes of a cabinet meeting after the fact, to escape from an informal
oral commitment. Appleby's exposition of the principle of accepting the
minutes as the de facto official memory gets to the heart of the Vyasa-Ganesha legend:
Sir Humphrey: It is characteristic of all committee
discussions and decisions that every member has a vivid
recollection of them and that every member's recollection
of them differs violently from every other member's
recollection. Consequently, we accept the convention that
the official decisions are those and only those which have
been officially recorded in the minutes by the Officials,
from which it emerges with an elegant inevitability that any
decision which has been officially reached will have been
officially recorded in the minutes by the Officials and any
decision which is not recorded in the minutes has not been
officially reached even if one or more members believe
they can recollect it, so in this particular case, if the
decision had been officially reached it would have been
officially recorded in the minutes by the Officials. And it
isn't, so it wasn't.
The key point here is that the scribe must do more than merely
transcribe. He must interpret and synthesize. I suspect the Vyasa-Ganesha
legend was invented by the first scribe paid to write down the hitherto-oral
Mahabharata, to legitimize his own interpretative authority in capturing
something coherent from a many-voiced tradition, with each voice
claiming the authority of a mythical Vyasa.
***
So if the modern blogosphere is neither the collectivist, negotiated
recording of a Grand Narrative, arrived at via a conversation between
scribes and sage-bards, nor the culture of purely individual expression that
reigned between Gutenberg and Tim Berners-Lee, what is it?
For blogging to be performance art, the performer must live an
interesting life and do interesting things. For a while I thought I qualified,
but then I reflected and was forced to admit that my dull daily routine does
not qualify as raw material for performance art.
How about this: instead of a half-coherent oral tradition or the
relatively coordinated doings of the British Cabinet, the blogosphere is
primarily an uncoordinated theater of large-scale individual gonzo
blogging. As culture is increasingly enacted by this theater of decentered

gonzo blogging instead of traditions that enjoy received authority, minute-taking scribe bloggers must increasingly interpret what they are seeing.
The first human scribe who wore the mask of Ganesha could
reasonably assume that there was a coherent trunk narrative with
discriminating judgments required only at the periphery. He would only
be responsible for smoothing out the rough edges of an evolving oral
consensus. Equally Humphrey Appleby could hope for a coherent
emergent intentionality in the deliberations of the cabinet.
But the scribe-blogger cannot assume that there is anything coherent
to be discovered in the gonzo blogging theater. At best he can attempt to
collect and compress and hope that it does not all cancel out.
There is another difference. When words are literally expensive, as
words carved in stone are, anything written has de facto authority,
underwritten by the wealth that paid for the scribe. Scribes were usually
establishment figures associated with courts, temples or monasteries,
deriving their interpretative authority from more fundamental kinds of
authority based on violence or wealth.
With derived authority comes supervision. The compensation for lost
derived authority is the withdrawal of supervision. The scribe-blogger is
an unsupervised and unauthorized chronicler in a world of contending
gonzos. Any authority he or she achieves is a function of the density and
coherence of the interpretative perspective he or she offers on the gonzo-blogging
theater.
***
I wish I could teach dense blogging. I am not sure how I am gradually
acquiring this skill, but I am convinced it is not a difficult one to pick up.
It requires no particular talent beyond a generic talent for writing and
thinking clearly. It is merely time-consuming and somewhat tedious.
Sometimes I strive for higher density consciously, and at other times,
dense prose flows out naturally after a gonzo-blogger memeplex has
simmered for a while in my head. I rarely let non-dense writing out the
door. You need gonzo-blogging credibility to successfully do Top 10 list

posts. I can manufacture branded ideas, but lack the raw material needed
to sustain a personal brand.
Writing teachers with a doctrinaire belief in brevity urge students to
focus. They encourage selection and elimination in the service of explicit
intentions. The result is highly legible writing. Every word serves a
singular function. Every paragraph contains one idea. Every piece of prose
follows one sequence of thoughts. There is a beginning, a middle and an
end. Like a city laid out by a High-Modernist architect, the result is
anemic. The text takes a single prototypical reader to a predictable
conclusion. In theory. More often, it loses the reader immediately, since no
real reader is anything like the prototypical one assumed by (say) the
writer of a press release.
An insistence on focus turns writing into a vocational trade rather than
a liberal art.
Both gonzo blogging and scribe blogging lead you away from the
writing teacher.
Striving for density, attempting to compress more into the same
number of words, inevitably leads you away from the legibility prized by
writing teachers. Ambiguity, suggestion and allusion become paramount.
Coded references become necessary, to avoid burdening all readers with
selection and filtration problems. Like Humpty-Dumpty, you are
sometimes forced to enslave words and chain them to meanings that they
were not born with.
***
Dense writing creates illegible slums of meaning. To the vocational
writer, it looks discursive, messy and randomly exploratory.
But what the vocational writer mistakes for a lack of clear intention is
actually a multiplicity of intentions, both conscious and unconscious.

Francine Prose, in Reading Like a Writer, remarked that beginning
novelists obsess about voice, the question of "who is speaking." She goes on
to remark that the more important question is "who is listening?"
The failure to ask "who is listening" is peculiar to pre-Internet book
writers. You cannot possibly fail that way as a blogger.
The modern extensive-prose, word-wasteful book represents the
apogee of a certain kind of individualism. An individualism that writes
itself into existence through self-expression unmodulated by in-process
feedback, something only entire cultures could afford to do in the age of
stone-carved words. For this kind of writer, the reader was a distant
abstraction, easily forgotten.
A muse was an optional aid to the process rather than a necessary
piece of cognitive equipment. At most, modern pre-blogging book writers
wrote for a single archetypal reader.
For the blogger, a multiplicity of readerly intentions is a given. At the
very least, you must constantly balance the needs of the new reader
against the needs of the long-time reader. Every frequent commenter or
email/IM correspondent becomes an unavoidable muse. This post for
instance, was triggered by a particularly demanding muse who accused
me, over IM, of having gotten lazy over the last few posts and neglecting
this blog in favor of my more commercial, less-dense writing.
She was right. Mea culpa. Having to pay the rent is not a valid excuse
for failing to rise to the challenge of a tricky balancing act.
Density is the natural consequence of trying to say many things to
many distinct people over long periods of time without repeating yourself
too much or sparking flame wars. The long-time reader gets impatient
with repetition and demands compaction of old ideas into a shorthand that
can be built upon. The newcomer demands a courteous, non-cryptic
welcome. Active commenters demand a certain kind of room for their own
expansion, elaboration and meaning construction.
The exegesis of living texts is not the respectful affair that it is around
dead ones. If you blog, there will be blood.

***
In the days of 64k memories, programmers wrote code with as much
care as ancient scribes carved out verses on precious pieces of rock, one
expensive chisel-pounding rep at a time.
In the remarkably short space of 50 years, programming has evolved
from rock-carving parsimony to paper-wasting profligacy.
Still living machine-coding gray eminences bemoan the verbosity and
empty abstractions of the young. My one experience of writing raw
machine code (some stepper-motor code, keyed directly into a controller
board, for a mechatronics class) was enlightening, but immediately
convinced me to run away as fast as I could.
But why shouldn't you waste bits or paper when you can, in service of
clarity and accessibility? Why layer meaning upon meaning until you get
to near-impenetrable opacity?
I think it is because the process of compression is actually the process
of validation and comprehension. When you ask repeatedly, "who is
listening," every answer generates a new set of conflicts. The more you
resolve those conflicts before hitting Publish, the denser the writing. If
you judge the release density right, you will produce a very generative
piece of text that catalyzes further exploration rather than ugly flame wars.
Sometimes, I judge correctly. Other times I release too early or too
late. And of course, sometimes a quantity of gonzo-blogger theater
compresses down to nothing and I have to throw away a draft.
And some days, I find myself staring at a set of dense thoughts that
refuse to either cohere into a longer piece or dissolve into noise. So I
packetize them into virtual palm-leaf index cards delimited by asterisks,
and let them loose for other scribes to shuffle through and perhaps sinter
into a denser mass in a better furnace.

It is something of a lazy technique, ultimately no better than list-blogging
in the gonzo blogosphere. But if it was good enough for
Wittgenstein, it's good enough for me.
***

Rediscovering Literacy
May 3, 2012
I've been experimenting lately with aphorisms. Pithy one-liners of the
sort favored by writers like La Rochefoucauld (1613-1680). My goal was
to turn a relatively big idea, the sort I would normally turn into a 4000-word
post, into a one-liner. After many failed attempts over the last few
months, a few weeks ago, I finally managed to craft one I was happy with:
Civilization is the process of turning the incomprehensible into the
arbitrary.
Many hours of thought went into this 11-word candidate for eternal
quotability. When I was done, I was tempted to immediately unpack it in a
longer essay, but then I realized that that would defeat the purpose.
Maxims and aphorisms are about more than terseness in the face of
expensive writing technology. They are about basic training in literacy.
The aphorism above is possibly the most literate thing I have ever written.
By stronger criteria I'll get to, it might even be the only literate thing I've
ever written, which means I've been illiterate until now.
This post isn't about the aphorism itself (I'll leave you to play with it),
but about literacy.
I used to think that the terseness of written language through most of
history was mostly a result of the high cost and low reliability of writing
technologies in pre-modern times. I now think these were secondary
issues. I have come to believe that the very word literacy meant something
entirely different before around 1890, when print technology became
cheap enough to sustain a written form of mass media.
Literacy as Sophistication
Literacy used to be a very subtle concept that meant linguistic
sophistication. It used to denote a skill that could be developed to arbitrary
levels of refinement through practice. Literacy meant using mastery over
language (both form and content) to sustain a relentless and
increasingly sophisticated pursuit of greater meaning. It was about an
appreciative, rather than instrumental use of language. Language as a
means of seeing rather than as a means of doing.
Reading and writing (the ability to translate language back and
forth between oral and written forms) was a secondary matter. It was a
vocational pursuit of limited depth.
The written form itself was merely a convenience for transmitting
language across space and time, and a mechanism by which to extend the
limits of working memory. It had little to do with language skills per se.
Confusing the two is like confusing the ability to read and write
musical notation with musical ability. You can have exceptional musical
ability without knowing how to read music. And conversely, you might
have no musical ability whatsoever, but still be able to read and write
musical notation and translate back and forth between the keyboard and
paper. Being able to read and write musical notation really has almost
nothing to do with musical ability.
When writing was expensive, conflating the two skills (two-way
translation and sophisticated use) was safe and useful. If somebody knew
how to read and write, you could safely assume that he or she was also a
sophisticated user of language.
It was never considered a necessary condition though, merely a
sufficient one. A revealing sign is that many religious messiahs have been
illiterate in the reading/writing sense, and have had scribes hanging on
their every word, eagerly transcribing away for posterity.
Exposition and Condensation
Before Gutenberg, you demonstrated true literacy not by reading a
text out aloud and taking down dictation accurately, but through
exposition and condensation.

You were considered literate if you could take a classic verse and
expound upon it at length (exposition) and take an ambiguous idea and
distill its essence into a terse verbal composition (condensation).
Exposition was more than meaning-extraction. It was a demonstration
of contextualized understanding of the text, skill with both form and
content, and an ability to separate both from meaning in the sense of
reference to non-linguistic realities.
Condensation was the art of packing meaning into the fewest possible
words. It was a higher order skill than exposition. All literate people could
do some exposition, but only masters could condense well enough to
produce new texts considered worthy of being added to the literary
tradition.
Exposition and condensation are in fact the fundamental learned
behaviors that constitute literacy, not reading and writing. One behavior
dissolves densely packed words using the solvent that is the extant oral
culture, enriching it, while the other distills the essence into a form that
can be transmitted across cultures.
Two literate people in very different cultures, if they are
skilled at exposition, might be able to expand the same maxim (the Golden
Rule for instance) into different parables. Conversely, the literary masters
of an era can condense stories and philosophies discovered in their own
time into culturally portable nuggets.
So the terseness of an enduring maxim is as much about cross-cultural
generality as it is about compactness.
The right kind of terseness allows you to accomplish a difficult
transmission challenge: transmission across cultures and mental models.
Reading and writing, by contrast, merely accomplish transmission across
time and space. They are much simpler inventions than exposition and
condensation. Cultural distance is a far tougher dimension to navigate than
spatial and temporal dimensions. By inventing a method to transmit across
vast cultural distances, our earliest neolithic ancestors accidentally turned
language into a tool for abstract thinking (it must have existed before then

as a more rudimentary tool for communication, as in other species that
possess more basic forms of language).
So how did we come to focus on reading and writing? Why is it
"reading, 'riting and 'rithmetic" and not "exposition, condensation and
arithmetic"?
Reading and Writing
Today the ability to read and write is ubiquitous in the developed
world, and what was once a safe conflation of literacy and transcription
ability has become more than meaningless. It has become actively
dangerous.
To see why, it is useful to consider the relative status of the spoken
word with respect to the written word in pre-modern times.
Before Gutenberg, reading and writing were considered not just
secondary skills, but lowly ones, much as typing in the days before
personal computing. It is revealing that the first designs for a personal
computer at Xerox included one that had no keyboard next to the monitor,
but was equipped instead with a dictaphone connection to a secretary who
did any typing necessary. It was assumed that executives would not want
to do their own typing, but would watch the action scroll by on a monitor.
Reading and writing were for students and scribes. Career scribes
were not scholars. Reading and writing skills by themselves represented a
vocation, not learning.
Where both the written and spoken word could be used, the latter was
in fact preferred. Scholars demonstrated linguistic virtuosity through the
spoken rather than the written word. When they gained enough
prominence, they acquired students and scribes who would do the lowly
work of translation between oral and written forms in exchange for the
privilege of learning from a master.
But we haven't explained why the spoken word was preferred. What
has confused us is the red herring of preservation through memorization. If
preservation through memorization were the only purpose of oral cultures,
they should have all vanished long ago. As McLuhan famously argued, it
wasn't until the Gutenberg revolution that the spoken word was finally
dethroned by the written word.
The traditional explanation for the mysterious persistence of oral
cultures has been that pre-Gutenberg written-word technologies were
either too expensive to be generally accessible, or simply not reliable
enough. The characteristic practices of oral cultures, by this theory,
evolved to aid accurate preservation through memorization.
This is a bit like saying that people continued to eat fresh foods after
refrigeration was invented because early refrigerators were not reliable
enough or inexpensive enough to allow everybody to eat frozen foods.
The memorization-for-preservation explanation falls apart when you
poke a little. You find that typical oral cultures contain practices that we
moderns loosely label "memorization" because we don't understand what
they actually accomplish.
I am going to use Indian oral culture as an example because it is the
one I know best, and because it possesses some illuminating extreme
features. But I suspect you will find similar unexplained complexity in
every oral culture, particularly ones associated with major religions, such
as Latin.
Oral Culture is Not About Memorization
This was a radical realization for me: oral culture is not about
preservation-by-memorization. One strong piece of evidence can be found
in this Wikipedia description of memorization practices in ancient India.
Ignore the commentary and pay attention to the actual descriptions of the
recitation techniques:
Prodigious energy was expended by ancient Indian
culture in ensuring that these texts were transmitted from
generation to generation with inordinate fidelity. For
example, memorization of the sacred Vedas included up to
eleven forms of recitation of the same text. The texts were
subsequently proof-read by comparing the different
recited versions. Forms of recitation included the
jaṭā-pāṭha (literally "mesh recitation") in which every two
adjacent words in the text were first recited in their original
order, then repeated in the reverse order, and finally
repeated again in the original order. The recitation thus
proceeded as:
word1 word2, word2 word1, word1 word2;
word2 word3, word3 word2, word2 word3;
In another form of recitation, dhvaja-pāṭha (literally
"flag recitation") a sequence of N words were recited (and
memorized) by pairing the first two and last two words and
then proceeding as:
word1 word2, word(N-1) word(N); word2 word3, word(N-3) word(N-2);
...; word(N-1) word(N), word1 word2;
The most complex form of recitation, ghana-pāṭha
(literally "dense recitation"), took the form:
word1 word2, word2 word1, word1 word2 word3,
word3 word2 word1, word1 word2 word3; word2 word3,
word3 word2, word2 word3 word4, word4 word3 word2,
word2 word3 word4;
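The combinatorial patterns quoted above are simple enough to state as procedures. Here is a toy Python sketch of the mesh and dense schemes as described in the quoted passage; it is my own gloss, with made-up function names, and it ignores the phonetic rules that real Vedic recitation involves:

    # Toy models of two of the recitation schemes described above.

    def mesh_recitation(words):
        # jata-patha: each adjacent pair recited forward, backward, forward.
        chunks = []
        for a, b in zip(words, words[1:]):
            chunks += [f"{a} {b}", f"{b} {a}", f"{a} {b}"]
        return chunks

    def dense_recitation(words):
        # ghana-patha: pair forward, pair backward, then the triple forward,
        # backward, forward, sliding along the text one word at a time.
        chunks = []
        for a, b, c in zip(words, words[1:], words[2:]):
            chunks += [f"{a} {b}", f"{b} {a}",
                       f"{a} {b} {c}", f"{c} {b} {a}", f"{a} {b} {c}"]
        return chunks

    maxim = "civilization is the process of turning the incomprehensible into the arbitrary"
    print(dense_recitation(maxim.split())[:5])
    # ['civilization is', 'is civilization', 'civilization is the',
    #  'the is civilization', 'civilization is the']
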
For fun, I will offer you these recitation forms of my newly-minted
maxim.
Original: Civilization is the process of turning the incomprehensible
into the arbitrary.
Mesh recitation: civilization is, is civilization, civilization is, the
process, process the, the process, of turning, turning of, of turning
Flag recitation: civilization is, the arbitrary, is the, into the, the
process, incomprehensible into, process of, the incomprehensible

Dense recitation: civilization is, is civilization, civilization is the, the
is civilization, civilization is the, is the, the is, is the process, process the
is, is the process
If you are practicing eleven different forms of combinatorial
recitation, there is clearly something going on beyond preservation-by-memorization. One piece of evidence is that though the Vedas were
accurately preserved, the oral culture also sustained torrents of secondary
expository literature that was not accurately preserved. The Mahabharata
is an example. Not only was no canonical version preserved, there was no
canonical version. The thing grew like a Wikipedia of mythological fanfiction.
From my own experiences with memorization, the recitation routines
seem like extreme overkill. Straightforward repetition, aided by meter and
rhyme, is sufficient if preservation-by-memorization (as an alternative to
unreliable writing), is the only goal. I memorized two Shakespeare plays
that way (though admittedly I have now forgotten most of them).
So what is going on here?
Recitation as Creative Destruction
Once you try this out loud, you realize what is happening. This is
microcosmic creative destruction. Try to do this sort of recitation really
mindlessly. You will find it extraordinarily difficult. The recitation patterns
will force you to pay attention to meaning as well.
Far from being about mindless rote memorization, recitation is about
mindful attention to a text.
Youre taking a permutations-and-combinations blender to the words,
juxtaposing them in new ways, and actively performing combinatorial
processing. You are rigorously testing the strength of every single word
choice and ordering decision. You are isolating and foregrounding
different elements of the logical content, such as implication, subject-verb
and verb-object agreement, and so forth. There is a functional-aesthetic

element too. Terseness does not preclude poetry (and therefore,
redundancy). In fact it requires it. Despite the compactness of a text, room
must be made for various useful symmetries.
If the original has any structural or semantic weaknesses at all, this
torture will reveal it. If the original lacks the robustness that poetry brings,
it will be added.
Not only does all this not help plain memorization, I claim that it
makes it harder. You destabilize the original line in your head and turn it
into a word soup. If the original is in any way confused or poorly ordered,
you will soon end up in a state of doubt about which sequence of words is
the correct one.
For many students, practicing recitation must have been mindless
tedium, but for a few, it would have catalyzed active consideration and
reworking of the underlying ideas, in search of new wisdom. These
students must have evolved into new masters, the source of beneficial
mutations and crossovers in the cultural memeplexes they were charged
with preserving.
Being forced to juggle words like this must have helped cultivate a
clear awareness of the distinction between form and content. It must have
helped cultivate an appreciation of language as a medium for performance
rather than a medium for transmission or preservation. It must have forced
students to pay careful attention to precision of word choice in their own
compositions. It must have sustained a very mindful linguistic culture.
The analogy to music is again a useful one. The description of the
varied forms of recitation sounds less like tedious memorization and more
like music students practicing their scales. The only reason that you
remember the basic scale (do re mi fa so la ti do in Western solfege
notation) is that the sequence has the simplest and most complete
progression among all the permutations and combinations of the notes.
But if you could only sing the one pattern, you wouldn't be a musician
(actually, there is more than an analogy here; music and language are
clearly deeply related, but I haven't thought that idea through).
Being only able to faithfully transcribe between oral and written
forms is rather like being only able to sing the default do-re-mi sequence.
The former can no more be a true measure of literacy than the latter can be
a measure of musical ability.
The only way the original can survive such mangling is if it is actually
a beautifully dense condensation that has a certain robust memetic
stability. At the risk of losing most of you, I think of a carefully composed
set of related aphorisms as eigenvectors spanning a space of meaning. It is
the space itself, and the competence to explore it, that define a literate
comprehension of the text. Not the ability to reproduce or translate
between written and oral forms.
We can make a fairly strong claim:
Oral cultures are not just, or even primarily, about quality assurance
in transmission. They are primarily about quality assurance in
composition, and training in the basic moves of exposition and
condensation.
When you think about it this way, there is no mystery. Oral culture
persisted long after the development of writing because it was not about
accurate preservation. It was about performance and cultural enactment
through exposition and condensation.
The Costs of Gutenberg
And then Gutenberg happened.
The results were not immediately apparent. The old culture of literacy
persisted for several centuries. The tipping point came in the 1890s, when
printing technology became sufficiently cheap to support mass media
(there is a world of difference between ubiquity of bibles and a culture of
daily newspapers).
So sometime in the twentieth century, we lost all the subtlety of oral
culture, turned our attention to the secondary vocational skills of reading
and writing, and turned literacy into a set of mechanical tests.
Today, to be literate simply means that you can read and write
mechanically, construct simple grammatical sentences, and use a minimal,
basic (and largely instrumental) vocabulary. We have redefined literacy as
a 0-1 condition rather than a skill that can be indefinitely developed.
Gutenberg certainly created a huge positive change. It made the raw
materials of literary culture widely accessible. It did not, however, make
the basic skills of literacy, exposition and condensation, more ubiquitous.
Instead, a secondary vocational craft from the world of oral cultures
(one among many) was turned into the foundation of all education, both
high-culture liberal education and the vocational education that anchors
popular culture.
The Fall of High Culture
I won't spend much time on high culture, since the story should be
familiar to everybody, even if this framing is unfamiliar.
The following things happened.
Instead of condensing new knowledge into wisdom, we began encrypting it into jargon.
Exposition as creative performance gave way to critical study as meaning-extraction.
The art of condensation turned into the art of light, witty party banter.
Conversation turned into correspondence and eventually into citation.
Natural philosophy turned into science, and lost its literary character.
Interpretation and re-enactment became restricted to narrowly political ends.
Poetry was transformed from an intermediate-level literacy skill to a medium for self-indulgence.
The result of these changes on high culture was drastic. Discovery
began to outpace interpretation and comprehension. We began to discover
more and more, but know less and less. Science seceded from the rest of
culture and retreated behind walls of jargon. The impoverished remains
outside those walls were re-imagined as a shrill and frozen notion of
humanism.
Mathematics and programming, two specialized derivatives of
language that I consider part of high culture, retained the characteristics of
oral cultures of old, with an emphasis on recombinant manipulation,
terseness, generality and portability.
Both are now being threatened (by increasingly capable forms of
computing). I will leave that story for another day.
The Fall of Popular Culture
But it is perhaps the transformation of popular culture that has been
most dramatic. If you have ever talked to an intelligent and articulate, but
illiterate (in the modernist reading-writing sense) member of a popular
folk culture that has been relatively well-shielded from modern mass
culture, you will understand just how dumb the latter is.
Pre-modern folk cultures are as capable as their high-culture cousins
of sustaining linguistic traditions based on exposition and condensation.
They are the linguistic minor leagues in relation to the major leagues of
high culture, not spectator-cultures.
A pre-modern village does not rely, for intellectual sustenance, on
stories brought from imperial capital cities by royal bards. At best, a few
imported elements from distant imperial cultures become political
integration points within larger grand narratives. I encountered a curious
example of this sort of thing in Bali: a minor character in the
Mahabharata, Sahadeva, apparently serves as the integration point
between the localized version of Hinduism and purely local elements like
the Barong and Rangda, which do not appear anywhere in the
Mahabharata to my knowledge.
By contrast, modern mass culture is a spectator culture, linguistically
speaking. You read stories but you do not necessarily attempt to rewrite
them. You watch movies, but you do not attempt to re-enact them as plays
that incorporate elements of local culture. The analogy to music is again
useful. Before the gramophone and radio, most families around the world
made their own music.
The effects of print, radio and television based mass media were to
basically destroy popular literary (but not necessarily written) cultures
everywhere. Was it an accident or an act of deliberate cultural violence?
I believe it was an accident that proved so helpful for the industrial
world that repairs were never made, like smallpox decimating the ranks of
Native Americans.
For the industrial world, exposition and condensation were useless
skills in the labor force. The world needed workers who could follow
instructions: texts with one instrumental meaning instead of many
appreciative meanings:
Turn on Switch A.
Watch for the green light to come on.
Then push the lever.
As the finely differentiated universe of local folk cultures was
gradually replaced by a handful of mass, popular cultures, ordinary
citizens lost their locally enacted linguistic cultures, and began to feed
passively on mass-produced words. In the process, they also lost the basic
skills of literacy: exposition and condensation, and partially regressed to
pre-Neolithic levels of linguistic sophistication, where language sustains
social interaction and communication, but not critical, abstract thought.
What does this world look like?
Can the Gollum Speak?
I previously proposed the Gollum as an archetype of an ordinary
person turned into a ghost by consumer culture. What I am talking about
here is the linguistic aspect of that transformation.
If you consider the decline of popular literary culture and its
replacement by mass culture a sort of consumerization of language, you
have to ask, can highly gollumized people use language in a literate way at
all?
The Gollum can read, write and repeat, but I've slowly concluded that
it cannot actually think with language. And not because it isn't smart, but
because it has been educated.
Everywhere around me I find examples of written and spoken
language that I find bizarrely Frankenstein-monster like. Clumsy
constructions based on borrowed parts, and rudely assembled (PR pitches
and resume cover letters are great examples of modern Frankenstein
writing).
The language of the true Gollum is a language of phrases borrowed
and repeated but never quite understood.
Words and phrases turn into mechanical incantations that evoke
predictable responses from similarly educated minds. Yes there is meaning
here, but it is not precise meaning in the sense of a true literary culture.
Instead it is a vague fog of sentiment and intention that shrouds every
spoken word. It is more expressive than the vocalizations of some of our
animal cousins, but not by much.
Curiously, I find the language of the illiterate (in the reading-writing sense) to
usually be much clearer. When I listen to some educated people talk, I get
the curious feeling that the words don't actually matter. That it is all a
behaviorist game of aversion and attraction and basic affect overlaid on
the workings of a mechanical process. That mechanical process is enacted
by instrumental meaning-machines manufactured in schools to generate,
and respond appropriately to, a narrow class of linguistic stimuli without
actually understanding anything.
When I am in a public space dominated by mass culture and its native
inhabitants, such as a mall, I feel like I am surrounded by philosophical
zombies. Yes, they talk and listen, but it is not clear to me that what they
are using is language.
And it isn't just the "I'm like, duh" and "she's like uh-oh" crowd that I am
talking about. I am including here the swarms of barely-literate (in the
thinking sense) liberal arts graduates who can read and write phrases like
always-already and dead-white-male (why not already-always or
deceased-European man? I suspect Derrida and Foucault could tell you,
but none of the millions who parrot them could).
This might sound like engineering elitism, but I find that the only
large classes of people who appear to actually think in clearly literate ways
today are mathematicians and programmers. But they typically only do so
in very narrow domains.
To learn to think with language, to become literate in the sense of
linguistically sophisticated, you must work hard to unlearn everything
built on the foundation of literacy-as-reading-and-writing.
Because modern education is not designed to produce literate people.
It is designed to produce programmable people. And this programmability
requires less real literacy with every passing year. Today, genuinely literate
reading and writing are specialized arts. Increasingly, even narrowly
instrumental read-write literacy is becoming unnecessary (computers can
do both very well).
These are not stupid people. You only have to listen to a child
delightedly reciting supercalifragilisticexpialidocious or indulging in other
childish forms of word-play to realize that raw skill with language is a
native capability in the human brain. It must be repressed by industrial
education since it seeks natural expression.
So these are not stupid people. These are merely ordinary people who
have been lobotomized via the consumerization of language, delivered via
modern education.
We dimly realize that we have lost something. But appreciation for
the sophistication of oral cultures mostly manifests itself as mindless
reverence for traditional wisdom. We look back at the works of ancients
and deep down, wonder if humans have gotten fundamentally stupider
over the centuries.
We haven't. We've just had some crucial meme-processing software
removed from our brains.
Towards a Literacy Renaissance
This is one of the few subjects about which I am not a pessimist. I
believe that something strange is happening. Genuine literacy is seeing a
precarious rebirth.
The best of today's tweets seem to rise above the level of mere bon
mots ("gamification is the high-fructose corn syrup of user engagement")
and achieve some of the cryptic depth of esoteric verse forms of earlier
ages.
The recombinant madness that is the fate of a new piece of Internet
content, as it travels, has some of the characteristics of the deliberate
forms of recombinant recitation practiced by oral culture.
The comments section of any half-decent blog is a meaning factory.
Sites like tvtropes.org are sustaining basic literacy skills.
The best of today's stand-up comics are preserving ancient wordplay
skills.
But something is still missing: the idea that literacy is a cultivable
skill. That dense, terse thoughts are not just serendipitous finds on the
discursive journeys of our brains, but the product of learnable exposition
and condensation skills.
I suppose paying attention to these things, and actually attempting to
work with archaic forms like maxims and aphorisms in 2012 is something
of a quixotic undertaking. When you can store a terabyte of information
(about 130,000 books, or about 50% larger than a typical local public
library) on a single hard-disk, words can seem cheap.
But try reading some La Rochefoucauld, or even late holdouts like
Oliver Wendell Holmes and J. B. S. Haldane, and you begin to understand
what literacy is really about. The cost of words is not the cost of storing
or distributing them, but the cost of producing them. Words are cheap
today because we put little effort into their production, not because we can
store and transmit as much as we like.
It is as yet too early to declare a literacy renaissance, but one can
hope.
Part 2: Towards an Appreciative View of Technology
Towards an Appreciative View of Technology
June 5, 2012
Recently I encountered the perfect punchline for my ongoing
exploration of technology: "any sufficiently advanced technology is
indistinguishable from nature." The timing was perfect, since I've been
looking for an organizing idea to describe how I understand technology.
Looking back over the technology-related posts in my archives over
the last five years, this technology-is-nature theme pops out clearly, as
both a descriptive and normative theme. I don't mean that in the sense of
naive visions of bucolic bliss (though that is certainly an attractive
technology design aesthetic) but in the sense of technology as a
manifestation of the same deeper lawfulness that creates forests-and-bears
nature. Technology at its best allows for the fullest expression of that
lawfulness, without narrow human concerns getting in the way.
I will explain the title in a minute but first, here is my technology
sequence of 14 posts written over the last five years. The organizing
narrative for the sequence comes from this technology-is-nature idea that
informs my thinking, whether I am pondering landfills or rusty ships.
Contemplating Technology
1. An Infrastructure Pilgrimage
2. Meditation on Disequilibrium in Nature
3. Glimpses of a Cryptic God
4. The Epic Story of Container Shipping
5. The World of Garbage
What Technology Wants
1. The Disruption of Bronze
2. Bays Conjecture
3. Halls Law: The Nineteenth Century Prequel to Moores Law
4. Hacking the Non-Disposable Planet
5. Welcome to the Future Nauseous
6. Technology and the Baroque Unconscious
Engineering Consolations
1. The Bloody-Minded Pleasures of Engineering
2. Towards a Philosophy of Destruction
3. Creative Destruction: Portrait of an Idea
Technology is central to all my thinking, but my relationship with it is
complicated. I no longer think of myself as an engineer. On the other hand,
though I think and write a lot about the history, sociology, psychology and
aesthetics of technology, I do not have the humanities mindset. I am not a
humanist (or that very humanist creature, a transhumanist). I am pretty
solidly on the science-and-engineering side of C. P. Snow's famous two-cultures divide.
Engineering is generally understood in instrumental terms by both
practitioners and observers: as something you do with a set of skills. By
contrast, I have tended in recent years to understand it in appreciative
terms.
So you could say that my writing on technology over the years has
turned into something resembling art history of the critical variety (the
connection is made somewhat explicit in one of my posts in the previous
sequence).
Perhaps as a result, I have been accused in the past (with some
justification) of turning my technology writing and thinking into a sort of
sloppy anthropomorphic thermodynamic theology based on loose notions
of technological agency, entropy and decay.
While there is certainly a degree of wabi sabi in my technological
thinking, in my defense I would say that when I lurch into purple prose or
bad poetry, it has less to do with deeply held conceptual beliefs and more
to do with attempting to convey the sense of grandeur that I think is
appropriate for the proper appreciation of technology.
We reserve for overtly showy things like cathedrals the kind of awe
that should really be extended (multiplied several times) to apparently
mundane things like shipping containers. We cannot make sense of the
modern human condition until we begin to understand that
interchangeable parts for everyday machines are actually a far greater
achievement than more narrowly humanist expressions of who we are.
I will leave it at that. I think I am going to be writing more about
technology appreciation in the future.
An Infrastructure Pilgrimage
March 7, 2010
In Omaha, I was asked this question multiple times: "Err… why do
you want to go to North Platte?" Each time, my wife explained, with a hint
of embarrassment, that we were going to see Bailey Yard. "He saw this
thing on the Discovery Channel about the world's largest train yard…" A
kindly, somewhat pitying look inevitably followed: "Oh, are you into
model trains or something?" I've learned to accept reactions like this.
Women, and certain sorts of infidel men, just don't get the infrastructure
religion. "No," I explained patiently several times, "I just like to look at
such things." I was in Nebraska as a trailing spouse on my wife's business
trip, and as an infrastructure pilgrim. When boys grow into men, the
infrastructure instinct, which first manifests itself as childhood car-plane-train play, turns into a fully-formed religion. A deeply animistic religion
that has its priests, mystics and flocks of spiritually mute, but faithful
believers. And for adherents of this faith, the five-hour drive from Omaha
to North Platte is a spiritual journey. Mine, rather appropriately, began
with a grand cathedral, a grain elevator.
As you leave the unlikely financial nerve center of Omaha behind,
and head west on I-80 (itself a monument), you hit the heartland very
suddenly. We stopped by the roadside just outside of Lincoln, about an
hour away from Omaha, to spend a few meditative minutes in the
company of this giant grain elevator. There is no more poignant symbol of
Food Inc and global agriculture. It is an empire of the spirit at its peak,
facing a necessary decline and fall. The beast is rightly reviled for the
cruelty it unleashes on factory-farmed animals. The problems with
genetically modified seeds are real. The horrendous modern corn-based
diet it has created cannot be condoned. Yet, you cannot help but
experience the awe of being in the presence of a true god of modernity. An
unthinking, cruel, beast of a god, but a god nevertheless.
After a quick pause at Lamar's donuts (an appropriate sort of highly-processed religious experience) we drove on through the increasingly
desolate prairie. Near Kearney, you find the next stop for pilgrims, the
Great Platte River Archway Monument.
This is a legitimately religious archway, and a thoroughly American
experience. There are penny-stamping machines, kitschy souvenirs (which
must be manufactured in China and container-shipped to Nebraska to
count as holy objects; no locally-manufactured profanities for me) and the
inescapable period-costumed guide who insists on speaking in character.
Once you are inside, it is a curiously unsettling experience. Through
multiple levels that cross back and forth across I-80, you experience the
history of the nineteenth century expansion of Europeans into the
American West, from the journeys of the Mormons to Utah, to those
trudging up the Oregon trail, to the 49ers headed to California in search of
gold. There is also stuff about the Native American perspective, but this is
fundamentally a story about Europeans.
You take an escalator into the archway, where you work your way
through the exhibits. There are exhibits about stagecoaches, dioramas of
greedy, gossiping gold-seekers by campfires, and paintings of miserable-looking Mormons braving a wintry river crossing. There are other
exhibits about the development of the railroad and the great race that
ended with the meeting of the Union Pacific and Central Pacific railroads in Utah. In the
last segment, you find the story of the automobile, trucking, and the early
development of the American highway system. From the window, you can
watch the most recent of these layers of pathways, Eisenhower's I-80,
thunder by underneath at 70 odd miles an hour.
It is a pensive tale of one great struggle after another, with each age of
transportation yielding, with a creative-destructive mix of grace and
reluctance, to the next. The monuments of the religion of infrastructure are
monuments to change.
As you head further west from Kearney along I-80, the Union Pacific
railroad tracks keep you company. I watched a long coal train rumble
slowly across the prairie, and nearer North Platte, a massive double-decker
container train making its way towards Bailey. If you know enough about
container shipping, which I've written about before, watching a container
train go by is like watching the entire globalized world put on a parade for
your benefit. You can count off names of major gods, such as Hanjin and
Maersk, as they scroll past, like beads on a rosary. From the great ports of
Seattle and Los Angeles, the massive flood of goods that enters America
from Asia pours onto these long snakes, and they all head towards that
Vatican of the modern world, Bailey Yard.
All of the Union Pacific rail traffic is controlled today by computer
from a command center in Omaha, but the heart of the hardware is in
North Platte. Bailey Yard is a classification (train disassembly and reassembly) yard. Trains enter from the east or west and are slowly pushed
up one of two humps. At the crest of the hump, single cars detach and roll
down by gravity, with remote-controlled braking. Each is directed towards
one of around 114 bowl tracks (assembly segments), where new trains are
assembled. Once a year, in September, during the North Platte Rail Fest,
you can visit the yard itself. It's sort of a railroad Christmas. On other
days, you must be content with the view from the Golden Spike tower
overlooking Bailey Yard.
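(For the terminally curious, the sorting logic is simple enough to sketch in a few lines of toy Python. This is purely my own illustration, with made-up car IDs, destinations and bowl-track numbers; it bears no relation to UP's actual control software.)

```python
# Toy model of hump classification: cars crest the hump one at a time
# and are switched onto the bowl track serving their destination, where
# outbound trains are assembled. All names and numbers are made up.
from collections import defaultdict

def classify(inbound_cars, track_for_destination):
    bowl_tracks = defaultdict(list)
    for car in inbound_cars:
        track = track_for_destination[car["destination"]]
        bowl_tracks[track].append(car["id"])
    return bowl_tracks

inbound = [
    {"id": "UP 1001", "destination": "Chicago"},
    {"id": "UP 2002", "destination": "Seattle"},
    {"id": "UP 1003", "destination": "Chicago"},
    {"id": "UP 3004", "destination": "Memphis"},
]
tracks = {"Chicago": 17, "Seattle": 42, "Memphis": 73}

for track, cars in sorted(classify(inbound, tracks).items()):
    print(f"bowl track {track}: {', '.join(cars)}")
```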
It is hard to describe the grandeur of what you see. On the viewing
deck there are benches (pews, practically) that encourage you to sit
and watch a while, and the inevitable retiree volunteer, anxious to explain
things. During my time there, I spoke to a thin and rather sad-looking old
man, a 33-year UP veteran. His tired face, covered with age spots, lit up
briefly when I asked what must have been an unexpectedly technical
question about the lack of turntables (platforms to turn engines around,
since you cannot do a U-turn on a railway track). He explained patiently
that with modern locomotives, you dont need turntables, since they are
equally efficient in either direction.
He also explained the humps and the sorting process, all new stuff for
me. His was the non-meditative variety of infrastructure religiosity. Facts
and figures, seared into memory, were his prayers. I listened as I always
do on these occasions, with the reverence due to the recitation of
scriptures.
The pace of activity at Bailey is deceptively slow. There is an
appropriate gravitas to this grand mechanical opera that makes you
wonder how the place can possibly process 10,000 cars a day, sorting
about 3,000. And then you contemplate the scale of operations and
understand. From the panoramic viewpoint at the top of the Golden Spike,
it can be hard to appreciate this scale. At first I did not believe that there
were 114 bowl tracks. There are 60 that lead off the west hump alone, and
I had to do a rough count before I believed it. You have to remind yourself
that you are looking at real-sized engines and cars sprawling across the flat
Nebraska landscape. I'd be a bug underneath just one wheel of one of the
four locomotives in the picture below. I am small. Trains are big. Nebraska
is even bigger.
I hunted in the gift store for a schematic map of the yard, but there
wasn't one. The storekeeper initially thought I was looking for something
like a model train or calendar, but when I explained what I wanted, a look
of understanding and recognition appeared on his face. I had risen in his
estimation. I was no longer an average, uninformed and accidental visitor.
I was a fellow seeker of spiritual truths who knew what was important. "A
lot of people ask for that," he said, and explained that after 9/11, the
Department of Homeland Security stopped the sale of the posters. He told
me he expected the restrictions to be eased soon. Anyway, I took a picture
of a beautiful large-scale map of the yard that was hanging in the lobby.
Since Google Maps (search for North Platte and zoom in) seems to show
about as much detail as my picture, I feel safe sharing a low-resolution
version. If somebody scarily official objects, I'll take it down.
To the religious, such schematics are sacred texts. You contemplate
the physical reality to experience the awe, but you contemplate artifacts
like this when you want to meditate on the universality of it all. Staring at
the schematic, I was struck by its resemblance to schematics of integrated
circuits. And that, after all, is what a railway classification yard is. A large-scale circuit with bowl tracks for capacitors, humps for potentials and
brakes for resistors. It is the beauty of thoughts like this, that connect
microscopic chips of silicon to railroad yards too big even for a wide-angle lens, that gets us monks and nuns joining those monasteries of
modernity, engineering schools. Laugh if you will, but I can get misty-eyed when looking at something like this schematic.
The next morning, we headed back, rushing to outrace a storm. We
had enough time though, to stop at the Strategic Air and Space Museum.
One of the first things I did after landing in New York for the first time in
1997, was to visit the USS Intrepid. Since then, I've visited the Wright-Patterson AFB museum, the Pima Air and Space Museum in Tucson, AZ
(with its acres of aircraft parked in the desert heat), oddball little airplane
museums in obscure places, and of course, the Smithsonian. These days, I
live close to Reagan National Airport. Sometimes, I take a walk along
Mount Vernon trail, which at one point swings right past the end of the
Reagan runway. There, you can stand and contemplate airplanes roaring
overhead every few minutes.
But the Strategic Air and Space Museum is a different sort of
experience, an experience designed to remind you that gods of
infrastructure are angry, vengeful gods capable of destroying the planet.
This is not the friendly god who lends you wings to fly across the world
for work and life. These are the darker gods that can rain nuclear wrath
down on us. And no god is more wrathful than the Convair B-36, a
massive six-engined behemoth, with the largest wingspan of any combat
aircraft in history, and appropriately called the peacemaker. Between
1949 and 1959, these beasts were the instruments of Cold War foreign
policy.
To look at pictures of the B-36 in the sky is something of a
sacrilegious act. You must never look at all of a B-36 at once. And within
the confines of this museum, you cannot. Not with the human eye, and not
with the camera. The picture above is one you can safely look at. It took
some maneuvering to get all three engines of one wing into the frame.
And here I am, a tiny zit of a human being, standing below the belly
of the beast, next to a hydrogen bomb (that stubby thing next to me).
No infrastructure pilgrimage can be complete without a reverential
pause before a phallic god of destructive power. Menhirs and obelisks will
not do for our age. Neither will skyscrapers, which are merely symbols of
humanity's child-like greedy grasping at earthly pleasures. Out in the
heartland, among the grain silos (cornucopias?) where I began my
pilgrimage, are scattered very different sorts of silos. Silos containing
ballistic missiles, designed to soar up and kiss space, home to our loftiest
aspirations, before diving back down to destroy us. Outside the museum,
there are three Cold War ballistic missiles on display. Here is one. I forgot
to look at the sign, but I think it is an early Atlas.
Meditation on Disequilibrium in Nature
October 15, 2007
The idea of stability is a central organizing concept in mathematics
and control theory. Lately I have been pondering a more basic idea:
equilibrium, which economists prefer to work with. Looking at some
fallen trees this weekend, a point I had appreciated in the abstract hit me in
a very tangible form: both stability and equilibrium are intellectual
fictions. Here is the sight which sparked this train of thought:
These are the fallen trees that line the southern shore of Lake Ontario,
under the towering Chimney Bluffs cliffs left behind in disequilibrium
by the last Ice Age. For about a half mile along the base of the cliffs, you
see these trees. Here is the longest shot I could take with my camera.
No, evil human loggers did not do this. The cliffs are naturally
unstable, and chunks fall off at regular intervals. Here are a couple of the
more dramatic views you see as you walk along the trail at Chimney
Bluffs State Park.
Signs line the trail, warning you to keep away from the edge. Unlike
the Grand Canyon or the Niagara Falls, whose state of disequilibrium
requires an intellectual effort to appreciate, the Chimney Bluffs are in
disequilibrium on a time scale humans can understand. A life form we can
understand (trees) can actually bet on an apparently stable equilibrium
and lose in this landscape, fall, and rot in the water, while we watch.
Even earthquakes, despite their power, don't disturb our equilibrium-centric mental models of the universe. We view them as anomalous rather
than characteristic phenomena. The fallen trees of the Chimney Bluffs
cannot be easily dismissed this way. They are signs of steady, creeping
change in the world around us, going on all the time; creative-destruction
in nature.
The glaciated landscape of Upstate New York, of which the Chimney
Bluffs are part, is well known. The deep, long Finger Lakes, ringed by
waterfalls, have anchored my romantic fascination with this region for
several years now. The prototypical symbol of the region is probably
Taughannock falls:
Unless you live in the region for a while, you won't get around to
visiting the Chimney Bluffs. But visit even for a weekend, and everybody
will urge you to go visit the falls.
We create tourist spots around sights which at once combine the
frozen drama of past violence in nature, and a picture of unchanging calm
in the present. Every summer and fall, the falls pour into Lake Cayuga and
tourists take pictures. Every winter, they slow to a trickle. Change is so
slow that we even let lazy thinking overpower us and make preservation
the central ethic of any concern for the environment. Even the entire
ideology and movement is called conservation.
We forget cataclysmic extinction events that periodically wipe out
much of life. We forget to sit back and visualize and absorb the
implications of the dry quantitative evidence of ice ages. Moving to the
astronomical realm, we rarely stop and ponder the thought that the earth is
cooling down, that its magnetic poles seem to flip every few hundred
thousand years, that its rotation has slowed from a once-fast 22-hour day. We
forget that our Sun will eventually blow up into a Red Giant that will be
nearly as large as the orbit of Mars.
We forget that nature is the first and original system of evolving
creative destruction. Schumpeter's model of the economy came along
later.
Towards Disequilibrium Environmentalism
This troubles me. On the one hand, environmental concerns are
certainly very high on my list of ethical and practical concerns. Yet, when
nature itself is chock full of extinctions, unsteady heatings and coolings
and trembles and crumbles, why are we particularly morally concerned
about global warming and other unsustainable patterns of human activity?
A practical human concern is understandable (tough luck, Seychelles), but
to listen to Al Gore, you would think that it is somehow immoral to not
think entirely in terms of preservation, conservation, equilibrium and
stability. So nature decides to slowly destroy the Chimney Bluffs. We
decide to draw down oil reserves, slowly saturate the oceans with CO2
and melt the ice caps. Why is the first fine, but the others are somehow
morally reprehensible? If you worry that we are destroying a planet that
we share with other species, well, nature did those mass extinctions long
before we came along.
In this respect, the political left is actually rather like the right: it is
truly a conservative movement. Instead of insisting on the preservation of
an unchanging set of cultural values and societal forms, it insists on an
unchanging environment.
To be truly powerful, the environmentalist movement must be
reframed in ways that accommodate the natural patterns of
disequilibrium, change, and ultimate tragic, entropic death of the universe.
I don't know how to do this.
Why Disequilibrium instead of Instability?
Stability is a comforting idea. It is the idea that there is a subset of
equilibria that, when disturbed slightly, return to their original conditions.
But it is a false comfort. Every instance of stability lives within an
artificial bubble of time, space and mass-energy isolation. Expand the
boundaries enough, or let the external universe inject a sharp enough
disturbance, and your stability will vanish. Unstable equilibria are even
sillier, because even infinitesimal disturbances can knock them out.
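The standard one-dimensional textbook illustration of the difference (my gloss, not part of the original argument) is:

\[
\dot{x} = -x \;\Rightarrow\; x(t) = x_0 e^{-t} \to 0 \qquad \text{(stable: a small disturbance dies out)}
\]
\[
\dot{x} = +x \;\Rightarrow\; x(t) = x_0 e^{t} \to \infty \qquad \text{(unstable: any nonzero disturbance grows without bound)}
\]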
Which means disequilibrium, not equilibrium, is the natural state.
Using the word disequilibrium suggests a steady, sustained absence of
stability: the universe is one long transient signal where every illusion
of stability will be destroyed, given enough time.
Glimpses of a Cryptic God
February 16, 2012
I rarely listen to music anymore. Strange anxieties and fears seem to
flood into my head when I try. When I seek comfort in sound these days, I
tend to seek out non-human ones. The sorts of soundscapes that result
from technological and natural forces gradually inter-penetrating each
other.
At the Miraflores lock, the gateway into the Pacific Ocean at the
southern end of the Panama Canal, you can listen to one such soundscape:
the idling of your vessels engine, mixed with the flapping and screeching
of seabirds. The draining of the lock causes fresh water to pour into salt
water, killing a new batch of freshwater fish every 30-45 minutes. The
seabirds circle, waiting for the buffet to open.
***
The seabirds have adapted to a world created by human forces better
than humans themselves. They reconcile the technological and the natural
without suffering agonies. They have smoothly reconstructed their
identities without worrying about labels like
transhumanist or
paleohumanist. There is neither futurist eagerness nor primitivist yearning
to their adaptation. They do not strain impatiently to transform into dimly
glimpsed future selves, nor do they strive with quixotic energy to return to
an imagined original hunter-gatherer self.
If they do not strain to transform, they also do not strive for
constancy. No doctrine of seabirdism elevates current contingencies into
eternal values that imprison. The seabirds feast without worry on the
unexpected bounty of salinity-killed fish. They do not ponder whether it is
natural.
They are thankfully unburdened by the sorts of limiting self-perceptions that we humans enshrine into the doctrine of humanism. I
think of humanism as an overweening conception of being flash-frozen
into a prescription during a brief window of time in early-modern Europe.
A time when humans had just gotten comfortable transforming nature, but
had not yet been themselves transformed enough by the consequences to
understand what they were doing.
That naked label, humanism, unadorned by prefixes like paleo- or
trans-, reveals our continued failure to center our sense of self within
larger non-human realities. Our big social brains can invent elaborate
anthropomorphic gods and social realities within which we gladly
subsume ourselves, but struggle to manufacture a sense of belonging to
anything that includes dead fish, seabirds, engineered canal locks and
seawater. Belonging has become an exclusively human idea to humans.
We are still mean little inquisitors at the ongoing trial of Copernicus,
resisting decentering realities that cannot be recursively reduced to the
human. Man makes gods in his own image, blind to the non-human.
And so we distract ourselves with debates about the distinction
between natural and artificial while ignoring the far more basic one
between human and non-human.
Sometimes being a bird-brain helps. Last year, I decided I was going
to be a barbarian. I am going for bird-brain-barbarian this year.
***
So I rarely listen to music.
Music these days feels like a fog descending on my brain, obscuring
visibility and tugging me gently inward into a cocoon of human belonging
that promises warmth and security, but delivers an unsettling estrangement
from non-human realities. Realities that are knocking with increasing
urgency at the door of our species-identity.
Technology is more visual landscape than soundscape, but listening to
pleasing human rhythms makes it harder to see technological ones. So
even when there are no interesting soundscapes, I prefer silence. It is easy
to miss frozen visual music when a soothing voice is piping fog into your
brain through your ears. Perhaps all songs are lullabies.
Visible function lends lyricism to the legible but alien rhythms and
melodies of technology-shaped landscapes. You can make out some of the
words, like "crane" and "unloading", but the song itself is generally
impenetrable.
It is perhaps when the lyrics are at their most impenetrable that you
can most pay attention to the song. To understand is to explain. To explain
is to explain away and turn your attention elsewhere. Obviousness of
function can sometimes draw a veil across form, by encouraging a too-quick settling into a comforting instrumental view of technology.
Oscillating slowly back and forth across sections of the Panama
Canal, you will see strange boats carrying dancing fountains. I missed
what the tour guide said, so I have no idea what this is.
Perhaps a fire-fighting boat of some sort, or a dredging vessel. But I
don't need to know. One of the minor benefits of an engineering education
is a confidence in your ability to fathom function if the need arises,
leaving you free to appreciate pure form without a sense of anxiety.
Looking at this water-dancer of a boat, I found myself wondering about
the place of this beast on a larger spectrum.
On one end you find the Old Faithful geyser at Yellowstone National
Park:
And at the other end, you find the orderly, authoritarian high-modernist fountains at the Bellagio in Las Vegas, which dance to human
music, for human entertainment.
Each is a glimpse of a different stratum of techno-natural geology.
The human layer is built on top of the cryptohuman layer. The
cryptohuman layer on top of the natural. Each layer offers up a water-dancer emissary to explain itself to us.
As an engineer, you no longer suffer those sudden stabs of
uncomprehending anxiety that can be triggered in more humanistic brains
by glances under the hood. When I hear a non-engineer seeking an answer
to "what does that thing do," half the time I hear, not curiosity, but fear. An
urge to comprehend intimidating realities through the reassuring lens of
human intention.
It takes an unnatural, inhuman instinct to ponder artificial form
divorced from its intended function. But increasingly, this instinct is a
necessary one if you seek to inhabit the twilight zone between human and
non-human.
The Panama Canal is as much freshwater-fish-killer and seabird-free-lunch kitchen as it is a narrowly human shipping shortcut.
And it is also a manifestation of strange symmetries and cryptic
generative laws, whose nature we do not completely understand, but feel
an urge to unleash ever more completely. Technological landscapes have
yet to experience their Watson and Crick moment.
And so we stand aside and ponder the deeper mysteries of banks of
cranes, and wonder about the connection between Old Faithful, Water
Dancer boats and the Bellagio fountains.
***
The Panama Canal is a great place to get up close and personal with
container ships. I pursue ship-spotting opportunities with a mildly
obsessive tenacity.
One of my evil twins, Alain de Botton, appears initially sympathetic
to ship spotters in his writing, but admiration for their willingness to
engage technology soon gives way to a sort of mildly patronizing
humanism.
Admittedly, the ship spotters do not respond to the
objects of their enthusiasm with particular imagination.
They traffic in statistics. Their energies are focused on
logging dates and shipping speeds, recording turbine
numbers and shaft lengths. They behave like a man who
has fallen deeply in love and asks his companion if he
might act on his emotions by measuring the distance
between her elbow and her shoulder blade. But whatever
their inarticulacies, the ship-spotters are at least
appropriately alive to some of the most astonishing aspects
of our time.
For de Botton, to resort to numbers as a mode of appreciation is
inarticulacy. A visible symptom of a lack of poetic eye. It is a very
humanist stance. One that reminds me of that famous quote (I forget the
source) that claimed that it would take 500 Newton souls to make one
Milton soul.
Rather ironic that that comparison required a number.
And there is something deeply sad about the fact that de Botton feels
the urge to compare engagement with technology to the very inadequate
benchmark of human love. Would kissing a ship, or singing a sonnet to it,
be a more appropriate response than recording turbine numbers?
What is of immense importance to us as humans is not necessarily of
importance to the non-human-centric universe qua NHCU. The implicit
suggestion that writing a sonnet might perhaps be a better reaction than
recording turbine numbers says more about our self-absorption than about
turbines.
Taking refuge in numbers when faced with technological complexity
is in part an acknowledgment of the poverty of a poetically enacted
humanist life script. Numbers are how we grope for the trans-human.
I save my number-appreciation for private contemplation, and
sometimes wax lyrical on this blog, but there is never any doubt in my
mind. Numbers are the more fundamental mode of appreciation. And if
your mathematical abilities limit you to mere counting, so be it. That's
better than pretending a container ship is a girl to be romanced.
When I was a kid, I used to visit my uncle who worked for the
railways and lived in a railway town right by some trunk routes. I would
sit on the porch and count the number of wagons on trains that went by,
for hours on end. The delight of spotting the rare two-locomotive,
hundred-plus-car train is not for the innumerate.
Counting is contemplation. Trains and container ships are our
rosaries.
***
I recently finished Neal Stephenson's Cryptonomicon (recommended
by many of you).
He is no great master of narrative or character development, but it is
that very failing that elevates his writing to "interesting." There is no
denying that he looks at technology the way it ought to be looked at.
Given a choice between saying something interesting about technology
and crafting a better narrative by human literary aesthetics, he consistently
chooses the former. And we're better off for it.
When he occasionally attempts to capture in words the very non-verbal engagement of the world that is characteristic of technologists,
he offers a glimpse of what an alternative to poetry looks like. An example
is an extended passage in Cryptonomicon where archetypal nerd Randy
Waterhouse ponders the dynamics of dust storms in the eastern desert side
of Oregon, and reaches conclusions about the open-ended strangeness of
the natural world. That sort of idle train of thought is a far more
appropriate reaction to technological reality than de Botton's more
articulate and poetic, but ultimately depth-limited engagement of the non-human.
Daniel Pritchett, a frequent email correspondent, IM buddy and my
host in Memphis on my road trip last year, pointed me to a passage in
Stephenson's essay, In the Beginning was the Command Line, which reads
thus:
Contemporary culture is a two-tiered system, like the
Morlocks and the Eloi in H.G. Wells's The Time Machine,
except that it's been turned upside down. In The Time
Machine the Eloi were an effete upper class, supported by
lots of subterranean Morlocks who kept the technological
wheels turning. But in our world it's the other way round.
The Morlocks are in the minority, and they are running the
show, because they understand how everything works. The
much more numerous Eloi learn everything they know
from being steeped from birth in electronic media directed
and controlled by book-reading Morlocks. So many
ignorant people could be dangerous if they got pointed in
the wrong direction, and so we've evolved a popular
culture that is (a) almost unbelievably infectious and (b)
neuters every person who gets infected by it, by rendering
them unwilling to make judgments and incapable of taking
stands.
Morlocks, who have the energy and intelligence to
comprehend details, go out and master complex subjects
and produce Disney-like Sensorial Interfaces so that Eloi
can get the gist without having to strain their minds or
endure boredom.
A little too harsh perhaps, but on the whole a fair indictment of the
techno-illiterate. I wonder if Stephenson would consider de Botton one of
the Eloi. I suspect he would. The acquittal argument for de Botton, in a
Stephensonian court for technology-appreciation crimes, is that he is more
romanticist than politically compromised postmodernist. His crimes are
ultimately forgivable.
Stephenson's typology helps us at least distinguish between two of the
three fountains. The Water Dancer boat is a serendipitous Morlock
fountain. The Bellagio fountain is an Eloi fountain constructed by
Morlocks.
My reactions to the three fountains were different in interesting ways.
With Old Faithful, I found myself basically speechless and
thoughtless. A division by zero moment.
With the Water Dancer fountain, I found myself in a state of happy
contemplation.
With the Bellagio fountain, my mind immediately wandered to
speculations about the control algorithms and valve designs that would
be needed to build the thing.
***
To be among the Eloi is to lack a true sense of scale, and a sense of
when the clumsiest groping with numbers is philosophically a
better response than the most sublime poetry. The Eloi fundamentally do
not get when to give up on words and turn to numbers.
It is a difference not of degree, but of kind. In Stephenson's terms, de
Botton finding the ship-spotters' response inarticulate is a case of one of
the adult Eloi making fun of a Morlock baby.
Certainly some of the ship-spotters may never venture beyond a
stamp-collector/model-builder/cataloger approach to ships (all very noble
pursuits). But some will eventually end up in places where the Eloi would
be entirely blind. Places where only numbers allow you to feel your way
forward, away from the limited sphere where the light of humanist poetry
shines.
Scale is perhaps the first aspect of reality where innumeracy severely
limits your ability to engage reality.
Scale is a curious thing. Out on the open water, a container ship can
seem normal-sized by some intuitive sense of normal.
But if you watch from the observation tower at Miraflores, the sheer
size of one of these beasts starts becoming apparent. You get the
sense that something abnormal is going on.
And once it is really close, little cues start to alter your sense of the
various proportions involved, like this lifeboat and Manhattan fire-escape
style stairways.
Cruise ships give you a sense that a large modern ship is something
between a luxury hotel and a small city in terms of scale, but container
ships give you a sense of the non-human scales involved.
Partly this is because cruise ship designers go to great lengths to make
you forget that you are on a ship (which lends a whole new meaning to
Disney-like sensorial interfaces). But mainly it is because our minds
cling so eagerly to the human that even the slightest foothold is sufficient
for anthropocentric perspectives to dominate thought. I am no more
immune than anybody else. My eyes instinctively sought out the lifeboat
and stairways human scale things. Earlier in this essay, I felt obliged to
describe the technological landscape by analogy to human music-making.
You can see why I think de Botton is my evil twin. He embraces
tendencies that I also see in myself, but am intensely suspicious of. I don't
trust my own attraction to poetry when it comes to appreciating
technology.
***
Scale is not just about comparisons and proportions. It is also about
precision.
Take this little engine that runs along the side of the lock on tracks,
steadying the ship. The clearance for some ships is in the inches, and it
takes many of these little guys to keep a large ship moving slowly, safely
and steadily through the lock. Inches in a world of miles. Ounces in a
world of tons.
It is when one scale must interact with another in this manner that you
get a true sense of what scale means. This is another reason numbers
matter. You cannot appreciate precision without numbers (I remember the
first time I experienced scale-shock in the numerical-precision sense of the
term: when I learned that compressors in rocket engines must spin at over
40,000 RPM. I remember spending something like half an hour trying to
understand that number, 40,000, as a mechanical rotation rate).
Scale and precision make for a non-verbal aesthetic. To have a true
sense of scale is to give up the sense of being human. You cannot identify
with the very large and very small if much of your identity is linked to an
object that can be contained within a box about six feet long.
***
The more I study technology, the more I tend to the view that it is a
single connected whole. Recurring motifs like container ships can turn
into obsessions precisely because they offer glimpses of a cryptic God. An
object for the devoutly atheist and anti-humanist soul to seek in perpetuity,
but never quite comprehend.
I go on infrastructure pilgrimages. I write barely readable pop-theology treatises with ponderous titles like The Baroque Unconscious in
Technology [November 11, 2011], and I do my little dabbling with math,
software and hardware on the side.
But I still haven't seen It. Just an elbow here, a shoulder blade there.
And I make my modest attempts to measure those distances.
***
This essay is my sneaky way of getting around my own no-PowerPoint rule for Refactor Camp 2012, where my talk will be on motifs,
mascots and muses. The event has sold out. Thanks everybody for your
great support, and looking forward to meeting everybody.
If you put yourself on the waitlist, I'll see what I can do. I am waiting
to hear from the venue staff about whether there is capacity beyond the
nominal maximum of 45.
Also, for those of you in Chicago, a heads-up. I'll be there for the
ALM Chicago conference next week, Feb 22-23, where I'll be doing a talk
titled Breathing Data, Competing on Code. The Neal Stephenson quote is
involved.
Make it if you can. Or email me, and perhaps we can do a little
meetup if there's a couple of readers there.
The Epic Story of Container Shipping
July 7, 2009
If you read only one book about globalization, make it The Box: How
the Shipping Container Made the World Smaller and the World Economy
Bigger, by Marc Levinson (2006). If your expectations in this space have
been set to low by the mostly obvious, lightweight and mildly
entertaining stuff from the likes of Tom Friedman, be prepared to be
blown away. Levinson is a heavyweight (former finance and economics
editor at the Economist), and the book has won a bagful of prizes. And
with good reason: the story of an unsung star of globalization, the shipping
container, is an extraordinarily gripping one, and it is practically a crime
that it wasn't properly told till 2006.
There are no strained metaphors (like Friedman's Flat) or attempts
to dazzle with overworked, right-brained high concepts (Gladwell's books
come to mind). This is an important story of the modern world,
painstakingly researched, and masterfully narrated with the sort of
balanced and detached passion one would expect from an Economist
writer. It isn't a narrow tale though. Even though the Internet revolution,
spaceflight, GPS and biotechnology don't feature in this book, the story
teases out the DNA of globalization in a way grand sweeping syntheses
never could. Think of the container story as the radioactive tracer in the
body politic of globalization.
The Big Story
(Note: I've tried to make this more than a book review/summary, so a
BIG thank-you is due to @otoburb, aka Davison Avery, a dazzlingly well-informed regular reader who provided me with a lot of the additional
material for this piece.)
What is amazing about The Box is that despite being told from a
finance/economics perspective, the story has an edge-of-the-seat quality of
excitement to it. This book could (in fact, should) become a movie; a
cinematic uber-human counterpoint to Brando's On the Waterfront. The
tale would definitely be one of epic proportions, larger than any one
character; comparable to How the West Was Won, or Lord of the Rings.
The Box Movie could serve as the origin-myth for the world that Syriana
captured with impressionistic strokes.
The movie would probably begin with a montage of views of
containerships sounding their whistles in homage around the world on the
morning of May 30, 2001. That was the morning of the funeral of the
colorful character at the center of this story, Malcolm McLean. McLean
was a hard-driving self-made American trucking magnate who charged
into the world of shipping in the 1950s, knowing nothing about the
industry, and proceeded, over the course of four decades, to turn that
world upside down. He did that by relentlessly envisioning and driving
through an agenda that made ships, railroads and trucks subservient to the
intermodal container, and in the process, made globalization possible. In
doing so, he not only destroyed an old economic order while creating a
new one, he also destroyed a backward-looking schoolboy romanticism
anchored in ships, trucks and steam engines. In its place, he created a new,
adult romanticism, based on an aesthetic of networks, boxes, speed and
scale. Reading this story was a revelation: McLean clearly belongs in the
top five list of the true titans of the second half of the twentieth century.
Easily ahead of the likes of Bill Gates or even Jack Welch.
Levinson is too sophisticated a writer to construct simple-minded
origin myths. He is careful not to paint McLean as an original visionary or
Biblical patriarch. From an engineering and business point of view, the
container was a somewhat obvious idea, and many attempts had been
made before McLean to realize some version of the concept. While he did
contribute some technological ideas to the mix (marked more by
simplicity and daring than technical ingenuity), McLeans is the central
plot line because of his personality. He brought to a tradition-bound, self-romanticizing industry a mix of high-risk, opportunistic drive and a
relentless focus on abstractions like cost and utilization. He seems to have
simultaneously had a thoroughly bean-counterish side to his personality,
and a supremely right-brained sense of design and architecture. Starting
with the idea of a single coastal route, McLean navigated and took full
advantage of the world of regulated transport, leveraged his company to
the hilt, swung multi-million dollar deals risking only tens of thousands of
his own money, manipulated New York-New Jersey politics like a Judo master and made intermodal shipping a reality. He dealt with the nitty-gritty of crane design, turned the Vietnam war logistical nightmare into a
catalyst for organizing the Japan-Pacific coast trade, and finally, sold the
company he built, Sea-Land, just in time to escape the first of many slow
cyclic shocks to hit container shipping. His encore though, wasnt as
successful (an attempt to make an easterly round-the-world route feasible,
to get around the problem of empty westbound container capacity created
by trade imbalances). The entire story is one of ready-fire-aim audacity;
Kipling would have loved McLean for his ability to repeatedly make a
heap of all his winnings and risk it on one turn of pitch-and-toss. He
walked away from his first trucking empire to build a shipping empire.
And then repeated the move several times.
McLeans story, spanning a half-century, doesnt overwhelm the plot
though; it merely functions as a spinal cord. A story this complex
necessarily has many important subplots, which Ill cover briefly in a
minute, but the overall story (which McLeans personal story manifests, in
a Forrest Gumpish way) also has an overarching shape. On one end, you
have four fragmented and heavily regulated industries in post-World War
II mode (railroads, trucking, shipping and port operations). It is a world of
breakbulk shipping (mixed discrete cargo), when swaggering, Brando-like
longshoremen unloaded trucks packed with an assortment of items,
ranging from baskets of fruit and bales of cotton to machine parts and
sacks of coffee. These they then transferred to dockside warehouses and
again into the holds of ships whose basic geometric design had survived
the transitions from sail to steam and steam to diesel. It was a system that
was costly, inefficient, almost designed for theft, and mind-numbingly
slow, keeping transportation systems stationary and losing money for far
too much of their useful lives.
On the other end of the big story (with a climactic moment in the
Vietnam war), is the world we now live in: where romantic old-world
waterfronts have disappeared and goods move, practically untouched by
humans, from anywhere in the world to anywhere else, with an
orchestrated elegance that rivals that of the Internets packet switching
systems. Along the way the container did to distribution what the assembly
line had done earlier to manufacturing: it made mass distribution possible.
The fortunes of port cities old and new swung wildly, railroads clawed
back into prominence, regulation fell apart, and supply chains got globally
integrated as manufacturing got distributed. And yes, last but not least,
the vast rivers of material pouring through the worlds container-based
plumbing created the quintessential security threat of our age: terror
sneaking through security nets struggling to monitor more than a percent
or two of the worlds container traffic.
Now if you tell me that isnt an exciting story, I have to conclude you
have no imagination. Lets sample some of the subplots.
The Top Five Subplots
There are at least a dozen intricate subplots here, and I picked out the
top five.
One: The Financial/Business Subplot
At heart, containerization is a financial story, and nothing illustrates
this better than some stark numbers. At the beginning of the story, total
port costs ate up a whopping 48% (or $1,163 of $2,386) of the cost of an illustrative shipment of one truckload of medicine from Chicago to Nancy, France, in
1960. In more comprehensible terms, an expert quoted in the book
explains: a four thousand mile shipment might consume 50 percent of its
costs in covering just the two ten-mile movements through two ports. For
many goods then, shipping accounted for nearly 25% of total cost for a
product sold beyond its local market. Fast forward to today: the book
quotes economists Edward Glaeser and Janet Kohlhase: It is better to
assume that moving goods is essentially costless than to assume that
moving goods is an important component of the production process. At
this moment in time, this is almost literally true, due to the recession.
These sorts of odd dynamics arise because world shipping
infrastructure changes very slowly but inexorably (and cyclically) towards
higher, more aggregated capacity, and lower costs. This is due to the
highly capital-intensive nature of the business, and the extreme economies
of scale (leading to successively larger ships in every generation). Ships,
though they are moving vehicles, are better thought of as somewhere
between pieces of civic infrastructure (due to the large legacy impact of
government regulation and subsidies) and fabs in the semiconductor
industry (which, like shipping, undergoes a serious extinction event and
consolidation with every trough in the business cycle). Currently the top
10 companies pretty much account for 100% of the capacity, as this
visualization from gcaptain.com shows, which tells us that today
there are over 6048 container ships afloat, with a total capacity of around
13 million TEU (twenty-foot equivalent units).

The mortgaging and financial arrangements dictate that ships
absolutely must be kept moving at all costs, so long as the revenue can at
least make up port costs and service debt. As the most financially
constrained part of the system, ships dominate the equation over trains and
trucks. One tidbit about the gradual consolidation: as of the books
writing, McLeans original company, Sea-Land, is now part of Maersk.
How this came to be is the most important (though not the most fun)
subplot. Things didn't proceed smoothly, as you might expect. All sorts of forces, from regulation, to misguided attempts to mix breakbulk and containers, to irrationalities and tariffs deliberately engineered in to keep
longshoremen employed, held back the emergence of the true efficiencies
of containerization. But finally, by the mid-seventies, todays business
dynamics had been created.
Two: The Technology Subplot
If the dollar figures and percentages tell the financial story, the heart
of the technology action is in the operations research. While McLean and
Sea-Land were improvising on the East Coast, a West Coast pioneer,
Matson, involved primarily in the 60s Hawaii-California trade, drove this
storyline forward. The cautious company hired university researchers to
throw operations research at the problem, to figure out optimal container
sizes and other system parameters, based on a careful analysis of goods
mixes on their routes. Today, container shipping, technically speaking, is
primarily this sort of operations research domain, where systems are so
optimized that an added second of delay in handling a container can
translate to tens of thousands of dollars lost per ship per year.
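To get a feel for why single seconds matter at this scale, here is a minimal back-of-the-envelope sketch in Python. All three inputs (moves per port call, port calls per year, and the hourly cost of keeping a ship at berth) are illustrative assumptions of mine, not figures from the book.

# Rough cost of one extra second per container move, per ship per year.
# All three inputs below are assumptions, not figures from the book.
moves_per_call = 2_000   # container lifts in a single port call (assumed)
calls_per_year = 40      # port calls per ship per year (assumed)
cost_per_hour = 2_000    # ship plus berth operating cost, dollars/hour (assumed)

extra_hours = (1 * moves_per_call * calls_per_year) / 3600
print(f"extra berth time: {extra_hours:.1f} hours/year")
print(f"extra cost: ${extra_hours * cost_per_hour:,.0f} per ship per year")

At these assumed numbers, one second per move works out to roughly $44,000 per ship per year, which is the right order of magnitude for the claim above.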
If you are wondering how port operations involving longshore labor
could have been that expensive before containerization, the book provides
an illuminating sample manifest from a 1954 voyage of a C-2 type cargo
ship, the S. S. Warrior. The contents: 74,903 cases, 71,726 cartons,
24,036 bags, 10,671 boxes, 2,880 bundles, 2,877 packages, 2,634 pieces,
1,538 drums, 888 cans, 815 barrels, 53 wheeled vehicles, 21 crates, 10
transporters, 5 reels and 1,525 undetermined. Thats a total of 194,582
pieces, each of which had to be manually handled! The total was just
5,015 long tons of cargo (about 5,095 metric tons). By contrast, the
gigantic MSC Daniela, which made its maiden voyage in 2009, carries
13,800 containers, with a deadweight tonnage of 165,000 tons. Thats a
30x improvement in tonnage and a 15x reduction in number of pieces for a
single port call. Or in other words, a change from about 0.026 tons (26 kg) per handling to about 12 tons per handling, or a roughly 460x improvement in handling efficiency (somebody check my arithmetic, but I think I did
this right). And of course, every movement in the MSC Danielas world is
precisely choreographed and monitored by computer. Back in 1954,
Brando time, experienced longshoremen decided how to pack a hold, and
if they got it wrong, loading and unloading would take vastly longer. And
of course there was no end-to-end coordination, let alone global
coordination.
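Since the parenthetical above invites a check of the arithmetic, here is a minimal Python sketch that reproduces the comparison from the figures already quoted (deadweight tonnage stands in for cargo weight, as in the comparison itself):

# Reproduces the S.S. Warrior vs. MSC Daniela comparison from the figures above.
warrior_pieces = 194_582          # 1954: individually handled pieces
warrior_tonnes = 5_015 * 1.016    # 5,015 long tons is roughly 5,095 metric tons

daniela_boxes = 13_800            # 2009: containers in a single voyage
daniela_tonnes = 165_000          # deadweight tonnage, a rough proxy for cargo weight

print(f"tonnage: {daniela_tonnes / warrior_tonnes:.0f}x more")           # ~32x
print(f"pieces handled: {warrior_pieces / daniela_boxes:.0f}x fewer")    # ~14x
kg_per_piece_1954 = 1000 * warrior_tonnes / warrior_pieces               # ~26 kg per piece
tonnes_per_box_2009 = daniela_tonnes / daniela_boxes                     # ~12 tons per box
print(f"handling efficiency: {1000 * tonnes_per_box_2009 / kg_per_piece_1954:.0f}x")

The last line comes out at roughly 460x, so the figures quoted above hang together.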
Thats not to say the mechanical engineering part of the story is
uninteresting. The plain big box itself is simple: thin corrugated sheet
aluminum with load-bearing corner posts capable of supporting a stack
about 6-containers high (not sure of this figure), with locking mechanisms
to link the boxes. But this arrangement teems with subtleties, from
questions of swing control of ship-straddling cranes, to path-planning for
automated port transporters, to the problem of ensuring the stability of a 6-high stack of containers in high seas, with the ship pitching and rolling
violently up to 30 degrees away from the vertical. Here is a picture of the
twist-lock mechanism that holds containers together and to the
ship/train/truck-bed and endures enormous stresses, to make this magic
possible:

I am probably a little biased in my interest here, since I am fascinated by the blend of OR, planning and mechanical engineering (particularly stability and control) problems represented by container handling operations. I actually wrote a little simulator for my students to use as the
basis for their term project when I taught a graduate course on complex
engineering systems at Cornell in 2006 (it is basically a Matlab
visualization and domain model with swinging cranes and stacking logic;
if you are interested, email me and Ill send you the code). But if you are
interested in this aspect, try to get hold of the Rotterdam and Singapore
port episodes of the National Geographic Channel Megastructures show.
There is a third thread to this subplot, which is probably the dullest part of the book: the story of how American and international standards bodies
under heavy pressure from various business and political interests
struggled and eventually reached a set of compromises that allowed the
container to reach its full potential as an interoperability mechanism. The
story was probably a lot more interesting than Levinson was able to make it sound, but that's because it would take an engineering eye, rather than an economist's eye, to bring out the richness underneath the
apparently dull deliberations of standards bodies. There are also less
important, but entertaining threads that have to do with the technical
challenges of getting containers on and off trains and trucks, the sideshow
battle between trucking and railroads, the design of cells in the ships
themselves, the relationship between a ships speed/capacity tradeoffs and
oil prices, and so forth.
Three: The Labor and Politics Subplot
This is the subplot that most of us would instinctively associate with
the story of shipping, thanks to Marlon Brando. The big picture has a story
with two big swings. First, in the early part of the century, dock labor was
a truly Darwinian world of competition, since there were spikes of demand
for longshore labor followed by long periods of no work. Since it was a
low-skill job, requiring little formal education and a lot of muscle, there
was a huge oversupply of willing labor. Stevedoring companies (general contractors for port operations) picked crews for loading and unloading
operations through a highly corrupt system of mustering, favors, bribes,
kickbacks and loansharking. The longshoremen, for their part, formed
close brotherhoods, usually along ethnic lines (Irish, Italian, Black in the
US) that systematically kept out outsiders, and maintained a tightly
territorial system of controls over individual piers. This capitalist
exploitation system then gave way to organized labor, but a very different
sort of labor movement than in other industries. Where other workers
fought for steady work and regular hours and pay, longshoremen fought to
keep their free-agent/social-network driven labor model alive, and resist
systematization. This local, highly clannish and tribal labor movement had
a very different kind of DNA from that of the inland labor movements, and
as containerization proceeded, the two sides fought each other as much as
they fought management, politicians and automation. Though longshore
labor is at the center of this subplot, it is important not to forget the labor
movements in the railroad and trucking worlds. Those stories played out
equally messily.
East and West Coast labor responded differently, as did
other parts of the world, but ultimately, the forces were much too large for
labor to handle. Still, the labor movement won possibly its most
significant victory in this industry, and came to be viewed as a model by
labor movements in other industries.
The labor story is essentially a human one, and it has to be read in
detail to be appreciated, for all its drama. The story has its bizarre
moments (at one point, West Coast labor actually had to fight management
to advocate faster mechanization and containerization, for reasons too
complex to go into in this post), and is overall the part that will interest the
most people. It is important though, not to lose sight of the grand epic
story, within which labor was just one thread.
The other big part of this subplot, inextricably intertwined with the
labor thread, is the politics thread. And here I mean primarily the politics
of regulation and deregulation, not local/urban. To those of us who have
no rich memory of regulated economies, the labyrinthine complexities of
regulation-era industrial organization are simply incomprehensible. The
star of this thread was the all-powerful Interstate Commerce Commission
of the US (ICC), and its sidekicks, the government-legitimized price-fixing cartels of shipping lines on major routes. The ICC controlled the world of transport at a bizarre level of detail, ranging from commodity-level pricing, to dictating route-level access, to carefully managing
competition between rail, road and sea, to keep each sector viable and
stable. And of course, there was a massive money-chest of subsidies,
loans and direct government infrastructure investment in ports to be fought
over. The half-century long story can in fact be read as the McLean bull in
the china shop of brittle and insane ICC regulations, simultaneously
smashing the system to pieces, and taking advantage of it.
Four: The Urban Geography and History Subplot
This is the subplot that interested me the most. Containerization
represented a technological force that old-style manual-labor-intensive
ports and their cities simply were not capable of handling. The case of
New York vs. Newark/Elizabeth is instructive. New York, the greatest port
of the previous era of shipping, was an economy that practically lived off
shipping, with hundreds of thousands employed directly or indirectly by
the sector. Other industries ranging from garments to meatpacking
inhabited New York primarily because the inefficiencies of shipping made
it crucial to gain any possible efficiency through close location.
Containerization changed all that. While New York local politics
around ports was struggling with irrelevant issues, it was about to be
blindsided by containers. The bistate Port Authority, finding itself cut out
of New York power games, saw an opportunity when McLean shipping
was looking to build the first northeastern container handling wharf. This
required clean sheet design (parallel parking wharfs instead of piers
perpendicular to shore), and plenty of room for stacking and cranes. While
nominally supposed to work towards the interests of both states, the Port
Authority essentially bet on Newark, and later, the first modern container
port at Elizabeth. The result was drastic: New York cargo traffic collapsed
over just a decade, while Newark went from nothing to gigantic. Today,
you can see signs of this: if you ever fly into Newark, look out the window
at the enormous maze of rail, truck and sea traffic. The story repeated
itself around the US and the world. Famous old ports like London,
Liverpool and San Francisco declined. In their place arose fewer and far
larger ports in oddball places: Felixstowe in the UK, Rotterdam, Seattle,
Charleston, Singapore, and so forth.
This geographic churn had a pattern. Not only did new displace old,
but there were far fewer new ports, and they were far larger and with a
different texture. Since container ports are efficient, industry didnt need to
locate near them, and they became vast box parking lots in otherwise
empty areas. The left-behind cities not only faced a loss of their port-based economies, but also saw their industrial base flee to the hinterland.
Cities like New York and San Francisco had to rethink their entire raison
d'être, figure out what to do with abandoned shorelines, and reinvent
themselves as centers of culture and information work.
There is a historical texture here: the rise of Japan, Vietnam, the Suez
Crisis, oil shocks, and the Panama Canal all played a role. Just one
example: McLean, through his Vietnam contract, found himself with fully paid-up, return-trip empty containers making their way back across the
Pacific. Anything he could fill his boxes with was pure profit, and Japan
provided the contents. With that, the stage was set for the Western US to
rapidly outpace the East Coast in shipping. Entire country-sized
economies had their histories shaped by big bets on container shipping
(Singapore being the most obvious example). At the time the book was
written, 3 of the top 5 ports (Hong Kong, Singapore, Shanghai, Shenzhen
and Busan, Korea) were in China. Los Angeles had displaced
Newark/New York as the top port in the US. London and Liverpool, the
heart of the great maritime empire of the Colonial British, did not make
the top 20 list.
Five: The Broad Impact Subplot
Lets wrap up by looking at how the narrow world of container
shipping ended up disrupting the rest of the world. The big insight here is
not just that shipping costs dropped precipitously, but that shipping
became vastly more reliable and simple as a consequence. The 25%
transportation fraction of global goods in 1960 is almost certainly an
understatement because most producers simply could not ship long
distances at all: stuff got broken, stolen and lost, and it took nightmarish
levels of effort to even make that happen. Instead of end-to-end shipping
with central consolidation, you had shipping departments orchestrating ad
hoc journeys, dealing with dozens of carriers, forwarding agents, transport
lines and border controls.
Today, shipping has gotten to a level of point-to-point packet-switched efficiency, where the shipper needs to do a hundredth of the work
and can expect vastly higher reliability, on-time performance, far lower
insurance costs, and lower inventories. That means a qualitatively new
level of thinking, one driven by the axiom that realistically, the entire
world is your market, no matter what you make. The dependability of the
container-plumbing makes you rethink every business.
In short, container shipping, through its efficiency, was a big cause of
the disaggregation of vertically integrated industry structures and the
globalization of supply chains along Toyota-like just-in-time models. Just
as the Web (1.0 and 2.0) sparked a whole new world of business models,
container shipping did as well.
The deepest insight about this is captured in one startling point made
in the book. Before container shipping, most cargo transport involved
either raw materials or completely finished products. After container
shipping, the center of gravity shifted to intermediate (supply chain)
goods: parts and subassemblies. Multinationals learned the art of sourcing
production in real time to take advantage of supply chain and currency
conditions, and moving components for assembly and delivery at the right
levels of disaggregation. Thanks to container shipping, manufacturers of
things as messy and complicated as refrigerators, computers and airplanes
are able to manage their material flows with almost the same level of ease
that the power sector manages power flows on the electric grid through
near real-time commodity trading and load-balancing.
My clever-phrase-coinage of the day. The container did not only make
just-in-time possible. It made just-in-place possible.
Conclusion: Towards Box: The Movie
I wasnt kidding: I think this story deserves a big, epic-scale movie.
Not some schmaltzy piece-of-crap story about a single longshoreman
facing down adversity and succeeding or failing in the container world,
but one that tells the tale in all its austere, beyond-human grandeur; one
that acknowledges and celebrates the drama of forces far larger than any
individual human.
An end-note: this is my first post in a deliberate attempt to steer my
business/management posts and book reviews 90 degrees: from a focus on
general management topics like leadership, and functional topics like HR
and marketing, towards vertical topics. I have felt for some time that
business writing is facing diminishing returns from horizontal and
functional foci, and while Ill return to those views when I find good
material, my pipeline is mainly full of this kind of stuff now. Hope you
guys like the change in direction: steering a major theme of a very wordy,
high-inertia blog like this one is as hard as steering a fully-laden
container ship. It is going to take some time to overcome the directional
inertia. Some verticals on my radar, besides shipping, include the
garbage/trash industry, healthcare, and infrastructure. If you are
interested in particular verticals, holler em out, along with suggested
books.

The World of Garbage


November 6, 2010
For the last two years, Ive had three books on garbage near the top of
my reading pile, and Ive gradually worked my way through two of them
and am nearly done with the third. The books are Rubbish: The
Archeology of Garbage by William Rathje and Cullen Murphy (1992),
Garbage Land: On the Secret Trail of Trash by Elizabeth Royte (2005),
and Gone Tomorrow: The Hidden Life of Garbage by Heather Rogers
(2005). Last week, I also watched the CNBC documentary, Trash Inc.:
The Secret Life of Garbage. Notice something about the four subtitles?
Each hints at the hidden nature of the subject. It is a buried, hidden secret
physically and philosophically. And there are many reasons why
uncovering the secret is an interesting and valuable activity. The three
books are motivated by three largely separate reasons: Rathje and Murphy bring an academic, anthropological eye to the subject. Royte's book is a
mix of amateur curiosity and concerned citizenship, while Rogers is
straight-up environmental activism. But reading the 3 books, I realized
that none of those reasons interested me particularly. I was fascinated by a
fourth reason: garbage (along with sewage, which I wont cover here) is
possibly the only complete, empirical big-picture view of humanity you
can find.
The Boundary Conditions of Civilization
Sometimes an engineering education can lead to very curious ideas
about what is important. Garbage is important and interesting in an
engineering sense because it illuminates one of the boundary conditions of
any systemic view of the world. If you cut through the crap (no pun
intended) of all our lofty views of ourselves, humanity is essentially a
giant system that feeds on low-entropy resources on one end (mines,
forests, oilfields) and defecates high-entropy waste at the other. Among
other things, this transformation allows us to create low-entropy islands of
order around ourselves (cities, buildings and everything else physical that
we build). If this flow from resources to garbage were to shut down,
nature would rapidly reclaim every inch of civilization, and you can read
about this fascinating thought experiment in The World Without Us by
Alan Weisman, which I've mentioned before.
Heres the thing about this view: the input end is simply too complex
to comprehend in any summary sense. We suck resources out of the planet
in extremely complicated and diversified ways. The processing part is also
far too complex to understand (it is basically civilization), but thought
experiments like Weismans at least help us get a non-empirical sense of
the scale and complexity of our presence on this planet.
But the output end? Easy. Just drill into the nearest landfill. Or follow
the course of a single man-made artifact. In Trash Inc., there is a revealing
example: plastic beverage bottles.
Message in a Bottle
The story of plastic water/soda bottles from a trash perspective is
simple. According to Trash Inc., in the US, about 51 billion bottles are
used every year (this number seems incredible. It amounts to about 1
bottle per person every 2 days. But it seems to be correct).
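A quick sanity check of that figure, with the US population value being my own assumption (roughly the 2010 census number), not something from the documentary:

# Sanity check of the 51-billion-bottles figure; population is an assumed value.
bottles_per_year = 51e9
us_population = 309e6              # assumed: roughly the 2010 US census figure

per_person = bottles_per_year / us_population
print(f"{per_person:.0f} bottles per person per year")     # ~165
print(f"one bottle every {365 / per_person:.1f} days")     # ~2.2 days

That works out to about one bottle per person every two days, so the incredible-sounding number is at least internally consistent.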
Only about 22% are recycled. The recycled stuff goes to make
polyester fabrics, mats and the like. Ironically, a manufacturer of such
recycled plastic goods in the US profiled in the documentary noted that he
was forced to import about 70% of his bottle needs from countries like
Canada.
What happens to the rest? Those that get thrown away with the
regular trash make it into the regular waste stream, with companies like
Waste Management working hard to figure out how to cheaply separate
the bottles out (since they represent a significant revenue opportunity; a
WM talking head in the documentary noted that WM could potentially
increase its revenues from $13 billion to $23 billion if it could just figure
out how to cheaply separate valuable recyclables from the waste stream
headed to landfills).
And there is a third category: stuff that doesnt even get to landfills,
but washes down streams and rivers into the open ocean, where it drifts for
hundreds of miles to form garbage islands in the middle of the ocean, such
as the Great Pacific Garbage Patch.
The story of the plastic water bottle serves as a sort of radioactive
tracer through the garbage industry, touching as it does every piece of the
puzzle.
The three books and the documentary explore different aspects of the
system, so lets briefly review them.
Rubbish by Rathje and Murphy
Rubbish, though a little dated, is the most professional of the three
books, since it is the result of a large, long-term academic study, with no
particular agenda in mind, and written by the godfather of the entire field
of Garbology. To the principals of the University of Arizona Garbage
project, garbage is just archeological raw material. The fact that drilling
into modern, active landfills tells us about modern humans, while digging
into ancient mounds tells us about Sumerians, is irrelevant to them. The
perspective lends an interesting kind of objectivity to the book.
The first and most basic thing I learned from the book surprised me
no end, and answered a question that I had always wondered about. Why
do ancient civilizations seem to get buried under mounds?
Turns out that for much of history, waste simply accumulated on
floors inside dwellings. Residents would simply put in new layers of fresh
clay to cover up the trash. Every dwelling was a micro landfill. When the
floor rose too high, they raised the ceiling and doorways.
The result was that most ancient civilizations rose (literally) on a pile
of their own trash. There is even a table of historical waste accumulation
rates included. South Asia is the winner in this contest: the Bronze Age
Indus Valley Civilization apparently had the fastest accumulation of waste
at nearly 1000 cm/century. (I cant resist a little subcontinental humor:
how about we attribute all the great cultural achievements of the Indus
Valley Civilization to modern India, and the trash to modern Pakistan,
where the major archeological sites are situated today?)

Ancient Troy was also quite the trash generator, at about 140 cm/century. Since those ancient times, accumulation rates have declined
dramatically (this doesnt mean weve been producing less trash per
capita; merely that weve stopped burying it under our own floors).
Historically, trash was also thrown out onto streets, and burned
outside cities. The composition of trash has changed as well. If you think
todays plastic water bottles are a menace, you should read the description
of the horse-manure problem that (literally) buried New York before the
automobile.
Skipping ahead a few thousand years, you get the modern sanitary
landfill. But the takeaway here is a sense of perspective. Historically
speaking, our modern times are not the trashiest time in our history.
Though the scale and chemical diversity of the trash management problem
is huge in our time simply because of the size of the global population, we
are relatively far ahead of older civilizations in managing our trash.
Much of the work described in the book is about the insights you can
obtain by drilling into landfills, or collecting garbage bags directly from
households. The findings provide fascinating glimpses into the delusions
of human beings. Take food habits for instance. One interesting research
exercise the book describes is a study comparing self-reported food habits
to the revealed food habits based on trash analysis. The authors call this
the Lean Cuisine Syndrome:
People consistently underreport the amount of regular
soda, pastries, chocolate, and fats that they consume; they
consistently over-report the amount of fruits and diet soda.
The book notes a related phenomenon called the Surrogate
Syndrome: people are able to describe the actual habits of family members
and neighbors with chilling accuracy.
Another fascinating analysis involves pull-tabs of beer cans. These
seem to be a sort of carbon-dating tool for modern garbage.

The unique punch-top on Coors beer cans, for example, was used only between March of 1974 and June of 1977... In landfills around the country, wherever Coors beer cans were discarded, punch-top cans not only identify strata associated with a narrow band of dates but also separate two epochs one from another.
Perhaps the most fascinating parts of the book are the demographic detective-work stories. It turns out you can accurately figure out a lot of
things about neighborhoods: income levels, race, number of children,
consumption patterns and the like, simply by looking at and classifying the
trash. Trash also appears to be a goldmine of market research (I am
surprised there isnt a market research agency out there offering
segmentation reports based on personas/clusters derived from trash
analysis. Or perhaps there is). Interestingly, the hardest thing to infer from
trash is the proportion of men in a population. A Census Bureau funded
project failed to find any convincing models. For other variables, reliable
equations are available. For example,
Infant Population = 0.01506 * (Number of diapers in a 5-week collection)
There are similar correlates for women. For men though, such
indicators are unreliable: Men are not exactly invisible in garbage, but
garbage is a more unreliable indicator of their live-in presence than it is
for any other demographic group.
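To make the regression concrete, here is a minimal Python sketch applying it to a made-up collection; only the 0.01506 coefficient comes from the book, and the diaper count is a hypothetical input:

# The Rubbish regression for infant population, applied to a hypothetical count.
def estimated_infants(diapers_in_5_weeks: float) -> float:
    # Coefficient is the one quoted above; the input is whatever a 5-week
    # curbside collection for the neighborhood turns up.
    return 0.01506 * diapers_in_5_weeks

print(estimated_infants(1_000))   # about 15 infants implied by 1,000 diapers

So a neighborhood whose five-week collection yields a thousand diapers is estimated to house about fifteen infants.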
Overall, the book is fascinating in the sense that Levitts
Freakonomics is fascinating. There is no overarching conceptual
framework, just an entertainingly told story that weaves together a few
broad themes and dozens of anecdotes chosen as much for entertainment
as insight.
Garbage Land by Elizabeth Royte
Roytes book is much more of a popular science treatment. The
interesting part is her "follow the trail" approach to her subject.

She starts with an account of an urban adventure: canoeing in
Gowanus Canal, a highly polluted waterway in Brooklyn, in 2002, with
volunteers dedicated to keeping it clean. From there she moves on to an
analysis of her own life by examining her own garbage, an amateur self-study along the lines of the Rathje-Murphy study of larger communities.
Among her reflections:
Picking through garbage was smelly and messy and
time-consuming, but it was revelatory in a way. I hadnt
realized my diet was so boring. Anyone picking through my
castoffs would presume my family survived on peanut
butter, jelly, bread, orange juice, milk, and wine. And,
largely, we did.
The opening chapter includes a page from her garbage diary, and it
inspired me enough to stop and reflect on my own garbage and recycling
that week. Suffice it to say, the lessons were not pleasant.
From her home, Royte moves on to the next logical step: the curbside.
She arranges a ride-along with a garbage truck. This section is a
fascinating portrait of New Yorks Strongest, as the sanitation department
workers call themselves (the cops are the Finest and the firefighters are
the Bravest). The NYC garbagemen lift about five to six tons a day, in
seventy-pound bags. The view from the garbagemans perspective is
disturbing. Royte notes:
I knew, after just one day on the job, that san men
constantly made judgments about individuals. They
determined residents wealth or poverty by the artifacts
they left behind. They appraised real estate by the height of
a discarded Christmas tree, measured education level by the
newspapers and magazines stacked on the curb. Glancing at
the flotsam and jetsam as it tumbled through their hopper,
they parsed health status and sexual practices.
It is not entirely a first-person narrative though. Bits of history and
research are woven through the narrative. There is an interesting section
on the history of New York's sanitation, and the horse manure problem I mentioned before. In 1880, we learn, 15,000 dead horses had to
be cleared from city streets. City horses dumped 500,000 pounds of
manure and 45,000 gallons of urine onto city streets daily. The situation
needed a hero, and Colonel George Waring was that hero. He created the
first modern civic garbage-handling infrastructure in the US.
The rest of the book continues in this vein, chronicling Roytes
explorations of landfills, incinerator plants, toilets and sewage. The story
is by turns alarming, amusing, disgusting and scary. While there is no
overt alarmism, the book, by virtue of being a very personal exploration,
gets to you in a way that the more detached and objective Rathje-Murphy
book does not.
Gone Tomorrow by Heather Rogers
For completeness, Ill offer just a note about Gone Tomorrow, since I
havent finished reading it. It covers much of the same ground as the first
two books, but primarily from an environmentalist perspective (there is
also a documentary). It lacks the open-ended curiosity and sense of
discovery you get from the other two books, but you do get the right
pattern of highlighting if you are interested in the environmental angle.
Trash Inc.
And lets wrap with the CNBC documentary. While rather shallow,
the documentary does have the largest scope of all the material I went
through. Of particular interest is a segment on the garbage problem in
China, another on the MIT Trash Track project, and the plastic water bottle
story I told in the beginning. Catch a rerun if you can.
Landfills
Through the three books and the documentary, the star of the show is
definitely the landfill. One particular landfill, the Fresh Kills landfill in
New York (closed about a decade ago) plays a role in all the stories (the
largest landfill in the US today is the Apex landfill in Nevada).

The closing of Fresh Kills turned out to be a big event in garbage
history, since it triggered possibly the biggest trash transport program in
history, as the city orchestrated a massive garbage trucking program that
today ships its trash out all over the country. Of New York City's 1.3-billion-dollar annual sanitation budget, about $330 million a year goes towards exporting the trash.
New York's statistics are astounding: 12,000 tons a day, 24,000 lb per
person per year, garbagemen making $70,000 a year with overtime (the
most experienced making six figures), a 300 square mile territory, a Mafia
angle, 1500 trucks, and a transport network that fans out hundreds of miles
into the American hinterland.
At the other end of the distribution chain are towns like Fox Township
in Pennsylvania, neighbor to the Greentree landfill owned by Veolia, a
French company. The residents are understandably ambivalent about the
presence of a giant garbage can in their backyard. On the one hand, the
landfill is a constant threat to the local environment, the water quality in
particular. But on the other hand, half the towns budget comes from the
fees paid by the landfill, which charges $3 per ton as tipping fees to
customers, and passes along a cut to the city.
The landfills themselves are fascinating civil engineering structures.
Todays modern sanitary landfills are dry landfills (the old theory that
garbage should be wet so it can degrade faster has been discarded in
favor of keeping it as dry as possible and sealing it in so that a landfill is
effectively forever). Liquid runoff (leachate: exactly the same stuff that
you sometimes find at the bottom of your trash can, the brown smelly
liquid) is carefully directed to the sewage stream, while vents release the
gases. The gases include methane and are a source of revenue, via power
generation (there is a BMW plant that runs off landfill gas).
But despite the engineering complexity, these are basically just large
trash cans. Lined with plastic like the one in your kitchen. The only
difference is that the trash has nowhere to go. Once it is full, it is capped
and landscaped, and you get all those strangely beautiful platonic
mountains you see when you drive along country highways (you can tell
when you are looking at a trash mountain: you will see venting pipes

sticking out, and the slopes will be at a precise 30 degree gradient). There
doesnt appear to be any need for alarmism though. America at least, has
plenty of room. Other parts of the world may not be as lucky.
There are 2300 landfills around the country. You could say the United
States is a collection of 2300 large families, each with one giant trash can.
The Global Picture
I havent found a good source that provides a global picture. The
CNBC documentary provides a glimpse into China, where Beijing alone
has a catastrophe looming (the city is overflowing with garbage in
unauthorized dump sites, because the available government-owned
landfills are insufficient for the growing citys waste stream).
Growing up in India, I have some sense of the world of garbage there.
There are both positives and negatives. On the positive side, the large-scale consumerist levels of trash production are still relatively rare in
India, and limited to the most well-off, westernized households. Growing
up, we generated practically no trash, simply because we mostly ate home-cooked food and did not consume the bewildering array of consumer
products that Americans routinely consume. As I recall, we owned a small
2-3 gallon trash basket, and generated perhaps one basket-full a week,
most of which was organic matter (which went to our garden). There was
little packaging. Groceries came in recycled newspaper bags, which we
recycled again.
But what little waste we did generate was poorly captured in the
organized waste stream. There were many disorganized small dumps in
the back alleys and few dumpsters.
By my teenage years in the 80s, modernity began catching up. Thin
plastic bags made from recycled (downcycled actually) plastic caught on
and replaced the newspaper bags. After reigning for about a decade, they
thankfully declined in popularity (thanks in part to an unanticipated
consequence: stray cows eating them and then dying as the plastic choked
their intestines), and I believe have actually been banned, at least in major
cities.

On the other end, though much of the waste is basically un-managed,
recycling is probably vastly more efficient than anywhere in the West. But
the efficiency comes at a great human cost: there is an entire hierarchy of
impoverished classes (and socially immobile castes) that makes its living
off the waste stream. At the very top (which isnt saying much) are the
door-to-door used-newspaper buyers, who make paper bags or sell to
recycling plants (our gardener made some money on the side in this trade,
and I spent many evenings as a kid happily helping him and his son, who
was about my age, make paper bags). Also at the top are the wandering
traders who exchange junk and scrap metal for new aluminum
kitchenware. Below them you find a variety of roles, from the ragpickers
and scavengers, who clamber over landfills looking for anything of value,
to entire shantytowns of scrap merchants that spring up around the
landfills, buying from the scavengers. The system is efficient and picks the
waste-stream clean of anything of even the lowest potential value. But yes,
it involves humans running a daily risk of all sorts of infection and other
dangers.
To foreigners, looking out the window as an airplane comes in to land
at Mumbai can be a shock. The landing/take off glide paths often go right
over the main garbage dumps of Mumbai and the sprawling mess is
anything but pleasant to look at. But if you ever drive through the city's neighborhoods where the scavenger-trade shops line the streets, you
cannot help but admire the gritty resourcefulness with which so many
people manage to live off garbage.
But the situation is gradually getting worse, driven both by the
exploding population and the rise of American-style consumerism. During
my last visit to India in 2008, I noticed that while my mother still ran the
same tight, low-footprint household she always has, many of the younger
yuppie couples seemed to have adopted the same lifestyle that had
shocked me when I first arrived in America in 1997. A lifestyle whose
story is written with discarded paper cups, too many paper napkins, water
bottles, product packaging and discarded, broken appliances. A culture of
home-cooked food is gradually transforming into a culture of take-out
food. And it isnt American-style fast-food that is to blame. You can now
buy frozen or packaged versions of almost everything that I thought of as
home-made Indian food, growing up. And I have to admit, every passing
year here in the States, I cook less, and buy more frozen, packaged foods
from my local Indian grocery store. Pizza boxes may be appearing in
Indian trash cans, but frozen chana masala boxes are appearing in
American trash cans as well (looking around the world though, it seems to
me that the Japanese are possibly the most in love with ridiculous amounts
of packaging).
But theres even more to the globalization of garbage than just
different country-level views. There is the international trade in garbage.
Places like India and China import garbage and recycling at all levels from
entire ships destined for the scrap-metal yard (which I wrote about earlier),
[January 28, 2010] to lead batteries to paper meant for recycling. The
waste stream is more than a network of dump routes that fans out from
cities like New York. It is a huge circulatory system that spans the globe.
Exploring Further
I have to admit, despite reading a ton of material on the subject, I am
merely a lot more informed, not much wiser. What is the true DNA of the
world of garbage? What is its significance within an overall understanding
of our world? Is it merely a treasure-trove of anthropological insights, or is
there a deeper level of analysis we can get to? The books left me with the
uncomfortable feeling that the garbage professionals were so absorbed in
the immediate details that they were missing something bigger. But I dont
know what that is. Somehow garbage in the literal sense probably fits into
the End of the World theme that I blogged about before (where I proposed
my garbage eschatology model of how the world might end).
Anyway, I expect my interest in this topic will continue to evolve.
Ive started a trail on the subject (click the image below), which you can
explore. Do send me link/resource suggestions to add to it. As you can tell
by the relative incoherence of the trail, I dont yet have a good idea about
how to put the jigsaw puzzle together in a more meaningful way.

The Disruption of Bronze


February 2, 2011
I pride myself on my hard-won sense of history. World history is
probably the subject Ive studied the most on my own, starting with
Plantagenet Somerset Fry's beautifully illustrated DK History of the
World at age 15. I studied the thing obsessively for nearly a year, taking
copious notes and neglecting my school history syllabus. Its been the best
intellectual investment of my life. Since then, I periodically return to
history to refresh my brain whenever I think my thinking is getting stale.
Most recently, Ive been reading Gibbons Decline and Fall of the Roman
Empire and Alfred Thayer Mahans The Influence of Sea Power Upon
History. My tastes have gradually shifted from straightforward histories by
modern historians to analytical histories with a specific angle, preferably
written by historians from eras besides our own.
The big value to studying world history is that no matter how much
you know or think you know, one new fact can completely rewire your
perspectives. The biggest such surprise for me was understanding the real
story (or as real as history ever gets) of how iron came to displace bronze,
and what truly happened in the shift between the Bronze Age and the Iron
Age.
What comes to mind when you think bronze? Hand-crafted
artifacts, right?
What about iron? Big, modern, steel mills and skyscrapers, right? Iron
metallurgy is obviously the more advanced and sophisticated industry in
our time.
The Iron Age displaced the Bronze Age sometime in the late second
millennium BC. The way the story is usually told, iron was what powered
the rise of the obscure barbarian-nomads known as the Aryans throughout
the ancient world.

You could be forgiven for thinking that this was a sudden event based
on iron being suddenly discovered and turning out to be a superior
material for weaponry, and the advantage accidentally happening to fall to
the barbarian-nomads rather than the civilization centers.
Far from it.
Heres the real (or less wrong) story in outline.
The Clue in the Tin
You see, iron and bronze co-existed for a long time. Iron is a plentiful
element, and can be found in relatively pure forms in meteorites (think of
meteorites as the starter kits for iron metallurgy). Visit a geological
museum sometime to see for yourself (I grew up in a steel town).
It is hard to smelt and work, but basically once you figure out some
rudimentary metallurgy and can generate sufficiently high temperatures to
work it, you can handle iron, at least in crude, brittle and easily rusted
forms. Not quite steel, but then who cares about rust and extreme hardness
if the objective is to split open the skull of another guy in the next 10
seconds.
Bronze on the other hand is a very difficult material to handle. There
were two forms in antiquity. The earlier Bronze Age was dominated
by what is known as arsenical bronze. Thats copper alloyed with arsenic
to make it harder. Thats not very different from iron. Copper is much
scarcer and less widely-distributed of course, but it does occur all over the
place. And fortunately, when you do find it, copper usually has trace
arsenic contamination in its natural form. So you are starting with all the
raw material you need.
The later Bronze Age though, relied on a much better material: tin
bronze. Now this is where the story gets interesting. Tin is an extremely
rare element. It only occurs in usable concentrations in a few isolated
locations worldwide.

In fact known sources during the Bronze Age were in places like
England, France, the Czech Republic and the Malay peninsula. Deep in
barbarian-nomad lands of the time. As far as we can tell, tin was first
mined somewhere in the Czech Republic around 2500 BC, and the
practice spread to places like Britain and France by about 2000 BC.
Notice something about that list? They are very far from the major
Bronze Age urban civilizations around the Nile, in the Middle East and in
the Indus Valley, of 4000-2000 BC or so.
This immediately implies that there must have been a globalized long-distance trade in tin connecting the farthest corners of Europe (and
possibly Malaya) with the heart of the ancient world. Not only that, you
are forced to recognize that the metallurgists of the day must have had
sophisticated and deliberate alloying methods, since you cannot assume,
as you might be tempted to in the case of arsenical bronze, that the
ancients didn't really know what they were doing. You cannot produce tin-bronze by accident. Tin implies skills, accurate measurements, technology,
guild-style education, and land and sea trade of sufficient sophistication
that you can call it an industry.
Whats more, the use of tin also implies that the Bronze Age
civilizations didnt just sit around inside their borders, enjoying their
urban lifestyles. They must have actually traded somehow with the far
corners of the barbarian-nomad world that eventually conquered them.
Clearly the precursors of the Aryans and other nomadic peoples of the
Bronze Age (including the Celts in Europe, the ethnic Malays, and so
forth) must have had a lot of active contact with the urban civilizations
(naive students of history often dont get that humans had basically
dispersed through the entire known world by 10,000 BC; civilization
may have spread from a few centers, but people didnt spread that way,
they spread much earlier).
In fact, tin almost defines civilization: only the 3-4 centers of urban
civilization of that period had the coordination capabilities necessary to
arrange for the shipping of tin over land and sea, across long distances. It
is well recognized that they had trade with each other, with different trade
imbalances (there is clear evidence of land and sea trade among the
Mesopotamian, Nile and Indus river valleys; the Yellow River portions of
China were a little more disconnected at that time).
What is not as well recognized is that the evidence of commodities
like tin indicates that these civilizations must have also traded extensively
with the barbarian-nomad worlds in their interstices and beyond their
borders in every direction. The iron-wielding barbarians were not shadowy strangers who suddenly descended on the urban centers out of nowhere. They were marginal peoples with whom the civilizations had
relationships.
So tin implies the existence of sophisticated international trade. I
suspect it even means that tin was the first true commodity money
(commodity monies dont just emerge based on their physical properties
and value; they must provide a raison d'être for trade over long distances).
Iron vs. Bronze
So what about iron? Since it was all over the place, we cannot trace
the origins of iron smelting properly, and in a sense there is no good
answer to the question "where was iron discovered?" It was in use as a
peripheral metal for a long period before it displaced bronze (possibly
inside the Bronze Age civilizations and the barbarian-nomad margins). As
the Wikipedia article says, with reference to iron use before the Iron Age:
Meteoric iron, or iron-nickel alloy, was used by various ancient peoples thousands of years before the Iron Age. This iron, being in its native metallic state, required no smelting of ores. By the Middle Bronze Age, increasing numbers of smelted iron objects (distinguishable from meteoric iron by the lack of nickel in the product) appeared throughout Anatolia, Mesopotamia, the Indian subcontinent, the Levant, the Mediterranean, and Egypt. The earliest systematic production and use of iron implements originates in Anatolia, beginning around 2000 BCE. Recent archaeological research in the Ganges Valley, India showed early iron working by 1800 BC. However, this metal was expensive, perhaps because of the technical processes required to make steel, the most useful iron product. It is attested in both documents and in archaeological contexts as a substance used in high value items such as jewelry.
Unlike tin-bronze, which probably required a specific sequence of
local inventions near the ore sources followed by diffusion, iron use could
(and probably did) arise and evolve in multiple places in unrelated ways,
because it didnt depend on special ingredients. The idea that it might have
been expensive enough, in the form of steel, to be jewelry, is reminiscent
of the modern history of another metal: aluminum. Like iron, it is one of
the most commonplace metals, and like iron, until a cheap manufacturing
process was discovered, it was treated as a precious metal. Rich people ate off
aluminum ware and wore aluminum jewelry.
So you can tell a broader, speculative history: since you didnt need
complicated shipping and smelting to make a basic use of iron, its use
could develop on the peripheries of civilization, among barbarian-nomads
who didnt demand the high quality that the tin-bronze markets did. Iron
didn't need the complicated industry that bronze did. What's more, chances are the bronze guilds were quite snooty about the crappy, rusty material outside of highly refined and expensive jewelry uses.
But the margins, which didnt have the tin trade or industry, had a
good reason to pay attention to iron. I speculate that for the barbarian-nomad cultures that were far from the Bronze Age urban centers, the
upgrade that iron provided over stone, even with the problems of rust and
brittleness that plagued primitive iron, was enough for them to take down
the old Bronze-powered civilizations, and then leisurely evolve iron to its
modern form. I suspect a bronze-leapfrogging transition from stone to iron
happened in many places, as with cellphones today in Africa.
(As an aside, I assume there is an equally sophisticated story about how bronze displaced stone; neolithic stone-age cultures, like the ones the Europeans encountered in America, were far from grunting cave-dwellers. They had evolved stone use to a high art.)

By the time iron got both good enough and cheap enough to take on
bronze as a serious contender for uses like weaponry, the incumbent
Bronze Age civilizations couldn't catch up. The pre-industrial barbarian-nomads had the upper hand.
Iron didnt completely displace bronze in weaponry until quite late.
As late as Alexanders conquests, he still used bronze; iron technology
was not yet good enough at the highest levels of quality, but the point is, it
was good enough initially for the marginal markets, and for masses of
barbarian soldiers.
Sound familiar?
This is classic disruption in the sense of Clayton Christensen. An
initially low-quality marginal market product (iron) getting better and
eventually taking down the mainstream market (bronze), at a point where
the incumbents could do nothing, despite the extreme sophistication of
their civilization, with its evolved tin trading routes and deliberate
metallurgical practices.

Rewinding History
Understanding the history of bronze and iron better has forced me to
rewind my sense of when history proper starts by at least 11,000 years.

The story has given me a new appreciation for how sophisticated human
beings have been, and for how long. I used to think that truly
psychologically modern humans didnt emerge till about 1200 AD. The
story of bronze made me rewind my assessments to 4000 BC. Now,
though I dont know the details (nobody does), I think psychologically
modern human culture must have started no later than 10,000 BC, the
approximate period of what is called the Neolithic revolution.
Now I think the most interesting period in history is probably 10,000
BC to 4,000 BC. Even 20,000 BC to 10,000 BC is fascinating (thats when
the caves in Lascaux were painted), but lets march backwards one
millennium at a time.

Bays Conjecture
May 21, 2009
A few years ago, I was part of a two-day DARPA workshop on the
theme of Embedded Humans. These things tend to be brain-numbing, so
you know an idea is a good one if it manages to stick in your head. One
idea really stayed with me, and well call it Bays conjecture (John Bay,
who proposed it, has held several senior military research positions, and is
the author of a well-known technical textbook). It concerns the effect of
intelligent automation on work. What happens when the matrix of
technology around you gets smarter and smarter, and is able to make
decisions on your behalf, for itself and the overall system? Bays
conjecture is the antithesis of the Singularity idea (machines will get smarter and rule us, a la Skynet; I admit I am itching to see Terminator Salvation). In some ways its implications are scarier.
The Conjecture
Bays conjecture is simply this: Autonomous machines are more
demanding of their operator than non-autonomous machines. The
implication is this picture:

The point of the picture is this: when technology gets smarter, the
total work being performed increases. Or in Bays words, force
multiplication through accomplishment of more demanding tasks.
Humans are always taking on challenges that are at the edge of the current
capability of humans and machines combined. So like a muscle being
stressed to failure, total capacity grows, but work grows faster. We never
build technology that will actually relieve the load on us and make things
simpler. We only end up building technology that creates MORE work for
us.
The one exception is what we might call Bays corollary: he asserts
that if you design systems with the principle of human override
protection, total work capacity collapses back to the capability of humans alone. We are both too greedy and too lazy for that. We are motivated by
the delusional picture in Case 1, and we end up creating Case 2.
Heres why this is the opposite of Skynet/Singularity. Those ideas are
based (in the caricature Sci-Fi/horror version) on the idea that machines,
once they get smarter than us, will want to enslave us. In the Matrix,
humans are reduced to batteries. In the Terminator series, it is unclear
what Skynet wants to do with humans, though I am guessing well find out
and it will probably be some sort of naive enslavement.
The point is: the greed-laziness dynamic will probably apply to
computer AIs as well. To get the most bang for the buck, humans will have
to be at their most free/liberated/creative within the Matrix. So thats good
news. But on the other hand, the complexity of the challenges we take on
cannot increase indefinitely. At some point, the humans+machines matrix
will take on a challenge thats too much for us, and well do it with a
creaking, high-entropy worldwide technology matrix that is built on
rotting, stratified layers of techno-human infrastructure. The whole thing
will fail to rise to the challenge and will collapse, dumping us all back into
the stone age.

Halls Law:
The Nineteenth Century Prequel to Moores Law
March 8, 2012
For the past several months, I've been immersed in nineteenth century history. Specifically, the history of interchangeability in technology between 1765, when the Système Gribeauval, the first modern technology doctrine based on the potential of interchangeable parts, was articulated, and 1919, when Frederick Taylor wrote The Principles of Scientific Management.
Here is the story represented as a Double Freytag diagram, which
should be particularly useful for those of you who have read Tempo. For
those of you who havent, think of the 1825 Hall Carbine peak as the
Aha! moment when interchangeability was first figured out, and the
1919 peak as the conclusion of the technology part of the story, with the
focus shifting to management innovation, thanks in part to Taylor.

The unsung and rather tragic hero of the story of interchangeability was John Harris Hall (1781-1841), inventor of the Hall carbine. So I am
naming my analog to Moores Law for the 19th century Halls Law in his
honor.

The story of Halls Law is in a sense a prequel to the unfinished story of Moores Law. The two stories are almost eerily similar, even to
believers in the history repeats itself maxim.
Why does the story matter? For me, it is enough that it is a
fantastically interesting story. But if you must have a mercenary reason for
reading this post, here it is: understanding it is your best guide to the
Moores Law endgame.
So here is my telling of this tale. Settle in, its going to be another
long one.
Onion Steel
In A Brief History of the Corporation, I argued that there were two distinct phases: an early mercantile-industrial phase that was primarily European in character, extending from about 1600 to 1800, and a later Schumpeterian-industrial phase, extending from about 1800-2000, that was primarily American and Russian in character.
Each phase was enabled by a distinct technological culture. In the early, British phase, a scientific sensibility was the exception rather than the rule. The default was the craftsman sensibility. In the later, American-Russian phase, the scientific sensibility was the rule and the craftsman sensibility the exception (it is notable that the American-Russian phase was inspired by French thought rather than British; call it Napoleon's revenge).
What was this (much romanticized today) craftsman sensibility?
Consider this passage about the state of steel-making in Sheffield, the
leading early nineteenth century technology center for the industry, before
the rise of American steel. The quote is from Charles Morris's excellent
book The Tycoons, my primary reference for this post (it is nominally
about the lives of Rockefeller, Carnegie, J. P. Morgan and Jay Gould, but
is actually a much richer story about the broad sweep of 19th century
technology history; I am not done with it yet, but it has been such a
stimulating read that I had to stop and write this post):

Making a modest batch of steel could take a week or more, and traditional techniques were carefully passed down from father to son; one Sheffield recipe started by adding the juice of four white onions.
Morris attributes the onion story to Thomas Misa's A Nation of Steel,
which is now on my reading list.
American steel displaced British steel not because it was based on the
Bessemer and open hearth processes (Bessemer was English), but because
the industry was built from the ground up along scientific lines, with no
craftsman-baggage slowing it down.
The interesting thing about this recipe for onion steel is that it
illustrates both the strengths and the weaknesses of the craftsman
sensibility. You can only imagine the tedious sort of uninformed
experimentation it took to consider adding onions to a steel recipe. There
is something beautiful about the absence of preconceived notions in this
sensibility. No modern metallurgist would even think to add onions to a
metal recipe.
On the other hand, if a modern metallurgist were faced with data
showing that onions improved the properties of steel, he or she would not
rest until theyd either disproved the effect, or explained it in less bizarre
terms. The recipe would certainly not get passed down from father to
son (mentor to mentee today) unexplained.
What America brought to manufacturing was a wholesale shift from
craftsman-and-merchant thinking about technology and business to
engineer-and-manager thinking. The shift affected every important 19th
century business sector: armaments, railroads, oil, steel, textile equipment.
And it created a whole new sector: the consumer market.
But this was not the result of an abstract, ideological quest for
scientific engineering and manufacturing, or a deliberate effort to replace
high-skill/high-wage craftsmen with low-skill/low-wage/interchangeable
machine operators.

It was a consequence of a relentless pursuit of interchangeability of parts, which in turn was a consequence of a pursuit of greater scale, profits
and competition for market share (which drove greater complexity in
offerings) on the vast geographic canvas that was America. Craft was
merely a casualty along the way.
So why was interchangeability of parts a holy grail in this pursuit?
Interchangeability, Complexity and Scaling
The problem is that even the highest-quality craft does not scale.
When something like a rifle is mass-produced using interchangeable parts,
breakdowns can be fixed using parts cannibalized from other broken-down
rifles (so two broken rifles can be mashed-up to make at least one that
works) or with spare parts shipped from a warehouse. Manufacturing can
be centralized or distributed in optimal ways, and constantly improved.
Production schedules can be decoupled from demand schedules.
A craftsman-made rifle, on the other hand, requires a custom-made/fitted replacement part. The problem is especially severe for an
object like a rifle: small, widely-dispersed geographically, and liable to
break down in the unfriendliest of conditions. Conditions where
minimizing repair time is of the essence, and skilled craftsmen are rather
thin on the ground. It is no surprise that the problem was first solved for
guns.
Lets do some pidgin math to get a sense of what a true mathematical
model might look like.
Roughly speaking, scaling production for any mechanical widget
involves three key dimensions: production volume V, structural
complexity S (the number of interconnections in an assembly is a good
proxy measure for S, just like the number of transistors on a chip is a good
proxy for its complexity) and operating tempo of the machine in use, T
(since the speed of operation of a machine determines the stress and wear
patterns, which in turn determines breakdown frequency; clock-rate is a
similar measure for Moores Law).

For complex widgets, scaling production isn't just (or even primarily) about making more new widgets; it is about keeping the widgets already in the field functioning for their design lifetime through post-sales repair and maintenance. The greater the complexity and cost, the
more the game shifts to post-sales.
You can combine the three variables to get a rough sense of
manufacturing complexity and how it relates to scaling limits. Something
like C=SxT provides a measure of the complexity of the artifact itself.
Breakdown rate B is some function of complexity and production
volumes, B=f(C, V). At some point, as you increase V, you get a
corresponding increase in B that overwhelms your manufacturing
capability. To complete this pidgin math model, you can think in terms of
some B_max=f(C, V_max) above which V cannot increase without
interchangeability.
Modern engineers use much more sophisticated measures (this crude
model does not capture the tradeoff between part complexity and
interconnection complexity for example, or the fact that different parts of a
machine may experience different stress/wear patterns), but for our
purposes, this is enough.
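To make the pidgin math a little more concrete, here is a minimal sketch in Python. The functional form of f and all the numbers are invented purely for illustration; nothing here is calibrated to real data:

# Pidgin model of scaling limits without interchangeable parts.
# C = S * T: artifact complexity from structural complexity S and tempo T.
# B = f(C, V): fleet-wide breakdown rate grows with complexity and volume.
# V_max is reached when breakdowns saturate craftsman repair capacity.

def complexity(S, T):
    return S * T

def breakdown_rate(C, V, k=0.01):
    # Assumed form: each fielded unit breaks down in proportion to its complexity.
    return k * C * V

def max_volume(C, craftsmen, repairs_per_craftsman=50, k=0.01):
    # B_max is the total repair capacity of the available craftsmen.
    B_max = craftsmen * repairs_per_craftsman
    return B_max / (k * C)

# Example: a rifle-like widget with 40 interconnections at tempo 2,
# supported by 200 craftsmen, tops out at a fleet of 12,500 units.
C = complexity(S=40, T=2)
print(breakdown_rate(C, V=12500), max_volume(C, craftsmen=200))
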
To scale production volume above V_max without introducing
interchangeability, you have to either lower complexity and/or tempo or
increase the number of skilled craftsmen. The first two are not options
when you are trying to out-do the competition in an expanding market.
That would be unilateral disarmament in a land-grab race. The last method
is simply not feasible, since education in a craft-driven industrial
landscape means long, slow and inefficient (in the sense that it teaches
things like onion recipes) 1:1 apprenticeship relationships.
There is one additional method that does not involve
interchangeability: moving towards disposability for the whole artifact,
which finesses the parts-replacement problem entirely. But in practice,
things get cheap enough for disposability to be a workable strategy only
after mass production is achieved. Disposability is rarely a cost-effective strategy for craft-driven manufacturing, though I can think of a few examples.
These facts of life severely limited the scale of early nineteenth
century technology. The more machines there are in existence, the greater
the proportion of craftsmen whose time must be devoted to repair and
maintenance rather than new production.
Since breakdowns are
unpredictable and parts unique, there is no way to stockpile an inventory
of spare parts cheaply. There is little room for cannibalization of parts in
the field to temporarily mitigate parts shortages.
What was needed in the 19th century was a decoupling of scaling
problems from manufacturing limitations.
Interchangeability and the Rise of Supply Chains
Interchangeability of parts breaks the coupling between scaling and
manufacturing capacity by substituting supply-chain limits for
manufacturing limits. For a rifle, you can build up a stockpile of spare
parts in peace time, and deliver an uninterrupted supply of parts to match
the breakdown rate. There is no need to predict which part might break
down in order to meaningfully anticipate and prepare. You can also
distribute production optimally (close to raw material sources or low-cost
talent for instance), since there is no need to locate craftsmen near the
point-of-use.
So when interchangeability was finally achieved and had diffused
through the economy as standard practice (a process that took about 65
years), demand-management complexity moved to the supply chain, and
most problems could be solved by distributing inventories appropriately.
These happy conditions lasted for nearly a century after widespread
interchangeability was achieved, from about 1880 to 1980, when supply
chains met their own nemesis, demand variability (that problem was
partially solved using lean supply chains, which relied in turn on the idea
of interchangeability applied to transportation logistics: container
shipping. But I wont get into that story here, since it is conceptually part
of the unfinished Moores Law story).

The price that had to be paid for this solution was that the American
economy had to lose the craftsmen and work with engineers, technicians
and unskilled workers instead. This creates a very different technology
culture, with different strengths and weaknesses. For example the scope of
innovation is narrowed by such codification and scientific systematization
of crafts (prima facie nutty ideas like onion steel are less likely to be
tried), but within the narrower scope, specific patterns of innovation are
greatly amplified (serendipitous discoveries like penicillin or x-rays are
immediately leveraged to the hilt).
Why must craft be given up? Even the best craftsmen cannot produce
interchangeable parts. In fact, the craft is practically defined by skill at
dealing with unique parts through carefully fitted assemblies.
(Interchangeability is of course a loose notion that can range from
functional replaceability to indistinguishability, but craft cannot achieve
even the coarsest kind of interchangeability at any meaningful sort of
scale).
Put another way, craft is about relative precision between unlike parts.
Engineering based on interchangeability is about objective precision
between like parts. One requires human judgment. The other requires
refined metrology.
From Armory Practice to the American System
It was the sheer scale of America, the abundance of its natural
resources (and the scarcity of its human resources), that provided the
impetus for automation and the interchangeable parts approach to
engineering.
As agriculture moved westward through New York, Pennsylvania and
Michigan, the older settled regions began to turn to manufacturing for
economic sustenance. The process began with the textile industry, born of
stolen British designs around what is now Lowell, Massachusetts. But
American engineering in the Connecticut river valley soon took on a
distinct character.

Like the OSD/DARPA/NASA driven technology boom after World War II, the revolution was driven by the (at the time, fledgling) American
military, which had begun to acquire a mature and professional character
after the war of 1812 (especially during the John Quincy Adams
administration).
The epicenter of the action was the Springfield Armory, the PARC of
its day, and outposts of the technology scene extended as far south as
Harpers Ferry, West Virginia.
John Hall was among the hundreds of pioneers who swarmed all over
the Connecticut valley region, dreaming up mechanical innovations and
chasing local venture capitalists, much like software engineers in Silicon
Valley today.
There were plenty of other extraordinary people, including other
mechanical engineering geniuses like Thomas Blanchard, inventor of the
Blanchard gun-stock lathe (which was actually a general solution for
turning any kind of irregular shape using what is known today as a pattern
lathe). By the time he was done with gun stocks (a bottleneck part in gun-making, with all sorts of subtle curves along multiple axes), he had created a system of 16 separate machines at the Springfield Armory that pretty much automated the whole process, squeezing all craft out of what had been the single most demanding component in gun-making.
British gun-making was like British steel-making before people like
Blanchard and Hall blew up the scene. Here is Morris again:
The workings of the British gun industry were
reasonably typical of the mid-nineteenth-century
manufacturing. It was craft-based and included at least
forty trades, each with its own apprenticeship system and
organizations. The gun-lock, the key firing mechanism, was
the most complicated, while the most skilled men were the
lock-filers... [who] spent years as apprentices learning to painstakingly hand-file the forty or so separate lock pieces to create a unified assembly... When the Americans breezily described machine-made stocks, and locks that required no hand fitting, they sounded as if they were smoking opium.
Among the opium-smoking geniuses, Blanchard at least enjoyed a
good deal of success. Hall did not.
He put together almost the entire American System through his
single-minded drive, in the technology-hostile Harpers Ferry location far
from the Connecticut Valley hub. When he was done, he had created an
integrated manufacturing system of dozens of machines that produced
interchangeable parts for every component of his carbine. Even parts from
production runs from different years could be interchanged, a standard
some manufacturing operations struggle to reach even today.
The achievement was based on relentless automation to eliminate
human sources of error, increasingly specialized machines, and rigorous
and precise measurements (there were three of every measurement
instrument, one for production use, one for calibration, and a master
instrument to measure wear on the other two).
It was a massive systems-engineering accomplishment. The Hall
carbine was the starter pistol for the American industrial revolution.
Overtake, Pause, Overdrive
Hall did not reap much of the reward. Thanks to unfortunate exploitative relationships (in particular with a shameless patent troll, William Thornton, a complete jerk by Morris's account), he was banished to Harpers Ferry rather than being allowed to work in Springfield. And
his work, when completed, was acknowledged grudgingly, and with poor
grace. The Hall carbine itself was obsolete by the time his system was
mature, and others who applied it to newer products reaped the benefits.
Between 1825 and the 1910s, the methods pioneered by Hall spread
through the region and beyond, and were refined and generalized. In the
process, first America, and then the world, experienced a Moores Law
type shock: rapidly increasing standards of living provided by an
increasing variety of goods whose costs kept dropping.

Culturally, the period can be divided into three partially overlapping phases: an overtake phase (1851-1876) when America clearly pulled
ahead of Britain as the first nation in the technology world, a pause
represented by the recession of the 1870s, and finally an over-drive phase
beginning in the 1880s and continuing to the beginning of World War I,
when the American model became the global model (and in particular, the
Russian model, as Taylorism morphed into state doctrine).
Overtake: 1851-1876
The overtake phase has a pair of useful bookend events marking it. It
began with the 1851 Crystal Palace Exhibition, the first of the great 19th
century world fairs, when the world began to suspect that America was up
to something (McCormicks harvester and Colts revolver were among the
items on display), and ended with the 1876 Centennial World Fair in
Philadelphia, when all remaining doubt was erased and it became obvious
that America had now comprehensively overtaken Britain in technology.
When Britain finally caught on and hastily began copying American
practices following the Philadelphia fair, the result was a revitalization of
British industry that produced, among other things, the legendary Enfield
rifle (the rifle subplot in the story of interchangeability has an interesting
coda that is shaping the world to this day, the Russian AK-47, as pure an
example of the power of interchangeability-based mass manufacturing as
has ever existed).
It wasnt just guns. In every industry America began to show up
Britain. Much of the credit went to showboating hustlers who claimed
credit for interchangeability and the American System/Armory Practice,
and made a lot of money without actually contributing very much to core
technological developments. These included Eli Whitney of cotton gin
fame, the McCormicks of the harvester, Samuel Colt (revolvers) and Isaac
Singer (sewing machines). While they certainly contributed to the
development of individual products, the invention of the American model
itself was due to technologists like Blanchard and John Hall.

In the initial decades of the overtake, fueled in part by opportunity (and profiteering) associated with the Civil War and the government-subsidized building out of the railroad system, much of the impact was
invisible. But by the 1890s, as the infrastructure phase was completed, the
same methods were unleashed on everyday life, creating modern
consumer culture and the middle class within the short space of a single
generation.
The Pause: the 1870s
The Civil War looms large as the major political-economic event in this history (1861-1865), but the bulk of the impact was felt in the decade that followed, once the dust had settled and interrupted infrastructure projects were completed.
This impact took the form of the rather strange long recession of the 1870s, which was culturally very similar to the one we are currently experiencing (increased economic uncertainty and a fall in nominal incomes, hidden technology-driven increases in the standard of living, foundational shifts in the nature of money; back then it was a greenbacks vs. gold thing).
One way to understand this process is that the infrastructure phase had
created both tycoons and an extremely over-leveraged economy. It was the
uncertain gap between "build it" and "they will come." It was a huge,
collective pause, a national decade of breath-holding as people wondered
whether the chaos unleashed by the new infrastructure would create a
better social order or destroy everything without creating something new
in its place.
Starting in the 1880s, the bet began paying off in spades. The
recession ended and the over-drive boom began, as people figured out
what to do with the newfound capabilities in their environment.
Overdrive: 1880s-1913
A good early marker here is probably the first Montgomery Ward
catalog in 1872, the first major sign that the new infrastructure allowed old businesses to be rethought, leading to the creation of the modern consumer economy.
The mail-order catalog was by itself a simple idea (the first catalog
was just a single page), but the reason it disrupted old-school merchants
was that it relied on all the infrastructure complexity that now existed.
Trains that ran on reliable schedules to deliver mail, telegraph lines
that brought instant price updates on western grain to the East Coast, steel
to build everything, oil and electricity to light up (and later, fuel)
everything, new financial systems to move money around, and of course,
the application of interchangeability technology to everything in sight.
It took Sears, starting in 1888, to scale the idea and truly take down
the merchant elites who had defined the old business culture, but by World
War I, middle-class consumer culture had emerged and had come to define
America. In another 50 years, it would come to define the world.
It was such a powerful boom that globally, it lasted a century, with
two world wars and a Great Depression failing to arrest its momentum (as
an aside, I wonder why people pay so much attention to the 1930s
depression to make sense of the current recession; the 1870s recession
makes for a far more appropriate comparison).
What ultimately killed it was its own success. Semiconductor
manufacturing probably represents the crowning achievement of the
Armory Practice/American System that began with a lonely John Hall
pushing ahead against all odds at Harpers Ferry.
Moores Law was born as the last and greatest achievement of the
parent it ultimately devoured: Halls Law.
Halls Law
When you step back and ponder the developments between 1825 and
1919, it can be hard to make sense of all the action.

There is the pioneering work in manufacturing technology. There is the explosion of different product types as the American System diffused
through the industrial landscape. There is the story of the rise of the first
tycoons. There is the rise of consumerism and the gradual emergence of
the middle class. There is the connectivity by steam and telegraph.
Then there is the increasingly confident and strident American
presence on the global scene (especially through the World Fairs, two of
which I already talked about). And of course, you have the Civil War, the
California Gold Rush, the cowboy culture that existed briefly (and
permanently reshaped the American identity) before Jay Gould killed it by
finishing the railroad system.
There was the rise of factory farming and the meatpacking and
refrigerator-car industries together killing the urban butcher trade and
suddenly turning Americans into the greatest meat eaters in history.
Paycheck economics took over as the tycoon economy killed the free
agent.
In fact, there was a lot going on, to put it mildly. And that was just
America. The rest of the world wasn't exactly enjoying peace and stability either. Perry had kicked down the doors of Japan, the Opium Wars had ravaged China, the East India Company (the star of my History of Corporations post) had been quietly put out to pasture, and the Mughal empire had collapsed. The Ottomans were starting on a terminal decline. Continental Europe had begun its century-long post-Napoleon march towards World War I (the US Civil War served as a beta test for the post-Bismarck model of total war, just as the Spanish Civil War served as a beta test for World War II).
But just as Moores Law provides something of a satisfying
explanatory framework for almost everything that has happened in the last
50 years, the drive towards the holy grail of interchangeability provides a
satisfying explanatory framework for much of this action. Here's my attempt at capturing what happened (someone enlighten me if something like this has already been proposed under a different name):

Halls Law: the maximum complexity of artifacts that can be manufactured at scales limited only by resource availability doubles every 10 years.
I believe this law held between 1825 and 1960, at which point the law
hit its natural limits.
Here, I mean complexity in the loose sense I defined before: some
function of mechanical complexity and operating tempo of the machine,
analogous to the transistor count and clock-rate of chips.
I don't have empirical data to accurately estimate the doubling period, but 10 years is my initial guess, based on the anecdotal descriptions in Morris's book and the descriptions of the increasing presence of technology in the world fairs.
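Stated as a formula, this is just a doubling curve. Here is a minimal sketch, using my guessed 1825 baseline and 10-year doubling period (both of which are speculative, as noted above):

def halls_law_ceiling(year, baseline_year=1825, baseline=1.0, doubling_years=10):
    # Maximum complexity manufacturable at scale, doubling every 10 years.
    return baseline * 2 ** ((year - baseline_year) / doubling_years)

# Under these guesses, the ceiling rises roughly 10,000-fold by 1960,
# the point at which I think the law hit its natural limits.
print(halls_law_ceiling(1960) / halls_law_ceiling(1825))
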
Along the complexity dimension, mass-produced goods rapidly got more complex, from guns with a few dozen parts to late-model steam engines with thousands. The progress on the consumer front was no less impressive, with the Montgomery Ward catalog offering mass-produced pianos within a few years of its introduction, for instance. By the turn of the century, you could buy entire houses in mail-order kit form.
The cost of everything was collapsing.
Along the tempo dimension, everything got relentlessly faster as well. Somewhere along the way, things got so fast, thanks to trains and the telegraph, that time zones had to be invented and people had to start paying attention to the second hand on clocks.
There is a ton of historical research on all aspects of this boom, but I
suspect nobody has yet compiled the data in a form that can be used to fit
a complexity-limit growth model and figure out the parameters of my
proposed Halls Law, since it is the sort of engineering-plus-history
analysis that probably has no hope of getting any sort of research funding
(it would take some serious archaeology to discover the part-count,
operating speed and production volumes for a sufficient number of sample
products through the period to fit even my simple model, let alone a model
that includes things like breakdown rates and actual, as opposed to
theoretical, interchangeability).

But even without the necessary empirical grounding, I am fairly sure the model would turn out to be an exponential, just like Moores Law.
Nothing else could have achieved that kind of transformation in that short
a period, or created the kind of staggering inequality that emerged by the
Gilded Age.
Break Boundaries and Tycoon Games
Both Moores Law and Halls Law, in the speculative form that I have proposed, are exponential trajectories. These trajectories generally emerge
when some sort of runaway positive-feedback process is unleashed,
through the breaking of some boundary constraint (the term break
boundary is due to Marshall McLuhan).
The positive-feedback part is critical (if you know some math, you
can guess why: a doubling law in a difference/differential equation form
has to be at least a first-order process; something like compound interest,
if you dont know what the math terms mean).
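For those who do know the math terms, the point can be written out in one line (a standard observation about exponential growth, not anything specific to these two laws):

\[
\frac{dC}{dt} = kC
\;\;\Longrightarrow\;\;
C(t) = C_0\, e^{kt},
\qquad
T_{\mathrm{doubling}} = \frac{\ln 2}{k}
\]

A constant doubling period is exactly what you get when the growth rate of C is proportional to C itself, which is why some self-applicable, positive-feedback mechanism has to be present.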
Loosely speaking, this implies a technological process that can be
applied to itself, improving it. Better machines with interchangeable parts
also means better machine tools that are themselves made with
interchangeable parts and therefore can run continuously at higher speeds,
with low downtime. Computers can be used to design more complex
computers. This is not true of all technological processes. Better plastics
do not improve your ability to make new plastics, for instance, since they
do not play much of a role in their own manufacturing processes.
This is the inner, technological positive-feedback loop (think of an
entire technology sector engaging in a sort of 10,000 hours of deliberate
practice; a major sign is that the most talented people turn to tool-building:
Blanchard and Hall for Halls Law, people like the late Dennis Ritchie and
Linus Torvalds for Moores Law).
But the technological positive-feedback loop requires an outer
financial positive-feedback loop around it to fuel it. You need conditions
where the second million is easier to make than the first million.

This means tycoons who spot some vast new opportunity and play
land-grabbing games on a massive scale.
Both Halls Law and Moores Law led to wholesale management and
financial innovation by precisely such new tycoons.
For Halls Law, the process started with Cornelius Vanderbilt, the hero of T. J. Stiles's excellent The First Tycoon, who figured out how to tame the strange new beast that was the post-East-India-Company corporation, and in the process sidelined old money.
It is revealing that Vanderbilt was blooded in business through a
major legal battle for steamboat water rights, Gibbons vs. Ogden (1824), which helped define the relationship of corporations to the rest of society.
From there, he went from strength to strength, inventing new business and
financial thinking along the way. Only in his old age did he finally meet
his match: Jay Gould, who would go on to become the archetypal Robber
Baron, taking over most of Vanderbilts empire from his not-so-talented
children.
Vanderbilt was something of a transition figure. He straddled both
management and finance, and old and new economies: he was a cross
between an old-economy merchant-pirate in the Robert Clive mold (he ran
a small war in Nicaragua for instance) and a new-economy corporate
tycoon. He transcended the categories that he helped solidify, which
helped define the next generation of tycoons.
Among the four tycoons in Morris's book, Rockefeller (Chernow's Titan on Rockefeller is another must-read) and Carnegie appear on one
side, as the archetypes of modern managers and CEOs. Both were masters
of Wall Street as well, but were primarily businessmen.
On the financial side, we find the Joker-Batman pair of Gould and
Morgan. Jay Gould was the loophole-finder-and-destabilizer; J. P. Morgan
was the loophole-closer and stabilizer. While Gould was a competent, if
unscrupulous manager during the brief periods that he actually managed
the companies he wrangled, he was primarily a financial pirate par
excellence.

It makes for a very good story that he made his name by giving the
elderly Vanderbilt, who pretty much invented the playbook along with his
friends and rivals, the only financial bloody nose of his life (though
Vanderbilt exacted quite a revenge before he died). Through the rest of
his career, he exposed and exploited every single flaw in the fledgling
American corporate model, turning crude Vanderbilt-era financial tactics
into a high art form. When he was done, he had generated all the data
necessary for J. P. Morgan to redesign the financial system in a much
stronger form.
Morgan's model would survive for a century until the Moores Law
era descendants of Gould (the financial pirates of the 1980s) started
another round of creative destruction in the evolution of the corporate
form.
From Halls Law to Moores Law
Halls Law was the prequel to Moores Law in almost every way. The
comparison is not a narrow one based on just one dimension like finance
or technology. It spans every important variable. Here is the corresponding
Double Freytag:

I'll save my analysis of the Moores Law era for another day, but here is a short point-by-point mapping/comparison of fundamental dynamics (i.e. things that were a consequence of the fundamental dynamics rather than historical accidents).
1. Obviously Halls Law maps to Moores Law
2. Increasing interchangeability in mechanical engineering maps to increasing transistor counts in semiconductor manufacturing. Increasing machine speeds map to increasing chip clock-rates.
3. Both technologies radically drove down costs of goods and
created de facto higher standards of living
4. Both technologies saw the emergence of a new breed of
tycoons within a few leadership generations. Jack Welch maps
to Cornelius Vanderbilt. Bill Gates and Michael Dell map to
Rockefeller and Carnegie. Jeff Bezos maps to Montgomery
Ward and Sears.
5. The newer, younger digital-native tycoons, starting with Zuckerberg, map to the post-1890 third-generation innovators who were native to the new world of interchangeability rather than pioneers, similar to the early 20th century automobile and airplane industry tycoons (it is revealing that the Wrights were bicycle mechanics; bicycles were the first major consumer product to be designed around interchangeability from the ground up; the airplane was a result of the careful application of precisely the sorts of scientific measurement, experimentation and optimization that had been developed in the previous 75 years).
6. Each era was punctuated in the middle by a recessionary
decade marked by financial excesses, as the economy retooled
around the new infrastructure. The 1870s maps to the 2000s.
7. Each era enabled, and was in turn fueled by, new kinds of
warfare, exemplified by major wars that disturbed a balance of
power that had been maintained by old technology. The
American Civil War maps to the Cold War, while the wars of
the 1990s and 2000s are analogous to World War I.
8. Guns (including high-tempo machine guns) with
interchangeable parts map to nuclear weapons. John Hall's stint at Harpers Ferry was the Manhattan Project of its day
(here the mapping is not exact, since semiconductors were
spawned by the military-industrial research infrastructure

around electronics that emerged after World War II, rather than
through the Manhattan project itself).
9. Lincoln's assassination is eerily similar to Kennedy's. Just
checking to see if you are still paying attention. The first
person to call bullshit on this point gets a free copy of The
Tycoons.
10. The Internet and container shipping taken together are to
Moores Law as the railroad, steamship and telegraph networks
taken together were to Halls Law. The electric power grid
provides the continuity between Halls Law and Moores Law.
11. Each era changed employment patterns and class structures
wholesale. Halls Law destroyed nobility-based social
structures, created a new middle class defined by educational
attainments and consumer goods, and created paycheck
employment. Moores Law is currently destroying each of these things, creating a Trading Up class and a new model of free agency, and killing education-based reputation models.
12. A new mass entertainment model started in each case. With
Halls Law it was Broadway (which led on to radio, movies
and television). With Moores Law, Id say the analogy is to
reality TV, which like Broadway represents new-era content in
an old-era medium.
13. At the risk of getting flamed, Id say that Seth Godin is
arguably the Horatio Alger of today, but in a good way.
Somebody has to do the pumping-up and motivating to inspire
the masses to abandon the old culture and embrace the new by
offering a strong and simple message that is just sound enough
to get people moving, even if it cannot withstand serious
scrutiny.
14. Halls Law led on to the application of its core methods to
people, leading to new models of high-school and college
education and eventually the perfect interchangeable human,
The Organization Man. Moores Law is destroying these
things, and replacing them with Y-Combinator style education
and co-working spaces (this will end with the Organization
Entrepreneur, a predictably-unique individual, just like
everybody else).

15. Halls Law led to the industrial labor movement. Moores Law
is leading to a new labor movement defined, in its early days,
by things like standardized term-sheets for entrepreneurs (the 5-day/40-hour-week issue of our times; YC-entrepreneurs are decidedly not the new capitalists. They are the new labor. That's a whole other post).
16. And perhaps most importantly, each era suffered an early crisis
of financial exploitation which led first to loophole closing,
and then to a new financial system and corporate governance
model. Jay Gould maps to the architects of the subprime crisis.
No J. P. Morgan figure has emerged to really clean up the
mess, but new corporate models are already emerging that look
so unlike traditional ones that they really shouldnt be called
corporations at all (hence the pointless semantic debate around
my history of corporations post; it is really irrelevant whether
you think corporations are dying or being radically reinvented.
You are talking about the same underlying creative-destruction
reality).
The New Gilded Age
When Mark Twain coined the term Gilded Age, he wasn't exactly being complimentary. For some reason, the term seems to be commonly used as a positive one today, by those who want to romanticize the period.
I started to read the book and realized that Twain had completely missed the point of what was happening around him (the focus of the novel is political corruption, an element that loomed large back then, but was ultimately a sideshow), so I abandoned it.
But he got one thing right: the name.
Halls Law created a culture that was initially a layer of fake gloss on
top of much grimmer realities. Things were improving dramatically, but it
probably did not seem like it at the time, thanks to the anxiety and
uncertainty. Just as you and I arent exactly celebrating the crashing cost
of computers in the last two decades, those who lived through the 1870s
were more worried about farming moving ever westward (outsourcing)

and strange new status dynamics that made them uncertain of their place
in the world.
It took time for Gilded to turn into Golden (about 50 years by my estimate; things became truly golden only after World War II). There were decades of turmoil which made the lives of the transitional generations quite miserable. The 1870s were a you'll-thank-me-later decade, but for those who lived through the decade in misery, that is no consolation.
I abandoned The Gilded Age within a few pages. It is decidedly
tedious compared to Tom Sawyer and Huckleberry Finn. Sadly, Twain's affection for a vanishing culture, which made him such an able observer of
one part of American life, made him a poor observer of the new realities
taking shape around him.
He makes a personal appearance in the stories of both Vanderbilt and
Rockefeller, and appears to have strongly disliked the former and admired
the latter, though both were clearly cut from the same cloth.
To my mind, Twain's best stab at describing the transformation (probably A Connecticut Yankee in King Arthur's Court; note the significance of Connecticut) is much worse than the attempts of younger writers like Edith Wharton and later, of course, everybody from Horatio Alger to F. Scott Fitzgerald.
We are clearly living through a New Gilded Age today, and Bruce Sterling's term Favela Chic (rather unfortunately cryptic; perhaps we should call it Painted Slum) is effectively analogous to Gilded Age.
We put on brave faces as we live through our rerun of the 1870s. We
celebrate the economic precariousness of free agency as though it were a
no-strings-attached good thing. We read our own Horatio Alger stories,
fawn over new Silicon Valley millionaires and conveniently forget the
ones who dont make it.
New Media tycoons like Arrington and Huffington fight wars that
would have made the Hearsts and Pulitzers of the Gilded Age proud, while
we lesser bloggers go divining for smaller pockets of attention with

dowsing rods, driven by the same romantic hope that drove the tragicomic
heroes of P. G. Wodehouse novels to pitch their plays to Broadway
producers a century ago.
History is repeating itself. And the rerun episode we are living right
now is not a pleasant one.
The problem with history repeating itself, of course, is that sometimes it does not. The fact that 1819-1880 maps pretty well to 1959-2012 does not mean that 2012-2112 will map to 1880-1980. Many things are different this time around.
But assuming history does repeat itself, what are we in for?
If the Moores Law endgame is the same century-long economic overdrive that was the Halls Law endgame, today's kids will enter the adult world with prosperity and a fully-diffused Moores Law all around them.
The children will do well. In the long term, things will look up.
But in the long term, you and I will be dead.
Some thanks are due for this post. It was inspired in part by Chris
McCoy of YourSports.com, who badgered me about the Internet =
Railroad analogy enough that I was motivated to go hunt for the best
place to anchor a broader analogy. His original hypothesis is now the
generalized point 10 of my list. Thanks also to Nick Pinkston for
interesting discussions on the future of post-Moores Law manufacturing;
the child may resurrect its devoured parent after all. Also thanks to
everybody who commented on the History of Corporations piece.

Hacking the Non-Disposable Planet


April 18, 2012
Sometime in the last few years, apparently everybody turned into a
hacker. Besides computer hacking, we now have lifehacking (using
tricks and short-cuts to improve everyday life), body-hacking (using
sensor-driven experimentation to manipulate your body), college-hacking
(students who figure out how to get a high GPA without putting in the
work) and career-hacking (getting ahead in the workplace without paying
your dues). The trend shows no sign of letting up. I suspect well soon
see the term applied in every conceivable domain of human activity.
I was initially very annoyed by what I saw as a content-free
overloading of the term, but the more I examined the various uses, the
more I realized that there really is a common pattern to everything that is
being subsumed by the term hacking. I now believe that the term hacking
is not over-extended; it is actually under-extended. It should be applied to
a much bigger range of activities, and to human endeavors on much larger
scales, all the way up to human civilization.

I've concluded that we're reaching a technological complexity threshold where hacking is going to be the main mechanism for the further
evolution of civilization. Hacking is part of a future thats neither the
exponentially improving AI future envisioned by Singularity types, nor the
entropic collapse envisioned by the Collapsonomics types. It is part of a
marginally stable future where the upward lift of diminishing-magnitude
technological improvements and hacks just balances the downward pull of
entropic gravity, resulting in an indefinite plateau, as the picture above
illustrates.
I call this possible future hackstability.
Hacking as Anti-Refinement
Hacking is the term we reach for when trying to describe an
intelligent, but rough-handed and expedient behavior aimed at
manipulating a complicated reality locally for immediate gain. Two
connotations of the word hack, rough-hewing and mediocrity, apply to
some extent.
Ill offer this rather dense definition that I think covers the
phenomenology, and unpack it through the rest of the post.
Hacking is a pattern of local, opportunistic
manipulation of a non-disposable complex system that
causes a lowering of its conceptual integrity, creates
systemic debt and moves intelligence from systems into
human brains.
By this definition, hacking is anti-refinement. It is therefore a
barbarian mode of production because it moves intelligence out of systems
and into human brains, making those human brains less interchangeable.
Yet, it is not the traditional barbarian mode of predatory destruction of a
settled civilization from outside its periphery.
Technology has now colonized the planet, and there is no outside
for anyone to emerge from or retreat to. Hackers are part of the system,
dependent on it, and aware of its non-disposable nature. In evolutionary

terms, hacking is a parasitic strategy: weaken the host just enough to feed
off it, but not enough to kill it.
Breaching computer systems is of course the classic example. Another
example is figuring out hacks to fall asleep faster. A third is coming up
with a new traffic pattern to reroute traffic around a temporary
construction site.

In our first example, the hacker has discovered and thought through the implications of a particular feature of a computer
system more thoroughly than the original designer, and
synthesized a locally rewarding behavior pattern: an exploit.
In our second example, the body-hacker has figured out a way
to manipulate sleep neurochemistry in a corner of design space
that was never explored by the creeping tendrils of evolution,
because there was never any corresponding environmental
selection pressure.
In our third example, the urban planner is creating a temporary
hack in service of long-term systemic improvement. The hacker
has been co-opted and legitimized by a subsuming system that
has enough self-awareness and foresight to see past the
immediate dip in conceptual integrity.

Urban planning is a better prototypical example to think about when talking about hacking than software itself, since it is so visual. Even
programmers and UX designers themselves resort to urban planning
metaphors to talk about complicated software ideas. If you want to ponder
examples for some of the abstractions I am talking about here, I suggest
you think in terms of city-hacking rather than software hacking, even if
you are a programmer.
For the overall vision of hackstability, think about any major urban
region with its never-ending construction and infrastructure projects
ranging from emergency repairs to new mass-transit or water/sewage
projects. If a large city is thriving and persisting, it is likely hackstable.
Increasingly, the entire planet is hackstable.

The atomic prototype of hacking is the short-cut. The urban planner has a better map and understands cartography better, but in one small neighborhood, some little kid knows a shorter, undocumented A-to-B path than the planner, even though the planner laid out the streets in the first place. What's more, the short-cut may connect points on the map that are
otherwise disconnected for non-hackers, because the documented design
has no connections between those points.
Disposability and Debt
I got to my definition of hacking after trying to assemble a lot of folk
wisdom about programming into a single picture.
The most significant piece for me was Joel Spolsky's article about things you should never do, and in particular his counter-argument to Frederick Brooks' famous idea that you should plan to throw one away
(the idea in software architecture that you should plan to throw away the
first version of a piece of software and start again from scratch).
Spolsky offers practical reasons why this is a bad idea, but what I took
away from the post was a broader idea: that it is increasingly a mistake to treat any technology as disposable. Technology is fundamentally not a do-over game today. It is a cumulative game. This has been especially true in the last century, as all technology infrastructure has gotten increasingly inter-connected and temporally layered into techno-geological strata of varying levels of antiquity. We should expect to see disciplines emerge with labels like techno-geography, techno-geology and techno-archaeology. Some layers are functional (techno-geologically active),
while others are compressed garbage, like the sunken Gold Rush era ships
on which parts of San Francisco are built.
Non-disposability along with global functional and temporal
connectedness means technology is a single evolving entity with a
memory. For such systems the notion of technical debt, due to Ward
Cunningham, becomes important:
Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.
For me, the central implicit idea in the definition is the notion of
disposability. Everything hinges on whether or not you can throw your
work away and move on. We are so used to dealing with disposable things
in everyday consumer life that we dont realize that much of our
technological infrastructure is in fact non-disposable.
How ubiquitous is non-disposability? I am tempted to conclude that
almost nothing of significance is disposable. And by that I mean
disposable with insignificant negative consequences of course. Anything
can be thrown away if you are willing to pay the costs.
Your body, New York City and the English language are obviously
non-disposable. Reacting to problems with those things and trying to do
over is either impossible or doomed. The first is impossible to even do
badly today. You can try to do over New York City, but youll get
something else that will probably not serve. If you try to do-over English,
you get Esperanto.
Obviously, the bigger the system and the more interdependent it is with its technological environment, the harder it is to do it over. The dynamics of technical debt naturally lead us to non-disposability, but let's make the connection explicit and talk about the patchwork of hacks and workarounds in a complex system that, as Spolsky argues, represents value that should not be thrown away.
Quantified Technical Debt and Metis
If a system must last indefinitely, cutting corners in an initial design
leads to a necessary commitment to doing it right later. This deferral is
due to lack of both resources and information in an initial design. You lack
the money/time and the information to do it right.

When a new contingency arises, some of the missing information becomes available. But resources do not generally become available at the same time, so the design must be adapted via cheaper improvisation to deal with the contingency (a hack) and the real solution deferred. A
hack turns an unquantified bit of technical debt into a quantified bit: when
you have a hack, you know the principal, interest rate and so forth.
It is this quantified technical debt that is the interesting quantity. The designer's original vague sense of incompleteness and inadequacy becomes sharply defined once a hack has exposed a failing, illuminated its costs, and suggested a more permanent solution. The new information revealed by the hack is, by definition, not properly codified and embedded in the system itself, so most of it must live in a human brain as tacit design intelligence (the rest lives in the hack itself, representing the value that Spolsky argues should not be thrown away).
When you have a complex and heavily-used, but slowly-evolving
technology, this tacit knowledge accumulating in the heads of hackers
constitutes what James Scott calls metis. Distributed and contentious
barbarian intelligence. It can only be passed on from human to human via
apprenticeship, or inform a risky and radical redesign that codifies and
embeds it into a new version of the system itself. The longer you wait, the
more the debt compounds, increasing risk and the cost of the eventual
redesign.
Technological Deficit Economics
This compounding rate is very high because the longer a system
persists, the more tightly it integrates into everything around it, causing
co-evolution. So eventually replacing even a small hack in a relatively
isolated system with a better solution turns into a planet-wide exercise, as
we learned during Y2K.
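The compound-interest framing can be made literal with a toy model. Everything here is invented for illustration (the rates are not estimates of anything real); the point is only the shape of the curve:

# Toy model of compounding technical debt (all parameters invented).
# Each period, new hacks add a little fresh debt, and existing debt
# compounds as the system co-evolves with everything around it.

def debt_trajectory(periods, new_debt_per_period=1.0, compounding_rate=0.15):
    debt = 0.0
    history = []
    for _ in range(periods):
        debt = debt * (1 + compounding_rate) + new_debt_per_period
        history.append(debt)
    return history

# The eventual redesign cost (roughly, the accumulated debt) quickly
# outgrows the steady trickle of hacks that produced it.
print([round(d, 1) for d in debt_trajectory(10)])
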
Isolated technologies also get increasingly situated over time, no
matter how encapsulated they appear at conception, so that what looks like
a do-over from the point of view of a single subsystem (say Linux)
looks like a hack with respect to larger, subsuming systems (like the Internet). So debt accumulates at levels of the system that no individual agent is nominally responsible for. This is collective, public technical debt.
Most complex technologies incur quantified technical debt faster than
they can pay it off, which makes them effectively non-disposable. This
includes non-software systems. Sometimes the debt can be ignored
because it ends up being an economic externality (pollution for
automobiles, for instance), but the more all-encompassing the system gets,
the less room there is for anything to be an unaccounted-for externality.
The regulatory environment can be viewed as a co-evolving part of
technology and subject to the same rules. The US Constitution and the tax code, for instance, started off as high-conceptual-integrity constructs which
have been endlessly hacked through case law and tax code exceptions to
the point that they are now effectively non-disposable. It is impossible, as
a practical matter, to even conceptualize a Constitution 2.0 to cleanly
accommodate the accumulated wisdom in case law.
In general, following Spolsky's logic through to its natural conclusion, it is only worth throwing a system away and building a new
one from scratch when it is on the very brink of collapse under the weight
of its hacks (and the hackers on the brink of retirement or death,
threatening to take the accumulated metis with them). The larger the
system, the costlier the redesign, and the more it makes sense to let more
metis accumulate.
Beyond a certain critical scale, you can never throw a system away
because there is no hope of ever finding the wealth to pay off the
accumulated technical debt via a new design. The redesign itself
experiences scope creep and spirals out of the realm of human capability.
All you can hope for is to keep hacking and extending its life in
increasingly brittle ways, and hope to avoid a big random event that
triggers collapse. This is technological deficit economics.
Now extend the argument to all of civilization as a single massive
technology that can never be thrown away, and you can make sense of the
idea of hackstability as an alternative to collapse. Maybe if you keep hacking away furiously enough, and grabbing improvements where possible, you can keep a system alive indefinitely, or at least steer it to a safe soft-landing instead of a crash-landing.
Hacker Folk Theorems
With disposability as the anchor element, we can try to arrange a lot
of the other well-known pieces of hacker folk-wisdom into a more
comprehensive jigsaw puzzle view.
The pieces of wisdom are actually precise enough that I think of them as folk theorems (item 5 actually suggests a way to model hackstability mathematically as a sort of hydrostatic, bug-o-static equilibrium; a toy sketch follows the list below):
1. "Given enough eyeballs, all bugs are shallow." (Linus's Law, formulated by Eric S. Raymond)
2. "Perspective is worth 80 IQ points." (Alan Kay)
3. "Fixing a bug is harder than writing the code." (not sure who first said this)
4. "Reading code is harder than writing code." (Joel Spolsky)
5. "Fixing a bug introduces 2 more." (not sure where I first encountered this quote)
6. "Release early, release often." (Eric S. Raymond)
7. "Plan to throw one away." (Frederick Brooks, The Mythical Man-Month)
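As a purely illustrative aside (my own toy model, not something from any of the sources above), item 5 can be read as a simple birth-death process for open bugs: entropy adds a steady trickle of new bugs, hacking fixes some fraction of the visible ones, and each fix spawns a few more. A minimal sketch in Python, with made-up parameters:

# Toy "bug-o-static" equilibrium sketch; all numbers are invented for illustration.
def simulate(steps=50, bugs=100.0, entropy_rate=5.0, fix_rate=0.3, spawn_per_fix=0.9):
    history = []
    for _ in range(steps):
        fixed = fix_rate * bugs          # hacking effort scales with visible bugs
        spawned = spawn_per_fix * fixed  # "fixing a bug introduces N more"
        bugs = bugs - fixed + spawned + entropy_rate
        history.append(bugs)
    return history

print(round(simulate()[-1], 1))

With spawn_per_fix below 1, the bug count settles near entropy_rate / (fix_rate * (1 - spawn_per_fix)): a hackstable equilibrium. With spawn_per_fix at or above 1, the same loop diverges, which is the collapse scenario.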
Take a shot at using these ideas to put together a picture of how
complex technological systems evolve, using the definition of hacking that
I offered and the idea of technical debt as the anchor element (I started
elaborating on this full picture, but it threatened to run to another 5000
words).
When youre done, you may want to watch (or rewatch) Alan Kays
talk, Programming and Scaling, which Ive referenced before.
I don't know of any systematic studies of the truth of these folk-wisdom phenomena (I think I saw one study of the bugs-eyeballs conjecture that concluded it was somewhat shaky, but I can't find the reference). But I have anecdotal evidence from my own limited experience with engineering, and somewhat more extensive experience as a product manager, that all the statements have significant substance behind them.
So these are not casual, throwaway remarks. Each can sustain hours
of thoughtful and stimulating debate between any two people whove
worked in technology.
The Ubiquity of Hacking
At this point, it is useful to look for more examples that fit the
definition of hacking I offered. The following seem to fit:
1. The pick-up artist movement should really be called female-brain hacking (or alternatively, alpha-status hacking)
2. Disruptive technologies represent market-hacking.
3. Lifestyle design can be viewed as standard-of-living hacking
4. One half of the modern smart city/neo-urbanist movement can be understood as city-hacking (smart cities includes clean-sheet high-modernist smart cities in China, but let's leave those out)
5. All of politics is culture hacking
6. Guerrilla warfare and terrorism represent military hacking
7. Almost the entire modern finance industry is economics-hacking
8. Most intelligence on both sides of any adversarial human table
(VCs vs. entrepreneurs, interviewers vs. interviewees) is
hacker intelligence.
9. Fossil fuels represent energy hacking
Looking at these, it strikes me that not all examples are equally
interesting. Anything that has the nature of a human-vs.-human arms race
(including the canonical black-hat vs. white-hat information security race
and PUA) is actually a pretty wimpy example of hackstability dynamics.
The really interesting cases are the ones where one side is a human
intelligence, and the other side is a non-human system that simply gets
more complex and less disposable over time.

But interesting or not, all these are really interconnected patterns of hacking in what is increasingly Planet Hacker.
The Third Future
So what is the hackstable future? What reason is there to believe that
hacking can keep up with the downward pull of entropy? I am not entirely
sure. The way big old cities seem to miraculously survive indefinitely on
the brink of collapse gives me some confidence that hackstability is a
meaningful concept.
Collapse is the easiest of the three scenarios to understand, since it
requires no new concepts. If the rate of entropy accumulation exceeds the
rate at which we can keep hacking, we may get sudden collapse.
The Singularity concept relies on a major unknown-unknown type
hypothesis: self-improving AI. A system that feeds on entropy rather than
being dragged down by it. This is rather like Taleb's notion of anti-fragility, so I am assuming there are at least a few credible ideas to be discovered here. These I have collectively labeled autopoietic lift. Anti-gravity for complex systems that are subject to accumulating entropy, but are (thermodynamically) open enough that they might still evolve in complexity. So far, we've been experiencing two centuries of lift as the
result of a major hack (fossil fuels). It remains to be seen whether we can
get to sustainable lift.
Hackstability is the idea that well get enough autopoietic lift through
hacks and occasional advances in anti-fragile system design to just balance
entropy gravity, but not enough to drive exponential self-improvement.
Viewed another way, it is a hydrostatic balance between global hacker
metis (barbarian intelligence) and codified systemic intelligence
(civilizational intelligence). In this view, hackstability is the slow
dampening of the creative-destruction dialectic between barbarian and
civilized modes of existence that has been going on for a few thousand
years. If you weaken the metis enough, the system collapses. If you strengthen it too much, again it collapses (a case of the hackers shorting the system as predators rather than exploiting it parasitically).
I dont yet know whether these are well-posed concepts.
I am beginning to see the murky outlines of a clean evolutionary
model that encompasses all three futures though. One with enough
predictive power to allow coarse computation of the relative probabilities
of the three futures. This is the idea Ive labeled the Electric Leviathan,
and chased for several years. But it remains ever elusive. Each time I
think I've found the right way to model it, it turns out I've just missed my mark. Maybe the idea is my white whale and I'll never manage a digital-age update to Hobbes.
So I might be seeing things. In a way, my own writing is a kind of
idea-hacking: using local motifs to illuminate some sort of subtlety in a
theme and invalidate some naive Grand Unified Theory without offering a
better candidate myself. Maybe all I can hope for is to characterize the
Electric Leviathan via a series of idea hacks without ever adequately
explaining what I mean by the phrase.

Welcome to the Future Nauseous


May 9, 2012
Both science fiction and futurism seem to miss an important piece of
how the future actually turns into the present. They fail to capture the way
we dont seem to notice when the future actually arrives.
Sure, we can all see the small clues all around us: cellphones, laptops,
Facebook, Prius cars on the street. Yet, somehow, the future always seems
like something that is going to happen rather than something that is
happening; future perfect rather than present-continuous. Even the nearest
of near-term science fiction seems to evolve at some fixed receding-horizon distance from the present.
There is an unexplained cognitive dissonance between changing-reality-as-experienced and change as imagined, and I don't mean specifics of failed and successful predictions.
My new explanation is this: we live in a continuous state of
manufactured normalcy. There are mechanisms that operate (a mix of natural, emergent and designed) that work to prevent us from realizing
that the future is actually happening as we speak. To really understand the
world and how it is evolving, you need to break through this manufactured
normalcy field. Unfortunately, that leads, as we will see, to a kind of
existential nausea.
The Manufactured Normalcy Field
Life as we live it has this familiar sense of being a static, continuous
present. Our ongoing time travel (at a velocity of one second per second)
never seems to take us to a foreign place. It is always 4 PM; it is always
tea-time.
Of course, a quick look back to your own life ten or twenty years back
will turn up all sorts of evidence that your life has, in fact, been radically
transformed, both at a micro-level and the macro-level. At the micro-level, I now possess a cellphone that works better than Captain Kirk's communicator, but I don't feel like I am living in the future I imagined back then, even a tiny bit. For a macro example, back in the eighties, people used to paint scary pictures of the world with a few billion more people and water wars. I think I wrote essays in school about such things. Yet we're here now, and I don't feel all that different, even though the
scary predicted things are happening on schedule. To other people (this is
important).
Try and reflect on your life. I guarantee that you won't be able to feel
any big change in your gut, even if you are able to appreciate it
intellectually.
The psychology here is actually not that interesting. A slight
generalization of normalcy bias and denial of black-swan futures is
sufficient. What is interesting is how this psychological pre-disposition to
believe in an unchanging, normal present doesnt kill us.
How, as a species, are we able to prepare for, create, and deal with,
the future, while managing to effectively deny that it is happening at all?
Futurists, artists and edge-culturists like to take credit for this. They
like to pretend that they are the lonely, brave guardians of the species who
deal with the real future and pre-digest it for the rest of us.
But this explanation falls apart with just a little poking. It turns out
that the cultural edge is just as frozen in time as the mainstream. It is just
frozen in a different part of the time theater, populated by people who seek
more stimulation than the mainstream, and draw on imagined futures to
feed their cravings rather than inform actual future-manufacturing.
The two beaten-to-death ways of understanding this phenomenon are due to McLuhan ("We look at the present through a rear-view mirror. We march backwards into the future.") and William Gibson ("The future is already here; it is just unevenly distributed.")
Both framing perspectives have serious limitations that I will get to.
What is missing in both needs a name, so I'll call the familiar sense of a static, continuous present a Manufactured Normalcy Field. For the rest of this post, I'll refer to this as the Field for short.
So we can divide the future into two useful pieces: things coming at
us that have been integrated into the Field, and things that have not. The
integration kicks in at some level of ubiquity. Gibson got that part right.
Lets call the crossing of the Field threshold by a piece of futuristic
technology normalization (not to be confused with the postmodernist
sense of the term, but related to the mathematical sense). Normalization
involves incorporation of a piece of technological novelty into larger
conceptual metaphors built out of familiar experiences.
A simple example is commercial air travel.
The Example of Air Travel
A great deal of effort goes into making sure passengers never realize
just how unnatural their state of motion is, on a commercial airplane.
Climb rates, bank angles and acceleration profiles are maintained within
strict limits. Back in the day, I used to do homework problems to calculate
these limits.
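To make the strictness concrete (my own back-of-the-envelope numbers, not anything from the text above): in a steady, coordinated turn the load factor a passenger feels is 1/cos(bank angle), so the gentle banks airliners fly barely register, while the steep turns the airframe could handle would not.

# Load factor in a coordinated, level turn: n = 1 / cos(bank angle).
# Illustrative only; airline bank limits are typically on the order of 25-30 degrees.
import math

for bank_deg in (15, 25, 30, 60):
    n = 1.0 / math.cos(math.radians(bank_deg))
    print(f"{bank_deg:2d} deg bank -> {n:.2f} g")

A 25-30 degree bank works out to roughly 1.10-1.15 g, well inside the manufactured-normalcy envelope; the 2 g of a 60 degree fighter-style turn is exactly what passengers are never allowed to feel.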
Airline passengers don't fly. They travel in a manufactured normalcy field. Space travel is not yet common enough, so there is no manufactured
normalcy field for it.
When you are sitting on a typical modern jetliner, you are traveling at
500 mph in an aluminum tube that is actually capable of some pretty scary
acrobatics. Including generating brief periods of zero-g.
Yet a typical air traveler never experiences anything that one of our
ancestors could not experience on a fast chariot or a boat.
Air travel is manufactured normalcy. If you ever truly experience
what modern air travel can do, chances are, the experience will be framed
as either a bit of entertainment ("fighter pilot for a day!" which you will understand as "expensive roller-coaster") or a visit to an alien-specialist land (American aerospace engineering students who participate in NASA summer camps often get to ride on the "vomit comet," modified Boeing 727s that fly the zero-g training missions).
This means that even though air travel is now a hundred years old, it
hasnt actually arrived psychologically. A full appreciation of what air
travel is has been kept from the general population through manufactured
normalcy.
All we're left with is out-of-context data that we are not equipped to really understand in any deep way ("Oh, it used to take months to sail from India to the US in the seventeenth century, and now it takes a 17-hour flight, how interesting.")
Think about the small fraction of humanity who have actually
experienced air travel qua air travel, as a mode of transport distinct from
older ones. These include fighter pilots, astronauts and the few air
travelers who have been part of a serious emergency that forced (for
instance) an airliner to lose 10,000 feet of altitude in a few seconds.
Of course, manufactured normalcy is never quite perfect (passengers
on the Concorde could see the earths curvature for instance), but the point
is, it is good enough that behaviorally, we do not experience the censored
future. We dont have to learn the future in any significant way (what
exactly have you learned about air travel that is not a fairly trivial port
of train-travel behavior?)
So the way the future of air travel in 1900 actually arrived was the
following:

1. A specialized future arrived for a subset who were trained and equipped with new mental models to comprehend it in the fullest sense, but in a narrowly instrumental rather than appreciative way. A fighter pilot does not necessarily experience flight the way a bird does.
2. The vast majority started experiencing a manufactured normalcy, via McLuhan-esque extension of existing media.
3. Occasionally, the manufactured normalcy broke down for a few people by accident, who were then exposed to the future without being equipped to handle it.

Air travel is also a convenient metaphor for the idea of existential nausea I'll get to. If you experience air travel in its true form and are not prepared for it by nature and nurture, you will throw up.
The Future Arrives via Specialization and Metaphor Expansion
So this is a very different way to understand the future: it doesnt
arrive in a temporal sense. It arrives mainly via social fragmentation.
Specialization is how the future arrives.
And in many cases, arrival-via-specialization means psychological
non-arrival. Not every element of the future brings with it a visceral
human experience that at least a subset can encounter. There are no
pilots in the arrival of cheap gene sequencing, for instance. At least not
yet. When you can pay to grow a tail, that might change.
There is a subset of humanity that routinely does DNA sequencing
and similar things every day, but if the genomic future has arrived for them, it has arrived as a clean, purely cerebral-instrumental experience, transformed into a new kind of symbol-manipulation and equipment-operation expertise.
Arrival-via-specialization requires potential specialists. Presumably,
humans with extra high tolerance for g-forces have always existed, and
technology began selecting for that trait once airplanes were invented.
This suggests that only those futures arrive for which there is human
capacity to cope. This conclusion is not true, because a future can arrive
before humans figure out whether they have the ability to cope. For
instance, the widespread problem of obesity suggests that food-abundance
arrived before we figured out that most of us cannot cope. And this is one
piece of the future that cannot be relegated to specialists. Others cannot
eat for you, even though others can fly planes for you.

So what about elements of the future that arrive relatively successfully for everybody, like cellphones? Here, the idea I called the
Milo Criterion kicks in: successful products are precisely those that do not
attempt to move user experiences significantly, even if the underlying
technology has shifted radically. In fact the whole point of user
experience design is to manufacture the necessary normalcy for a product
to succeed and get integrated into the Field. In this sense user experience
design is reductive with respect to technological potential.
So for this bucket of experiencing the future, what we get is a
Darwinian weeding out of those manifestations of the future that break the
continuity of technological experience. So things like Google Wave fail.
Just because something is technically feasible does not mean it can be psychologically normalized into the Field.
The Web arrived via the document metaphor. Despite the rise of the
stream metaphor for conceptualizing the Web architecturally, the user-experience metaphor is still descended from the document.
The smartphone, which I understand conceptually these days via a
pacifier metaphor, is nothing like a phone. Voice is just one clunky feature
grandfathered into a handheld computer that is engineered to loosely
resemble its nominal ancestor.
The phone in turn was a gradual morphing of things like speaking
tubes. This line of descent has an element of conscious design, so
technological genealogy is not as deterministic as biological genealogy.
The smartphone could have developed via metaphoric descent from
the hand-held calculator; Oh, I can now talk to people on my calculator
would have been a fairly natural way to understand it. That it was the
phone rather than the calculator is probably partly due to path-dependency
effects and partly due to the greater ubiquity of phones in mainstream life.
What Century Do We Actually Live In?
I haven't done a careful analysis, but my rough, back-of-the-napkin working out of the implications of these ideas suggests that we are all living, in user-experience terms, in some thoroughly mangled, overloaded, stretched and precarious version of the 15th century that is just good enough to withstand casual scrutiny. I'll qualify this a bit in a minute, but stay with me here.
What about edge-culturists who think they are more alive to the real
oncoming future?
I am convinced that they are frozen in time too. The edge today looks
strangely similar to the edge in any previous century. It is defined by
reactionary musical and sartorial tastes and being a little more outrageous
than everybody else in challenging the prevailing culture of manners.
Edge-dwelling is a social rather than technological phenomenon. If it
reveals anything about technology or the future, it is mostly by accident.
Art occasionally rises to the challenge of cracking open a window
onto the actual present, but mostly restricts itself to creating dissonance in
the mainstreams view of the imagined present, a relative rather than
absolute dialectic.
Edge culturists end up living lives that are continuously repeated
rehearsal loops for a future that never actually arrives. They do experience
a version of the future a little earlier than others, but the mechanisms they
need to resort to are so cumbersome, that what they actually experience is
the mechanisms rather than the future as it will eventually be lived.
For instance, the Behemoth, a futuristic bicycle built by Steven
Roberts in 1991, had many features that have today eventually arrived for
all via the iPhone. So in a sense, Roberts didnt really experience the
future ahead of us, because what shapes our experience of universal
mobile communication definitely has nothing to do with a bicycle and a
lot to do with pacifiers (I dont think Roberts had a pacifier in the
Behemoth).
At a more human level, I find that I am unable to relate to people who
are deeply into any sort of cyberculture or other future-obsessed edge
zone. There is a certain extreme banality to my thoughts when I think
about the future. Futurists as a subculture seem to organize their lives as
future-experience theaters. These theaters are perhaps entertaining and interesting in their own right, as a sort of performance art, but are not of
much interest or value to people who are interested in the future in the
form it might arrive in, for all.
It is easy to make the distinction explicit. Most futurists are interested
in the future beyond the Field. I am primarily interested in the future once
it enters the Field, and the process by which it gets integrated into it. This
is also where the future turns into money, so perhaps my motivations are
less intellectual than they are narrowly mercenary. This is also a more
complicated way of making a point made by several marketers:
technology only becomes interesting once it becomes technically boring.
Technological futurists are pre-Fieldists. Marketing futurists are post-Fieldists.
This also explains why so few futurists make any money. They are
attracted to exactly those parts of the future that are worth very little. They
find visions of changed human behavior stimulating. Technological
change serves as a basis for constructing aspirational visions of changed
humanity. Unfortunately, technological change actually arrives in ways
that leave human behavior minimally altered.
Engineering is about finding excitement by figuring out how human
behavior could change. Marketing is about finding money by making sure
it doesnt. The future arrives along a least-cognitive-effort path.
This suggests a different, subtler reading of Gibson's unevenly-distributed line.
It isnt that what is patchily distributed today will become widespread
tomorrow. The mainstream never ends up looking like the edge of today.
Not even close. The mainstream seeks placidity while the edge seeks
stimulation.
Instead, what is unevenly distributed are isolated windows into the
un-normalized future that exist as weak spots in the Field. When the
windows start to become larger and more common, economics kicks in
and the Field maintenance industry quickly moves to create specialists,
codified knowledge and normalcy-preserving design patterns.

Time is a meaningless organizing variable here. Is gene-hacking more or less futuristic than pod-cities or bionic chips?
The future is simply a landscape defined by two natural (and non-temporal) boundaries. One separates the currently infeasible from the
feasible (hyperspatial travel is unfortunately infeasible), and the other
separates the normalized from the un-normalized. The Field is
manufactured out of the feasible-and-normalized. We call it the present,
but it is not the same as the temporal concept. In fact, the labeling of the
Field as the present is itself part of the manufactured normalcy. The
labeling serves to hide a complex construction process underneath an
apparently familiar label that most of us think we experience but dont
really (as generations of meditation teachers exhorting us to live in the
present try to get across; they mostly fail because their sense of time has
been largely hijacked by a cultural process).
What gets normalized first has very little to do with what is easier,
and a lot to do with what is more attractive economically and politically.
Humans have achieved some fantastic things like space travel. They have
even done things initially thought to be infeasible (like heavier-than-air
flight) but other parts of a very accessible future lie beyond the
Manufactured Normalcy Field, seemingly beyond the reach of economic
feasibility forever. As the grumpy old man in an old Readers Digest joke
grumbled, We can put a man on the moon, but we cannot get the jelly
into the exact center of a jelly doughnut.
The future is a stream of bug reports in the normalcy-maintenance
software that keeps getting patched, maintaining a hackstable present
Field.
Field Elasticity and Attenuation
A basic objection to my account of what you could call the futurism
dialectic is that 2012 looks nothing like the fifteenth century, as we
understand it today, through our best reconstructions.

My answer to that objection is simple: as everyday experiences get mangled by layer after layer of metaphoric back-referencing, these
metaphors get reified into a sort of atemporal, non-physical realm of
abstract experience-primitives.
These are sort of like Platonic primitives, except that they are reified
patterns of behavior, understood with reference to a manufactured
perception of reality. The Field does evolve in time, but this evolution is
not a delayed version of real change or even related to it. In fact
movement is a bad way to understand how the Field transforms. Its
dynamic nature is best understood as a kind of stretching. The Field
stretches to accommodate the future, rather than moving to cover it.
It stretches in its own design space: that of ever-expanding, reifying,
conceptual metaphor. Expansion as a basic framing suggests an entirely
different set of risks and concerns. We neednt worry about acceleration.
We need to worry about attenuation. We need not worry about not being
able to keep up with a present that moves faster. We need to worry about
the Field expanding to a breaking point and popping, like an over-inflated
balloon. We need not worry about computers getting ever faster. We need
to worry about the document metaphor breaking suddenly, leaving us
unable to comprehend the Internet.
Dating the planetary UX to the fifteenth century is something like
chronological anchoring of the genealogy of extant metaphors to the
nearest historical point where some recognizable physical basis exists.
The 15th century is sort of the Garden of Eden of the modern experience
of technology. It represents the point where our current balloon started to
get inflated.
When we think of differences between historical periods, we tend to
focus on the most superficial of human differences that have very little
coupling to technological progress.
Quick, imagine the fifteenth century. You're thinking of people in funny pants and hats, right (if you're of European descent; mutatis mutandis if you are not)? Perhaps you are thinking of dimensions of social
experience like racial diversity and gender roles.

Think about how trivial and inconsequential changes on those fronts are, compared to the changes on the technological front. We've landed on
the moon, we screw around with our genes, we routinely fly at 30,000 feet
at 500 mph. You can repeat those words a thousand times and you still
won't be able to appreciate the magnitude of the transformation the way
you can appreciate the magnitude of a radical social change (a Black man
is president of the United States!).
If I am still not getting through to you, imagine having a conversation
over time-phone with someone living in 3000 BC. Assume theres a Babel
fish in the link. Which of these concepts do you think would be easiest to
get across?
1. In our time, women are considered the equal of men in many
parts of the world
2. In our time, a Black man is the most powerful man in the world
3. In our time, we can sequence our genes
4. In our time, we can send pictures of what we see to our friends
around the world instantly
Even if the 3000 BC guy gets some vague, magic-based sense of what
item 4 means, he or she will have no comprehension of the things in our
mental models behind that statement (Facebook, Instagram, the Internet,
wireless radio technology). Item 3 will not be translatable at all.
But this does not mean that he does not understand your present. It
means you do not understand your own present in any meaningful way.
You are merely able to function within it.
Appreciative versus Instrumental Comprehension
If your understanding of the present were a coherent understanding
and appreciation of your reality, you would be able to communicate it. I
am going to borrow terms from John Friedman and distinguish between
two sorts of conceptual metaphors we use to comprehend present reality:
appreciative and instrumental.

Instrumental (what Friedman misleadingly called manipulative) conceptual metaphors are basic UX metaphors like scrolling web pages,
or the metaphor of the keypad on a phone. Appreciative conceptual
metaphors help us understand present realities in terms of their
fundamental dynamics. So my use of the metaphor smartphones are
pacifiers (it looks like a figurative metaphor, but once you get used to it,
you find that it has the natural depth of a classic Lakoff conceptual
metaphor) is an appreciative conceptual metaphor.
Instrumental conceptual metaphors allow us to function. Appreciative
ones allow us to make sense of our lives and communicate such
understanding.
So our failure to communicate the idea of Instagram to somebody in
3000 BC is due to an atemporal and asymmetric incomprehension: we
possess good instrumental metaphors but poor appreciative ones.
So this failure has less to do with Arthur C. Clarkes famous assertion
that a sufficiently advanced technology will seem like magic to those from
more primitive eras, and more to do with the fact that the Field actively
prevents us from ever understanding our own present on its own terms.
We manage to function and comprehend reality in instrumental ways
while falling behind in comprehending it in appreciative ways.
So my update to Clarke would be this: any sufficiently advanced
technology will seem like magic to all humans at all times. Some will
merely live within a Field that allows them to function within specific advanced technology environments.
Take item 4 for instance. After all, it is Instagram, a reference to a
telegram. We understand Facebook in terms of school year-books. It is
exactly this sort of pattern of purely instrumental comprehension that leads
to the plausibility of certain types of Internet hoaxes, like the one that did
the rounds recently about Abraham Lincoln having patented a version of
the Facebook idea.
The fact that the core idea of Facebook can be translated to the
language of Abe's world of newspapers suggests that we are papering over (I had to, sorry) complicated realities with surfaces we can understand.
The alternative conclusion is silly (that the technology underlying
Facebook is not really more expressive than the one underlying
newspapers).
Facebook is not a Yearbook. It is a few warehouse-sized buildings
containing racks and racks of electronic hardware sheets, each containing
etched little slivers of silicon at their core. Each of those little slivers
contains more intricacy than all the jewelry designers in history together
managed to put into all the earrings they ever made. These warehouses are
connected via radio and optic-fiber links to...
Oh well, forget it. It's a frikkin Yearbook that contains everybody. That's enough for us to deal with it, even if we cannot explain what we're doing or why to Mr. 3000 BC.
The Always-Unreal
Have you ever wondered why Alvin Tofflers writings seem so
strange today? Intellectually you can recognize that he saw a lot of things
coming. But somehow, he imagined the future in future-unfamiliar terms.
So it appears strange to us. Because we are experiencing a lot of what he
saw coming, translated into terms that would actually have been
completely familiar to him.
His writings seem unreal partly because they are impoverished
imaginings of things that did not exist back then, but also partly because
his writing seems to be informed by the idea that the future would define
itself. He speaks of future-concepts like (say) modular housing in terms
that make sense with respect to those concepts.
When the future actually arrived, in the form of couchsurfing and
Airbnb, it arrived translated into a crazed-familiarity. Toffler sort of got
the basic idea that mobility would change our sense of home. His failure
was not in failing to predict how housing might evolve. His failure was in
failing to predict that we would comprehend it in terms of Bed and
Breakfast metaphors.

This is not an indictment of Toffler's skill as a futurist, but of the very methods of futurism. We build conceptual models of the world as it exists
today, posit laws of transformation and change, simulate possible futures,
and cherry-pick interesting and likely-sounding elements that appear
robustly across many simulations and appear feasible.
And then we stop. We do not transform the end-state conceptual
models into the behavioral terms we use to actually engage and understand
reality-in-use, as opposed to reality-in-contemplation. We forget to do the
most important part of a futurist prediction: predicting how user
experience might evolve to normalize the future-unfamiliar.
Something similar happens with even the best of science fiction.
There is a strangeness to the imagining that seems missing when the
imagined futures finally arrive, pre-processed into the familiar.
But here, something slightly different plays out, because the future is
presented in the context of imaginary human characters facing up to
timeless Campbellian human challenges. So we have characters living out
lives involving very strange behaviors in strange landscapes, wearing
strange clothes, and so forth. This is what makes science fiction science
fiction after all. George Lucas's space opera is interesting precisely because
it is not set in the Wild West or Mt. Olympus.
We turn imagined behavioral differences that the future might bring
into entertainment, but when it actually arrives, we make sure the
behavioral differences are minimized. The Field creates a suspension of
potential disbelief.
So both futurism and science fiction are trapped in an always-unreal
strange land that must always exist at a certain remove from the
manufactured-to-be-familiar present. Much of present-moment science
fiction and fantasy is in fact forced into parallel universe territory not
because there are deep philosophical counterfactuals involved (a lot of
Harry Potter magic is very functionally replicable by us Muggles) but
because it would lose its capacity to stimulate. Do you really want to read
about a newspaper made of flexible e-ink that plays black-and-white

movies over WiFi? That sounds like a bad startup pitch rather than a good
fantasy novel.
The Matrix was something of an interesting triumph in this sense, and
in a way smarter than one of its inspirations, Neuromancer, because it made Gibson's cyberspace coincident with a temporally frozen reality-simulacrum.
But it did not go far enough. The world of 1997 (or wherever the
Matrix decided to hit Pause) was itself never an experienced reality.
1997 never happened. Neither did 1500 in a way. What we did have
was different stretched states of the Manufactured Normalcy Field in 1500
and 1997. If the Matrix were to happen, it would have to actually keep that
stretching going.
Breathless
There is one element of the future that does arrive on schedule,
uncensored. This is its emotional quality. The pace of change is
accelerating and we experience this as Field-stretching anxiety.
But emotions being what they are, we cannot separate future anxiety
from other forms of anxiety. Are you upset today because your boss yelled
at you or because subtle cues made the accelerating pace of change leak
into your life as a tear in the Field?
Increased anxiety is only one dimension of how we experience
change. Another dimension is a constant sense of crisis (which has,
incidentally, always prevailed in history).
A third dimension is a constant feeling of chaos held at bay (another
constant in history), just beyond the firewall of everyday routine (the Field
is everyday routine).
Sometimes we experience the future via a basic individual-level "it won't happen to me" normalcy bias. Things like SARS or dying in a plane crash are uncomprehended future-things (remember, you live in a manufactured reality that has been stretching since the fifteenth century)
that are nominally in our present, but havent penetrated the Field for most
of us. Most of us substitute probability for time in such cases. As time
progresses, the long tail of the unexperienced future grows fatter. A lot
more can happen to us in 2012 than in 1500, but we try to ensure that very
little does happen.
The uncertainty of the future is about this long tail of waiting events
that the Field hasnt yet digested, but we know exists out there, as a space
where Bad Things Happen to People Like Me but Never to Me.
In a way, when we ask, "is there a sustainable future?", we are not really asking about fossil fuels or feeding 9 billion people. We are asking, "can the Manufactured Normalcy Field absorb such and such changes?"
We arent really tied to specific elements of todays lifestyles. We are
definitely open to change. But only change that comes to us via the Field.
Weve adapted to the idea of people cutting open our bodies, stopping our
hearts and pumping our blood through machines while they cut us up. The
Field has digested those realities. Various sorts of existential anesthetics
are an important part of how the Field is manufactured and maintained.
Our sense of impending doom or extraordinary potential has to do with the perceived fragility or robustness of the Field.
It is possible to slide into a sort of technological solipsism here and
declare that there is no reality; that only the Field exists. Many
postmodernists do exactly that.
Except that history repeatedly proves them wrong. The Field is
distinct from reality. It can and does break down a couple of times in
every human lifetime. Were coming off a very long period since World
War II of Field stability. Except for a few poor schmucks in places like
Vietnam, the Field has been precariously preserved for most of us.
When larger global Fields break, we experience dark ages. We
literally cannot process change at all. We grope, waiting for an age when it
will all make sense again.

So we could be entering a Dark Age right now, because most of us don't experience a global Field anymore. We live in tiny personal fields.
We can only connect socially with people whose little-f fields are similar
to ours. When individual fields also start popping, psychic chaos will start
to loom.
The scary possibility in the near future is not that we will see another
radical break in the Field, but a permanent collapse of all fields, big and
small.
The result will be a state of constant psychological warfare between
the present and the future, where reality changes far too fast for either a
global Field or a personal one to keep up. Where adaptation-by-specialization turns into a crazed, continuous reinvention of oneself for
survival. Where the reinvention is sufficient to sustain existence
financially, but not sufficient to maintain continuity of present-experience.
Instrumental metaphors will persist while appreciative ones will collapse
entirely.
The result will be a world population with a large majority of people
on the edge of madness, somehow functioning in a haze where past,
present and future form a chaotic soup (have you checked out your
Facebook feed lately?) of drunken perspective shifts.
This is already starting to happen. Instead of a newspaper feeding us
daily doses of a shared Field, we get a nauseating mix of news from
forgotten classmates, slogan-placards about issues trivial and grave,
revisionist histories coming at us via a million political voices, the future
as a patchwork quilt of incoherent glimpses, all mixed in with pictures of
cats doing improbable things.
The waning Field, still coming at us through weakening media like
television, seems increasingly like a surreal zone of Wonderland madness.
We aren't being hit by Future Shock. We are going to be hit by Future Nausea. You're not going to be knocked out cold. You're just going to throw up in some existential sense of the word. I'd like to prepare. I wish some science fiction writers would write a few nauseating stories.

Welcome to the Future Nauseous.


For the record, I haven't read Sartre's novel Nausea. From
Wikipedia, it seems vaguely related to my use of the term. I might read it.
If somebody has read it, please help connect some dots here.

Technology and the Baroque Unconscious


November 11, 2011
Engineering romantics fall in love with the work of Jorge Luis Borges
early in their careers. Long after Douglas Hofstadter is forgotten for his
own work in AI (which seems dated today), he will be remembered with
gratitude for introducing Borges to generations of technologists.
Borges once wrote:
I should define the baroque as that style which
deliberately exhausts (or tries to exhaust) all its own
possibilities and which borders on its own parody... I would
say that the final stage of all styles is baroque when that
style only too obviously exhibits or overdoes its own
tricks.
The baroque in Borges sense is self-consciously humorous. Borges
own work in this sense is a baroque exploration of the processes of
thought. As one critic (see the footnote on this page) noted, Borges
writings serve to dramatize the process of thought in the apprehension of
truth.
Unlike art, complex and mature technology (not all technology) is
baroque without being self-conscious. At best there is a collective
sensibility informing its design that can be called a baroque unconscious.
This post is a sequel of sorts to The Gollum Effect. You can read it
stand-alone, but you will probably get more out of it if you read that first.
Within the Lord of the Rings metaphor I developed in that post, baroque
unconscious is basically my answer to the question, if extreme consumers
are Gollums, who is Sauron?
This idea of a baroque unconscious helps clarify things about the
phenomenon of technological refinement that have been bothering me for
a while. In particular, it helps distinguish among three kinds of refinement in technological artifacts: refinement that is useful to the user, refinement (often exploitative) that is useful to somebody besides the user, and refinement that benefits nobody at all.
It is this last characteristic that interests me. Refinement that benefits nobody (anything that attracts the adjective overwrought) is what I attribute to the workings of the baroque unconscious. And I write this fully aware of the irony that this kind of post might be viewed as overwrought analysis by some.
Interestingly though, viewed from this perspective, the other two
kinds of apparently intentional refinement can be seen as opportunistic
exploitation. They arise through manipulation of those elements of the
workings of the baroque unconscious that happen to be consciously
recognized.
In other words, I am arguing that the collective unconscious
component in the evolution of technology is primary. The conscious
component is peripheral.
Or to borrow another idea from art, it is technology for technologys
sake. And unlike in art, there is no primary artist.
The Baroque in Art
There is no such thing as the baroque unconscious in art.
When art exhausts its own possibilities unintentionally we generally
characterize it as camp (what Susan Sontag aptly called failed
seriousness in Notes on Camp). The baroque element in the work is
evident to observers, even if the creator lacks the self-awareness to
recognize it.
When art exhausts its own possibilities as a side-effect, while
pursuing other objectives, we do not call it baroque. We call it either
cynical or tasteless. The auteur theory of art applies well enough that if we
cannot reasonably impute baroque intentions to the artist, we feel safe
assuming that the artist was aware of the baroque consequences of his/her decisions. Michael Bay's Transformers movies (especially the last installment) are examples. They are both tasteless and cynical, but they are
not campy or baroque.
Technology is generally more complex and collaborative than even
the most collaborative kinds of art, such as movies. The process can create
things that exhaust certain possibilities, with no single creator or observer
being fully conscious of it. Yet, we cannot call such things campy, cynical
or tasteless.
To understand this, suspend for a moment your default idea of what it
means for something to be baroque. You are probably thinking of
European architecture of a certain period with an exaggerated and visible
sort of drama on the surface. That prototypical idea of the baroque is what
we tend to apply, in unreconstructed form, to technology: clunky user
interfaces and a degree of featuritis that has us groaning.
This is a narrow sense of the baroque. The original architectural
instances served a specific function: to impress and intimidate commoners
with a display of awe-inspiring grandeur (some art historians have argued
that the original examples of baroque were therefore not baroque at all, but
cynical). The exhaustion of possibilities in that kind of baroque is all on
the surface.
But things can be baroque without being visibly so, depending on the
audience for the original function. The key is that the governing aesthetic
must seek to self-consciously exhaust its own possibilities.
Invisible, but still intentional baroque is particularly common in
modern American pop culture. Most viewers of The Simpsons for instance,
miss the bulk of the hidden pop-culture references in the show. A loyal
subculture of fans devotedly mines these references and discusses them
online. While this sort of thing is often cynical (deliberate creation of
baroque plots to create addiction, as in the show Lost), in the case of The
Simpsons, I suspect the writers genuinely seek to exhaust the possibilities
of the artistic technique of reference, without annoying the mainstream
audience.

The Baroque in Technology


In technology, Apple's products border on the baroque in their exaggerated simplicity. Once the iPad achieves the edge-to-edge display and maximal technically feasible thinness, for instance, it is hard to imagine how one would parody it; there is no room left for exaggeration, in the physical form at least. Certain possibilities will have been
exhausted.
This sort of intentional (and therefore artistic) baroque in technology,
however, is not really what interests me. What fascinates me is technology
that grows baroque without anyone consciously intending to exhaust any
design possibilities. Social forces, such as the competitive pressures of an
arms race, or the demands of extreme lead customers, dont seem to be
sufficient explanations.
Art is usually the outcome of a singular vision. But technology, even
the auteur form of technology practiced by Steve Jobs, is deeply
collectivist. Engineering real things is far too hard for one mind to impose
a singular vision on all but the simplest of products. When a piece of
technology appears to be the work of a single mind and possesses the
dense layers of coherent complexity that can only be the product of a large
team, it is evidence of a deep coherence in the team itself. In such a team,
individuals trust the collective to the point that they feel comfortable
narrowing their domain of conscious concern to their own work.
The baroque sensibility resides in the collective unconscious of the
team that produces it. The baroque in the whole is greater than the sum of
the baroque accounted for by the self-awareness of the many individuals.
Moderately obsessive-compulsive attention to detail at the level of individuals oblivious to larger purposes eventually turns into baroque exhaustion of possibilities at the level of the whole product.
This brings us to the idea of refinement, and the question of when,
why and how wrought keels over into overwrought.

Refinement and the Baroque


When I first started thinking about refinement, in the context of
addictive consumption (as in, refined cocaine), I had examples such as
American fast food in mind: precisely engineered concoctions of key refined substances (salt, sugar and fat) designed to cause addictive over-consumption.
The pathologies of consumerism can be traced to an entire universe of
such refined goods. I offered the term gollumized to describe humans who
end up being entirely defined by a pattern of such consumptive behavior,
much like the character of Gollum in the Lord of the Rings, with his
addictive, enslaving attachment to the One Ring: a highly refined, pure
essence.
Something bothered me however, about the implicit equation of
refinement with pathological addictive dependence on the one hand, and
cynical exploitation on the other.
The refinement in the construction of something like the space shuttle
does not seem pathological. It seems necessary.
A highly refined kitchen knife that plays a role in your creative self-expression as a chef seems somehow different from a McDonald's hamburger or an expensive wine, both of which are consumption-addiction refined in their own ways.
Even with hamburgers, while acknowledging that they are effectively
exploitative and addictive foods designed to enrich the food industry by
ruining the health of consumers, it is clearly farfetched to believe that
there is some vast conspiracy that includes every biochemist.
The idea that the creation and sale of such foods is more a matter of
cynical opportunism is more reasonable. You could accuse the industry of
carefully engineering high-fructose corn syrup as a way to make money
off corn surpluses, but the industry didn't create the necessary biochemistry knowledge or surplus-creating agricultural advances with the idea of eventually selling cheap and addictive burgers (for one thing, the
evolutionary processes took longer than the lifetime of any individual
involved in the story). You could say that the existence of HFCS is 10%
intentional and 90% a consequence of the baroque unconscious driving
food technology.
In other words, the existence of a Gollum does not imply the
existence of a Gollumizer. Sauron in the The Lord of the Rings is at best a
personification of the baroque unconscious (with Saruman being one of the cynical exploiters, an HFCS creator so to speak).
But lets figure out what refinement in technology really means.
Consider the following senses of the word refinement:
1. Refinement as in purity or purification of substances: ore, oil,
drugs, foods
2. Refinement in the sense of highly developed and cultivated
sensibilities, as in refined palate
3. Refinement in the sense of elaborate sophistication of mature
or declining cultures
4. Refinement in the sense of detailed, attentive design in
advanced technologies
5. Refinement in the sense of an Apple product (or any other
possibility-exhausting product aesthetic)
How do these different senses of the idea of refinement relate to each
other and to the baroque? What distinguishes the space shuttle or the quality kitchen knife from an iPad, an expensive wine, or a McDonald's hamburger?
The Sword, the Nail and the Machine Gun
I found a key clue when Greg Rader decided (to my slight discomfort)
to overload this sense of refinement with an economic meaning in his 2x2 model of types of economies.
In Gregs model, the economic role of refinement is to make it easy to
value artifacts in an impersonal way, in a cash economy. Unrefined artifacts get you attention or help build social capital in relationships. Refined artifacts help you earn money or participate in the gift economy.
But why should refinement lead to easier valuation and thence to exchange for money?
The crucial missing piece is the role of interchangeability in mass
production. As John Ellis writes in The Social History of the Machine Gun:
It was always theoretically possible to conceive of a
gun that would spew out vast numbers of bullets or
whatever in a short period of time... manufacturing techniques [were not] sufficiently well-advanced to allow
individual craftsmen to work to the fractional tolerances
demanded for every part of such a complex gun.
The key point here is often lost in discussions of industrialization that
use Adam Smiths simple example of a nail to highlight the division of
labor aspect of industrial production. Nail manufacture illustrates the
reductionist capacities of industrialization, but it is the integration capacity
of industrialization that drives refinement.
The machine gun illustrates the dynamics of integration. It is a
complex machine, and as such, liable to break down more easily.
Reliability involves network effects within a complex artifact. Roughly
speaking, in a design with no redundancy, the more parts you have, and
the more complex and fast-moving the linkages among them, the less
reliable the machine.
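One way to see this network effect (a sketch of my own, not a formula from the text): if a machine has n essential parts with no redundancy, it works only when every part works, so with per-part reliability r the whole machine works with probability roughly r to the power n, which collapses quickly as n grows.

# Series reliability of a machine with no redundancy: every part must work.
# Numbers are illustrative only.
def machine_reliability(part_reliability: float, n_parts: int) -> float:
    return part_reliability ** n_parts

for n in (10, 100, 1000):
    print(n, round(machine_reliability(0.999, n), 3))

A per-part reliability of 0.999 gives about 0.99 at 10 parts, 0.90 at 100, and 0.37 at 1000: complexity eats reliability unless some opposing network effect, such as interchangeable spares, pushes back.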
Unless you find an opposed network effect that can scale at least as
fast, machines will get less reliable as they scale.
The opposed network effect that was discovered late in the industrial
revolution was interchangeability. Interchangeability creates a network
effect between artifacts. Crucially, they need not be functionally similar.
They only need share a structural language. A machine gun can be
cannibalized to repair a telescope for instance.

The significance of Ellis' point about fractional tolerances has to do with replacement and cannibalization. Craftsmen are capable of very
refined work, but the work tends to be unique. It involves fitting this hilt
on this sword with great precision. You can get away with this because
craft also tends to involve fewer parts, static linkages and performance
regimes where breakdowns are infrequent.
With interchangeability comes the possibility of easy valuation, since
it is possible to talk of supply and demand at the level of many non-unique
parts that can be compared to each other. That helps connect the dots to
Gregs economic hypotheses.
But we still havent fingerprinted the essence of refinement itself.
Replacement and Repair
The first key threshold crossed on the road to industrialization was the
replacement of human, animal and uncontrolled inanimate power (wind or
water) with controlled inanimate power: coal and oil. Much of the
attention in attempts to characterize industrialization is given over to the
study of this threshold-crossing.
The second key threshold crossed was the shift from repair to
replacement. When breakdowns became frequent enough that anticipatory
manufacture of replacement parts became cheaper than reactive repair or
replacement, the network effects of industrialization truly kicked in.
The network effects of reliability in a sword are not strong enough
that you need to counteract them with interchangeability effects. In fact,
much of the complexity in a sword may well be in baroque artistic
elements that serve no purpose (a sword that loses a diamond from its hilt
is still equally effective on the battlefield).
Even early industrial-age artifacts do not have enough complexity and speed to really require interchangeability. This is one reason I find elaborate steampunk fantasies fundamentally uninteresting. They involve imagined machines that come across as laughably Rube Goldberg-esque precisely because they don't comprehend reliability problems, and the methods actually created during the industrial age to mitigate them.
When you get to something like a machine gun though, where
breakdowns are frequent and waiting for custom replacement parts is
hugely expensive, you must meet absolute tolerances, so that any
replacement part can replace any broken part (and equally crucially, so
that two broken, complex assemblies can be cannibalized to produce at
least one working assembly).
So we can conclude that:
1. Refinement in craft based on relative tolerances leads to
uniqueness.
2. Refinement in manufacturing based on absolute tolerances
leads to interchangeability.
From these two basic kinds of refinement, we get the five
connotations of the word I listed earlier. This happens via the appearance
of a refinement surplus.
The Refinement Surplus
Interchangeable parts based on absolute tolerances solve the
reliability problem and then some. The network effects of
interchangeability turn out to be stronger than the network effect of
increasing unreliability in individual complex artifacts.
Whats more, since interchangeability limits the need for
communication among collaborating makers, refinement of component
technologies can progress much faster (as Adam Smith noted). This is
what we call specialization. It happened in physical engineering before object-oriented programming ported the idea to software engineering.
You could say that work previously achieved by communication
among makers is now achieved via communication among artifacts. This
is most obvious with software objects, but the core idea is present even when you shift from a custom-made nut-bolt pair to a standardized pair that communicates via numerical absolute tolerances.
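A minimal software analogy (mine, not the author's; the spec and suppliers are hypothetical): once parts agree on a published spec rather than being fitted to each other, any conforming part can stand in for any other, and the makers never need to talk.

# Sketch of "communication among artifacts" via a shared spec instead of mutual fitting.
from dataclasses import dataclass

M8_SPEC = {"diameter_mm": 8.0, "pitch_mm": 1.25, "tolerance_mm": 0.05}

@dataclass
class Bolt:
    maker: str
    diameter_mm: float
    pitch_mm: float

def conforms(bolt: Bolt, spec: dict) -> bool:
    # Absolute tolerance against the spec, not a relative fit against one particular nut.
    return (abs(bolt.diameter_mm - spec["diameter_mm"]) <= spec["tolerance_mm"]
            and abs(bolt.pitch_mm - spec["pitch_mm"]) <= spec["tolerance_mm"])

stock = [Bolt("Supplier A", 8.01, 1.25), Bolt("Supplier B", 7.98, 1.26)]
# Any conforming bolt is interchangeable with any other; a broken one is simply swapped.
print([b.maker for b in stock if conforms(b, M8_SPEC)])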
So interchangeability creates a social network of (say) machine guns.
There are functional linkages within complex artifacts that make them
useful, and substitution and reuse linkages between them that make them
reliable (redundancy inside an artifact is merely a semantic distinction:
think of it as carrying interchangeable spare parts inside the boundary of
the artifact, with the capacity to automatically switch out broken parts).
Interchangeability and standardization make every machine gun less
unique, and more a part of a sort of hive-machine-gun beast.
Dramatic as this effect is, it pales in comparison to the effect of
commonalities across the needs of different types of complex systems.
This connects all complex artifacts into a giant social network. The One
Machine.
A part made to a tighter tolerance can serve a looser-tolerance function, but not vice versa. Economies of scale then kick in and dictate that many components become more refined than they need to be, for typical artifacts that make use of them. The result is that systems gradually get more refined than they functionally need to be, based on immediate intentions. The needs of a
few artifacts drive the refinement levels in all technologies.
This creates a refinement surplus. Industrial technology, unlike craft
work, runs a continuous refinement surplus. The surplus was initially
triggered by the need for interchangeability to solve the reliability
problem, but that turned out to be a case of using a sledgehammer to kill a
fly.
Or so it might seem if you only look at individual artifacts. I'll argue
in a future post that once software and the Internet kick in, reliability
problems can once again overtake what interchangeability can mitigate. As
the One Machine gets increasingly interconnected, the unreliability
network effect may overtake the interchangeability network effect, hence
the fundamental Singularity-vs.-Collapse debate.

The possibilities represented by limiting refinement levels are always greater than the universe of artifacts in existence at any given time.
Exploitation of this refinement surplus is fundamentally what creates
the predictable growth in industrial age Schumpeterian creative
destruction. But it isn't the intent to exploit that drives the evolution. It is a
collective unconscious drive to exhaust possibilities and find limits,
independent of any specific need.
The Platonic Baroque
The Lord of the Rings captures artistic anxieties about engineering:
the good races create beautiful craft, the evil ones engineer ugly
things.
Where LOTR goes wrong is in focusing on beauty in craft as the
distinguishing factor (there is a line in The Hobbit which goes something like "the Goblins create many clever things, but few beautiful ones").
In LOTR, evil engineering artifacts are crude, unrefined and possess
little symmetry. Good ones made with craft are intricate, refined and
highly symmetric.
This is obviously the exact opposite of what actually happens.
Open up a laptop and compare what you see to (say) a beautiful handcrafted necklace. Not only is the inside of the laptop more intricate than
the necklace, it is more intricate than you can even see. You would need
electron microscopes to get a sense of how unbelievably intricate, refined
and symmetric a laptop is.
The technological landscape is defined by two kinds of beauty. On the
one hand, you have the possibility-exhausting conscious baroque artifacts
that we view as "pushing the envelope." Both the iPad and the space
shuttle belong on this end of the spectrum. One contains chips at the limit
of fabrication technology, the other contains materials that can handle
enormous heat and cold, produce unimaginable levels of thrust, and so on.

On the other hand you have things that are not at the edge of
technological capability, but manufactured out of component and process
technologies created for those leading edge technologies. And I don't just
mean obviously over-engineered things like space pens that write upside
down (which you can buy at NASA museums). I mean everything.
Regular Bics included.
In this category, makers strive to exhaust the possibilities, but always lag behind. The surplus refinement potential shows up in the
unnecessarily clean lines of modernism. Unused bits. Unbroken
symmetries. Blank engineering canvases that expand faster than designers
and technicians can paint.
The interaction of the two kinds of beauty is what creates the texture
of the modern technological landscape. I call it platonic baroque. This
may seem like a contradiction in terms, but bear with me for a moment.
The baroque unconscious is the force that drives technological
evolution: a force whose potential increases faster than it can be exploited.
Recall that the baroque seeks to exhaust its own possibilities. It is a
technical exercise in exploring process limits, not an exercise in
expressing ideas or creating utility. But this process needs ideas to fuel it.
In the days when royalty and religion loomed large in the minds of
creators, it was natural to exhaust possibilities by filling them up with the
content of the mythology associated with the power and money that drove
their work. It was natural to fill up blank walls with gargoyles and
cherubs, popes and princes.
But when the power and money come from a force whose main
characteristic is vast and featureless potential, the baroque aesthetic seeks
to exhaust possibilities by expressing that emptiness with platonic forms.
So the Bauhaus chair is not a rejection of the baroque. The modernist
designer merely seeks to build cathedrals to his new master: a vast
emptiness of possibility within the refinement surplus. This possibility is the father of industrial invention, a restless, paternalist force that replaces necessity, the mother of craft-like invention.
I am tempted to explore that male/female symbolism further, but I'll
limit myself to one overwrought metaphor. This unexploited possibility
that is the father of industrial invention is at once a Dark Lord and
engineering Dark Matter.
Maker Addiction and Exponential Technology
Where there is surplus, it will be exploited. Possibility, rather than
necessity, drives invention. When ideas for exploitation lag the potential to
be exploited you get baroque unconscious design.
Why would somebody build something simply because it is possible?
Both craft and engineering are driven by an addiction to making. It
does not matter whether needs or possibilities enable the making. Makers
will make. What determines how fast they make is whether they are able
to focus on their strengths or whether they are limited by their weaknesses.
This is the shift in maker psychology due to industrialization: from
deliberative craft work limited by individual weaknesses, to reactive
engineering work that is not limited in this way, thanks to specialization.
Need-driven making requires a focus on function and utility. Non-functional making in craft is easily recognized as artistic embellishment.
The idealized craftsman (and it was usually a he) was a deliberate and mindful creator. He made the whole, and he made the parts. When things broke, he made repairs or crafted new parts. Each whole was unique. When craftspeople collaborated on larger projects (stonemasons making blocks for cathedrals, say) assembly itself became a craft that was limited by the skill of the best (if you look at the history of masonry, you can see an obvious and gradual progression from rough-hewn blocks carefully fitted together, to more refined blocks that look
increasingly interchangeable in late pre-modern architecture).

In industrial artifacts based on interchangeability, however, the role of craftsman bifurcates into the twin roles of technician and engineer-designer (for now, we can safely conflate engineer and designer). Both are
reactive roles where function and utility take a backseat to sheer maker
addiction.
The technician reacts to component work defined in terms of absolute
tolerances by pushing the boundaries of process capabilities and
component quality with addictive urgency. I explored this earlier in my
post, The Turpentine Effect (though I didn't connect the dots until now). The result is Six Sigma, an explosion of process tools, and the dominance of an intrinsic and abstract notion of potential future value over an extrinsic and specific notion of realizable current value. "Somebody will use this in the future" beats "nobody can use this right now." By and large,
this trust is justified: increasing demands for refinement from the most
demanding applications keep up with the possibilities.
In this process of reactive design, refinement in available components
and processes starts to drive refinement levels in complete artifacts that
have already been invented, and suggests new inventions. A positive
feedback loop is set in motion: increasing component and process
refinement overtakes application needs as individual artifacts mature, but
then new applications emerge as pace-setters. Design bottlenecks migrate
freely across the entire technological landscape, via the coupled
technological web, instead of remaining confined within the design space
of individual artifacts.
For those of you who are familiar with the S-curve models of
technology maturation and disruption, imagine disruption S-curves
bleeding across unrelated artifact categories via shared components and
processes, creating an overall exponential technology evolution curve of
the sort that both Singularity and Collapsonomics watchers like to obsess
about, and that I will obsess about in future posts.
Across the fence from the technician, the engineer-designer loses
mindfulness by shifting from deliberately dreaming up useful ideas to
reacting to the possibilities of available component and process
sophistication levels.

A perfect example is Moore's Law: semiconductor companies began pushing fabrication technology to extremes before applications for the
increased capability became clear.
On the other end we have Alan Kay's reaction to Moore's Law in the early '70s: the idea that computing should strive to "waste bits" in anticipation of decreasing cost. Computer design shifted from
fundamentally deliberative before PARC to fundamentally reactive after.
Effects, Large and Small
So the net effect of maker addiction faced with refinement surplus is
that existing artifacts get pulled into a baroque stage of their evolution and
new artifacts appear to exploit possibilities rather than respond to
necessities. I am not sure this is much better than Gollumizing
consumption.
The One Machine gets increasingly integrated, and takes on an eerily
coherent appearance due to uniform refinement levels and the operation of
the platonic baroque aesthetic at the level of individual artifact design.
Design bottlenecks drift around within this technological body politic,
making it more coherent, more eerily platonic-baroque over time.
If the creation of unrealized refinement potential ever slows, and
exploitation starts to catch up, you can expect the platonic baroque to
become less platonic and more visibly overwrought. The blank canvas will
start to fill up.
Thanks to this eerie collectively created aesthetic coherence, the One
Machine takes on the appearance of subsuming intelligence and
intentionality that suggests visions of a Singularity-AI to some. Whether
this is a case of anthropomorphic projection onto a smooth facade beneath
which unreliability-driven collapse lurks, or whether there is an emerging
systemic intelligence to the process, is something I still havent made up
my mind about. If you've been following my writing, you know that at the moment, I lean towards the collapse interpretation. Darwinian evolution as refined complexity created by a blind watchmaker is too much of a precedent to ignore.
At more mundane levels, the baroque unconscious creates a critical
shift in the nature of engineering: the pull of under-exploited refinement
surplus is so strong that nominally less useful things that exploit the
surplus can diffuse far faster, and suck away resources far faster, than
nominally more useful things that ignore it.
All you need is a human behavior with potential for escalating
addiction. You can then move as fast as the refinement surplus will allow. I
explored this idea in The Milo Criterion.
Ignoring this leads to the classic entrepreneurial mistake: attempting
to build useful things instead of things that exploit refinement surplus. The
most high-impact technologies of the day are almost never whatever the
wisdom of the day identifies as the most potentially useful ones. They are
the ones that can spread most rapidly through The One Machine, mopping
up refinement surplus.
So the best and brightest flock to Facebook or Google, and cancer
remains uncured. Again, I am not sure whether this is a good thing or not. Perhaps, from the perspective of the Dark Lord optimizing the One Machine, now is simply not the right time to cure cancer. One day perhaps, the design bottlenecks will drift to that corner of the technological Web. Until then, we'll have to content ourselves with
doctors who tweet during surgeries and webcast the proceedings, but still
cannot cure cancer.
I'll stop here for now. This post has been something of a stream-of-consciousness expression of my own baroque-unconscious addicted-maker tendencies. But then, I figure I can allow myself one of these self-indulgent posts every once in a while. Especially since my birthday is
coming up in a couple of days.

The Bloody-Minded Pleasures of Engineering


September 1, 2008
Welcome back. Labor Day tends to punctuate my year like the eye of
a storm (I've been watching too much Hurricane-Gustav-TV). For those, like me, who do not vacation in August, it tends to be the hectic anchor month for the year's work. On the other side of Labor Day, September
brings with it the first advance charge of the year to come. The tense
clarity of Labor Day is charged with the urgency of the present. There is
none of the optimistic blue-sky vitality of spring-time visioning. But
neither is there the wintry somnolence and ritual banality of New-Year-Resolution visioning. So I tend to pay attention to my Labor Day thoughts.
This year I asked myself: why am I an engineer? The answer I came up
with surprised me: out of sheer bloody-mindedness. In this year of viral
widgetry, when everyone, degreed or not, became an engineer with a click
on an install-this dialog on Facebook, this answer is important, because
the most bloody-minded will win. Here is why.
***
Why Engineer?
There are three answers that preceded mine (out of sheer bloody-mindedness).
Engineering outgrew its ancestry in the crafts, and acquired a unique
identity, around the turn of the century. Between about 1880-1910, as
engineering transformed the world with electricity, steam and oil, the
answer to the question, "Why engineer?" was an officiously triumphalist one:
to conquer nature and harness its powers for humanity. Then, as World
War I and II left the world with the mushroom cloud as the enduring
symbol of engineering, the answer went from apologetic and defensive to
subtle. Samuel Florman, in his 1976 classic The Existential Pleasures of
Engineering, reconstructed engineering as primarily a private,
philosophical act. The social impact of engineering was the responsibility,
he suggested, of all of society. Making his case retroactive, he suggested

that the triumphalist answer was largely an imputed one: part of a social
perception of engineering that was mostly manufactured by non-engineers.
Florman's answer to "Why engineer?" can probably be reduced to "because it helps me become me."
Curiously, this denial of culpability on the part of engineers was
largely accepted as legitimate. Possibly because it was true. As James
Scott argues brilliantly in Seeing Like a State, to the extent that there is
blame to be assigned, it attaches itself rather clearly to every citizen who
participates in the legitimization of a state. Sign here on the social
contract; well try to make sure bullies dont beat you up; you consent to
be governed by an entity the State with less than 20/20 vision; you
accept your part of the blame if we accidentally blow ourselves up by
taking on large-scale engineering efforts.
So the first shift in the Big Answer, post-WWII (let's arbitrarily say 1960), was the one from triumphalist to existential. The third answer, which succeeded the existential one around 1980, was the ironic one. The ironic rhetorical non-answer goes, in brief, "Why not?"
***
Let's return for a moment to the surging waters pounding the levees of
New Orleans as I write this. Levees are a symbol of that oldest of all
engineering disciplines, civil engineering. As I watch Hurricane Gustav
pound at this meek and archaic symbol of human defiance, with anxious
politicians looking on, it is hard to believe that we ever had the hubris to
believe that we could either discipline or destroy nature. The
environmentalists of the '90s and the high modernists of 1910 were both
wrong. They are as wrong about, say, Facebook, as they were about the
dams and bridges of 1908.
This isn't because technology cannot destabilize nature. It is because nature does such a bang-up job on its own. For every doomsday future we make possible (say, nuclear holocaust or a nasty-minded, all-conquering post-Singularity global AI) nature cheerfully aims another asteroid at
Earth. I was particularly amused by all the talk of the Large Hadron

Collider possibly destroying the planet. I don't understand the physics of the possibility, but I suspect there is an equal probability that nature will
randomly lob a black hole at us.
We are not capable of being the stewards of nature, any more than we
are capable of mastering it. The most damage we are likely to do is just
destroy ourselves as a species, after which Nature will probably shed a
tear at the stupidity of a disobedient child, and move on.
The ironic answer then, is based on two observations. The first
observation is that the legitimacy of the lets-preserve-nature ethic, at least
as an objective, selfless stance, is suspect. Postmodernist critiques of
simple-minded environmentalism have been around for a while, and it
seems to me that the takeaway is that the only good reason to have
environmental concerns is a selfish one: to save ourselves. And maybe
to be nicer to the cows in our factory farms.
The other observation that leads to the ironic answer is that, unlike in
Florman's time, no credible person today is asking the question "why engineer?" in the sort of accusatory tone that engineers endured in the '70s. Nobody is suggesting a "return to nature" in the sense of an ossified, never-changing Garden-of-Eden stable ecosystem. An entire cyber-generation has grown up with William Gibson as its saint. And this
generation, rather shockingly, is the first human generation that
understands at a deep, subconscious level that there is no such thing as
technology. It is all nature. Herbert Simon may have been the first to
articulate this idea at an intellectual level, but the Millennials are the first generation to get it at gut level. Even if they haven't read Gibson, the irony of creating a Facebook group to "Save the Rainforests By Abandoning Technology" isn't lost on them.
So the ironic answer to "why engineer?" is really one that does not differentiate technology at all from, say, art, science or any other human endeavor or natural phenomenon. The rhetorical non-answer, "why not?", is
not quite as shy, private and retiring as the existential one. And the point of
this rhetorical non-answer is not (just) to create a decentered debate about
engineering, but to legitimize a view of engineering as a socially-engaged
aesthetic enterprise.

The iPod, perhaps, is the apotheosis of ironic engineering. It is because it can be, and because Steve Jobs chose to make it be. Its
utilitarian inevitability (something like it had to disrupt the music industry)
is overwhelmed by its overweening sense of Big-D Design; the aspects of
it that didn't have to be. By being so essentially artistic, the iPod
reductively defines technology as art. Which is why the ironic answer
fails.
And its child, the iPhone, is a symbol of the end of the short, few-decades-long age of Ironic Engineering. As another iconic designer of our times, James Dyson, said, "I have an iPhone and a BlackBerry. And I have to confess that I use the BlackBerry more." Steve Jobs, for all his
phenomenal creativity, seems to be missing an essential idea about what
technology is.
So ironic engineering will not do. The raison d'être of engineering
cannot be borrowed from art or science (both of which, I think, may truly
be at a terminally ironic stage).
***
C. P. Snow (he of the Two Cultures fame) was wrong. There aren't
two opposed cultures in the world today. There are three. Besides the
sciences and the humanities, engineering represents a third culture. One
that is only nominally rooted in the epistemic ethos of the sciences and the
design ethos of the fine arts. At heart, engineering is a wild, tribal,
synthetic culture that builds before it understands. Its signature
characteristics are quantity, energy and passion. By contrast, the dominant
characteristics of science and the humanities are probably reason and
emotion. Nature, in an editorial dated 22nd June 2006, titled "The Mad Technologist," discusses this very subtle distinction:
We find that pure scientists are often treated kindly by
film-makers, who have portrayed them sympathetically, as
brooding mathematicians (A Beautiful Mind) and heroic
archaeologists (Raiders of the Lost Ark). It is technology
that movie-makers seem to fear. Even the best-loved
science-fiction films have a distinctly ambivalent take on it.

Blade Runner features a genetic designer without empathy


for his creations, who end up killing him. In 2001: A Space
Odyssey, computers turn against humans, and Star Wars
has us rooting for the side that relies on spiritual power
over that which prefers technology, exemplified by the
Death Star.
Science is content to poke at nature with just enough force to help
verify or falsify its models. The humanities, to the extent that they engage
the non-human at all, through art, return quickly to anthropocentric self-absorption, with entirely human levels of energy and passion.
Engineering asks, for the hell of it, just how powerfully can I mess with the world, in all its intertwined natural and artificial beauty? Sometimes (and this is why engineering is sometimes the agnostic force that the Hitlers and Saddams co-opt) the most interesting answer is "blow it up."
***
Let's return to today, and the it-idea of the Singularity. To the idea that
some form of artificial intelligence might surpass human intelligence
(scroll to the end of this earlier piece for some pointers on this interesting
topic).
Here is a simple illustration of the sorts of reasoning that make people
panic about a Googlezon global intelligence taking over the world. Start
with the (reasonable) axiom that it takes a smarter person to debug a
computer program than to write it in the first place. Conclusion: if the
smartest programmer in the world were to write a flawed program, nobody
will be able to debug it. If it happens to be some sort of protean, self-reconfiguring, critical-to-the-Internet sort of program, it might well trigger
the Singularity.
This particular line of reasoning is suspect (a too-complex-for-anyone-to-debug program is far more likely to acquire entropy than
intelligence), but the overall line of thinking is not. The idea that the
connected beast of technology might become too complex to manage is a

sound one. I personally suspect that in this sense, the Singularity actually
occurred with the invention of agriculture.
So contemplate, as an engineer (and remember, this includes anyone who has ever chosen to install a Facebook widget), this globe-spanning beast called nature+technology (or nature-including-technology).
It has a life of its own, and it is threatening today to either die of a
creeping entropy that we aren't smart enough to control, or become
effectively sentient and smarter than us.
How can you engage it productively?
By being even more creatively-destructive than it is capable of being
without human intervention. Bloody-minded, in short.
***
Let me make it more concrete. Imagine engineers from 1900, 1965,
1995 and 2008 (time-ported as necessary) answering the question "why are you an engineer?" within the 2008 context.
1900-engineer: I thought it was to make the world a better place, but
clearly technology is so complex today that any innovation is as likely to
spawn terrorism or exacerbate climate change as it is to improve our lot. I
quit; I will become a monk.
1965-engineer: I thought I was doing this to self-actualize within my
lonely existence, but clearly engineering in 2008 has become as much self-indulgent art as engagement of the natural world. I will not write a
Facebook widget. I will become a monk.
1995-engineer: I thought I did it for the same reasons that drive that
guy to make art and that other guy to do science, but it seems like
whatever I do, be it designing a fixture or writing a piece of code, I am
fueling the emergence of this strange Googlezon beast. That's scarily large
and impactful. It changes reality far more than any piece of art or science
could, and I want no part of it. I am off to become a monk.

2008-engineer: Crap! This will either blow up in our faces or it will be the biggest thrill-ride ever. Awesome! Lemme dive in! Carpe Diem.
***
I spent ten days in August in California, mostly in the Bay Area. It is a
part of the world that cannot be matched for the sheer obscenity of its
relentlessly positive technological energy. There is none of the sense of the
tragic that pervades the air on the East Coast.
California is full of people who are cheerfully bloody-minded about
their engagement of technology.
Here is a thought experiment about these curious folks. Imagine that a
mathematician proved conclusively that a particular type of Uber-Machine
was the most complex piece of technology theoretically possible. Call this
the Uber-Machine theorem. Maybe the Uber-Machine is the theoretically
most complex future-Internet possible, powered by the theoretically mostcomplex computer chip within its nodes.
Nothing more complex, intelligent or capable is theoretically possible.
But there is a corollary. The theorem also implies that it is possible to
make a different kind of ultimate artifact, call it Uber-Machine B. One that
annihilates the Universe completely. Maybe Uber-Machine B is some
descendant of the Large Hadron Collider, capable of provably destroying
the fabric of space-time.
Which would you choose to help build? Secretly, I believe the
bloody-minded technologists (and I am among them) would want to build
Uber-Machine B because it represents the most impact we could ever have
on reality. Uber-Machine A would depress us as representing a
fundamental plateau.
There is even a higher morality to this. Technology-fueled growth (what Joel Mokyr called Schumpeterian growth) is the only kind of growth, towards the unknown, that leaves open the possibility that we may solve the apparently intractable problems of today. The cost is that we may create the truly intractable problems of tomorrow: civilizational death-forces that we may have to accept the way we accept the inevitability of our individual deaths. Maybe we've already created these problems.
And that is why bloody-mindedness is the only defensible motivation
for being a technologist today. You may delude yourself with culturally
older reasons, but this is the only one that holds up. It is also the only
reason that will allow you to dive in without second-guessing yourself too
much, with enough energy to have any hope of having an impact. Because
the people shaping the technology of tomorrow aren't holding back out of
fear of (say) green-house emissions from large data centers.
***
Alright. Holiday over. Back to recycling tomorrow.

Towards a Philosophy of Destruction


July 21, 2008
Somewhere in the back of our minds, we know that creation and
growth must be accompanied by destruction and decline. We pay lip
service to this essential dichotomy, or attempt to avoid it altogether, by
using false-synthesis weasel words like renewal. I too have been guilty of
this, as in this romanticized treatment of creative destruction (though I
think that was a fine piece overall). Though I define innovation as
creative destruction in the sense of Schumpeter, most of the time I spend
thinking about this subject is devoted to creativity and growth. The
reasons for this asymmetry are not hard to find. Destruction is often
associated (and conflated) with evil. More troubling, it is often
associated with pain, even if there is no evil intent involved. Finally,
destruction (let's loosely define it as any entropy-increasing process) is also more likely to happen naturally. It therefore requires less deliberate
attention, and is easier to deny and ignore. Still, the subject of destruction
does deserve, say, at least 1/5 the attention that creation commands. A
thoughtful philosophy of destruction is essential to a rich life, at the very
least because each of us must grapple with his/her own mortality. So here
is a quick introduction to non-evil destruction, within the context of
business and innovation. Before we begin, lodge this prototypical example
of creative destruction, the game of Jenga, in your mind.

Destruction in Business and Innovation


It is relatively easy to separate out obviously evil destruction (Hitler,
9/11). It is also easy to separate out non-evil and non-painful destruction
(demolition of unsafe, derelict buildings, controlled burns to contain the
risk of forest fires). Here are three gray-area examples:

Version 6.2 of your company's great software, everybody recognizes, represents an end-of-life technology (for example, a famous product beginning with V and ending in ista). Layers of band-aid bug-fixes and patches have destroyed the architectural integrity of the original product. You must make the painful decision of completely discarding it and starting with a clean-sheet design based on a more advanced architecture. Maybe
some key employees, for whom the product represents their life
work, and who still believe in it, quit in bitter disappointment,
seething with a sense of betrayal.
Widget Inc. has an old legacy product A that is nearing the end
of its design life. Most new investment is going towards a new
product B, that requires a completely new set of business and
technical competencies. There is buzz and excitement around
B, while A is surrounded by an atmosphere of quiet despair.
You gradually stop hiring A-relevant skills and increase hiring
of B-relevant skills. A population of employees, too old or too
set in its ways to learn new skills, is left providing legacy
system support as the product slowly dies out of the economy.
As CEO, you eventually offer a very attractive trade-in program
to the few remaining customers, stop support, and lay off the
few remaining employees who don't adapt.
What do you think of all the great (including life-saving)
technology that came out of both the Allied and Axis sides of
World War II (radar, microwaves, rocketry, computing, jet
engines, the Volkswagen Beetle)? Is it morally possible to
appreciate these technologies without condoning Hitler?

These examples illustrate the complexity of thinking about destruction. All reasonable people, I suspect, try to simplify things and operate with an attitude of kindness and gentleness. But does the world
always allow our actions to be kind or gentle?
The Phenomenology of Destruction
Creation and growth can be gradual, steady, linear and calm, but this
is rarely the case. More often, we either see head-spinning Kool-Aid
exponential dynamics, critical-mass effects, tipping points and the like. Or
slowing, diminishing-returns effects. Steady progress is a myth.
Destruction is the same way. We'd like all destruction to be strictly necessary, linear and peaceful. That's why phrases like "graceful degradation" are engineering favorites. That's why my friend and animal rights activist Erik Marcus champions "dismantlement" of animal agriculture rather than its "destruction." The world, unfortunately, rarely behaves that
way. Our rich vocabulary around destruction is an indication of this:
decay, rot, neglect, catastrophe, failure mode, buckle, shatter, collapse,
death, life-support, apocalypse. Destruction isn't this messy simply
because we are unkind or evil. Destruction is fundamentally messy, and
keeping it gentle takes a lot of work.
I once read that nearly 70% of deaths are painful (no clue whether this
is true, but much as my first experience of euthanasia hurt, I still believe in
it). Reliability engineering provides some clues as to why this is so:
IEEE Spectrum had this excellent cover story a few years ago, analyzing
biological death from a reliability engineering perspective. The shorter
version: complex systems admit cascading, exponentially-increasing
failure modes that are hard to contain. Any specific failure can be
contained and corrected, but as failures pile on top of failures, and the
body starts to weaken and destabilize overall as a system, doctors can
scramble, but eventually cannot keep up. The shortest version: "He died of complications following heart surgery."
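The cascading dynamic is easy to sketch as a toy simulation; this is my own made-up model (the rates and thresholds are arbitrary), not anything from the IEEE Spectrum article. Each broken component adds stress that raises the failure rate of the survivors, while repair capacity stays fixed, so past a tipping point the repairs can never catch up.

```python
import random

def simulate(components=100, repairs_per_step=2, base_rate=0.01, stress=0.02, steps=2000):
    """Toy cascading-failure model: every broken component raises the failure
    probability of the survivors, while repair capacity stays fixed."""
    broken = 0
    for t in range(steps):
        p = base_rate + stress * broken                        # damage breeds more damage
        new_failures = sum(random.random() < p for _ in range(components - broken))
        broken = max(0, broken + new_failures - repairs_per_step)
        if broken >= 0.9 * components:
            return t                                           # effectively dead at step t
    return None                                                # survived the simulated horizon

random.seed(0)
print([simulate() for _ in range(10)])
# With these made-up numbers, each run typically holds steady through a long
# stretch of contained failures, then tips over and collapses abruptly.
```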
Jenga as Metaphor
The game of Jenga illustrates why it is so hard to keep destruction to
linear-dismantlement forms. Once you throw in an element of creation in
parallel (removing blocks and stacking them on top to make the tower

higher), you are constrained. If you had the luxury of time, you could
unstack all the blocks carefully, and restack them in a taller, hollow
configuration with only 2 bricks per layer. That's graceful reconstruction.
The world rarely allows us to do this. We must reconstruct the tower while
deconstructing it, and eventually the growth creates the kind of brittle
complexity where further attempts at growth cause collapse.
Milton, the real star of Office Space, provides a more true-to-life
example of the Jenga mode of destruction.

Remember how Lumbergh gradually took away Milton's work and authority, degraded his office space, took him off the payroll, stole his stapler and consigned him to the basement? When Milton ultimately snaps, he burns down the office. He escapes to a tropical island paradise with a lot of loot, but his victory does not last: waiters ignore his drink
requests, causing him to mumble about further arson attempts.
In less dramatic forms, you can observe similar dynamics in any
modern corporation. Look away from the bright glow of the new
product/service lines and exciting areas with plenty of growth and cool
technology. Look into the darkness that defines the halo around the new,
and you'll see the slow undermining and ongoing multi-faceted
destruction of the old. Resources are moved, project priorities are lowered,
incentives are handed out to the participants in the growth. Things crumble, with occasional smaller and larger collapses. Watch closely, and
you will feel the actual pain. You will participate in the tragedy.
If you happen to be part of new growth, recognize this. One day, a
brighter light will put you in the shadows, and you will have to face the
mortality of your own creations. One of my favorite Hindi songs gets at
this ultimately tragic, Sisyphean nature of all human creation:
Main pal do pal ka shayar hun, pal do pal meri kahani hain
pal do pal meri hasti hai, pal do pal meri jawaani hain
Mujhse pehle kitne shayar, aaye aur aa kar chale gaye
kuch aahe bhar kar laut gaye, kuch naghme gaa kar chale gaye
woh bhi ek pal ka kissa they, main bhi ek pal ka kissa hun
kal tumse juda ho jaoonga, jo aaj tumhara hissa hun
Kal aur aayenge naghmo ki, khilti kaliyan chunne wale
Mujhse behtar kehne waale, tumse behtar sunne wale
kal koi mujhko yaad kare, kyon koi mujhko yaad kare
masroof zamaana mere liye, kyon waqt apna barbaad kare?
Which roughly translates to the following (better translators, feel free
to correct me):
I am but a poet of a moment or two, a moment or two is as
long as my story lasts
I exist but for a moment or two, for a moment or two does
my youth last
Many a poet came before me, they came and then they
faded away
they took a few breaths and left, they sang a few songs and
left
they too were but anecdotes of the moment, I too am an
anecdote of a moment
tomorrow, I will be parted from you, though today I am a
part of you

And tomorrow, there will come other pickers of blooming flower-songs
Poets who speak more eloquently than I, listeners more
sophisticated than you
Were somebody to remember me tomorrow why would
anybody remember me?
this busy, preoccupied world, why should it waste its time
on me?
Life After People
It seems likely that the universe at large is a place of destruction-by-entropy. Yet, on our little far-from-equilibrium home here on earth, the picture, at least for a few millennia, is one of renewal,
emphasizing creation over destruction.
The History Channel recently aired a show about what would happen
to our planet if all humans were to suddenly vanish. There is also a
brilliant book devoted to this thought experiment, which I am currently
reading.
Though the events in both the show and book are largely about how
human-created reality would collapse, the overall story is an uplifting one
of growth and renewal, as nature (not as brittle and in-danger as we like to think) gradually reclaims the human sphere.

Creative Destruction: Portrait of an Idea


February 6, 2008
The phrase "creative destruction" has resonated with me since I first heard it, and since then, it has been an organizing magnet in my mind for a variety of ideas. I was reminded of the concept again this weekend while reading William Duggan's Strategic Intuition, which mentioned Joseph Schumpeter as a source of inspiration. Visually, I associate the phrase most with Escher's etching, Liberation, which shows a triangular tessellation transforming into a flock of birds. As the eye travels up the etching, the beauty of the original pattern must be destroyed in order that the new pattern may emerge.

I don't know when I first heard the phrase, but I first used it in the
frontispiece of my PhD thesis. Here are the three quotes I put there, back
in 2003, when I was searching for just the right sort of imagery to give my
research the right-brained starting point it needed. My first quote was a
basic, bald statement due to Schumpeter:
Creative Destruction is the essential fact about capitalism.
Joseph Schumpeter, Capitalism, Socialism, and Democracy
I followed that up with a Rabindranath Tagore bit that I'd found somewhere (update: Googling rediscovered the somewhere on the frontispiece of Hugo Reinert's draft version of a paper on Creative Destruction, which seems to have finally appeared in the collection Friedrich Nietzsche: Economy and Society), and for which, to this day, I haven't found a citation (update: Hail! Google Books; a work-colleague, Tom K., dug the reference out for me; the extract is from Brahma, Vishnu, Siva, which appears in Radice's translation of selections from Tagore; so much for the detractors of Google's book-scanning project: plain Googling did not get me the source).
From the heart of all matter
Comes the anguished cry
Wake, wake, great Siva,
Our body grows weary
Of its law-fixed path,
Give us new form
Sing our destruction,
That we gain new life
Rabindranath Tagore
And concluded the Grand Opening of my Immortal Thesis with a
dash of Nietzsche:
[H]ow could you wish to become new unless you had first become
ashes!
Friedrich Nietzsche, Thus Spake Zarathustra

Curiously, I haven't read any of these (something I don't mind admitting, since I actually read a lot more of the books I quote than most
people). For me creative-destruction has always been a right-brained sort
of thing. In fact I almost titled my thesis "The Creation and Destruction of Teams" but I decided that was way too ponderous and self-important, even for me, and settled for the more prosaic "Team Formation and Breakup in Multiagent Systems." But throughout the process of doing the research
and writing up the results, the metaphor of creative destruction and the
associated imagery was in my mind. Sometimes I dreamed of swarms of
airplanes making and breaking formation (formation flight was one of the
applications I worked on).
But looking further back, I can see that my first serious infatuation
with the idea goes further back to a beautiful Urdu poem by Ali Sardar
Jaffri, Mera Safar, ably translated by Philip Nikolayev. Nowhere else have
I encountered the idea captured with such poetic precision. If you know
Hindi/Urdu, reading the original is well worth it.
The infatuation continues: creative destruction is at the heart of my
latest research at work.
A Possible History of the Idea
A long time ago, I read on the Web one speculative history of the idea
of creative destruction that traced it from a particular form of a school of
Saivite philosophy called Kashmir Saivism, through Schopenhauer,
through to Nietzsche and finally to the most familiar name associated with
it today: Schumpeter. It is curious that this abstract idea went from
religious philosophy, through metaphysics and finally to economics. I
wouldn't be surprised if this story were apocryphal: the basic abstraction of renewal and change as continuous creation and destruction is pretty elemental, and I'd expect it to have been rediscovered multiple
times.
The idea is certainly a favorite in classical Indian metaphysics, and mostly approached through the metaphor of Siva (usually characterized as the destructive aspect of the creator-preserver-destroyer trinity of Brahma-Vishnu-Siva, but, I am told by more knowledgeable people, better understood as symbolizing continuous renewal through creative-destruction). Alain Danielou seems to have
written a lot on this topic, including a book relating Siva to Dionysus, and I've seen references to the idea in comments by people like Camille
Paglia.
Elsewhere, both Hegel and Nietzsche seem to have had this idea in
their head (the former particularly through what we now know as the
Hegelian dialectic). I suspect it is also at the heart of the methodological
anarchy model of discovery proposed by Feyerabend in Against Method.
In short, there is probably enough to this idea to fuel a dozen PhDs.
But curiously, I've always felt a little reluctant to go read all this stuff. To me, the idea of creative destruction is so fundamental and basic, practically axiomatic, that I am wary of contaminating my raw intuition of
the idea with a lot of postmodern (or for that matter, Vedantic) verbiage. In
a way, monosyllabic gym-jocks making cryptic remarks about muscles
being torn down and rebuilt stronger get it better than academics do. I fear
the magic of the idea may disappear if I over-analyze it.

Part 3:
Getting Ahead, Getting Along,
Getting Away

Getting Ahead, Getting Along, Getting Away


June 13, 2012
Sometimes I think that if I were much more famous, female and in
Hollywood instead of the penny theater circuit that is the blogosphere, I'd
be Greta Garbo. Constantly insisting that I want to be left alone while at
the same time being drawn to a kind of work that is intrinsically public
and social. Simultaneously inviting attention and withdrawing from it.
Which I suppose is why ruminations on the key tensions of being a
self-proclaimed introvert, in a role that seems better suited to extroverts,
occupy so much bandwidth on this blog. That's the theme of this third
installment in my ongoing series of introductory sequences to ribbonfarm
(here are the first two). This is the longest of the sequences, at 21 posts,
and also has the most commentary. So here you go. I hope this will be
useful to both new and old readers.
Future of Work The Human Condition
This sequence probably represents the single biggest category of
writing on ribbonfarm. It originally started out with several posts on the
Future of Work theme, which was a popular blogosphere bandwagon
around 2007-08, when I was still half-heartedly trying various
bandwagons on for size.
Though I had a few modest hits in that category, it took me a couple
of years to realize that I was fundamentally not interested in the subject of
work per se. I was primarily interested in work as a lens into the human
condition.
Once I realized that, the writing in this category got a lot more fluid,
and I got off the bandwagon. I still use work as the primary approach
vector, rather than relationships or family, since I think in the modern
human condition, work is the most basic (and unavoidable) piece of the
puzzle.

The best tweet-sized description of the human condition I've encountered is due to personality psychologist Robert Hogan: getting along and getting ahead. To this I like to add the instinct towards self-exile and perverse (for our species) seeking out of solitude: getting away.
So I've divided the selections into three corresponding sections. Here's the sequence. There's a little more commentary at the end.
Getting Ahead
1. The Crucible Effect and the Scarcity of Collective Attention
2. The Calculus of Grit
3. Tinker, Tailor, Soldier, Sailor
4. The Turpentine Effect
5. The World is Small and Life is Long
Getting Along
1. My Experiments with Introductions
2. Extroverts, Introverts, Aspies and Codies
3. Impro by Keith Johnstone
4. Your Evil Twins and How to Find Them
5. Bargaining with your Right Brain
6. The Tragedy of Wiio's Law
7. The Allegory of the Stage
8. The Missing Folkways of Globalization
Getting Away
1. On Going Feral
2. On Seeing Like a Cat
3. How to Take a Walk
4. The Blue Tunnel
5. How Do You Run Away from Home?
6. On Being an Illegible Person
7. The Outlaw Sea by William Langewiesche
8. The Stream Map of the World

Triumph and Tragedy


Since I am not a credentialed social scientist, but frequently stomp
rudely into areas where academic social scientists rule, a few words of
warning and contextualization are in order.
The warning first. It should be clear that my approach to these
subjects is nothing like the academic approach. It is amateurish,
speculative, fanciful (occasionally bordering on the literary or mystical)
and resolutely narrative-driven. Empiricism plays second fiddle to
conceptualization, if it is present at all. And this at a time when narrative is
becoming a dirty word in mainstream intellectual culture. If you like any
of the ideas in the posts above, you would probably be well advised to
hide or disguise the fact that you've gone shopping in an intellectually
disreputable snake-oil marketplace.
Surprisingly though, I don't think those are the most important
differences between the way I approach these subjects and the way
academics do. Criticism I get is more often due to my overall
philosophical stance rather than my lack of credentials or non-empiricist
snake-oil methods.
I approach these themes with a sort of tragic-realist philosophical
stance, while the academic world is going through a seriously positivist
phase that is marked by extreme self-confidence and optimism about its
own future potential for somehow fixing the world. At least for a chosen
few.
Social scientists are going through a period of extreme belief in their
own views and methods. This is most true of behavioral economics, which
exhibits an attitude that borders on triumphalism. The attitude appears to
have spilled over to the rest of the social sciences. Thanks to tools and
concepts like social graphs, fMRI mapping and so forth, a great
mathematization, quantification and apparent empiricization of the social
sciences is now underway. Freud and Jung are in the doghouse. There is a
good chance that Shakespeare and Dostoevsky will follow.

This is not a new kind of attitude, but the last time we saw this kind of
social science triumphalism, it was derivative. The triumphalism of late
19th century engineering triggered a wave of High Modernist social
engineering in its wake that lasted till around 1970. That project failed
across the world and social scientists quickly abandoned the engineers and
turned into severe critics overnight (talk about fair weather friends). But
social scientists today have found a native vein of confidence to mine.
They are now rushing in boldly where engineers fear to tread.
It is rather ironic that much of the confidence stems from discoveries
made by the Gotcha Science of cognitive biases. In case it isn't obvious,
the irony is that revelations about the building blocks of the tragic DNA of
the human condition have been pressed into service within a
fundamentally bright-sided narrative. This narrative (though the believers
deny that there is one) is based on the premise that cataloging and
neutralizing biases will eventually leave behind a rationally empiricist
core of perfectible humanity, free of deluded narratives. One educational
magic bullet per major bias. The associated sociological grand narrative is
about separating the world of the Chosen Ones from the world of the
Deluded Masses, and using some sort of Libertarian Paternalism as the
basis for the former to benevolently govern the latter without their being
aware of it.
I suppose it is this sort of overweening patronizing attitude that leads
me to occasionally troll the Chosen Ones by triggering completely
pointless Batman vs. Joker Evil Twin debates.
Sometimes I feel like going to a behavioral economics conference and
yelling out from the audience, "you're reading the evidence wrong, you morons, it is turtles biases and narratives all the way down; we should be learning to live with and through them, not fighting them!"
Unlike the woman who yelled the original line at an astronomer in the
apocryphal story, I think I'd be right. In this case, anthropocentric thinking
lies in believing that there is a Golden Universal Turing Machine Running
the Perfect Linux Distro at the bottom. There is no good reason to believe
that natural selection designed us as perfect (or perfectible) cores wrapped
in a mantle of biases and narrative patterns.

In my more mean-spirited and uncharitable moments, I like to think of Biasocial Science as an enterprise driven by the grand-daddy of all
biases: the bias towards believing that cataloging biases advances our
understanding of the human condition in a fundamental way that can
enable the construction and enactment of a progressive Ascent of
Quantified Man narrative.
Oh well, I am probably going to be proved wrong. I seem to have a
talent for championing lost causes. Anyway, that warning and
contextualization riff aside, go ahead and dive in. You've been warned of
the dangers.

The Crucible Effect and the Scarcity of Collective Attention
July 21, 2009
This article is about a number I call the optimal crucible size. I'll define this number (call it C) in a bit, but I believe its value to be around 12. This article is also about an argument that I've been unconsciously circling for a long time. Chris Anderson's Free provided me with the insight that helped me put the whole package together: economics is fundamentally a process driven by abundance and creative-destruction rather than scarcity. The reason we focus on scarcity is that at any given time, the economy is constrained by a single important bottleneck scarcity. Land, labor, factories, information and most recently, individual attention, have all played the bottleneck role in the past. I believe we are experiencing the first major bottleneck-shift in a decade. Attention, as an unqualified commodity, is no longer the critical scarcity. Collective attention is: the coordinated, creative attention of more than 1 person. It is scarce and it is horrendously badly allocated in the economy today. The free-agent planet under-organizes it, and the industrial economy over-organizes it. That's the story of C, the optimal size of a creative group. There are seven other significant numbers in this tale: 0, 1, 7, 150, 8, 1000 and 10,000. The big story is how the economy is moving closer to C-driven allocation of creative capital. But the little story starts with my table tennis clique in high school.
A Table-Tennis Story
R and I played table-tennis nearly every day in high school. We were
regular partners in a loose clique of serious players at our club, comprising
approximately a dozen players. The score in nearly every 3-game match
would go something like 21-14, 21-7, 23-22. It wasnt that I was getting
creamed every time; I'd occasionally take a game off R. He was only
slightly better than me, in just about every department, but that all added
up to him beating me nearly every time. He knew his strengths
(defense/offense, forehand/backhand) enough to always pick a better
strategy for each game. He selected his shots better and executed them

better. The net result was that I was beaten mentally and physically. Errors
would accumulate, and I'd invariably choke.
Then one day, I managed to convince S, whose father had been a
state-level champion, to practice with me (there was no point playing, he
would have beaten me 21-0, 21-0, 21-0). S was the sort of calm,
unflappable guy who simply cannot be psyched-out or forced into error.
He had an almost robotic level of perfection in all basic elements of the
game. S put me through half an hour of very basic forehand-to-forehand
top spin practice rallies, and it completely changed my game. After that, I
still mostly got beaten by R, my regular partner (who was fundamentally
more talented than me), but I actually began winning the occasional
match, and all games were a lot closer.
Fast-forward 15 years. At the University of Michigan, I organized an
informal tournament at the residential scholarship house I was living in at
the time. Out of the field of about 8-10, I came in second. Most Americans
in the house fared as well as you'd expect; since they view ping pong as "not really a sport," most of them lack basic skills. I beat most of them
relatively easily, but was beaten pretty handily by a Korean-American guy.
A final data point. About 2 years ago, with rather foolhardy
confidence, I joined in a Saturday afternoon group of serious Chinese
players. The result: I was beaten comprehensively by everybody. In
particular, by a bored, tired-looking 14 year old (clearly first-generation)
who looked like he hated the game and had been dragged there by his
immigrant father.
Collective Attention and Arms Races
Now step back and analyze this for a moment. Table tennis is
primarily information work. It is not among the more physically
demanding games except at the highest levels. My serious table-tennis
clique in an apathetic-to-the-game country, with a lousy athletic culture
(India) got me to a certain level of competence: enough to beat many
casual players in a vastly more athletic country (the US). But a disengaged
kid from the diaspora of an athletic country that is crazy about the game

(China) was able to beat me with practically no effort, despite being far
less interested (apparently) in the game than me.
This little story captures the most essential features of collective
attention. It exists at all scales (from small clique to country to planet).
Within a group that is paying coordinated attention to any information-work domain, skill levels rapidly escalate, leaving isolated individuals far
behind. I call this the arms race effect, and it is a product of a fertile mix of
elements in the crucible: competition, mutual teaching, constant practice
and sufficient, but not overwhelming variety. This is a very particular kind
of attention. It isn't passive consumption by spectators, and it isn't
performance for an audience. It is co-creation of value: that same dynamic
that is starting to drive the entire economy, blurring lines between
producers and consumers.
So our challenge in this article is to answer the question: what is the
optimal size of a creative group? Is country-level attention the best (China and table tennis) or clique (my high school)? Is it perhaps 1 (solo lone-ranger creative blogger)? Our quest starts with the first of our supporting-cast numbers, 10,000. As in the 10,000-hour rule studied by K. Anders
Ericsson and made famous by Gladwell in Outliers.
10,000 Hours and Gladwell's Staircase
Gladwell is a jump-the-gun trend-spotter. He nearly always finds a
uniquely interesting angle on a subject, and nearly always analyzes it
prematurely in flawed ways. That's a story for another day, but let's talk about his latest, Outliers. The basic thesis of the book is that there are all sorts of subtly arbitrary effects in the structure of nurture (Gladwell's way
too smart to play up a naive nature/nurture angle) that make runaway
success a rather unfair and random game of chance. In particular, Gladwell
focuses on a key argument: that to get really good at anything, you need
about 10,000 hours of steadily escalating practice, with opportunities to
take your game to the next level becoming available at the right times.
For instance, due to some weird cutoff-date effects, nearly all top
Canadian hockey players are born in winter (thereby, Gladwell implies,
unfairly penalizing burly talents born in warmer months). This basic
argument is just plain wrong for the simple reason that no human talent is
that specifically matched to particular arbitrary opportunity paths like
hockey. No talented human being is starkly "hockey star" or "schmuck."
There are presumably other things demanding strength and athletic ability
available in Canada and other parts of the world, that have no winter bias
(or perhaps, complementary summer biases). As Richard Hamming put it
eloquently in his famous speech at Bell Labs, "You and Your Research":
"There is indeed an element of luck, and no, there isn't. The prepared
mind sooner or later finds something important and does it. So yes, it is
luck. The particular thing you do is luck, but that you do something is
not."
But that said, Gladwell is on to something. The pattern of increasing
opportunity stage-gates he spotted is real, but most of the arbitrary effects
he talks about (being born at certain times, your university having one of
the first computers, and so forth) are red herrings/minor elements that
confuse the issue. But one effect is not a red herring, and that is the fact
that the staircase of opportunity puts you in increasingly intense crucibles
of collective co-creative attention.
The Distillation Effect
Start with 1728 (12^3) people and let them learn widget-making
in 144 groups of 12, for 3000 hours. Then take the top talent in each group
and make 12 groups of 12, and again let them engage in an arms race for
3000 hours. Then take the final top 12 and throw in another 4000 hours.
With two levels of distillation, you've got yourself a widget-making dream
team. Or a fine scotch. A team that will be leaving the remaining 1716 far,
far behind. You can watch this process accelerated and live today on
America's Got Talent and American Idol. Imagine the same process
playing out more slowly over 20 years. What does that transformation
look like?
That is what is scarce. Collective attention. That's what creates the
10,000-hour staircase-of-opportunities that Gladwell talks about.
Information may want to be free, but live attention from other humans
never will be (AI is a different story).
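To make the distillation arithmetic concrete, here is a minimal Python sketch of the schedule described above. The 12-person group size and the hour counts come from the text; the skill model (skill grows as talent times practice hours) is purely an illustrative assumption, not a claim about how talent actually works.

    import random

    GROUP_SIZE = 12

    def arms_race(group, hours):
        # Crude skill model (assumption): skill grows with practice hours,
        # scaled by a random "talent" draw. Only the selection structure matters.
        return [(skill + talent * hours, talent) for (skill, talent) in group]

    def distill(groups):
        # Take the top performer from each group and re-pool the winners
        # into new groups of 12 for the next round of the arms race.
        winners = [max(group) for group in groups]
        return [winners[i:i + GROUP_SIZE] for i in range(0, len(winners), GROUP_SIZE)]

    population = [(0.0, random.random()) for _ in range(12 ** 3)]   # 1728 novices
    groups = [population[i:i + GROUP_SIZE] for i in range(0, len(population), GROUP_SIZE)]

    groups = [arms_race(g, 3000) for g in groups]   # 144 crucibles of 12
    groups = distill(groups)                        # 12 crucibles of 12
    groups = [arms_race(g, 3000) for g in groups]
    groups = distill(groups)                        # 1 crucible of 12
    dream_team = arms_race(groups[0], 4000)         # the final 4000 hours
    print(len(dream_team), "people distilled out of", 12 ** 3)

The only point of the sketch is the structure: two rounds of collective attention plus selection leave the other 1716 of the original 1728 far behind.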
A note of irony here: Gladwell was also among the first to stumble
across the importance of such dream-team crucibles, in The Tipping
Point. Today, researchers like Duncan Watts have pointed out that viral
effects don't necessarily depend on particularly talented or connected
"special people" (the sort Gladwell called mavens and salesmen). But
special people do have a special role in shaping culture. It is just that
their most important effect isn't in popularizing things like Hush Puppies,
but in actually creating their own value. New kinds of music, science,
technology, art or sporting culture.
This is the signal in the noise, and here is the lesson. Information
work in any domain is like weight training: you only grow when you
exercise to failure. The only source of weight to overload your mental
muscles is other people. And the only people who can load you without
either boring you or killing you are people of approximately the same
level of talent development. And that leads to the question: what happens
when you hit the top crucible of 12 in your chosen field? Where do you go
when there are no more levels (or if you've reached the highest level you
can, short of the top)? That brings us to the next two numbers in our story:
how you innovate and differentiate as a creative.
1 Free Agent and 1000 Raving Fans?
I've hated the phrase "raving fan" since the day I heard it. If you are
not familiar with the argument, Kevin Kelly, who originated the idea,
claims that an individual creative (blogger or musician, say) can
scrape along and subsist in Chris Anderson's Long Tail by attracting
1000 raving fans who buy everything he/she puts out (blogs, books,
special editions, t-shirts, mousepads; 1000 raving fans times $100 per year
per fan is a $100,000 income). Kelly's original adjective is a less-objectionable
"true" rather than "raving," but "raving" has caught on, and
the intended meaning is the same.
This basic model of creative capital is just not believable for two
reasons. First, it reduces a prosumer/co-creation economic-cultural
environment to a godawful unthinking bleating-sheep model of
community. I try to imagine my blog, for instance, as the focal point of a
stoned army of buy-anything idiot groupies, and fail utterly. I would not
want to serve such a community, and I don't believe it can really form
around what I do. I certainly refuse to sell ribbonfarm.com swag.
The second problem is the tacit assumption that creation is
prototypically organized in units of 1. The argument is seductive. The bad
old corporations will die, along with their committees of groupthink. The
brave new solo free agent, wandering in the woods of cultural anarchy,
finds a way to lead his tribe to the promised land of whatever his niche is
about. "Tribe" is a related problematic term that Seth Godin recently ran
amok with.
The reason Kelly (and others like Godin) ends up here is that he
answers my question ("after the dream team, what?") with "individuals
break away, brand themselves and become individual innovators." Kinda
like Justin Timberlake leaving NSync. A dream team of 12, in this view,
turns into 12 soloists. Not that he ignores groups, but his focus is on the
individual.
Individuals vs. Groups
That's not what happens. You cannot break the crucible rule. 12 is
always the magic number for optimal creative production. The reason
people make this mistake is because they draw a flawed inference from the
(correct) axiom that the original act of creativity is always an individual
one. I've talked about this before: I am a believer in radical individualism;
I believe, as William Whyte did, that innovation by committee is
impossible. Good ideas nearly always come from a single mind. What
makes the crucible of 12 important is that it takes a group of
competing/co-operating individuals, each operating from a private
fountainhead of creative individual energy, to come up with enough of a
critical mass of individual contributions to spark major revolutions.
Usually thats about 12 people for major social impact, though sometimes
it can happen with smaller crucibles. These groups aren't the deadening
committees of groupthink and assumed consensus. They are the fertile,
fiercely contentious and competitive collaborators who at least partly hate
the fact that they need the others, but grudgingly admire skills besides
their own.

What happens when you exit the dream team level in a mature
disciplinary game is that you get out there and start innovating beyond
disciplinary boundaries; places where there are no experts and no managed
progression of levels with ritualistic gatekeeper tests. But you don't do
that by going solo. You look for crucibles of diversity, multidisciplinary
stimulation and cross-pollination. But you still need the group of 12 or so,
training your brain muscles to failure.
This gives me a much more believable picture. As a blogger, I am the
primary catalyst on this site, but I am not creating the value solo. If I try to
think of the most valuable commenters on this site, I can think of no more
than 12. My best writing has come from trying to stay ahead of their
expectations, and running with themes they originally introduced me to.
But that's far from optimal, since I am still the dominant creator on this
blog. The closer I get that number to 12 via regular heavyweight
commenters, guest bloggers and mutually-linked blogroll friends (I've
turned my blogroll off for now for unrelated reasons), the closer I'll get to
the optimum. Think of all the significant power blogs: they are all team acts.
Now, I may never get there, and there are multiple ways to get to 12, but the
important thing is to be counting to 12. At work these days, I am pretty
close to that magic number 12, and enjoying myself a lot as a result.
So the important number for the creative of the future is 12, not 1 or
1000. But what about money and volume? Don't we need a number like
1000? Not really. As the creative class matures, you won't really ever find
1000 uncritical sheep-like groupie admirers. That is a relic of the celebrity
era. The real bigger-than-crucible number is not 1000 but 150. Dunbar's
number.
The Dunbar Number and $0.00
Why 150? That's the Dunbar number: the most people you can
cognitively process as individuals (the dynamics are entertainingly
described in the famous Monkeysphere article). That's the right number to
drive long-tail logic. By Kelly's logic though, I have to get to, say,
100,000 casual occasional customers before I find my 1000 raving fans
(1% conversion is realistic).
Face it: there's no way in hell most of us will get there. If I
accidentally did, through this blog, I'd probably erect walls to keep the
scary crowds out somehow. That picture makes sense for almost nobody. I
write long, dense epic posts and don't bother to be accessible. I look to
attract readers who can keep up with me. Unapologetic intellectuals, in
fact, whose own eclectic interests overlap sufficiently with mine to create
the right mix of resonance, dissonance and dissent. In terms of Geoffrey
Moore's classic pair of business models, complex systems (a few high-touch,
high-personalization customers) and volume operations (mass-consumption
stuff), this blog is a complex-systems play. I can (and have)
written posts entirely with one reader-muse in mind. I have more chance
of making a living off 100% of a base of 150 powerful micropatrons than
from 1% of a base of 100,000. The question is: which is actually the right
type of model for the individual creative (in a crucible of 12 similar-minded
others; not selling to each other, but collectively representing a
high-value-concentration crucible)?
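A quick back-of-the-envelope comparison of the two models, as a Python sketch. The 1% conversion and the $100-per-fan figure are the numbers already in this discussion; the per-micropatron figure is an illustrative assumption, not a real price point.

    # Kelly-style volume model: a large casual audience, a small conversion
    # rate, and $100/year from each converted "raving fan".
    casual_audience = 100_000
    conversion_rate = 0.01            # the 1% assumed above
    fan_spend = 100                   # dollars per fan per year (Kelly's number)
    volume_income = casual_audience * conversion_rate * fan_spend

    # Dunbar-style complex-systems model: personal attention to at most 150
    # micropatrons, each worth more because the relationship is individual.
    micropatrons = 150
    patron_spend = 700                # assumed average; only the structure matters
    dunbar_income = micropatrons * patron_spend

    print(volume_income, dunbar_income)   # 100000.0 vs 105000, from wildly different audiences

Similar money, but the first requires finding and processing a hundred thousand strangers, while the second requires sustaining 150 relationships you can actually hold in your head.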
I am going to make a prediction: personalization and customization
will rule. Without that common prefix of the day, "mass-". Mass
customization/personalization is a good model for Enterprise 2.0, but
individual creatives have a far better chance of creating an economically
sustainable lifestyle by paying close individual attention to 150 people
than by selling the same thing to 100,000 and hoping 1% of the sheep
convert to your religion. This isn't to say that volume games can't
succeed. But it isn't the way most people will succeed, because the
numbers will not add up. Can you really imagine a significant proportion
of the world's information worker/creative class being able to draw
100,000 unique visitors per month to their blogs, most of whom will be
other creatives trying to build their own 100,000/1000?
The 100,000 base argument can be safely ignored for most of us.
And that's what I've done to most of the 62,000 unique visitors Google
Analytics tells me have visited this blog since I opened up shop in July
2007. An overwhelming majority of them bounced away before I could
even say "Hi!" Some read one article and never came back, leaving only
an IP address behind. In an age where superhits and celebrities are on their
way out, that's what any crowd of ~100,000 will do. Your actual goal as a
creative today is to find and keep your 150, to whom you pay individual
attention. Pass-through crowds don't deserve much attention. In fact, the
monetary value of your transaction with them is exactly $0.00. Anderson
hammered home the point that to the masses, the right price for your work
is $0.00, but he didn't address the flip side. They are also worth only $0.00
to you on average. Which means you should put no marginal effort into
pleasing them. If one of them finds something you did for your 150 useful,
let them have it. You get paid in word-of-mouth, they get free stuff. Small
serendipitous barter transaction. Aggregate over 100,000 and the net hard-dollar
value is still 100,000 x $0 = $0. The barter is non-zero-sum, but
doesn't pay your rent.
Personal Economic Neighborhoods
By carefully curating your Dunbar neighborhood of at most 150 (in
practice, likely much less), in collaboration with your crucible of 12 (each
curating their own 150-neighborhoods, with a good deal of overlap),
through actual personal attention, you create the foundation for your life as
a cultural creative and information worker. Free agency is an important
piece of this, but don't dismiss traditional economics: a good part of your
150 is likely to remain inside the formal organizations you are part of.
The Kelly number, 1000, is important, but not in his sense. If you and
your crucible of 12 are creating value in a loose coalition, and each have a
150 circle with some high-value overlap, the total is probably near 1000.
So that's 12 people sharing a community of 1000, each of whom gets
personal attention from at least 1 of the 12. The members of the 1000 get
the overhead savings of finding more than 1 useful, personally-attentive
creator in one place.
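A small simulation makes the overlap arithmetic plausible. The pool size here (the shared milieu the 12 draw their contacts from) is an illustrative assumption; only the qualitative result matters.

    import random
    from itertools import combinations

    random.seed(0)
    CRUCIBLE = 12
    DUNBAR = 150
    SHARED_POOL = 1500   # assumed size of the common milieu the 12 recruit from

    # Each member of the crucible curates a Dunbar neighborhood drawn from a
    # common pool, so the neighborhoods overlap instead of being 12 disjoint 150s.
    neighborhoods = [set(random.sample(range(SHARED_POOL), DUNBAR))
                     for _ in range(CRUCIBLE)]

    community = set().union(*neighborhoods)
    overlaps = [len(a & b) for a, b in combinations(neighborhoods, 2)]

    print(len(community))                  # roughly 1000-1100, not 12 * 150 = 1800
    print(sum(overlaps) / len(overlaps))   # average pairwise overlap: ~15, i.e. double digits

With these (made-up) numbers, the shared community comes out near 1000, and any pair of the 12 shares a double-digit number of acquaintances, which is the critical-mass test described next.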
Count the 12 most valuable co-creators you work with. Now consider
the overlap in your Dunbar neighborhoods. If the average level of overlap
isn't in the double digits (the actual set-theoretic math is tricky), you
probably haven't reached critical mass yet. Guess where you can still find
such critical mass today? Inside large corporations. Any pair of people in
my immediate workgroup of around 12 can probably find 20-30 common
acquaintances. Our collective personalized-attention audience is
probably around 1000. Large corporations still allocate collective attention
pretty badly (they hit the numbers, but get the composition wrong), but
still do a better job than, say, the blogosphere. But the free-agent nation is
catching up rapidly. The wilderness is becoming more capable of
sustaining economics-without-borders-or-walls every day.
So how will you create and monetize your Dunbar neighborhood? By
definition, there are no one-size-fits-all answers, because the point of
working this way is that you'll find opportunities through personalized
attention. Not a great answer, I know, but still easier for most of us than
dreaming up ideas that can net 100,000 regulars of whom 1000 turn into
raving fans.
8: The Maximal Span of Control
We've argued that the optimal crucible size must be greater than 1 and
less than 150, but we still haven't gotten to the reasoning behind 12 rather
than 30 or 5. Another number will help get us there: 8, the upper end of the
range of a number known as the span of control. The number of direct
reports a manager can effectively handle, and still keep the individualized
calculus of interpersonal relationships tractable.
What happens when you exceed the span of control? You get
hierarchies. You cannot organize complex, coupled work (think space
shuttle) requiring more than 8 people in a flat structure. But here's the
dilemma: between 9 and 15, if you split the group into 2, you may get high
overhead and micromanagement by managers with too little to do, and
other pathologies. So between the limit of a single manager's abilities, and
the optimal point at which to force cell division, ontogeny and
organization, you get a curious effect: the edge of control. Single-manager
structures fail, but team chemistry can take over. The whole thing is just
barely in control, and teetering on chaos.
Should sound familiar. Those are the conditions, complexity theorists
have been telling us for decades, that spark creative output. More than 8,
less than 16. Why 12, besides being a nice mean? Anecdotal data.
The Ubiquity of 12
I hope you are too smart to conclude that I am making 12 a number of
religious significance. It is simply the mean of a fairly narrow distribution.
Still, it turns up in a surprising number of creative crucible places in
practice:
1. The dirty dozen (alright, there were also the 7 samurai)
2. Juries (creative judicial decision-making)
3. Teams in cricket and soccer (~12)
4. The number of apostles required to start a major religion
5. The approximate size of famous cliques of mathematicians,
scientists, engineers, philosophers, writers and so forth.
6. Ideal class sizes in education
7. G-8, G-12
8. Ensemble casts (Friends, Seinfeld, counted with frequent
regular side characters who appear often enough that you
recognize them).
9. Improv comedy groups (typical size of a generation of SNL
regulars).
(I believe there is some research related to the Dunbar number
that actually talks about how small-world groupings, where you really
intimately know the others in the group, tend to be around 12. All our craze
for weak links has tended to distract us from the fact that the small world
is small in 2 ways: the maximum distance on the social graph between 2
nodes being 6, and the fact that we cluster in small groups.)
The Magic Number 7
Let's get to the last of our big list of numbers. Seven. As in, Miller's
famous magic number: the number of unrelated chunks of information you
can manage in your short-term memory at a given time, and a big implicit
hidden variable in everything we've talked about so far. It's why lists of 7
are effective. So let's make up a list of 7 to summarize the key concepts so
far.
1. Collective Attention: a group of people paying attention to the
same thing, with the group size varying from 2 to 6 billion.
2. Arms Race: The effect by which groups paying collective
attention to something force individuals within the group to
rapidly improve their skills and separate the group from
outsiders.
3. Mental Exercise-to-Failure: The fact that only people close to
your talent level can load your mind in ways that cause you to
grow.
4. Crucible: The optimal-sized creative group. Stages of crucibles
reach successively higher plateaus and culminate at the dream-team
level, beyond which lies innovation.
5. Innovation: The graduation of a creative from a dream-team
level in a disciplinary game to a more diverse and unstructured
type of crucible, with few rules.
6. Dunbar personalization: The idea that you are more likely to
succeed as a creative in the new economy by paying personal
attention to up to 150 people, than by paying mass attention to
100,000 in hopes of harvesting 1000 raving fans.
7. Span of Control: The threshold group size for a crucible.
Above this number, creativity is possible. Below it, the group
can be brought under the dictatorial and low-creativity control
of a single individual.
That's CAMCIDS for you acronym buffs. I know. I am a terrible
person.
Managing Collective Attention Scarcity: The Dynamics of 12
A little paper math shows you why collective attention is scarce.
Marketers will recognize a classic example: marketing sugary cereals to a
kid, but closing the sale with Mommy, is much trickier than marketing and
selling to the same person. You need the collective coordinated attention
of both, and you need that attention to have good chemistry. But unlike
individual-level attention scarcity and its complement, the myth of
information overload, you cannot solve the collective attention problem
through appropriate reframing and a good set of automated filters. Also
unlike individual attention dynamics, the action isn't in stuff like
advertising, fame or too many emails. It is in co-creation groups.
Individual attention economics was merely poorly managed, and
now technology exists to manage it well, even though it hasn't diffused
completely. Collective attention, even if it is optimally allocated, is
fundamentally scarce, since it requires live people in the other 11 empty
spots in the crucible.
Optimal allocation is hard because the numbers blow up in your face.
A Dunbar community of 150 admits 11175 unique pairings. Most will be
fights, divorces and toxic messes. A few will create great value. Searching
that space for the Gates-Ballmers and the Jobs-Wozniaks is horrendously
hard. That's why it is a scarcity. Things do get simpler once a core of 2-3
start to attract enough to reach that critical mass of 12, but not by much.
The math is much harder for more complex ways of thinking about
groups, but you get the idea. Can you think of all possible ways to break
down the world's population of 6 billion into constantly shifting crucibles
of 12?
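For the combinatorially inclined, the pairing count above is just n choose 2, which grows quadratically; a two-line check in Python (the numbers are the ones already in the paragraph):

    from math import comb

    print(comb(150, 2))   # 11175 unique pairings in a Dunbar community of 150
    print(comb(12, 2))    # 66 pairings inside a single crucible of 12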
But let's at least work on the right problem.
The Calculus of Grit
August 19, 2011
I find myself feeling strangely uncomfortable when people call me a
generalist and imagine that to be a compliment. My standard response is
that I am actually an extremely narrow, hidebound specialist. I just look
like a generalist because my path happens to cross many boundaries that
are meaningful to others, but not to me. If you've been reading this blog
for any length of time, you know the degree to which I keep returning to
the same few narrow themes.
I think I now understand the reason I reject the generalist label and
resonate far more with the specialist label. The generalist/specialist
distinction is an extrinsic coordinate system for mapping human potential.
This system itself is breaking down, so we have to reconstruct whatever
meaning the distinction had in intrinsic terms. When I chart my life course
using such intrinsic notions, I end up clearly a (reconstructed) specialist.
The keys to this reconstruction project are: the much-abused idea of
10,000 hours of deliberate practice, the notion of grit, and an approach to
keeping track of your journey through life in terms of an intrinsic
coordinate system. Think of it as replacing compass or GPS-based
extrinsic navigation with accelerometer and gyroscope-based inertial
navigation.
I call the result the calculus of grit. It is my idea of an inertial
navigation system for an age of anomie, where the external world has too
little usable structure to navigate by.
The Generalist-Specialist Distinction
The generalist/specialist distinction constitutes an extrinsic coordinate
system. We think of our environment as containing breadth and depth
dimensions. The breadth dimension is chopped up by disciplinary
boundaries (whether academic, trade-based or business-domain based),
while the depth dimension is chopped up by markers of validated
progressive achievement. What you get is a matrix of domains of
endeavor: bounded loci within which you can sustain deepening practice
of some skilled behavior.
The boundedness is key. Mathematicians do not suddenly discover, in
the 10th year of their practice, that they need advanced ballroom dancing
skills to progress further. Ballroom dancers do not suddenly encounter a
need for advanced aircraft engine maintenance skills after a few years of
practice. Based on your strengths, you can place fairly safe bets early on
about what you will/will not need to do if you make your home
somewhere in the matrix.
Or at least, you used to be able to. I'll get to how these expectations
from the twentieth century are breaking down.
There is a social structure that conforms to these breadth/depth
boundaries as well. A field of practitioners in each domain, stacked in a
totem pole of increasing expertise, that legitimizes the work of individuals
and provides the recognition needed for both pragmatic ends (degrees and
such) and existential ends (recognition in the sense of say, Hegel).
In his book Creativity, Mihaly Csikszentmihalyi made up exactly such
a definition of extrinsically situated creativity as the behavior of an
individual within a field/domain matrix.
We are now breaking away from this model. Ironically,
Csikszentmihalyi's own work makes little sense within this model that he
helped describe in codified ways; his work makes a lot more sense if you
don't attempt to situate it within his nominal home in psychology.
Extrinsically situated creativity with reference to some global,
absolute scheme of generalist/specialist dimensions is unworkable. At best
we can hope for local, relative schemes and an idea of intrinsically
situated individual lives.
The Vacuity of Multi-Disciplinarity
The problem with this generalist/specialist extrinsically situated
creativity model is that the extrinsic frames of references are getting
increasingly dynamic, chaotic and murky. To the point that the distinction
is becoming useless. Nobody seems to know which way is up, which way
is down, and which way is sideways. If you guess and get lucky, the
answers may change next year, leaving you disoriented once more.
The usual response to this environment is to invoke notions of multi-disciplinarity.
Unfortunately, this is worse than useless. In the labor market for
skilled capabilities, and particularly in academia, multi-disciplinarity is the
equivalent of gerrymandering or secession on an already deeply messed-up political map. Instead of votes, you are grubbing for easily won
markers of accomplishment. Its main purpose (in which it usually fails) is
to create a new political balance of power rather than unleash human
potential more effectively.
The purpose is rarely to provide a context for previously difficult
novice-to-master journeys.
How do I know this? It's patently obvious. If it takes 10,000 hours (K.
Anders Ericsson's now-famous threshold of deliberate practice, thanks to
Gladwell, which translates to about 10 years typically) to acquire mastery
in any usefully bounded domain, and you assume that there is at least one
generation of pioneers who blazed that path to a new kind of mastery,
what are you to make of fields that come and go like fruit flies in 2-3
years, in sync with business or funding cycles? The suspicious individual
is right to suspect faddishness.
I have come to the conclusion that if I cannot trace a coherent history
of at least 20 years for something that claims the label "discipline," it isn't
one.
The problem with this, though, is that an increasing amount of valuable
stuff is happening outside disciplines by this definition. It isn't multi-disciplinary.
It isn't inter-disciplinary. It is simply non-disciplinary. It's in
the miscellaneous folder. It is so fluid that it resists extrinsic organization.
So given that most excitement centers around short-lived fruit-fly non-disciplines,
how do people even manage to log 10,000 deliberate practice
hours in any coherent journey to mastery? Can you jump across three or
four fruit-fly domains over the course of a decade and still end up with
mastery of something, even if you cannot define it?
Yes. If you drop extrinsic frames of reference altogether.
The Compass and the Gyroscope
We are used to describing movement in terms of x, y and z
coordinates, with respect to the Greenwich meridian, the Equator and sea
level. Our sense of space is almost entirely based on such extrinsic
coordinate systems (or landmarks within them). Things that we understand
via spatial metaphors naturally tempt us into metaphoric coordinate
systems like the depth/breadth one we just talked about. In academic
domains, for instance, you could say the world is mapped with reference
to an origin that represents a high-school graduate, with disciplinary
majors and years of study forming the two axes that define further
movement.
Somewhere in graduate school, I encountered an idea that blew my
mind: you can also describe movement entirely intrinsically. Actually, I
had encountered this idea before, in vague popular science treatments of
Einstein's general theory of relativity, but learning the basics of the math
is what truly blows your mind.
The central idea is not hard to appreciate: imagine riding a
complicated roller coaster and keeping track of how far along you are on
the track, how youve been turning, and how youve been twisting. That
much is easy.

What is not easy is appreciating that thats all you need. You can
dispense with extrinsic coordinate systems entirely. Just keeping track of
how those three variables (known as arc-length, curvature and torsion if
my memory serves me) are changing, is enough. For short periods, you
can roughly measure them using just your intrinsic sense of time and how
your stomach and ears feel. To keep the measurements precise over longer
periods, you need a gyroscope, an accelerometer and a watch.
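For readers who want the underlying math, the intrinsic description being gestured at here is the Frenet-Serret apparatus of differential geometry: arc-length s, curvature kappa and torsion tau determine a space curve up to rigid motion. In standard textbook notation (this LaTeX is my addition, not something from the original post):

    \[
      \frac{d\mathbf{T}}{ds} = \kappa\,\mathbf{N}, \qquad
      \frac{d\mathbf{N}}{ds} = -\kappa\,\mathbf{T} + \tau\,\mathbf{B}, \qquad
      \frac{d\mathbf{B}}{ds} = -\tau\,\mathbf{N}
    \]

Here T, N and B are the unit tangent, normal and binormal of the trajectory; a gyroscope, an accelerometer and a clock are, roughly, what you need to estimate these three quantities as you go.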
If you want motifs for the two modes of operation, think of it as the
difference between a magnetic compass and a gyroscope (these days, GPS
might be a better motif for the former, but the phrase "the compass and the
gyroscope" has a certain ring to it that I like).
We need another supporting notion before we can construct an
intrinsic coordinate system for human lives.
Behavioral Boundedness
Remember that the primary real value of an extrinsically defined
discipline in a field/domain matrix is predictable boundedness.
Mathematicians can trust that they won't have to suddenly start dancing
halfway through their career to progress further.
This predictability allows you to form reasonable expectations for
decades of investment, and make decisions based on your upfront
assessment of your strengths, and expectations about how those strengths
will evolve as you age.
If I decide that I have certain strengths in mathematics and that I want
to bet on those strengths for a decade, to get to mastery, I shouldn't
suddenly stumble into a serious weakness along the way that blocks me,
like a lack of natural athleticism.
So a disciplinary boundary is very useful if it provides that kind of
predictability. I call this behavioral boundedness. An expectation that your
expected behaviors in the future won't wander too far out of certain
strengths-based comfort zones you can guess at fairly accurately, upfront.
Before putting in 10,000 hours.
What happens when that sort of predictability breaks down? It is
certainly happening all over the place. For instance, I didn't realize I
lacked the strengths needed for a typical career in aerospace engineering
(the sort high-school kids fantasize about when they first get interested in
airplanes and rockets) until well into a PhD program in the subject.
Fortunately, I was able to pivot and head in another direction with almost
no wasted effort. Few people are that lucky.
There are domains where the boundedness is very weak indeed. The
upfront visible boundedness is a complete illusion. Marketing is one such
domain. You might get into it because you love creative messaging or
talking to people. You may discover the idea of positioning two years into
the journey and realize that creativity in messaging is a sideshow, and the
real job is somewhat tedious analysis of the mental models of prospects. A
further two years down the road, you may discover that to level-up your
game once more, you need to become a serious quantitative analytics ninja
and database geek.
This can also work out in positive ways. You might wake up one fine
day and realize that your life, which makes no sense in nominal terms,
actually adds up to expertise in some domain you'd never identified with
at all. That actually happened to me with respect to marketing. On paper, I
am the opposite of a marketer. I have a PhD in aerospace engineering, am
introverted, and write in long-winded and opaque ways rather than in
catchy sound bites.
Nevertheless, at some point I realized with a shock that I had
accidentally logged several thousand hours along a marketing career path
without realizing it. I had just completely misunderstood what
"marketing" meant based on the popular image the field presents to
novices.
When I went free-agent a few months ago, most of the consulting
leads I had coming in had to do with marketing work. This did not surprise
me, but it certainly surprised my father and several close friends, who
assumed I was doing some sort of technical consulting work around
computational modeling and scientific computing.
I'd never thought of myself as a marketer. A computational modeler,
yes. A hustler perhaps. A fairly effective corporate guerrilla, yes. A
marketer, not really. I viewed my previous marketing work as the work of
a curious tourist in a strange land. I viewed my marketing writing as
outsider-anthropology amongst strange creatures. But apparently, that's
not how others view me.
Looking back, and trying to make sense of my life in retrospect as
the training of an accidental marketer, it makes sense though: I've
logged the right mix of complementary experiences. Marketing is still not
my primary identity though (that would mean returning to a Procrustean
bed of disciplinary identity).
Many people luck out like me, accidentally. We recognize what
particular path to mastery we're on, long after we actually get on it.
Many do not. They bum around in angsty anomie, craving structure
where none exists, and realizing after a decade of wandering that they've
unfortunately gotten nowhere.
Is it possible to systematically do things to put yourself on a path to
mastery, and know you're on one, without actually knowing what that path
is until you're already far down it?
Inside and Outside Views of Grit
If there is no external frame of reference, how do you know where
you are, where you are going and whether you are progressing at all, as
opposed to bumming around?
Can you log any old time-sheet of 10,000 hours, slap a label on it, and
claim mastery?
Thankfully, intrinsic navigation is not quite that trite.
A clue to the mystery is the personality trait known as grit, probably
the best predictor of success in the modern world.
Grit is the enduring intrinsic quality that, for a brief period in recent
history, was coincident with the pattern of behavior known as progressive
disciplinary specialization.
Grit has external connotations of extreme toughness, a high apparent
threshold for pain, and an ability to keep picking yourself up after getting
knocked down. From the outside, grit looks like the bloody-minded
exercise of extreme will power. It looks like a super-power.
I used to believe this understanding of grit as a superhuman trait. I
used to think I didn't possess it. Yet people seem to think I exhibit it in
some departments. Like reading and writing. They are aghast at the
amount of reading I do. They wonder how I can keep churning out
thousands of words, week after week, year after year, with no guarantee
that any particular piece of writing will be well-received.
They think I must possess superhuman willpower because they make
a very simple projection error: they think it is hard for me because it
would be hard for them. Well of course things are going to take
superhuman willpower if you go after them with the wrong strengths.
For a while, I went around calling this faux-grit. The appearance of
toughness. But the more I looked around me at other people who seemed
to display grit in other domains, the more I realized that it wasn't hard for
them either. What they did would merely be superhuman effort for me.
Faux grit and true grit are the same thing (the movie True Grit is actually
quite a decent showcase of the trait; it showcases the superhuman
outside/fluid inside phenomenon quite well).
So what does the inside view of grit look like? I took a shot at
describing the subjective feel in my last post on the Tempo blog. It simply
feels like mindful learning across a series of increasingly demanding
episodes that build on the same strengths.
But the subjective feel of grit is not my concern here. I am interested
in objective, intrinsically measurable aspects of grit that can serve as an
internal inertial navigation system; a gyroscope rather than GPS.
The Grit Gyroscope: Reworking, Referencing, Releasing
In physical space, latitude, longitude and altitude get replaced by arc-length,
curvature and torsion when you go intrinsic.
In endeavor space, field, domain and years of experience get replaced
by three variables that lend themselves to a convenient new 3Rs acronym:
reworking, referencing, releasing (well, technically, it is internal
referencing and early-and-frequent releasing, but let's keep the phrase
short and alliterative). I believe the new 3Rs are as important to adults as
the old ones (Reading, wRiting and aRithmetic) are for kids.
Reworking
I stumbled upon rework as a key variable when I tried to answer a
question on Quora: what are some tips for advanced writers?
Since writing is something everybody does, logging 10,000 writing
hours is something anyone can do. My aha! moment came when I realized
that it isn't the writing hours that count, it is the rewriting hours.
Everybody writes. People who are trying to walk the path towards mastery
rewrite. I won't say more about this variable. If you want a worked
example, read my Quora answer. If you want a quick and pleasant read on
the subject, Jason Fried's Rework gets at some of the essential themes
(though perhaps in a slightly gimmicky way).
Referencing
For referencing, my clue was my recent discovery that new readers of
this blog often dive deep into the archives and read nearly everything Ive
written in the last four years. I dubbed it the "ribbonfarm absurdity
marathon" because I didn't understand what would possess anyone to
undertake it.
But then I realized that I write in ways that practically demand this
reading behavior if people really want to get the most value out of what I
am talking about: I reference my own previous posts a lot. Not to tempt
people into reading related content, but out of sheer laziness. I don't like
repeating arguments, definitions or key ideas. So I back-link. I do like
most of my posts to be stand-alone and comprehensible to a new reader
though, so I try to write in such a way that you can get value out of
reading a post by itself, but significantly more value if you've read what
I've written before. For example, merely knowing what I mean by the
word "legibility," which I use a lot, can increase what you get out of some
posts by 50%. This is one reason blogging is such a natural medium for
me. The possibilities of hyperlinking make it easy to do what would be
extremely tedious with paper publishing.
The key here is internal referencing. I use far fewer external reference
points (there are perhaps a dozen key texts and a dozen papers that I
reference all the time). It sounds narcissistic, but if you're not referencing
your own work at least 10 times as often as you're referencing others,
you're in trouble in the intrinsic navigation world. Instead of developing
your own internal momentum and inertia, you are being buffeted by
external forces, like a grain of pollen being subjected to the forces of
Brownian motion.
Releasing
And finally, releasing. As in the agile software dictum of "release
early and often." In blogging, frequency isn't about bug-fixing or
collaboration. It isn't even about market testing (none of my posts are
explicitly engineered to test hypotheses about what kind of writing will do
well). It is purely about rational gambling in the dollar-cost-averaging
sense. It is the investing advice "don't try to time the market" applied to
your personal work.
If the environment is so murky and chaotic that you cannot
strategically figure out clever moves and timing, the next best thing you
can do is just periodically release bits of your developing work in the form
of gambles in the external world. I think there's a justifiable leap of faith
here: if your work admits significant reworking and internal referencing,
you're probably on to something that is of value to others.
If a post happens to say the right thing at the right time, it will go
viral. If not, it won't. All I need to do is to keep releasing. This realization,
incidentally, has changed my understanding of phenomena like iteration in
lean startups and serial entrepreneurs who succeed on their fifth attempt.
It's mostly about averaging across risk/opportunity exposure events, in an
environment that you cannot model well. I am pretty sure you can apply
this model beyond blogging and entrepreneurship, but I'll leave you to
figure it out.
These three variables together can measure your progress along any
path to mastery. What's more, they can be measured intrinsically, without
reference to any external map of disciplinary boundaries. All you have to
do is to look for an area in your life where a lot of rework is naturally
happening, maintain an adequate density of internal referencing to your
own past work in that area, and release often enough that you can forget
about timing the market for your output.
What does navigating by these three variables look like from the
outside?
If you only do a lot of internal referencing, that's like marching along
a straight, level road.
If you do a lot of internal referencing and a lot of rework, that's like
marching along a steady uphill road that's gradually getting steeper from
an external point of view (in other words, you are on your own
exponential path of progress). What you are doing will look impossible to
observers. It may look like you are marching up a vertical cliff. A great
example is the Silicon Valley archetype of the 10x engineer.
And finally, if you are releasing frequently, that's like turning and
twisting: spiraling around an increasingly steep mountain (or zig-zagging
up via a series of switchbacks).
The Path of Least Resistance
Navigating with the 3Rs as an adult isn't enough. You still have to
recover the value the old disciplinary model provided: behavioral
boundedness. Whether you are navigating intrinsically or extrinsically,
suddenly running into a mountain (a major weakness) is just as bad.
The key here is very simple and very Sun Tzu: with respect to the
external world, take the path of least resistance.
Why? Think of it this way. The disciplinary world very coarsely
measured your aptitudes and strengths once in your lifetime, pointed you
in a roughly right direction and said "Go!" The external environment had
been turned into a giant obstacle course designed around a coarse global
mapping of everybody's strengths.
So there was no distinction between the map of the external world
you were navigating, and the map of your internal strengths. The two had
been arranged to synchronize. If you navigated through a map of external
achievement, landmarks and honors, you'd automatically be navigating
safely through the landscape of your internal strengths.
But when you cannot trust that you've been pointed in the right
direction in a landscape designed around your strengths, you cannot afford
to navigate based on a one-time coarse mapping of your own strengths at
age 18.
If you run into an obstacle, it is far more likely that it represents a
weakness rather than a meaningful real-world challenge to be overcome,
as a learning experience.
Don't try to go over or through. It makes far more sense to go around.
Hack and work around. Don't persevere out of a foolhardy superhuman
sense of valor.
Hard Equals Wrong
If it isn't crystal clear, I am advocating the view that if you find that
what you are doing is ridiculously hard for you, it is the wrong thing for
you to be doing. I maintain that you should not have to work significantly
harder or faster to succeed today than you had to 50 years ago. A little
harder perhaps. Mainly, you just have to drop external frames of reference
and trust your internal navigation on a landscape of your own strengths. It
may look like superhuman grit to an outsider, but if it feels like that inside
to you, you're doing something wrong.
This is a very contrarian position to take today. Thomas Friedman in
particular has been beating the "harder is better" drum for a decade now,
most recently in his take on the London riots, modestly titled "A Theory of
Everything (Sort Of)":
Why now? It starts with the fact that globalization and
the information technology revolution have gone to a whole
new level. Thanks to cloud computing, robotics, 3G
wireless connectivity, Skype, Facebook, Google, LinkedIn,
Twitter, the iPad, and cheap Internet-enabled smartphones,
the world has gone from connected to hyper-connected.
This is the single most important trend in the world
today. And it is a critical reason why, to get into the middle
class now, you have to study harder, work smarter and
adapt quicker than ever before. All this technology and
globalization are eliminating more and more routine
work, the sort of work that once sustained a lot of
middle-class lifestyles.
The environment that really matters isn't the external world, which is
pretty much pure noise. You can easily find and process the subset that is
meaningful for your life. It isn't about harder, smarter, faster. If it were, I'd
be dead. I've been getting lazier, dumber and slower. It's called aging. I
think Friedman is going to run out of superlatives like "hyper-" before I
run out of life. If I am wrong, the world is going to collapse before he gets
around to writing The World is Hyper-Flatter-er. Humans are simply not
as capable as Friedman's survival formula requires them to be.
Exhortation is pointless. Humans don't suddenly become superhuman
just because the environment suddenly seems to demand
superhuman behavior for survival. Those who attempt this kill themselves
just as surely as those dumb kids who watch a Superman movie and jump
off buildings hoping to fly.
It is the landscape of your own strengths that matters. And you can set
your own, completely human pace through it.
The only truly new behavior you need is increased introspection. And
yes, this will advantage some people over others. To avoid running faster
and faster until you die of exhaustion, you need to develop an increasingly
refined understanding of this landscape as you progress. You twist and
turn as you walk (not run) primarily to find the path of least resistance on
the landscape of your strengths.
The only truly new belief you need is that the landscape of
disciplinary endeavors and achievement is meaningless. If you are too
attached to degrees, medals, prizes, prestigious titles and other extrinsic
markers of progress in your life, you might as well give up now. With 90%
probability you aren't going to make it. It's simple math: even if they were
worth it, as our friend Friedman notes with his characteristic scare-mongering,
there simply isn't enough to go around:
Think of what The Times reported last February: "At
little Grinnell College in rural Iowa, with 1,600 students,
nearly one of every 10 applicants being considered for the
class of 2015 is from China." The article noted that dozens
of other American colleges and universities are seeing a
similar surge as well. And the article added this fact: "Half
the applicants from China this year have perfect scores of
800 on the math portion of the SAT."
If you're paying attention to the Chinese kids who score a perfect 800,
you're paying attention to the wrong people. I mean, really? You should
worry about some Chinese kid terrorized into achieving a perfect-800
math score by some Tiger Mom, and applying to Grinnell College?
It's the Chinese kids who are rebelling against their Tiger Moms,
completely ignoring the SAT, and flowing down the path of least
resistance that you should be worried about. After all, Sun Tzu invented
that whole idea.
So rework, reference, release. Flow through the landscape of your
own strengths and weaknesses. Count to 10,000 rework hours as you walk.
If you aren't seeing accelerating external results by hour 3300, stop and
introspect. That is the calculus of grit. It's the exponential human
psychology you need for exponential times. Ignore everything else.
Factoid: this entire 4000-plus-word article is a working out of a 21-word
footnote on page 89 of Tempo. That's how internally-referenced my
writing has become. Never say I don't eat my own dogfood.
Tinker, Tailor, Soldier, Sailor
February 17, 2010
What did you want to grow up to be, when you were a kid? Where did
you actually end up? For a few weeks now, I have been idly wondering
about the atavistic psychology behind career choices. Whenever I develop
an odd intellectual itch like this, something odder usually comes along to
scratch it. In this case, it was a strange rhyme that emerged in Britain
sometime between 1475 and 1695, which has turned into one of the most
robust memes in the English language:
tinker, tailor, soldier, sailor
richman, poorman, beggarman, thief
Everybody from John le Carre to the Yardbirds seems to have been
influenced by this rhyme. For the past week, it has been stuck in my head;
an annoying tune that was my only clue to an undefined mystery about the
nature of work that I hadn't yet framed. So I went a-detecting with this
clue in hand, and ended up discovering what might be the most
fundamental way to view the world of work.
The Clue in the Rhyme
With the "tinker, tailor" rhyme stuck in my head, I was browsing
some old books in a library last week. A random 1970s volume, titled In
Search of History, caught my eye. In the prologue was this interesting
passage:
Most ordinary people lived their lives in boxes, as bees
did in cells. It did not matter how the boxes were labeled:
President, Vice President... butcher, baker, beggarman,
thief, doctor, lawyer, Indian chief, the box shaped their
identity. But the box was an idea. Sir Robert Peel had put
London policemen on patrol one hundred fifty years ago
and the bobbies in London or the cops in New York
now lived in the box invented by Sir Robert Peel... All
ordinary people below the eye level of public recognition
were either captives or descendants of ideas... Only a very,
very rich man, or a farmer, could escape from this system
of boxes. The very rich could escape because wealth itself
shelters or buys identity... And farmers too... or
perhaps? not even a farmer could escape. After all [in
the 1910s] more than half of all Americans lived in villages
or tilled the fields. And now only four percent worked the
land. Some set of ideas... must have had something to do
with the dwindling of their numbers.
It was rather a coincidence that I found this passage just when I was
thinking of the tinker-tailor rhyme (the "butcher, baker" bit is an
American variant). A case of serendipitously mistaking the author,
Theodore Harold White, who I'd never heard of, for Terence Hanbury
White, author of The Once and Future King, which I love. That sort of
coincidence doesn't happen too often outside of libraries, but oh well.
The important insight here is that the structure of professions and
work-identities is neither fundamental, nor a consequence of the industrial
revolution. Between macroeconomic root causes and the details of your
everyday life, there is an element of deliberate design. Design of
profession boxes that is constrained by some deeply obvious natural
laws, and largely controlled by those who are not themselves in boxes.
The tinker, tailor archetypes began emerging four centuries before the
modern organization of the workforce took shape, during the British
industrial revolution (which started around the 17th century).
Besides the peculiar circumstances of late medieval Britain, and the
allure of alliteration and rhyme, ask yourself, why has this rhyme become
such a powerful meme? Well return to this question shortly. But for now,
let's run with Theodore White's insight about professions being conceptual
boxes created by acts of imagination, rather than facts of economics, and
see where it gets us. We'll also get to the meaning of a revealing little
factoid: the rhyme was originally part of a counting game played by young
girls, to divine who they might marry.
And yes, the basic political question of capitalism versus social justice
rears its ugly head here. Choosing a calling is a political act, and I'll
explain the choices you have available.
The Central Dogma in the World of Work
There are three perspectives we normally utilize when we think about
the world of work.
The first is that of the economist, who applies the laws of demand and
supply to labor markets. In this world, if a skill grows scarce in the
economy, wages for that skill will rise, and more people will study hard to
acquire that skill. Except that humans perversely insist on not following
these entirely reasonable laws. As BLS (Bureau of Labor Statistics)
statistics reveal, people insist on leaving the skilled nursing profession
perennially thirsting for new recruits, while the restaurant industry in Los
Angeles enjoys bargain labor prices, thanks to those hordes of Hollywood
hopefuls, who are good for nothing other than acting, singing and waiting
tables.
Then there is the perspective of the career counselor. That theatrical
professional who earnestly administers personality and strengths tests, and
solemnly asks you to set career goals, think about marketability of
skills, weigh income against personal fulfillment, and so forth. I say
theatrical because the substance of what they offer is typically the same,
whether the mask is that of a drill sergeant, guardian angel or an earth
mother; whether the stance is one of realism, paternalism or romanticism.
Somewhere in the hustle and bustle of motivational talk, resume critiquing
and mock interviews, they manage to cleverly hide a fact that becomes
obvious to the rest of us by the time we hit our late twenties: most of us
have no clue what to do with our lives until we've bummed around, test-driven,
and failed at, multiple callings. Until we've explored enough to
experience a career "Aha!" moment, most of us can't use counselors. After
we do, they can't really help us. If we never experience the "Aha!"
moment, we are lost forever in darkness.
And finally there is the perspective of the hiring manager. That
hopeful creature who does his or her best to cultivate a pipeline of
fungible labor, in the fond and mostly deluded hope that cheap talent
will fit neatly into available positions. It is a necessary delusion. To
admit otherwise would be to admit that the macroeconomic purpose an
organization appears to fulfill is the random vector sum of multiple people
pulling their own way, with some being fortunate enough to be pulling in
the accidental majority direction, while others are dragged along, kicking
and screaming, until they let go, and still others pretend to pull whichever
way the mass is moving. Mark Twain's observations of ants are more
applicable than hiring managers' ideas that talent-position fit is a
strongly-controllable variable.
Here's the one common problem that severely limits the value of each
of these perspectives. There is a bald, obvious and pertinent fact that is so
important, yet so rarely acknowledged, let alone systematically
incorporated, that each of these perspectives ends up with a significant
blind spot.
That bald fact is this: it takes two kinds of work to make a society
function. First, there is the sexy, lucrative and powerful (SLP) work that
everybody wants to do. And then there is the dull, dirty and dangerous
(DDD) work that nobody wants to do. There is a lot of gray stuff in the
middle, but thats the basic polarity in the world of work. Everything
depends on it, and neither pole is dispensable.
The economist prefers not to model this fact. The career counselor
does not want to draw attention to it. The hiring manager has good reason
to deny it.
This brings us to the central dogma in the world of work: everyone
can simultaneously climb the Maslow pyramid, play to their strengths, and
live rewarding lives. That somehow magically, in this orgy of self-actualization,
Adam Smith will ensure that the trash will take itself out.
Like all dogmas, it is false, but still manages to work, magically.
The dull, dirty and dangerous work does get done. Trash gets hauled,
sewers get cleaned, wars get fought by cannon-fodder types. And yet the
dogma is technically never violated. You see, there is a loophole that
allows the dogma to remain technically true, while being practically false.
The loophole is called false hope.
The False Hope Tax and Dull, Dirty and Dangerous (DDD)
The phrase "dull, dirty or dangerous" became popular in the military in
the last decade, as a way to segment out and identify the work that suits
UAVs (Unmanned Aerial Vehicles, like the Predator) the best. It also
describes the general order in which we will accept work situations that do
not offer any hope of sex, money, or power. Most of us will accept dull
before dirty, and dirty before dangerous. Any pair is worse than any one
alone, and all three together represent hell. There's a vicious spiral here.
Dull can depress you enough that you are fired and need to work at dull
and dirty, which only accelerates the decline into dull, dirty and
dangerous. And I am not talking dangerous redeemed by Top Gun
heroism. I am talking "die in stupid, pointless ways" dangerous.
William Rathje, a garbologist (a garbage-archeologist) notes in his
book, Rubbish (to be reviewed), that once you get used to it, garbage in
landfills has a definite bouquet that is not entirely unpleasant. But then, he
is a professor, poking intellectually at garbage rather than having to
merely haul and pile it, with no time off to write papers about it. Dull,
dirty and dangerous work is stuff that takes scholars to make interesting,
priests to ennoble, and artists to make beautiful. But in general, it is
actually done by some mix of the deluded hopeful, the coerced, and the
broken and miserable, depending on how far the civilization in question
has advanced. You might feel noble about recycling, but somewhere out
there, near-destitute people are risking thoroughly stupid deaths (like
getting pricked by an infected needle) to sort your recycling. Downcycling, really, once you learn about how "recycling" works. On the other side of the world, ship-breakers are killing themselves through a mix of toxic exposure and slow starvation, to sustain the processes that bring your cheap Walmart goods to you from China.
The reasons behind the mysteriously perennial talent scarcity and inelastic wages in the nursing profession, or the hordes of waitstaff in LA hopefully (Pandora be praised!) waiting for their big Hollywood break, are blindingly obvious. The obviously germane facts are that one profession involves bedpans and adult diapers, to be paid for by people on fixed incomes (so there's a limit to how much nurses can make), while the other involves tantalizingly close-at-hand hopes of sex, money and fame.
"False hope" is the key phrase there. Nurses hope from afar, waiters in LA hope from the front row. The trick Adam Smith uses to get the dull, dirty and dangerous work done (work that took slavery and coercion until very recently) is to sustain hope. American Idol is the greatest expression of this false hope. A quick ticket from dull, dirty and dangerous to sexy, lucrative and powerful. The fact that one in a million will make it allows the other 999,999 to sustain themselves. It is one year of hope after the other, until you accept the mantra of "if you don't get what you like, you'll be forced to like what you get."
That is why the Central Dogma of work is never technically violated.
You could self-actualize, no matter where on the SLP-DDD spectrum you
are. It is just that in the Dull, Dirty and Dangerous part of the world, the
probability that you will do so becomes vanishingly small. To believe in a
probability that small, you have to be capable of massive delusions. You
have to believe you can win American Idol.
But that one technically possible, but highly improbable, piece of hope can do the work of the whips of an entire class of slave-drivers and dictators, and replace them with something called democracy.
Snarky probability theorists like to call lotteries a "stupidity tax" imposed on people who cannot compute expected values. What they don't realize is that most professions (probability theorists included) carry a heavy stupidity tax load: the extraordinarily low-probability hope of leaping into the world of Sexy, Lucrative and Powerful. The only difference is, unlike the lottery, you have no option but to participate (actually, by this reasoning, the hope of winning a lottery is possibly more reasonable than the more organic sorts of false hope embedded in most work).

Sexy, Lucrative and Powerful (SLP)


The promised land may not be all it seems to those who aren't there yet (rock stars certainly whine, with drug-addled words, about it), but it certainly exists.
Again, the order is important. Just as dull, dirty and dangerous is a
vicious spiral towards a thoroughly stupid death, sexy, lucrative and
powerful is a virtuous cycle that gets you to a thoroughly puzzling nirvana.
If you can do rock star or model, it is a relatively easy slide downhill from sexy to lucrative and from lucrative to powerful. If you are not blessed with looks or a marketable voice (and Beyoncé's dad), but can hit lucrative by, say, starting a garbage-hauling business staffed by Mexican immigrants, you could still claw uphill to sexy. Or you could start with powerful and trade the gossamer currency of influence for hard cash, and hard cash for sex (figuratively and literally).
I have much less to say about sexy, lucrative and powerful because most of you know all about it. Because, like me, you've been dreaming about it since you were 10. You can easily tell SLP work apart from DDD work by the structure of labor demand and supply. In one sector, people are dragged down, kicking and screaming. In the other, they need to be barricaded out, as they hurry from their restaurant shift to auditions. You don't need a behavioral economist to tell you that career choices are not entirely defined by the paychecks associated with them.
So let's move straight on to the reason little girls play their "tinker, tailor" counting games.
The Developmental Psychology of Work
In Time Wars (to be reviewed), Jeremy Rifkin cites a study that shows that young girls typically switch from fantasy career dreams to more pragmatic ones around the age of ten and a half. For boys, it is about eleven and a half. For both, the switch from fantasy to reality occurs on the cusp of adolescence. It is fairly obvious what drives childish job fantasies. Little children like being the center of attention. They like to feel important and powerful. What drives realism-modulated adolescent dreams, which have a more direct impact on career choices, is less clear. What is clear is that the SLP dreams of pre-adolescents are not abandoned, merely painted over with some realism.
The first profession I can remember wanting to join desperately was road-roller driver. Growing up, I lived down the street from a lot where the city administration parked its road rollers. They were big and powerful, and I wanted to drive one for the rest of my life. Later, I
expanded my horizons. An uncle who worked in the railways took me for
a ride in a tower wagon (a special kind of track-maintenance locomotive),
and I was convinced I wanted to drive some sort of locomotive for the rest
of my life.
When I hit adolescence, my twin passions were military aircraft and
astronomy. I was already realistic enough to not hanker after Top-Gun
sexy (revealingly, my one classmate who joined the Indian Air Force
dropped out within a year). I was headed for engineering or science, which
were neither sexy nor lucrative, but held out a vague promise of powerful.
Somewhere in college, by turning down an internship at a radio astronomy
center, and picking one in a robotics lab, I abandoned the slightly more
romantic world of astronomy for the less romantic world of aerospace
engineering (I did work on space telescopes in grad school though, so I guess I didn't really grow up till I was 30).
You probably have your own version of this story. You think it is heartwarming, don't you?
In actual fact, this sort of story reveals something deeply, deeply ugly about childhood and adolescent yearnings; something on par with Golding's Lord of the Flies: our brains are prepared for, and our environment encourages, a hankering for sexy, lucrative, powerful. No kid ever dreams of a career sorting through smelly, toxic garbage. Or even the merely dull (and not dangerous or dirty) work of data entry.
But the world does not run on SLP alone. It needs DDD, and no matter how much we automate things, it always will. By hankering after SLP, we are inevitably legitimizing the cruelty that the world of DDD suffers.
Tinker, Tailor, Soldier, Sailor
Let's circle back and revisit "tinker, tailor, soldier, sailor, richman, poorman, beggarman, thief."
Why did little 17th century girls enjoy counting stones and guessing
who their future husbands might be? Was their choice of archetypes mere
alliterative randomness?
We tend to think of specialization and complex social organization as
consequences of the industrial age, but the forces that shape the
imaginative division of labor have been at work for millennia.
Macroeconomics and Darwin only dictate that there will be a spectrum
with dull, dirty and dangerous at one end, and sexy, lucrative and
powerful at another. This spectrum is what creates and sustains social and
economic structures. I am not saying anything new. I am merely restating,
in modern terms, what Veblen noted in Theory of the Leisure Class. From
one century to the next, it is only the artistic details that change. Tinker,
tailor evolves to a different set of archetypes.
We've moved from slavery to false hope as the main mechanism for working with the spectrum, but whatever the means, the spectrum is here to stay. Automation may nip at its heels, but fundamentally, it cannot be changed. Why? The rhyme illustrates why.
At first sight, the tinker, tailor rhyme represents major category
errors. Richman and poorman are socioeconomic classes, while tailor,
sailor and soldier are professions. Tinker (originally a term for a
Scottish/Irish nomad engaged in the tinsmith profession) is a lifestyle.
Beggarman and thief are categories of social exodus behaviors.
Relate them to the DDD-SLP spectrum, and you begin to see a pattern. As Theodore White noted, Richman enjoys the ultimate privilege: buying his own social identity at the SLP end of the spectrum. Poorman is stuck in the DDD end. Beggarman and thief have fallen off the edge of society, the DDD end of the spectrum, by either giving up all dignity, or sneaking about in the dark. Sailor and Tinker are successful exodus archetypes. The former is effectively a free agent. Remember that around the time this rhyme captured the popular imagination in the 17th century, the legitimized piracy and seaborne thuggery that was privateering had created an alternative path to sexy, lucrative and powerful; one that did not rely on rising reputably to high office (the path that Samuel Pepys followed between 1633 and 1703; The Diary of Samuel Pepys remains one of the most illuminating looks at the world of work ever written). The latter, the tinker, was a neo-nomad, substituting tin-smithing for pastoralism in pre-industrial Britain.
The little girls had it right. In an age that denied them the freedom to
create their own destiny, they wisely framed their tag-along life choices in
the form of a rhyme that listed deep realities. Today, the remaining modern
women who look to men, rather than to themselves, to define their lives,
might sing a different song:
blogger, coder, soldier, consultant
rockstar, burger-flipper, welfareman, spammer
Everything changes. Everything remains the same.
The Politics of Career Choices
Somewhere along the path to growing up, if you bought into the moral legitimacy argument that justified striving for sexy, lucrative, powerful, you implicitly took on the guilt of letting dull, dirty and dangerous work, done by others, enable your life. If that guilt is killing you, you are a liberal. If you think this is an unchangeable reality of life, you are a conservative. If you think robots will let us all live sexy, lucrative, powerful lives, you are deluded. You see, the SLP-DDD spectrum is not absolute, it is relative. Because our genes program us to strive for relative reproductive success in complicated ways. There is a ponderous theory called relative deprivation theory that explains this phenomenon. So no matter how much DDD work robots take off the table, we'll still be the same pathetic fools in our pajamas.

Can you live with what you've chosen?


Here's what makes me, at a first approximation, a business conservative/social liberal. I can live with it, and shamelessly pursue SLP, without denying the unpleasant reality that starving and poisoned ship-breakers and American-Idol-hopeful garbage haulers make my striving possible. In my mind, it isn't the pursuit of SLP that is morally suspect. It is the denial of the existence of DDD.
So what are you? Tinker, tailor, soldier, sailor, richman, poorman,
beggarman or thief?
There is a trail associated with this post that explains the history of the
rhyme.
I wrote this post while consuming three vodka tonics, so if it turns out to be a successful post, I might change that link down there to say "buy me a vodka tonic" instead of "buy me a coffee."

The Turpentine Effect


March 18, 2010
Picasso once noted that when art critics get together they talk about
Form and Structure and Meaning. When artists get together they talk about
where you can buy cheap turpentine. When you practice a craft you
become skilled and knowledgeable in two areas: the stuff the craft
produces, and the processes used to create it. And the second kind of
expertise accumulates much faster. I call this the turpentine effect. Under
normal circumstances, the turpentine effect only has minor consequences.
At best, you become a more thoughtful practitioner of your craft, and at
worst, you procrastinate a little, shopping for turpentine rather than
painting. But there are trades where tool-making and tool-use involve
exactly the same skills, which has interesting consequences.
Programming, teaching, writing and mechanical engineering are all such
trades.
Self-Limiting and Runaway Turpentine Effects
Any sufficiently abstract craft seems to cause some convergence of tool-making and tool-use. Painters aren't normally also chemists, so that's actually not a great example. But I don't doubt that some of Picasso's forgotten technician contemporaries, who had more ability to say things with art than things to say, set up shop as turpentine sellers, paint-makers or art teachers. But in most fields the turpentine effect is self-limiting. As customers, pilots can only offer so many user insights to airplane designers. To actually become airplane designers, they'd have to learn aerospace engineering. But in domains where tool-making involves few or no new skills, you can get runaway turpentine effects.
As Paul Graham famously noted, hackers and painters are very similar creatures. But unlike painting or aircraft, programming is a domain where tool-use skills can easily be turned into tool-making skills. So it is no surprise that programmers are particularly susceptible to the runaway turpentine effect. Joel Spolsky struck me very forcefully as the runaway-turpentine-effect type, when I read his Guerrilla Guide to Interviewing (a process designed to allow technically brilliant programmers to clone themselves). It is no surprise that his company produces tools (really good ones, I am told) for programmers, not software for regular people. And their hiring process is guaranteed to weed out (with rather extreme disrespect and prejudice) anyone who could get them to see problems that are experienced by non-programmers. 37signals is another such company (project management software). If you see a tool-making company, chances are it was founded entirely by engineers. And the consequences aren't always as pretty as these two examples suggest.
Linus Torvalds' most famous accomplishment was an act of thoroughly unoriginal cloning (Unix to Linux, via Minix). But among programmers, he seems to be most admired for his invention of git, a version control system whose subtle and original design elements only programmers can appreciate. This discussion with a somewhat postal Torvalds comment (I hope it is authentic) is a revealing look at a master-tool-maker mind. Curiously, Torvalds is going postal over a C vs. C++ point, and it is interesting to read his comment alongside this interview with another programmer's programmer, Bjarne Stroustrup, the inventor of C++.
Eric Raymond codified, legitimized and spiritualized this path for programmers, in The Cathedral and the Bazaar, when he noted that most open-source projects begin with a programmer trying to scratch a very personal itch, not other-user needs. The open source world, as a result, has produced far more original products for programmers than for end users. Off the top of my head, I actually can't think of a single great end-user open source product that is not a clone of a commercial original (aside: there is a crying need for open-source market research).
When I was a mechanical engineering undergraduate, my computer science peers created a department t-shirt that said "I'd rather write programs to write programs than write programs." That about sums it up. In my home territory of mechanical engineering, some engineers naturally like to build machines that do useful things. Others build machine tools, machines that build machines, and wouldn't have a raison d'être without the first category.

Next door to programming and engineering, the turpentine effect can occur in science as well. Stephen Wolfram is my favorite example of this. His prodigious talents in physics and mathematics are probably going to be forgotten in 50 years, because he never did anything worthy of them (according to his peers, neither his early work, nor A New Kind of Science, is as paradigm-shattering as he personally believes). But the paradigm-shifting tool he built, Mathematica, is going to be in the history books much longer.
Teaching is a very basic creative skill that seems to emerge through runaway turpentine effects. I knew a professor at Cornell who had, outside his door, a sign that said, "Those who can, do. Those who can do better, teach." Methinks the professor doth protest too much. There is a reason the actual cliché is "those who can, do. Those who can't, teach." But the insinuation that teachers are somehow not good enough to do is too facile. That is often, but not always, the case.
What happens is that all talented people engage in deliberate practice (a very conscious and concentrated form of self-aware learning) in acquiring a skill. But if you can't find (or get interested in) things to do that are worthy of your skill, you turn to the skill itself as an object of attention, and become better at improving the skill rather than applying it. Great coaches were rarely great players in their time. John Wright, a mostly forgettable cricket player, had a phenomenal second innings in his life, as the coach who turned the Indian cricket team around.
But this effect of producing great teachers has a dark side as well, especially in new fields, where there are more learners than teachers. Thankfully, despite being tempted several times, I never started a "how to blog" blog. More generally, this is the writing/speaking/teaching/consulting (W/S/T/C) syndrome that hits people who go free agent. We talked about it before in my review of One Person, Multiple Careers (check out the comments as well).
This relation to teaching (via self-learning) has actually been studied in psychology. In Overachievement, John Eliot talks about a training mindset and a performance mindset. The former involves meta-cognition and continuously monitoring your own performance. The latter involves an ability to shut off the meta-cognition and just get lost in doing. Great teachers were probably great learners. Great doers may be slower learners, but are great at shutting off the meta-cognition.
Causes and Consequences
I think the turpentine effect is caused by (and I am treading on dangerous territory here) the lack of a truly artistic eye in the domain defined by a given tool (so it is ironic that it was Picasso who came up with the line). Interesting art arises out of a combination of refined skills and a peculiar, highly original way of looking at the world through that skill. If you have the eye without the skills, you become an idiosyncratic eccentric who is never taken seriously. If you have the skills without the eye, you become susceptible to the turpentine effect. The artistic eye is innate and requires no real refinement. In fact, the more you learn, the more the eye is blinded. The adult artistic eye is largely a matter of protecting a childlike way of seeing, but coupling it to an adult way of processing what you see. And to turn it into value, you need a second coupling to a skill that translates your unique way of seeing into unique ways of creating.
There is a feedback loop here. Sometimes acquiring a skill can make you see things you didn't see before. When you have a hammer in your hand, everything looks like a nail. On the other hand, if you can't see nails, all you see is opportunities to make better hammers.
The artistic eye is also what you need to make design decisions that
are not constrained by the tools. A complete absence of artistic instincts
leads to an extreme lack of judgment. In a Seinfeld episode, Jerry gets
massively frustrated with a skilled but thoroughly inartistic carpenter
whom he has hired to remodel his kitchen. The carpenter entirely lacks
judgment and keeps referring every minor decision to Jerry. Finally Jerry
screams in frustration and tells him to do whatever, and just stop bothering
him. The result: the carpenter produces an absolute nightmare of a kitchen.
In Wonder Boys (a movie based on a Michael Chabon novel), the writer/professor character played by Michael Douglas tells his students that a good writer must make decisions. But he himself completely fails to do so, and his book turns into an unreadable, technically-perfect, 1000-page monster. No artistic decisions usually means doing everything rather than doing nothing. Artists mainly decide what not to do.
What about consequences?
The most obvious and important one is a negative consequence: creative self-indulgence. Nicky Hilton designs expensive handbags (which is still, admittedly, a more admirable way of spending a life than the one her sister models). There is a reason most product and service ideas in the world are created for and by rich or middle-class people for their own classes. The turpentine effect is far more prevalent than its utility requires. There is a limit to how many people can be absorbed in safe and socially-useful turpentine-effect activities like tool-building or teaching. Let loose where a content-focus, an artistic eye and judgment are needed, it leads to over-engineered monstrosities, products nobody wants or needs, and a massive waste of resources. Focusing on the problems of others, rather than your own (or of your own class), requires even more effort.
The positive effects are harder to see, but they are important. The
turpentine effect is how isolated creatives can get together and form
creative communities that help refine and evolve a discipline, sometimes
over centuries, and take it much further than any individual can. Socially,
this emerges as the aesthetic of classicism in any field of craft.

The World is Small and Life is Long


January 18, 2012
In the Harry Potter series, J. K. Rowling repeatedly uses a very effective technique: turning a character, initially introduced as part of the background, into a foreground character. This happens with the characters of Gilderoy Lockhart, Viktor Krum and Sirius Black, for instance. In fact she uses the technique so frequently (with even minor characters like Mr. Ollivander and Stan Shunpike) that the background starts to empty out. This is rather annoying because the narrative suggests and promises a very large world (comparable in scope and complexity to the Lord of the Rings world, say) but delivers a very small world in which everybody knows everybody. You are promised an epic story about the fate of human civilization, but get what feels like the story of a small town. Characters end up influencing each other's lives a little too frequently, given the apparent size of the canvas.
We are used to big worlds that act big and small worlds that act small. We are not used to big worlds that act small. Which is a problem, because that's the sort of world we now live in. Our world is turning into Rowling's world.
The Double-Take Zone
Our lives are streams of mostly inconsequential encounters with
people who momentarily break away from the nameless and faceless
social dark matter that surrounds our personal worlds. But most of the
time, they return to the void.
Each of us is at the center of a social reality surrounded by a foreground social zone of 150-odd people with names and faces, a 7-billion-strong world of social dark matter outside, and an annular social gray zone in between, comprising a few thousand people.

This last category contains people who are neither completely anonymous and interchangeable, nor possessed of completely unique identities in relation to us. Included in this annular ring are old classmates and coworkers who still register as unique individuals but have turned into insubstantial ghosts, associated only with a dim memory or two. Also in this ring are public figures and celebrities whom we recognize individually, but who don't rise above the archetypes that define their respective classes. And then there are all those baristas and receptionists whom you see regularly.
It is this social gray zone that interests me, and there's a simple test for figuring out if somebody is in this zone with respect to you: if you meet them out of context, you'll do a double-take.
If the barista at your coffee shop shows up at the grocery store, you'll do a quick double-take. Then you'll make the appropriate context switch, and recognition will turn into identification. Our language accurately reflects this thought process: we say "I can't place her" and "I just figured out where I know her from."
This happens with celebrities too. I am pretty good at the game of recognizing lesser-known actors in new roles. When watching TV, I often say things like, "Oh, the villain's sister in Dexter... I just realized, she played Alma Garrett in Deadwood." I tend to spot these connections across shows and movies faster than most people.
Context-Dependent Relationships
The reason for the double-take effect is obvious. Most of the people
we recognize enough to distinguish from faceless/nameless social dark
matter are still one-dimensional, context-dependent figures: the barista
who works mornings at the Starbucks on Sahara Avenue. Double-take
zone people are literally part of the social background.
It takes a few serendipitous encounters in different contexts to pry someone loose from context, but mostly, nothing happens. They merely turn into slightly more well-defined elements of their default contexts: the barista who works at the Starbucks on Sahara Avenue, whom I once ran into at Whole Foods.
This still isn't the same as actually knowing someone, but it is a necessary first step (as an aside, this is the reason why the "three media/three contacts" rule in sales works the way it does). Double-take moments are relationship-escalation options with expiry dates. They create a window of opportunity within which the relationship can escalate into a personal one.
There is a reason "haven't we met before?" is the mother of all pick-up lines.
So let's say there are three zones around you. The context-free zone of personal relationships, surrounded by a context-dependent double-take zone (call it the don't-I-know-you-from-somewhere zone if you prefer), and finally, social dark matter.
The Real and Abstract Parts of the Social Graph
The personal, context-free zone is the part of the social graph that is real for you. Here, you don't deal in abstractions like "It's not what you know, but who you know." You deal in specifics like, "You need to get yourself a meeting with Joe. Let me send an introductory email." You could probably sketch out this part of the social graph fairly accurately on paper, with real names and who-knows-whom connections. You don't need to speculate about degrees of separation here. You can count them.
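To make "you can count them" concrete, here is a minimal sketch that counts degrees of separation in a small, explicitly sketched-out graph using breadth-first search. The names and the adjacency list are entirely hypothetical, not anyone's actual graph.

```python
from collections import deque

# A hypothetical sketch of a personal, context-free zone: who knows whom.
graph = {
    "you":   ["alice", "bob"],
    "alice": ["you", "carol"],
    "bob":   ["you", "carol", "dave"],
    "carol": ["alice", "bob"],
    "dave":  ["bob", "joe"],
    "joe":   ["dave"],
}

def degrees_of_separation(graph, start, target):
    """Breadth-first search: returns the number of hops from start to target."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if person == target:
            return seen[person]
        for friend in graph.get(person, []):
            if friend not in seen:
                seen[friend] = seen[person] + 1
                queue.append(friend)
    return None  # not reachable within the real part of your graph

print(degrees_of_separation(graph, "you", "joe"))  # 3: you -> bob -> dave -> joe
```

The point of the sketch is simply that in the real zone the graph is small and explicit enough that degrees of separation are something you compute, not estimate.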
The dark matter world is the part of the social graph that is an abstraction for you. You have abstract ideas about how it works (Old Boy networks, people taking board seats in each other's companies, the idea that weak links lead to jobs, the idea that Asians have stronger connections than Americans), but you couldn't actually sketch it out except in coarse, speculative ways using groups rather than individuals.
The double-take zone is populated by people who are socially part of the abstract social network that defines the dark matter, but physically or digitally are concrete entities in your world, embedded in specific contexts that you frequent. Prying someone loose from the double-take zone means moving them from the abstract social graph into your real, neighborhood graph. They go from being concrete and physically or virtually situated in your mind to being concrete and socially situated, independent of specific contexts. If mathematicians and theoretical computer scientists ran the world, the socially correct thing to say in a double-take situation would be: "Oh, we're context-independent now; do you want to take this on-graph?"
In these terms, Rowling's little trick involves introducing characters in the double-take zone and then moving them to the context-free zone. In the process, she socially situates them. Lockhart goes from abstract celebrity author making an appearance at a bookstore to teacher with specific relationships to the lead characters. Sirius Black initially appears as an abstract criminal on television, but turns into Harry's godfather. Viktor Krum is a distant celebrity Quidditch player who turns into Ron's rival for the affections of Hermione.
The Active, Unstable Layer
The double-take zone is defined by the double-take test, but such tests are rare. What happens when they do occur? Since an actual double take creates a window of opportunity to personalize a relationship (an active option), you could call this the active and unstable layer of the double-take zone. The more actual double takes are happening, the more the zone is active and unstable.
Our minds deal badly with the double-take zone when it is stable and dormant. And we really fumble when it gets active and unstable. Why?

Our social instincts are based on physical-geographic separation of scales. In the pre-urban world, the double-take zone was empty. You either knew somebody personally as a fellow villager, or as a stranger visiting from the dark-matter world. Strangers couldn't stay strangers. They either went away soon and were forgotten, or stayed and became fellow villagers.
We are used to being careful around people from our village, and
more careless in our dealings with strangers passing through. We take the
long view of relationships within local communities, and are more willing
to pick fights with strangers. There is less likelihood of costs escalating
out of control via vendettas in the latter case. It is also easier. The obvious
tourist is more easily cheated than the local.
Our psychological instincts appear to have evolved to deal with this
type of social reality. We are more likely (and able) to dehumanize
strangers before dealing roughly with them.
Urbanization created the double-take zone. Mass media expanded it vastly, but asymmetrically (mass media creates relationships that are double-take in one direction, dark-matter in the other). The Internet is expanding it vastly once again, this time with more symmetry, thanks to the explosion in the number of contexts it offers for encounters to occur.
This wouldn't matter so much if the expansion didn't affect stability. We know how to deal with stable and dormant double-take zones.
The Rules of Civility
Before the Internet began seriously destabilizing and activating the
double-take zone, it was an unnatural social space, but we knew how to
deal with it.
The double-take zone merely requires learning a decent and polite, but impersonal approach to interpersonal behavior: civility. It requires a capacity for an abstract sort of friendliness and a baseline level of mutual helpfulness among strangers. We learn the non-Duchenne smile, something that sits uncomfortably in the middle of a triangle defined by a genuine smile, a genuine frown, and a blank stare.
We think of such baseline civility as the right way to deal with the
double-take zone. This is why salespeople come across as insincere: they
act as though double-take zone relationships were something deeper.
The pre-Internet double-take zone was fairly stable. Double-take events were truly serendipitous and generally didn't go anywhere. Most relationship options expired due to low social and geographic mobility. A random encounter was just a random encounter. Travel was stimulating, but poignant encounters abroad rarely turned into anything more.
The rules of conduct that we know as civility have an additional
feature: they are based on an assumption of stable, default-context status
relationships that carry over to non-default contexts. A century ago, if a
double-take moment did occur, once the parties recognized each other
(made easier by obvious differences in clothing and other physical
markers of class membership), the default-context status relationship
would kick in. If a lord decided to take a walk through the village market on a whim, and ran into his gardener, once the double-take moment passed, the gardener would doff his hat to the lord, and the lord would confer a gracious nod upon the gardener.
But this sort of prescribed, status-dependent civility is no longer
enough. The rules of civility cannot deal with an explosion of
serendipitous encounters.
Social Mobility versus Status Churn
Since double-take encounters temporarily dislocate people from the default context through which you know them, and make them temporarily more alive afterward, you could say the double-take zone is coming alive with nascent relationships: relationships that have been dislodged from a fixed physical or digital context, but haven't yet been socially situated.
There is an additional necessary condition for more to happen: the
double-take moment must also destabilize default assumptions about
relative status.
Double-take events today destabilize status, unlike similar events a century ago. This is because we read them differently. A lord strolling through a market a century ago (a domain marked for the service class) knew that he was a social tourist. Double-take events, if they happened, were informed by the assumption that one party was an alien to the context, and both sides knew which one was the alien. Everybody wore the uniform of their home class, wherever they went.
Things are different today. A century ago, social classes were much more self-contained. Rich, middle class and poor people didn't run into each other much outside of expected contexts. They shopped, ate and socialized in different places, for instance. This is why traditional romantic stories are nearly always based on the trope of the heroine temporarily escaping from a home social class to a lower one, and having a status-destabilizing encounter with a lower-class male (the reverse, a prince going walkabout and meeting a feisty commoner-girl, seems to be a less common premise, but that's a whole other story).

But today, one of the effects of the breakdown of the middle class and
trading-up is that status relationships become context-dependent. There is
no default context.
Let's say you're an administrative assistant at a university, have an associate's degree, and frequent a coffeeshop where the barista is a graduate student. You both shop at Whole Foods. She's trading up, as far as dietary lifestyles go, to shop at Whole Foods, while it is normal for you because you have a higher household income.
In the coffeeshop, you're higher status as customer. If you run into each other at Whole Foods, you're equals. If you run into each other on campus, she's the superior.
Short of becoming President, there is almost nothing you can do that will earn you a default status with everybody. It's up in the air.
This isn't social mobility. The whole idea of social mobility, at least in the sense of classes as separate, self-contained social worlds, is breaking down. Instead you have context-dependent status churn. Double-take moments don't necessarily indicate that one party is a tourist outside their class. They are merely moments that highlight that class is a shaky construct today.
Worlds are mixing, so double-takes become more frequent. But what
makes the increased frequency socially disruptive is that status
relationships are different in the different contexts.
Temporal Churn
Even more unprecedented than status churn is temporal churn.
People from the same nominal class, who once knew each other, can move into each other's double-take zones simply by drifting apart in space. That's why you do a double-take when you randomly run into an old classmate, whom you haven't seen for decades, in a bookstore (happened to me once). Or when you run into a hallway-hellos-level coworker, whom you've never worked directly with, at the grocery store (this happened to me as well).
It is not changes in appearance or social status that make immediate
recognition difficult. It is the unfamiliar context itself.
This sort of thing doesn't happen much anymore. We don't catch up as much anymore because we never disconnect. Unexpected encounters are rare because online visibility never drops to zero. Truly serendipitous encounters turn into opportunistically planned ones via online early-warning signals.
One effect of this is that relationships can go up or down in strength over a lifetime, since they are continuously unstable and active. Once you've friended somebody on Facebook, and their activities keep showing up in your stream, you are more likely to look them up deliberately for a meeting or collaboration. Social situation awareness is not allowed to fade. The active and unstable double-take layer is constantly suggesting opportunities and ideas for deeper interaction.
It's not that time doesn't matter anymore, but that time does more complicated things to relationships. In the pre-Internet world, relationships behaved monotonically in the long term. You either lost touch, and the relationship weakened over time, or you stayed in touch and the relationship got stronger over time. Some relationships plateaued at a certain distance.
Few relationships went up and down in dramatic swings as they routinely do today.
Beyond Civility
Mere static-status civility is no longer enough to deal with a world of volatile relationships created by status churn across previously distinct classes, and temporal churn that ensures that relationships never quite die. Relationships that move in and out of the double-take zone (or even just threaten to do so) need a very different approach.

You never know when you might turn a barista into a new friend after
a double-take encounter, or renew a relationship with an old one via a
Facebook Like.
The sane default attitude today is "the world is small and life is long." Reinventing yourself is becoming prohibitively expensive. You have to navigate under the expectation that the real part of your social graph will grow over time, even if you move around a lot. If you are immortal and can move sufficiently fast in space and time, the abstract social graph may vanish altogether, like it did for Wowbagger the Infinitely Prolonged in The Hitchhiker's Guide to the Galaxy, who made it the mission of his immortal life to insult everybody in the galaxy, in person, by name, and in alphabetical order.
The phrase "the world is small and life is long" came up in a conversation with an acquaintance in Silicon Valley. We'd been talking about how the Silicon Valley technology world, despite being quite large, acts like a small world. We'd been talking, in particular, about the dangers of burning bridges and picking fights. We both agreed that that's a very dangerous thing to do. That's when my acquaintance trotted out that phrase, with a philosophical shrug.
Of the two parts of the phrase, "the world is small" is easier to understand. I don't think it has much to do with the much-publicized four-degrees finding on Facebook. Status and temporal churn within the six-degree world is sufficient to explain what's happening.
"Life is long" is the bit people often fail to appreciate. The social graph throbs with actual encounters every minute that are constantly rewiring it. If you are in a particular neck of the woods for a long enough time, you'll eventually run into everybody within it more than once. It's the law of large numbers applied to accumulating random encounters.
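A minimal simulation sketch of that claim (the population size and the assumption of uniformly random encounters are mine, chosen purely for illustration): draw a random person from your neck of the woods at every encounter and count how long it takes to have met everybody at least once.

```python
import random

def encounters_until_everyone_met(population=1000, seed=None):
    """Count random encounters until every person in the pool has been met once.

    This is the classic coupon-collector setup: with n people and uniformly
    random encounters, the expected count is roughly n * ln(n).
    """
    rng = random.Random(seed)
    met = set()
    count = 0
    while len(met) < population:
        met.add(rng.randrange(population))  # each encounter picks a random person
        count += 1
    return count

# With 1,000 people, roughly 1000 * ln(1000), i.e. about 6,900 encounters on average.
trials = [encounters_until_everyone_met(1000, seed=i) for i in range(20)]
print(sum(trials) / len(trials))
```

The number is large but finite and grows only a little faster than the population itself, which is the "life is long" point: stay in one neck of the woods long enough and the encounters accumulate past everybody.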
Silicon Valley is a place where worlds collide frequently in different
status-churning contexts, and circulation through different roles over time
creates temporal churn. There are other worlds that exhibit similar
dynamics. Most of the world is going to look like this in a few decades.

It is increasingly going to be a world of shifting alliances and status relationships within a larger, far more active and unstable layer in a much larger double-take zone. A world where you will never be quite sure where you stand in relation to a large number of potentially important people.
Some people love this emerging, charged social world, always poised
on the edge of serendipity. They seem to come alive with this much static
in the air. They thrive on status churn. They hoard relationships, turning
every chance encounter into a rest-of-life relationship.
Others fantasize about declaring relationship bankruptcy and starting
a new life somewhere else. At one time, this was actually very easy to do.
Today, you need the Witness Protection Program to pull it off.
I am not certain whether I like or dislike this emerging world. I think I am leaning towards dislike. The slogan "the world is small and life is long" describes a tense and anxious world of constant social shadow-boxing. One where you must always be "on," socially. A world where burning bridges is more dangerous, and open conflict becomes ever costlier, leading to less dissent and more stupidity.
It is a situation of false harmony. One where peace is less an indicator
of increasing empathy and human connection, and more an indicator of
increasing wariness. You never know which world your world will collide
with next, with what consequences. You never know what missed
opportunity or threat could decisively impact your life.
So far, we've been able to do without the opportunities, and avoid the threats. We try to teach teenagers what we think are the right kinds of cautious lessons: it boils down to "be careful what you post on Facebook, it could affect your job."
But this is a transient stage. Soon we won't be able to do without the opportunities, and our lives will come to depend on the serendipity catalyzed by the active, unstable double-take layer. Nice-to-have has a way of turning into must-have. This dependence will come with necessary exposure to the threats. "The world is small and life is long" will not be enough protection.

Motivational speakers used to preach a few decades back that we should all "think global and act local."
It has happened. But I don't think this is quite what they had in mind.

My Experiments with Introductions


May 7, 2011
Introductions are how unsociable introverts do social capital. Community building is for extroverts. But introductions I find stimulating. Doing them and getting them. This is probably a direct consequence of the type of social interaction I myself prefer. My comfort zone is 1:1, and an introduction is a 3-way that is designed to switch to a 2-way in short order, allowing the introducer to gracefully withdraw once the introducees start talking. As groups get larger than two, my stamina for dealing with them starts to plummet, and around 12, I basically give up (I don't count speaking/presentation gigs; those feel more like performance than socializing to me).
I am pretty good at introductions. I've helped a few people get jobs, and helped one entrepreneur raise money. Off the top of my head, I can think of at least a half-dozen very productive relationships that I have catalyzed. I think my instincts around when I should introduce X to Y are pretty good: 2 out of 3 times that I do an introduction, at the very least an interesting conversation tends to start. Since I've been getting involved in a lot of introductions lately, I thought I'd share some thoughts based on my experiments with introductions.
Weak-Link Hubs vs. Strong-Link Hubs
Introductions are the atomic unit of social interaction. They are central to the creation and destruction of communities, but aren't themselves a feature of communities. Rather they drive the creative destruction process within the universe of communities, as Romeo and Juliet illustrates particularly well. Introductions are constantly rewiring the social graph, causing old communities to collapse and new ones to cohere.
To understand how introductions work, you have to understand a subtle point: stereotypical extroverted community types are actually pretty bad at introductions, except for one special variety: introducing a newcomer into an existing group, as a gatekeeper. Stereotypical host/hostess community types are great at helping existing communities grow stronger and endure. Their social behaviors are therefore in direct conflict with uncensored introduction activity, which causes social creative destruction to intensify. I call the stereotypical community types strong-link social hubs. They know everybody in a given local (physical or virtual) community. They are a friend, mentor or mentee to every individual within that community. They are the ultimate insiders. When a strong-link social hub makes an introduction, it is usually quick and superficial: "I am sure you two will find that you have a lot in common, you're both engineers!" Or the half-joking "everybody, this is X; X, this is everybody, ha ha!" Enough to sustain party conversations, but usually not enough to catalyze relationships except by accident.
The real hubs of introduction activity on the social graph though, are
what I call weak-link hubs. It is both a personality type and a structural
position in the social graph. It is easiest for me to explain what this means
via a personal anecdote.
When I was a kid in high school, I resisted being sucked into any particular group. For their part, the 2-3 major groups in my class saw me as a puzzle: I was not "one of us" or "one of them." Neither was I one of the social outcasts. I did 1:1 friendships or hung out occasionally as a guest in groups, but I rarely joined in group activities.
One day, I remarked to a friend, "I guess I am equally inside all the groups." His retort: "No, you are equally outside all the groups." I realized that not only was he right, that was pretty much my identity. It hardened into a sort of reactionary tendency towards self-exile (one of my nicknames in college was "hermit") that has stayed with me. Whenever I find myself getting sucked too deeply into any group, I automatically start withdrawing to the edge. Physically, if the group is in a room.
That is what I mean by weak-link hubs being both a personality type and a structural position. You have to have the personality that makes you retreat from centers and you have to have centers around you to retreat from. This retreat is an interesting dynamic. You cannot really be attracted to the edge around a single center, since that is a diffuse place. But if you are retreating simultaneously from multiple centers, you will find yourself in a position in the illegible and chaotic intersection lands. Why illegible? Try drawing a random set of overlapping circles and making sense of the pattern of intersections. Here's an example:
[Figure: a random set of overlapping circles, with an illegible pattern of intersections.]
This retreating from all nearby centers is not exactly the personality description of a great social hub. So why is it a great position for introduction-making? It's the same reason Switzerland is a great place for international negotiations: neutrality and small size anchoring credibility, but with sufficient actual clout to enforce good behavior. If you are big or powerful, you have an agenda. If you are from the center of a community, you have an agenda. Another great example is the Bocchicchio family in The Godfather: not big enough to be one of the Five Families, but bloody-minded enough to effectively play intermediary in negotiations by offering themselves up as hostages.
Edge Blogging and the Introduction Scaling Problem
This post actually grew out of a problem I haven't yet solved. My instincts around introductions aren't serving me well these days. Over the last few months, the number of potential connection opportunities that go above my threshold triggers has been escalating. Two years ago, I'd spot one potential connection every few months and do an introduction. Now I spot one or two a week, and it's accelerating. I am getting the strange feeling that I might turn into one of those cartoon characters at a switchboard who starts out all calm and in control and is reduced to crazed scrambling. In case it isn't obvious, the growth of ribbonfarm is the driver that is creating this scaling problem.

The answer is obvious for extroverts: create a community and start dealing with people in one-to-many and many-to-many ways in group contexts. This allows you to simply create a social field around yourself where people can connect without overt catalysis from you. The cost is that you must turn yourself into a human social object. You must become a new center. You will no longer be in the illegible intersection lands where creativity and originality live. Call me selfish, but that's the big reason I don't like the idea that readers frequently propose: formal ribbonfarm meetups or an online ribbonfarm community.
The anatomy of the problem is simple. Blogging is often an edge role.
If you see a blog that sprawls untidily across multiple domains rather than
staying within a tidy niche, chances are you are reading an edge blog.
They tend to be small and slow-growth, with weird numbers in their traffic
anatomy.
The social graph of an edge blogger is very different from the social
graphs of both celebrities and regular people without much public
visibility. Regular people have many active strong links and many more
weak links that used to be strong links (old classmates, colleagues from
former jobs and the like). For regular people weak links are usually either
strong links weakened by time or intrinsically weak links catalyzed by a
short sequence of strong links (like a friend-of-a-friend or an in-law). In
both cases, the weak links of regular people tend to be quiescent.
Celebrities on the other hand have a huge number of active weak links, but they only go one way: a lot of people know Obama but Obama doesn't know 99.9999% of them. Even if you count only those who have shaken hands with Obama, the asymmetry is still massive. Center bloggers are effectively celebrities. In fact they often are celebrities who have taken to blogging, like Seth Godin.
Edge bloggers though are an odd species. They are perhaps most like professional headhunters, used car salesmen or other types of people who regularly come into weak two-way contact with total strangers. Unlike those rather transactional roles though, bloggers do a whole lot of weak social rather than financial transactions with a lot of total strangers. Many of you (I've lost count) have ongoing email conversations with me, usually about a specific theme that I've blogged about or mentioned somewhere online (container shipping, martial arts, organizational decay and s/w design are some of the themes). The intensity ranges from several times a week to once every couple of months (for the infrequent ones, I usually have to do an inbox search to remember who the person is). With some correspondents, I have periodic bursts of activity. With a small handful of people, thanks to phone or face-to-face meetings, I have made the jump to actual friendship.
Edge bloggers are natural weak link hubs. We have vastly more active
two-way weak link relationships going on than regular people or
celebrities (or center bloggers). These are not forgotten classmates or
friends-of-friends who can be called upon when you are job-hunting. Nor
are they one-way-recognition handshakes.
I got a visceral sense of what it means to be a weak-link hub when I compared my LinkedIn graph visualization to that of a couple of regular-people friends. Though my friends had comparable numbers of contacts, most of their contacts fell into very obvious small-world categories, like workplace, school, customers or industry associations. My social graph, on the other hand, has a huge bucket that I could only label "miscellaneous." Many are from ribbonfarm, but I suppose my weak-link hub style carries over to regular life as well. For instance, I have a lot more random connections to people in widely separated parts of Xerox, my former employer, compared to most of my former coworkers.
Keeping Edges Edgy
Make no mistake, this is fun for me and hugely valuable. But I have to admit, it takes a lot of time to keep up a whole bunch of 1:1 email relationships, and it is getting steadily harder. So far, my clean-inbox practices have helped me keep up, but there has been some of the inevitable increase in response time, and sometimes a decrease in my response quality.
The big temptation is of course to ignore my personality and preferences and allow ribbonfarm to become a "center." It's not necessarily a bad thing. You trade off continued creativity and vitality for deeper collaborative cultivation of established value. I don't like doing that much. I get distracted too quickly. My brain is not built for depth in that sense, even around things I trigger, like the Gervais Principle memeplex.
The conundrum is that I don't think raising the threshold for "potential connection quality" is the right answer. That's the wrong filter variable for scaling. I am not sure what the right one is, but I won't attempt to jump to synthesis. So far, I've simply been letting a steadily increasing fraction of introduction opportunities go by. Mostly I try to avoid making introductions to people who are already oversubscribed.
Though I don't have a theory, I do have one heuristic that serves me well: closer potential direct connection. If I know A and B, and I sense that A and B would have a more fertile relationship with each other than either has with me, I make the connection and exit. It is the opposite logic of marketplaces whose organizers are afraid of disintermediation. To me, being an intermediary in the social sense is mostly costs and little benefit.
But that one heuristic isn't enough. I have been experimenting with introductions in different ways lately, and learning new ideas and techniques.
Here's one new idea I've learned. To keep edges edgy, and prevent them from becoming centers, you need feedback signals. One I look for is symmetry. Introducer types tend to be introducees equally often. If the ratio changes, I get worried.
As an illustration of the symmetry of this process of mutual cross-catalysis among sociopath weak-link hubs, consider this: while I was conducting my experiments with introductions, others have been introducing me to their friends. Hang Zhang of Bumblebee Labs introduced me to Tristan Harris, CEO of Apture, and Seb Paquet formally introduced me to Daniel Lemire (whom I knew indirectly through comments on each other's blogs before, but had never directly emailed/interacted with).
We are all lab rats running in each other's mazes. I like that thought.

Extroverts, Introverts, Aspies and Codies


April 7, 2011
Lately I've been thinking a lot about extroversion (E) and introversion (I). As a fundamental spectrum of personality dispositions, E/I represents a timeless theme in psychology. But it manifests itself differently during different periods in history. Social psychology is the child of a historicist discipline (sociology) and an effectively ahistorical one (psychology). The reason I've been thinking a lot about the E/I spectrum is that a lot of my recent ruminations have been about how the rapid changes in social psychology going on around us might be caused by the drastic changes in how E/I dispositions manifest themselves in the new (online+offline) sociological environment. Here are just a few of the ideas I've been mulling:

As more relationships are catalyzed online than offline, a great sorting is taking place: mixed E/I groups are separating into purer groups dominated by one type
Each trait is getting exaggerated as a result
The emphasis on collaborative creativity, creative capital and teams is disturbing the balance between E-creativity and I-creativity
Lifestyle design works out very differently for Es and Is
The extreme mental conditions (dubiously) associated with each type in the popular imagination, such as Asperger's syndrome or co-dependency, are exhibiting new social phenomenology

It was the last of these that triggered this train of thought, but I'll get to that.
I am still working through the arguments for each of these
conjectures, but whether or not they are true, I believe we are seeing
something historically unprecedented: an intrinsic psychological variable
is turning into a watershed sociological variable. Historically, extrinsic and
non-psychological variables such as race, class, gender, socio-economic
status and nationality have dominated the evolution of societies.

Psychology has at best indirectly affected social evolution. For perhaps the
first time in history, it is directly shaping society.
So since so many interesting questions hinge on the E/I distinction, I
figured it was time to dig a little deeper into it.
Wrong, Crude and Refined Models
Ill assume you are past the lay, wrong model of the E/I spectrum.
Introversion has nothing to do with shyness or social awkwardness.
If you have taken a Psychology 101 course at some point in your life,
you should be familiar with the crude model: extroverts are energized by
social interactions while introverts are energized by solitude. Every major
personality model has an introversion/extroversion spectrum that roughly
maps to this energy-based model. It is arguably the most important of the
Big Five traits.
For the ideas I am interested in exploring, the Psychology 101 model
is too coarse. We sometimes forget that there are no true solitary types in
homo sapiens. As a social species, we merely vary in the degree to which
we are sociable. We need a more refined model that distinguishes between
varieties of sociability.
A traditional mixed group of introverts and extroverts exhibits these
varieties clearly. Watch a typical student group at a cafeteria. The
extroverts will be in their mutually energizing huddle at the center, while
the introverts will be hovering at the edges, content to get the low dosage
social energy they need either through one-on-one sidebar conversations
or occasional contributions tossed like artillery shells into the extrovert
energy-huddle at the core. Usually contributions designed to arrest
groupthink or runaway optimism/pessimism.
As this example illustrates, a more precise and accurate view of the
distinction is that introverts need less frequent and less intense social
interaction, and can use it to fuel activities requiring long periods of
isolation. Extroverts need more frequent and more intense social
interactions, and can only handle very brief periods away from the group.
They prefer to use the energy in collaborative action.
While true solitude (like being marooned on an island without even a
pet) is likely intolerable to 99% of humanity, introverts prefer to spend the
social energy they help create individually. This leads naturally to a
financial metaphor for the E/I spectrum.
E/I Microeconomics
Positive social interactions generate psychological energy, while
negative ones use it up. One way to understand the introvert/extrovert
difference is to think in terms of where the energy (which behaves like
money) is stored.
Introverts are transactional in their approach to social interactions;
they are likely to walk away with their share of the energy generated by
any exchange, leaving little or nothing invested in the relationship itself.
This is like a deposit split between two individually held bank accounts.
This means introverts can enjoy interactions while they are happening,
without missing the relationships much when they are inactive. In fact, the
relationship doesnt really exist when it is inactive.
Extroverts are more likely to invest most of the energy into the
relationship itself, a mutually-held joint account that either side can draw
on when in need, or (more likely) both sides can invest together in
collaboration. This is also why extroverts miss each other when separated.
The mutually-held energy, like a joint bank account, can only be accessed
when all parties are present. In fact strong extroverts dont really exist
outside of their web of relationships. They turn into zombies, only coming
alive when surrounded by friends.
In balance sheet terms, introverts like to bring the mutual social debts
as close to zero as possible at the end of every transaction. Extroverts like
to get deeper and deeper into social debt with each other, binding
themselves in a tight web of psychological interdependence.
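As a toy illustration of this accounting metaphor, here is a minimal Python sketch of mine (not from the original post); the assumption that an extrovert keeps only a tenth of the energy individually is arbitrary:

    def settle_interaction(energy, style):
        # Split the energy from one positive interaction between two people.
        # Returns (share_a, share_b, joint): two individual shares plus the
        # amount left invested in the relationship itself.
        if style == "introvert":
            # Transactional: split the proceeds and bring mutual debt to zero.
            return energy / 2, energy / 2, 0.0
        if style == "extrovert":
            # Invest most of it in the jointly held account; keep a token amount.
            kept = 0.1 * energy  # arbitrary illustrative fraction
            return kept / 2, kept / 2, energy - kept
        raise ValueError("unknown style: " + style)

    print(settle_interaction(10.0, "introvert"))   # (5.0, 5.0, 0.0)
    print(settle_interaction(10.0, "extrovert"))   # (0.5, 0.5, 9.0)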

This shared custodial arrangement of relationship energy is one
reason strong relationships are the biggest predictor of happiness: as
Jonathan Haidt has put it, happiness is neither inside, nor outside, but in-between. Happiness is the energy latent in interpersonal bonds that helps
smooth out the emotional ups and downs of individual lives. The more
you put into them, the happier you will be.
Continuing the financial analogy, the small pools of individually-held
stores of introvert energy tend to be more volatile in the short term but
better insulated from the exposures of collectivization. The large
collectively held stores of extrovert energy tend to be less volatile in the
short term, but more susceptible to dramatic large scale bubbles of
optimism and widespread depression.
Both sides of course, pay a price for their preferred patterns of social
energy management. But thats a topic for another day. In this post, I am
more interested in bald behavioral implications of this model:
Introverts
1. require a minimum period of isolation every day to survive
psychologically
2. are energized by weak-link social fields, such as coffee shops,
where little interaction is expected
3. are energized by occasional, deeper 1:1 interactions, but still at
arms length; no soul-baring
4. are energized by such 1:1 encounters with anyone, whether or
not a prior relationship exists
5. are drained by strong-link social fields such as family gatherings
6. are reduced to near-panic by huddles: extremely close many-many encounters such as group hugs
7. have depth-limited relationships that reach their maximum depth
very fast
Extroverts
1. need a minimum amount of physical contact every day, even if it is
just lying around with a pet
2. are energized by strong-link social fields such as family gatherings
3. like soul-baring 1:1 relationships characterized by swings between
extreme intimacy and murderous enmity
4. are not willing to have 1:1 encounters with anyone unless theyve
been properly introduced into their social fields
5. are made restless and anxious by weak-link social fields such as
coffee shops unless they go with a friend
6. are reduced to near panic by extended episodes of solitude
7. have relationships that gradually deepen over time to extreme
levels
It took me a long time to learn point 4 in particular, because it is so
counter-intuitive with respect to the wrong-but-influential conflation of
introversion and shyness. I am a classic introvert. You might even say I
am an extreme introvert. One of my nicknames in college was hermit.
Yet, I find that I am far more capable of talking with random strangers
than most extroverts.
Extroverts tend to enjoy spending a lot of time with people they know
well. Talking to strangers is less rewarding to them because most E-E
transactions are maintenance transactions that help maintain, spend or
appreciate the invested capital in the relationships. Some of my extrovert
friends and family members are even offended by how easily and openly I
talk to random strangers: to them it seems obvious that depth of sharing
should correlate to length of interpersonal history. People like me simply
dont get that since our approach to relationships is to pretty much bring
the depth back to zero at the end of every conversation.
The E-I Tension
Introverts (Is) and extroverts (Es) have a curiously symbiotic, love-hate relationship as a result. Both E-E and I-I interactions tend to be
harmonious, since there is consensus on what to do with any energy
generated. Positive E-E interactions strengthen bonds over time. Positive
I-I interactions generate energy that is used up before the next interaction,
with no collective storage.

It is E-I interactions that create interesting tensions. Extroverts accuse
introverts of selfishness: from their point of view, the introverts are taking
out loans against jointly-held wealth, to invest unilaterally in risky
ventures. Introverts in turn accuse extroverts of being overly possessive
and stifling, since they cannot draw on the energy of the relationship
without the other party being present. The confusion is easily resolved if you note
that the introvert is thinking in terms of two individually held bank
accounts, while the extrovert is thinking in terms of a single jointly held
one.
The tension between introverts and extroverts is most visible in the
loose, non-clinical mental health diagnoses they make up for each other as
insults. Introverts are likely to accuse extroverts of codependency.
Extroverts are likely to accuse introverts of Aspergers syndrome. I only
recently learned about the slang term extroverts have for introverts: aspie.
Introverts dont have an equivalent short slang term for codependency that
I know of (probably because by definition they dont gossip enough to
require such shorthand). So lets simply make one up for the purpose of
symmetry: codie.
Ive met people suffering from clinical versions of both codependency and Aspergers, so I know that most of the aspie/codie
accusations flying around are baseless.
Lately Ive seen a lot more aspie accusations flying around than codie
accusations. This is perhaps partly due to Aspergers becoming an
aspirational disease in places like Silicon Valley (along with dyslexia), due
to a presumed correlation with startup success, but I believe there is more
to it. Recent shifts in the social landscape have made introversion far
more visible. This is among the many cracks in E-I relationships that I
mentioned earlier. There are seismic shifts going on in social psychology.
We may see a re-organization of social geography comparable to the great
racial and socio-economic sortings created by the flight to suburbia and
exurbia at the peak of the urban sprawl era.

Impro by Keith Johnstone


January 23, 2010
Once every four or five years, I find a book that is a genuine life-changer. Impro by Keith Johnstone joins my extremely short list of such
books. The book crossed my radar after two readers mentioned it, in
reactions to the Gervais Principle series: Kevin Simler recommended the
book in an email, and a reader with the handle angelbob mentioned it in
the discussion around GP II on Hacker News. Impro is ostensibly a book
about improvisation and the theater. Depending on where you are coming
from, it might be no more than that, or it might be a near-religious
experience.
The Alien Soulmate
In Your Evil Twins and How to Find Them, I defined an evil twin as
somebody who thinks exactly like you in most ways, but differs in just a
few critical ways that end up making all the difference. I listed Alain de
Botton and Nassim Nicholas Taleb among my evil twins. Johnstone has
defined for me a category that I didnt know existed, alien soulmate:
someone whose life has been shaped by radically different life
experiences, and thinks with a completely different conceptual language,
but is like you in just a few critical ways that make you soulmates.
Johnstones life (described in the opening chapter, Notes on Myself)
seems to have been shaped by extremely unpleasant early educational
experiences. Mine has been shaped largely by rewarding ones. He loves
teaching and is clearly unbelievably good at it; the sort of teacher who
changes lives. I dislike teaching, and though Ive done a fair amount of it,
I am not particularly good at it. His life revolves around theater, while
mine revolves around engineering, which are about as far apart as
professions can get. I could go on, but you get it. Polar opposites on paper.
We seem to share two critical similarities. First, like me, he seems to
stubbornly think things through for himself, with reference to his own
observations of the world, even if it means clumsily reinventing the wheel

and making horrible mistakes. Second, like me, he seems to adopt
methodological anarchy in groping for truths. Anything goes, if it gets you
to a valuable insight; no religious adherence to any particular
methodology, scientific or otherwise.
There is also a connection that may or may not be important: I was
active in theater for about a decade, from sixth grade through college. In
school, I was mostly the go-to guy for scripting class productions, and in
college I expanded my activities to acting and directing. I even won a
couple of inter-hostel (intramural to you Americans) acting prizes, and was
the dramatics secretary for my hostel for a year. Not that that means much.
It was pretty much a case of the one-eyed man being king in the land of
the blind. Engineering schools are not known for producing eventual
movie stars.
But though I was pretty much a talentless hack among other talentless
hacks, in retrospect, my experience with amateur theater did profoundly
shape how I think. I suppose thats why I resonated strongly with Impro.
I am pretty sure though, that experience with theater is not necessary
for the book to have a deep impact on you. It seems to have attained a cult
status with a wide audience that extends well beyond the theater
community, so if you like this blog, you will probably like the book.
The Book
The book, first published in 1981, is a collection of loosely-connected
essays on various aspects of improvisational theater. The essays are not
philosophical (which is why their philosophical impact is so startling).
They are about very specific details of stagecraft. There are exercises
designed to teach particular skills, acting tips, short explanations
motivating the descriptions of the exercises, and insider references to
famous theater personalities (the only name I recognized among all the
references was Stanislavsky, he of the Method School). This is what
makes the non-theater reader feel so pleasantly blindsided. You shouldnt
be getting epiphanies about life, death and the universe while reading
about how to put on a mask or strike a pose. But more on that later, heres
a quick survey of the contents.

Chapter 1, Notes on Myself, begins with an exercise designed to get
you seeing the world differently. Literally. The exercise is to simply walk
around looking at things and shouting out the wrong names for things you
see (for example, look at your couch and yell apple). The effect, he
asserts, of doing this for a few minutes, is that everything seems to come alive
and acquire the intensity it held for you when you were a child. Try it for a
bit. It works, though I did not experience as much intensifying as he
claims his students typically experience. After that unsettling start, we get
a short and unsentimental, yet poignant and intimate, autobiographical
sketch of his early educational experiences. The descriptions of the
experiences are accompanied by deft insights into the nature of education.
This chapter includes the philosophical premise of the book, that adults are
atrophied children, and that traditional education accelerates rather than
slows this process of atrophy. But the point is not made with any sort of
political intent. It is simply presented as a useful perspective from which
to view what he has to say, and why theater training has the effects it does.
Chapter 2, Status, is particularly spectacular, and the most accessible
chapter in the book. It is based on the idea that the only thing you really
need to do, in preparing to improvise a scene, is to decide what status to
play, high or low, in relation to the other actors on stage. Through a series
of explanations and descriptions of startlingly original exercises,
Johnstone illustrates the working of status dynamics in interpersonal
interactions. One that I found both enlightening and hilarious was this: you
have a completely boring, everyday conversation with your improv
partner, but include an insult in every line you make up. Heres one of his
example fragments:
Can I help you, fool?
Yes, Bugeyes!
Do you want a hat, slut?
Ive done just enough theater to be able to visualize this clearly, but I
suspect, even if you have no experience with theater, you can imagine how
this strange exercise can turn quickly into drama that helps you understand
status. There are other surgically precise exercises that are designed to

teach how personal space relates to status, and how master-servant
dynamics play out. One true Aha! moment for me was a throwaway
remark on Becketts Waiting for Godot, which I saw in New York last fall.
I knew of the play by reputation of course, but I had no idea what to
expect, and whether I would get it. I only got it at a fairly superficial
level, but enjoyed it immensely nevertheless, for reasons that I did not
understand. Yet, others in the audience seemed to not get it at all, to the
point of being bored.
Impro completely explained the play for me. The plays appeal lies in
the fact that it is a showcase for status dynamics. The four characters,
Vladimir, Estragon, Pozzo and Lucky, perform what amounts to a status
opera. Though a good deal of the content is nonsensical, the status
interactions are not.
Chapter 3, Spontaneity, describes exercises and acting principles that
seem like they would take you perilously close to madness if you tried
them unsupervised. Having had a lifelong preference for learning by
myself rather than listening to teachers, I dont often tell myself, this
material needs a teacher. So that should give you an idea of just how
unusual this is likely to be for most people. Johnstone recognizes this, and
he notes that the work described in this chapter is closer to intensive
therapy than to learning a skill. In fact, it sounds like it would be more
intense and more effective than therapy (therapy being, like teaching, yet
another process that I dont trust to others). I am surprised nobody has
invented theater-therapy. Actually, I take that back. I once knew a girl who
did prison theater. I never understood the point of that. Now I do. Done
right, I suspect prison theater could lower rates of recidivism. Maybe there
are other examples of theater as therapy.
Chapter 4, Narrative Skills, is close to the best fiction-writing advice
Ive ever read, probably second only to Francine Proses Reading Like a
Writer (also recommended by a regular reader, Navin Kabra). The
material in this chapter actually got me curious enough that I put down the
book and tried out one of the exercises right then. At the time, I happened
to be on a long flight from DC to Tokyo (on my way to Bali), so I actually
sat there for an hour with my eyes closed, thinking up a story, and then
spent another hour scribbling like crazy, writing it down. I came up with
probably the best plot outline of my life. I might actually flesh it out and
post it here at some point (I dabbled in fiction a fair amount about a
decade ago, but somehow never pursued it very far).
Chapter 5, Masks and Trance, is easily the most intense, disturbing
and rewarding chapter. The subject is acting with masks on, a stylized sort
of theater that seems to have been part of every culture, during every time
period, until enlightenment values began stamping it out. Since I had
just returned from Bali when I read this chapter (examples from Bali
feature prominently in the books treatment), and seen glimpses of what he was
talking about during my trip, the material came alive in particularly vivid
ways. The chapter deals, with easy familiarity, with topics that would
make most of us very uncomfortable: trances, possession and atavistic
archetypes. Yet, despite the disturbing raw material, the ideas and concepts
are not particularly difficult to grasp and accept. They make sense.
The Book, Take Two
So much for the straightforward summary of the book. That it teaches
theater skills effectively should not be surprising. What is surprising is the
light it sheds on a variety of other topics. Here are just a few:
1. Body Language: Ive always found body language a somewhat
distasteful subject, whether it is of the traditional covering your
mouth means you think the other person is lying variety, or
neurolinguistic programming, or the latest craze, the study of
microexpressions. Despite the apparent validity of specific
insights, the field has always seemed to me intellectually
disreputable and shoddy. Impro does something I didnt think was
possible: it lends the subject dignity and intellectual respectability.
The trick, with hindsight, is to view the ideas in the field in the
context of art, not psychology.
2. Interpersonal Relationships: I spend a good deal of time thinking
about the principles of interpersonal interaction, and writing up
my thoughts. The reason Impro sheds a unique sort of light on the
subject is that it describes simulations of what-if scenarios that
would never happen in real life, but serve to validate theories that
do apply to real-life situations.

3. Psychology: Elsewhere in recent posts, Ive recommended the
classic books on transactional analysis (TA), Eric Bernes Games
People Play and What Do You Say after You Say Hello and
Thomas Harris Im OK-Youre OK. Ive always felt though, that
TA, while useful as an analytical framework, isnt very helpful if
you are trying to figure out what to do. Impro is pretty much the
how to manual for TA, and it works through a sort of
experimental reductio ad absurdum. There is no better way to
recognize the stupidity of game playing than to act out (or at least
think out) game scripts in exaggerated forms.
Youll probably find insights into other subjects if you look harder. I
suspect the reason there is so much to learn from the practice of theater is
that the humanities and social sciences lack a strong culture of
experimentation. Theater is, in a sense, the true laboratory for the
humanities and social sciences.
Ill finish up with one thought. I explain the tagline of this blog,
experiments in refactored perception as geekspeak for seeing the world
differently. If you ignore the theater-manual aspect, that pretty much
describes the book: it is a textbook that teaches you how to see the world
differently.

Your Evil Twins and How to Find Them


September 17, 2009
Recently a reader emailed me a note: I just wanted to bring to your
radar the pleasures and sorrows of work by Alain de Botton, and what
you thought of its theses. Now de Botton (The Pleasures and Sorrows of
Work, The Consolations of Philosophy, How Proust Can Change Your
Life) has been on my radar for a while. I had browsed his books at Barnes
and Noble a few times, but always put them down due to strange, sick
feelings in my stomach. Thanks to this readers gentle nudge, I finally
caved and read the first of the three, and managed to figure out why de
Bottons books had made me viscerally uncomfortable at first glance: he is
my evil twin. An evil twin is defined as somebody who thinks exactly like
you in most ways, but differs in just a few critical ways that end up
making all the difference. Think the Batman and the Joker. Heres why
evil twins matter, and how to discover yours.
Why Evil Twins Matter
In the closing scene of Batman Begins, Commissioner Gordon tells
the Batman that a new villain is abroad who has a taste for theatrics, like
you and shows him the Jokers calling card. The premise of the evil twin
setup plays out in the sequel, The Dark Knight. Towards the end, Heath
Ledgers disturbing Joker elaborates on the logic: I wouldnt kill you!
What would I do without you? You complete me.
Comic book universes provide plenty of examples of this fundamental
idea, that your nemesis is not a polar opposite, but an eerily similar person
who is just different in a few subtle but critical ways. Some narratives in
fact present the nemesis as a polarity within one character, as in the Jekyll
and Hyde model and more recently, the Hulk.
If you think about it, this makes sense. Your nemesis has to be
interested in the same things as you, operate in the same areas, and think
and act at levels of sophistication similar to yours. Polar opposites would
live lives that would likely not even intersect. List the 10 most important
elements of your social (not private) identity. In my case for instance, they
might be PhD, researcher, omnivorous reader, writer, individualist,
polymath-wannabe, coffee-shop person, non-athletic, physically lazy,
amoral, atheistic and so forth. If you turned them all around, youd get
something like high-school drop-out, non-reader, groupie, parochial, pub
person, sportsy, physically active, moral and religious. I am no snob, but it
is highly unlikely that Id have much to do with somebody with that
profile.
On the other hand, if you meet somebody to whom every adjective
applies, but they rub you the wrong way at a deep level, what are you to
conclude? The clash has to be at the most subtle levels of your personality.
Meeting your evil twin helps you find yourself, which is why you should
look. Of course, I am being somewhat facetious here. You dont have to
hate your evil twin or battle him/her to the death. You can actually get
along fine and even complement each other in a yin-yang way.
de Botton, Taleb and Me
Take Alain de Botton for instance. Despite my evil twin adjective, I
think Id like him a lot and get along with him quite well. No climactic
battles. The Pleasures and Sorrows of Work is just beautiful as a book. As
you know if youve been reading this blog for a while, I write a lot on the
philosophy of work. The book literally produced dozens of thoughts and
associations in my head on every page. Since I was reading it on the
Kindle, I was annotating and highlighting like crazy. We think about the
same things. He opens with a pensive essay on container shipping
logistics, something Ive written about. The Shawshank Redemption with
its accountant hero is one of my favorite movies; de Botton finds romance
in the profession as well. Ive written about ship-breaking graveyards, he
writes about airplane graveyards. He seems fascinated by aerospace stuff.
I am an aerospace engineer. He sees more romance in a biscuit factory
than in grand cathedrals. So do I. Like me (only more successfully) he
shoots for an introspective, lyrical style. But as I continued reading, I
realized I was intellectually a little too close to the guy.
When I tried putting my notes all together, the feelings of discomfort
only intensified. There was no coherent pattern to my responses. I realized
that, in a way, you can only build one picture at a time with a given set of
jigsaw pieces. Writers normally leave enough room for you to construct
meaning so you feel a sense of control over the reading experience. With
evil twins, thats not possible, since you are trying to build different
pictures. I felt absorbed in the book, but also confused and disoriented by
it.
Thinking harder, I realized that the points of conflict in our
worldviews were at a very abstract level indeed. In a deep sense, de
Bottons worldview is that of an observer. Mine, though I do observe and
write a lot, is primarily that of a get-in-the-fray doer. He is content to
watch. I feel compelled to engage. He admires engineers and engineering;
I felt compelled to become one and get involved in building stuff. It is a
being-vs.-becoming dynamic. To a certain extent, he is driven by needs of
an almost religious nature: to overcome his sense of separateness and be
part of something larger than himself. My primary instinct is to separate
myself. It is a happiness vs. will-to-power dynamic. One last example. de
Botton is clearly a humanist: he wants to be kind and feel for others, and
paradoxically, ends up being quite cruel in places. I, on the other hand, am
mainly driven by a deep ubermensch tendency towards hard/cold
interpersonal attitudes, but end up surprising myself by being kind and
compassionate more often, in practice. Kind cruelty vs. tough love. I could
go on.
Another of my evil twins is Nassim Nicholas Taleb (Fooled by
Randomness, The Black Swan). I am re-reading the latter at the moment,
and I noticed that Taleb describes himself as a flaneur. In the comments to
my piece, Is there a Cloudworker Culture? a reader noted that my self-description as a cloudworker sounded a lot like the idea of a flaneur.
Again, a lot of the exact same things interest us, and we share opinions on
a lot of key fronts (the nature of mathematics, empiricism and
falsifiability, unapologetic elitist tastes, long-windedness, low tolerance
for idiots and the accidentally wealthy, a preference for reading books
rather than the news). And again, we part ways at a deep level. Thats a
story for another day.
So before we move on to the How-To section, a recommendation. If
you feel strangely attracted to my writing, and yet rebel against it at some
deep level, you might really (and unreservedly) love de Botton and/or
Taleb. I am too close to their thinking to do justice to them with book
reviews, but you should read them. If the books help you clarify who you
are, and you end up dropping ribbonfarm from your reading list, Ill
consider it my good deed for the day.
How to Find Your Evil Twin
In my case, my evil twins mostly turn out to be writers Ive never
met. Sometimes dead writers. Thats because so much of my life revolves
around books and ideas. I suspect most people have a pretty good chance
of actually meeting and getting to know their evil twins.
The key things to look for are the following:
1. You share a lot of interests, down to very specific details like books
read, places visited, socio-economic and cultural backgrounds
(though oddly enough, not race or ethnicity).
2. Your thinking levels are similar, and your conceptual categories for
viewing the world are similar
3. You try to act in the world in very similar ways; you choose similar
means and ends
4. You reach similar conclusions about what is, what ought to be,
what you should do and how
5. If you ever meet them in person, you instantly resonate with them
That sounds like a soulmate, right? Now for the differential that will
discriminate between soulmate and evil twin:
1. If you are straight, they are the same gender as you. If you are gay,
I dont know.
2. You lean in different directions on key philosophical tradeoffs. For
example, if you both believe truth vs. kindness is a fundamental
tradeoff, you lean towards truth, while he/she leans towards
kindness.
3. On the important question of attitude towards others, you are
clearly different. You want different things from other people and
the world at large.

So go, look for your evil twin. You will be enlightened by what you
find. If you already know who yours are, I am curious. Post a comment
(suitably anonymized if necessary).

Bargaining with your Right Brain


March 16, 2008
At the straw market in Nassau, in the Bahamas, famous for stuff
like the straw handbags below, I recently encountered a distinctive
culture of bargaining that made me stop and ponder the subject (on a
beach, aided by rum). The pondering resulted in a neat little flash of
insight that allowed me to synthesize everything I know about the subject
in a way that surprised me. The short version: game-theoretic and
information-theoretic approaches to the subject are something between
irrelevant and secondary. What drives bargaining behaviors and outcomes
is story-telling skill. Heres how you can learn the skills that really matter
in being a successful bargainer.

The Simple, Elegant and Wrong Answer


Buyer A and seller B are haggling over product P. Neither knows what
price the other is willing to settle for, but A wants P at the lowest price B
might sell, and B wants to sell at the highest price A might be willing to
pay. You model the situation as a sequential move game, with each move
being, at its simplest, a price. You might represent the progress of the
game thus, as pairs of B-A call-response moves:

($200, $100), ($180, $120), ($160, $140), ($150, SOLD!)


At each turn, each player uses the history of previous offers/counteroffers to estimate the true limit of the other party, and makes an offer that
induces the other to move his price-point as much as possible while
revealing as little as possible of his own limit. Throw in some bounded
rationality and a value on time, and you have the sort of framing
economists like.
You could mathematically model this (no doubt, somebody already
has analyzed this to death and proved all sorts of convergence results).
One example of such an analysis is in the classic game-theoretic decision
analysis book, Thinking Strategically (an excellent read, by the way, for
what it sets out to do).
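For concreteness, here is what a toy version of this price-only framing might look like (a sketch of mine, not anything from Thinking Strategically): each side concedes a fixed step per round, and the price settles by bisection once the gap closes.

    def haggle(seller_ask, buyer_bid, step=20):
        # Naive alternating-offer model: pure price moves, no narrative.
        history = [(seller_ask, buyer_bid)]
        while seller_ask - buyer_bid > step:
            seller_ask -= step          # seller concedes a fixed amount
            buyer_bid += step           # buyer concedes a fixed amount
            history.append((seller_ask, buyer_bid))
        return history, (seller_ask + buyer_bid) / 2

    history, price = haggle(200, 100)
    print(history)   # [(200, 100), (180, 120), (160, 140)]
    print(price)     # 150.0, the dull bisection outcome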
But neat though this mathematical formulation is, it is fundamentally
wrong-headed. Bargaining isnt primarily driven by parties attempting to
(bounded) rationally guess each others limit points by doing (non-trivial)
real-time analysis of number sequences. Yes, the alternating sequence of
numbers does carry information, but in most real situations, information is
primarily conveyed in other, non-quantitative ways, and the relevant
information isnt even about price. Lets examine two examples before I
present my alternate model of bargaining.
Real-World Examples
Example 1: The Bahamian Turtle
In Nassau, I bought this coconut-shell toy turtle for my nephew, for
$5:

The seller, a pretty young woman, came up to us and engaged us with,
not one, but three moves all at once, and persuaded me to close, before I
had a chance to make a counter-offer:
Seller: You like this turtle? Nice toy for child! Eight dollars!
Seller (conspiratorial whisper, eyes darting left and right): Tell you
what, I make you a deal, only six dollars.
Me (doubtfully): Hmm
My Wife (enthusiastically): oh, its so cute; we should get one for
Arjun [my nephew]!
Seller: Alright, I give it to you for five dollars. What color bead you
want lady? [the thing has a bead on the string that is not visible in this
picture].
This one is, in a sense, a non-starter example, since I got played
before we got to bargaining. Lets take a longer example where I acquitted
myself better.
Example 2: The Jaipur Good-Cop/Bad-Cop
In Jaipur, India, on a vacation recently, my wife wanted to buy herself
a traditional Rajasthani kurta, or shirt. Translating from Hindi, the
exchange went roughly as follows (I may be misremembering the prices,
but the gist is accurate):
Me: How much?
Seller: 300 rupees
Me: Thats too much, how about 175?
Seller: Come on sir, at that price, I wont even recover my costs!

Me: No, 175 is the reasonable price for this kind of item.
Seller: Arrey, come on sir! Just look at this fine needlework; you
may have seen similar stuff for less in other shops, but if you look closely,
the work isnt as delicate!
Me: Of course I can see the quality of the work, thats why we want
to buy it, now come on, quote me the right price.
Seller: Okay sir, for you, Ill let it go for 250 (starts folding up the
kurta).
Me: No no, this lady may not be Indian, but I am; be reasonable
[my wife is Korean, and since I hadn't mentioned that she was my wife,
the shopkeeper had almost certainly assumed I was her local guide; many other shopkeepers had in fact called out to me to bring her into their
stores, offering me a commission!]
Seller: But I did quote the price for you sir, for foreigners, we
normally ask for at least 4-500!
Me: Fine, tell you what, Ill give you 190.
Seller: Come on sir, at that price, I dont even make a profit of 10
rupees!
Me: Fine, lets do this deal. 200; final offer.
Seller (looking upset): But
At this point, the sellers boss, probably the store owner, whod been
poring over a ledger in the background, looked up, interrupted and said
shortly, Cant you see the lady wants it? Just give it to them for 200, lets
cut this short!
I have several other examples I could offer (in the US, bargaining
tends to be restricted to larger purchases like cars), but these two examples
suffice to illustrate the points I want to pick out.

The Phenomenology
There are several features of interest here. Here is a round dozen:
1. Fake moves: In the Bahamian example, consider the rapid series of
three prices offered with a very quick change of subject to the color
of the bead at the first sign that I wanted to buy. This bargaining is
clearly fake, the numbers being part of the initial courtship ritual
rather than the actual price negotiations, which were short-circuited.
2. Bargaining as bait: The sellers in the Nassau marketplace promote
their wares with a curious mix of American retail rhetoric (Cmon
honey! Everything 50% off today) and more traditional bargain-hunter bait (You want a handbag sir, for the pretty lady? Cmon I
make you a deal!). I suspect very little serious bargaining actually
takes place, since the customers are largely American cruise ship
tourists, who are not used to bargaining for the small amounts in
play in these transactions.
3. Qualitative Re-valuation: Consider the variety of non-quantitative
moves in the Jaipur example. In the fine needlework move, the
seller attempted to change my valuation of the object, rather than
move the price point. I accepted the point, but indicated Id already
factored that in.
4. Narrative: A narrative also developed, inviting me to cast myself
as the knowledgeable insider who was being offered the smart
Indian deal, as opposed to the high-mark-up offered to clueless
foreigners. This is a key point that I will return to.
5. Deal-breaker feints: Twice, the seller attempted to convince me
that I was offering him a price he could not accept. These are
rhetorical feints. A similar move on the customers part is to
pretend to walk away (that old saw about the key to negotiation
being the willingness to walk away isnt much use in practice, but
pretending to walk away is very useful).
6. Closure Bluffs: another interesting feature of the Indian example is
the closure bluff; a non-serious price accompanied by closure
moves (such as starting to package the item), on the off-chance that
the other party may panic and fold early.

7. Good Cop: Finally, note the second individual stepping in to make
the deal towards the end (this dynamic is particularly common in
traditional retail stores in India, where the owner, or seth, and
accountant, or munim, will often be watching the salesmen at work,
stepping in at the right moment. The psychological key to this is an
implicit sense of escalation and respect: forget that unimportant
lackey, clearly youre a smart customer and I, the boss, will deal
with you personally and cut you a special deal that my lackey isnt
authorized to offer.)
8. Ritual: In most cultures of bargaining, there are also moves of
ritual significance. In India, the best-known one is the bohni, or
first sale of the day. Sellers will often plead with customers to close
the deal, since it would constitute the bohni and also assert that the
price being offered is a really good deal because of the sellers
anxiety to finish his bohni. This is not entirely a pragmatic sort of
move; the bohni does matter to traditional merchants, who will
often actually take a bit of a hit on the first transaction for luck, to
get the cash flow started. Curiously, I also encountered a much
more open version of this in the Bahamas, where one shopkeeper
said it plainly, Cmon honey, first customer gets best deal! I
suspect anthropologists would find an equivalent in every culture
besides modern fixed-price retail.
9. Knowledge Bluffs: Though I didnt use them, knowledge bluffs
are common in bargaining (I saw that same thing in Delhi for half
the price!). These are bluffs because if the buyer really knows
something about the cost structure of the seller, that information is
usually quickly deployed and factored out, reducing the bargaining
to a matter of what is the convenience-value to me of buying here
rather than in that other place? Why do they still work? Ill tell
you in a moment.
10. The Justice Bluff: Surprisingly, in the form of a bluff, a notion of
fair price often enters even the purest free-market exchanges. This
is usually brought into play via displays of mock anger at being
treated unfairly. Surprisingly, even sellers will do this, attempting
to convey a clear sense of disgust by putting away the items under
discussion. But there are boundaries: sellers will rarely plead to
make a sale on the basis of personal need, since this subtly moves
the situation from a peer-transaction to a (morally unacceptable)
charity transaction. My mom once bought some vaseline from a
tearful door-to-door saleswoman who apparently genuinely broke
down and said, buy some just out of sympathy please! My mom
caved.
11. Boundary Blurring: If a full transaction has three parts
(discovery/selection, negotiations, closing), elements of bargaining
often creep out of their nominal home in the middle section. For
example, in traditional Indian full-service retail, the sales staff will
often pull out a vast selection, more than necessary or asked for,
visibly doing a lot of work, creating a clear sense of pressure.
On the other side, towards the end of a transaction, sometimes the
customer will throw in surprise in-kind requests after the deal
seems closed. Alright, I am buying this car from you for more
than I wanted, how about you throw in some floor mats?
12. Non-influence of actual knowledge: Though there is a lot of
bluffing, there is not much actual information about price limits in
play. In every example Ive encountered, at least for small
amounts, buyers and sellers do not attempt to guess limits directly
(beyond having a ballpark reasonable figure in mind that may be
utterly irrational). It is obvious that the buyer may not even have a
walk away limit price in his/her head, but what is not so obvious
is that even the seller may not have such a price in mind. Cost
structures can be murky even to sellers, and other hard-to-value
elements may be in play, like prospects for future sales, desire to
clear slow-moving inventory, and the like. Actual price information
usually enters the picture only when the amounts are significant, in
which case the parties generally do their research beforehand, and
attempt to factor that information out of the bargaining stage
altogether. Bargaining is primarily about ownership of the
unknown factors in pricing, and is a high-cost process, and it is in
the interest of both parties not to bother bargaining about mutually-acknowledged certainties.
The Right Model
So thats all very well. There are lots of psychological subtleties
involved. But is this all really important? Is it possible to just cut the
Gordian knot and become really good at some sort of game-theoretic
model? Would all the bluffing and qualitative nuances vanish under the
right sort of time-series modeling?
The answers are yes and no respectively. Yes, you do need to work
with the full thing; game theory wont cut the Gordian knot for you. And
no, you will not be able to subsume all the bluffing and complexity no
matter how much you crunch the numbers. So you do need to appreciate
the qualitative sound-track of the bargaining, but no, dont be discouraged;
I am not suggesting that the only meaningful model is a localized sui
generis ethnography. Universal models and approaches to bargaining are
possible.
What actually happens in a bargaining transaction is the co-construction of a storyline that both sides end up committing to. Every
move has a qualitative and a quantitative part. The prototypical transaction
pair can be modeled roughly as:
((p1, v1), (p2, v2))
Where the ps are qualitative statements and the vs are price
statements. The key is that the qualitative parts constitute a language game
(in the sense of people like Stalnaker). Each assertion is either accepted or
challenged by subsequent assertions. The set of mutually accepted
assertions serves to build up a narrative of increasing inertia, since every
new statement must be consistent with previous ones to maintain
credibility, even if it is only the credibility of a ritual rather than literal
storyline.
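To make the notation concrete, here is a rough sketch of the Jaipur exchange encoded this way; this is my own toy formalization in Python, and the accepted/challenged flags are just my reading of the transcript above:

    # Each move: (speaker, qualitative assertion p, price v, accepted by the other?)
    moves = [
        ("seller", "this is fine needlework",       300, True),
        ("buyer",  "175 is the reasonable price",   175, False),
        ("seller", "below 250 I would take a loss", 250, False),
        ("buyer",  "I am a local, not a tourist",   190, True),
    ]

    # Mutually accepted assertions accumulate into a narrative that later moves
    # must stay consistent with; the prices track the numeric positions.
    narrative = [p for _, p, _, accepted in moves if accepted]
    positions = {speaker: v for speaker, _, v, _ in moves}

    print("accepted narrative:", narrative)
    print("latest price positions:", positions)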
This is the real reason why there is apparent spinning-of-wheels
where the price point may not move for several iterations. For example in
the Indian kurta case, I rejected the sellers assertion that 175 would
represent a loss, but acknowledged (but successfully factored out) the this
is fine needlework assertion. Though the price point wasnt moving, the
narrative was. At a more abstract level, a full narrative with characters and
plot may develop. This is also the reason why knowledge bluffs work:
even if the seller knows the buyer cannot have seen the same item for half
the price in another store, he cannot call out the bluff in an obvious way
since that would challenge the (always positive) role in which the buyer is
cast.

The key conclusion from all this? The transaction moves to closure
when the emerging logic of the narrative becomes overwhelming, not
when price transparency has been achieved. To bargain successfully, you
must be able to control the pace and direction of the development of the
narrative. At a point of narrative critical mass, something snaps and either
a new narrative must displace the old one (rare), or there must be a
movement towards closure.
Becoming a Right-Brained Bargainer
So here is my magic solution: become good at story-telling based
conversations.

1. Walk in, not with a full-fledged plan/story, but a sense of what roles
you can comfortably fill (straight-dealer? cynic? know-it-all?
Innocent student without much to spend?)
2. As the conversation progresses, try to sense what roles the other
party is trying on for size, and suggest ones favorable to you
(Look, I try to buy only from local merchants, and you guys
are doing a great job for the economy of our town, but). Say
things that move towards a locked-in role on both sides that favors
you. In the example above, I got locked into the role of
knowledgeable local on disadvantageous terms.
3. Look out for the narrative logic as it develops. For example, I
successfully resisted an attempt to bring the fine needlework
assertion into play, which would have moved the story from guy
looking for cheap deal to connoisseur transaction and a
premium-value storyline.
4. There are critical/climactic points where you can move decisively
for closure; watch and grab. In my case, I thought I had one when
the seller offered the Not even 10 rupees move, but the owner
cutting in for the kill and accepting was a clue that I could have
pushed lower.
5. Be aware of the symbolic/narrative significance of your numerical
moves. If the seller moves from 200 to 180, and you move from
100 to 120, the very symmetry of your move shows that you have
no information at all to use for leverage, and the transaction is
likely to proceed dully to a bisection unless you do something
creative. If the seller offers 500 and you say 250, that reveals that
you may be using a start at half heuristic, which might create an
opening for the seller to launch a storyline of really, 500 is a fair
price, heres why. Offering 275 instead creates the right sort of
ambiguity. If you do want to drive towards a symmetric-bisection
storyline, make sure you pick an irrational starting point, but not
one so irrational that it reveals you know nothing about the price
(irrationally low opening offers can work, but you need a 201 level
bargaining course to learn why).
Now, this isnt easy. You have to become a storyteller. But I never
said I was going to offer an easy answer; merely a better one than a
misguided attempt to do real-time game-theoretic computations.

The Tragedy of Wiios Law


March 26, 2009
The game-break is to 1:1 interpersonal relationships what the Aha!
moment is to individual introspection. The rare moment, shortly after
meeting for the first time, when two people experience a sudden,
uncontracted moment of connection, shared meaning and resonance. A
moment that breaks through normal social defenses. I call it uncontracted,
because I mean the kind of moment that occurs when there isnt an
obvious subtext of sexual tension, or a potential buy/sell transaction,
limiting behavior to the boundaries of an informal social contract. The best
examples are the ones that happen between people who arent trying to
sleep with, or sell to each other (at least not right then). I call it a game-break, because you momentarily stop playing social games and realize
with a shock that there is some part of an actual person on the other side
that perfectly matches a part of you that you thought was unique. A
moment that elevates human contact from the level of colliding billiard
balls to the level of electricity or chemistry. It is the moment when a
relationship can be born. Our fundamental nature as a social species rests
on the anatomy of this moment. Here is a picture: lowered masks, a spark
breaking through invisible shells.

The Interpersonal Double-Take


The game-break is not the same as the forced-festivity ritual of the
party ice-breaker, which merely preempts social discomfort without
catalyzing genuine connection. In fact, by giving people something
ritualistic to do, the ice-breaker often delays the game-break, or prevents it
altogether. The game-break cannot be engineered with certainty, but you
can do things that make it more likely. But this is not a how-to article; it is
a dissection of the phenomenon itself. Lets start with a specimen.
I stumbled upon the meaning of the game-break a few years ago,
during the course of a routine, you really should reach out to meeting.
This particular meeting happened while I was booting-up my postdoctoral
stint at Cornell. I was meeting a Russian PhD student Ill call Z. My
adviser had recommended I talk to his adviser, who had recommended I
talk to Z. So I did the usual thing: asking questions about his research,
mentioning my own work where I saw a connection, and so on. Like many
European researchers, he was unsmiling and taciturn, providing precise,
sealed-and-complete answers to even the most open-ended of questions,
and making only the bare minimum socially-acceptable effort to ask
questions in turn. Fortunately, I have a good deal of stamina for this sort of
thing. My ability to talk authoritatively about pretty much any subject
under the sun for one minute serves me well in such situations. So for
about 15 minutes we did our little immovable object, irresistible force
dance.
And then suddenly, the game-break. Something I shared got through
to him, and it was like watching a bulb light up. It was obvious that I went
from being guy who works for that other professor, to hey, I thought
only I thought/felt that way; who is this guy? On my end, I too was able
to instantly fingerprint his inner life. For a moment, roles and social
identities fell away. It was particularly dramatic because of his previous
impenetrability. With Americans, you often get such over-the-top fake
resonance that you can easily miss the moment when it turns genuine. Z
instantly became voluble and excited, eager to brainstorm and have a big
old mind-meld. I had already experienced many such moments, but I
detected something new in my own reaction: a clinical checking of a
mental box.
Wiios Law
That conversation led, as I fully expected, to nothing. We did have a
great mind-meld, and the meeting paid for itself socially, but we didnt
actually collaborate on anything afterward. For professional collaboration,
the game-break moment is necessary, but not sufficient. A lot of other
pieces have to fall into place for that.
But my clinical box-checking intrigued me for quite a while, and I
only understood it months later when I ran across a Finnish epigram called
Wiios Law: communication usually fails, except by accident. The Finnish
original, if you care, reads: Viestint yleens eponnistuu, paitsi
sattumalta.
I understood what my little mental check-box act represented. I was
noting and filing away one of Wiios accidental moments of true
communication. We view the world through mental models. The
culturally-inherited and shared parts of these models dont feel very
personal to us. If you and I met and talked about, say, shopping or cricket,
we wouldnt be too surprised to find that we think about these things in
roughly similar terms. Nor would we share a sense of intimacy through the
shared meaning.
But there are parts that derive from our own experiences, which we
mistakenly believe are completely unique to us. Things we think others
will never understand (a source of relief to those, like me, who are
existentialist by preference, and of unholy fear for those who yearn for a
sense of complete connection to something bigger than themselves). I
earnestly hope there are irreducibly subjective elements of being, but I
have been through enough game-break moments to realize that we are far
less existentially unique than I think.
The Golden Rule of the Game Break
The game break is so rare because we are at once desperate for, and
terrified of, genuine connection. Watch people at a busy intersection.
Heres a portion of a scene from New Yorks Times Square for instance
(Wikimedia Commons):

Intersections illustrate how efficiently we avoid contact and maintain
the coherence of our groupings in confined settings. If we shook a
comparable box of marbles around, wed get hundreds of collisions.
Living things turn those hundreds into zero, most of the time.
But if our physical obstacle avoidance skills are amazing, our social
mind-bump collision avoidance skills are even better. We know exactly
when to break eye-contact to prevent polite from turning into unwanted
intimacy. We know exactly the appropriate kind of smile for every
situation. Our sense of our own, and others personal space is finely-tuned,
and we know what level of closeness requires apology, and what levels
require snubs, sharpness or displays of anger.
Though they certainly trap us in prisons of existential solitude, if we
didnt have these mechanisms, wed be rubbed raw by the sheer volume of
social contact in modern society. Nearly all of the contact would be
annoying or draining. But in every situation, even the most random, there
are always tiny leftover gaps in our defenses, and we are grateful to have
them. Through these gaps and imperfections, sometimes the game-break
can sneak through. Which leads us to:
The Golden Rule of the Game Break: Given enough time, any two
people forced into proximity will experience a game break.
I learned this growing up as a kid in India, when we used to go on
long train journeys, between 24 and 48 hours, to visit relatives. Thats a long
time for anybody to be cooped up together with strangers. The
unavoidable morning intimacy of tousled hair and teeth-brushing breaks
down reserves even more. You can reliably expect at least one game break
on every long train ride.
What happens before the game-break, of course, is the gradual
approach, as time-driven norms dictate the lowering of all the outer layers
of defenses. Once enough layers have been peeled away, the probability of
a game break starts climbing. It might take 4 hours or 4 years, but given
enough time, it will happen. The Golden Rule is merely an application of
what statisticians call the law of large numbers. What happens after,
though, is more interesting (and for some of you, philosophically
depressing).
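Before moving on, here is a back-of-the-envelope way to see the Golden Rule (the numbers are mine and purely illustrative): if each hour of forced proximity carries an independent chance p of a game break, the chance of at least one is 1 - (1 - p)^n after n hours, and it climbs steadily toward certainty.

    def p_at_least_one(p_per_hour, hours):
        # Probability of at least one game break, assuming independent hours.
        return 1 - (1 - p_per_hour) ** hours

    for hours in (4, 12, 24, 48):
        print(hours, "hours:", round(p_at_least_one(0.05, hours), 2))
    # With an assumed 5% chance per hour: roughly 0.19, 0.46, 0.71, 0.91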
The Tragedy of Wiios Law
Unfortunately for the romantic, Wiios law is a grim and agnostic one.
The accident of genuine communication can lead to both deep trust and
implacable enmity. You can get Harry Potter-Voldemort outcomes, a level
of intimacy in enmity that makes decisive conflict as much suicide as
murder. Worse, even when the outcome is a positive, resonant, mind-meld,
our minds cannot stand the deep connection for very long. While the full
defenses never return, the uncontracted quality of the game-break does not
last for long. A social contract descends, to structure and codify any
continuation of the relationship.
Which explains why some of our most precious social memories are
of brief, accidental encounters with strangers, often nameless. Encounters
which didnt have a chance to get codified and structured, and remain in
limbo in our memories as highly significant episodes. Moments of deep
loneliness can arise out of familiar situations, when we are surrounded by
people with whom we have achieved very intimate, but controlled
relationships, and we recall our still-raw contacts with strangers. We have
had our game-break moments with those who surround us, but the
memories of those moments lie irretrievably buried under the reality of
active relationship contracts. There is a beautiful Hindi film song that
goes:
Aate-jaate khoobsurat, aawara sadakon pe
kabhi-kabhi ittefaq se, kitne anjan log mil jate hain,
unme se kuch log bhool jaate hain, kuch yaad reh jaate hain
Which translates roughly to:
Wandering through beautiful lonely streets,
sometimes, by accident, you meet so many strangers
of these, some you forget, some you remember
The game-break is the gateway to the real relationship: two people
who have unique mental models of each other, and keep up contact
frequently enough to keep those models fresh and evolving. Friendship,
romance, hatred, professional collaboration, marriage, business
partnerships and investments all follow from game-break moments. Yet,
each of those is a contracted relationship, safely bounded away from
further unscripted and uncontracted game-break moments. We may have
more game-break moments with the same people; that is what we mean by "taking the relationship to another level," but each time, the fog of a
(possibly deeper) social contract descends, shrouding the memory of the
moment.
The tragedy of Wiio's law is this. Our most connected moments are
with people we know we will never meet again. The moments of
connection stay with us only to the extent that relationships do not follow.
I am no exception. There are too-long glances burned in my brain (as
many exchanged with wrinkled old men as with pretty girls) that I will
never forget. There are conversations I occasionally replay, there are
memories jogged by old letters and photographs.
One of the most poignant such memories for me is from three weeks I
spent backpacking in Europe in 1998. As you might expect, I experienced
several game-break moments (that's the main reason we do things like
backpacking). One particular evening, towards the end of the trip, in
Brussels, I ran into a group of Indian folk musicians practicing in the park
(they were members of the Jaipur Kawa Brass Band and Musafir, who
were touring Europe together, and were due to perform as part of the
World Cup Soccer festivities). I was then newly-minted as a global citizen (having left India in 1997), and still capable of homesickness. A game-break moment followed, and I went back to their lodgings with them for a magical evening of rustic Indian food, unbelievable conversation about music, and listening to them playing, not for an audience (I was only one, and they were clearly not counting me), but for themselves.
Several years later, one of the groups, Musafir, toured the US, and
played in Ann Arbor (I was then a graduate student at the University of
Michigan). With a great deal of anticipation and excitement I went to their
concert, and then backstage after. They remembered me, and their eyes lit
up too, very briefly, at the memory. Then discomfort descended, as we
realized we would not be able to recreate that magic. Now I was on the
edge of a contracted relationship: artists-and-fan.
I chose to say goodbye quickly, and walk away with my memory.
That is the tragedy of Wiio's law.

The Allegory of the Stage


September 23, 2009
Have you ever taken a deep breath and stepped out on a stage of some
sort to perform? Time slows down. Sounds quiet down and you can
actually hear the thudding of your heart. And then, just as suddenly, as
your performance starts, your acute sense of self-consciousness is forced
to recede. Time speeds back up and the audio gets turned up again. You
are left with a hallucination-like memory of that moment of transition.
This experience, which I call the "trigger moment," is at the heart of the allegory of the stage.

Movie directors didn't make up this subjective feeling. They copied something very real with the slow-motion camera and the sound mixer.
Trigger moments are such powerful experiences that we are tempted to
weave our life stories around them. Shakespeare's "all the world's a stage" bit from As You Like It is of course the most famous take on the idea.
I consider it more an allegory than a metaphor, but let's not quibble. Think of it as a metaphor if you like.
In school and college, I used to get on the stage quite a bit to perform
debates, public-speaking contests, quizzes, and yes, theater. Not all
performances have a hallucinatory trigger moment between normal-mind
and stage mind. If you have just had a few drinks, and are fooling around to entertain friends, you won't experience this transition. But what my literal stage experiences did was sensitize me to the trigger-moment feeling. Once you learn to recognize it, you realize that plenty of life experiences have the same subjective signature. Submitting a homework assignment, releasing a product, jumping off the high board, pushing a button to start a chemistry experiment, pulling a trigger, hitting "publish" to release a blog post into the wild. Even hitting "send" in your email
editor. At the other end of the spectrum, on large collective scales, you get
the first wave of soldiers landing on a beach on D-day, or the moment in
medieval battles when two opposed armies begin to charge at each other.
Or thousands of people holding their breath as someone goes "3, 2, 1, we have ignition."
What all these trigger-moment experiences have in common is that
they represent thresholds beyond which you are no longer in control of the
consequences of your actions. Something you are creating goes from
being protected by you (and your delusions) to facing the forces of the
wild world. Every time a comic steps on stage, he is about to either bomb
or kill, putting his theory of life to the test. Movie script writers design
entire stories around such moments. There are plenty of overused and clichéd lines to choose from ("This is it" or "So it begins" or, most dramatically, "my whole life has been building up to this moment").
The moments themselves are infrequent and short-lived, but there is
no mistaking the transition, or the feeling of being on stage on the other
side. You know whether you are performing, or on the sidelines, waiting.
There is truth underlying the angsty teenager's sense of anticipation, the feeling that something significant hasn't yet happened, that life has not yet begun. You haven't started living until you experience and survive your first powerful "stepping on stage" moment. The bitter, depressed middle-aged adult who tells the 18-year-old that "real life isn't like the movies" is actually wrong. He has merely never dared to step onto a significant stage himself, so he doesn't know that such powerful crossing-the-threshold moments are possible. That every life can be the Hero's journey.
Sure, the rather crude and vulgar yearnings of teenagers (they all want
to be rock stars, sports stars or novelists) are mostly unlikely to be
fulfilled. But whether your life feels like it is playing out as a series of on-stage episodes doesn't depend on whether you head towards the more obvious stages. It depends entirely on whether you have the mental toughness to recognize and not shy away from big trigger moments. This mental toughness is what allows you to say "damn the torpedoes, full speed ahead." You accept the worst that can happen and step on stage
anyway. The exhilaration that can follow is not the exhilaration of having
impressed an audience. It is the exhilaration of having cheated death one
more time.
The allegory of the stage is the story of your life told around the
moments when you faced death, and charged ahead anyway.
But life is more than a series of step-on-stage/step-off vignettes; there
is a narrative logic to the whole thing. Each trigger moment prepares you
for larger trigger moments. Each time you shy away from a trigger
moment, you become weaker. There is a virtuous cycle of increasingly
difficult trigger moments, and if you can get through them all, you are
ready for the biggest trigger moment of all: the jump into eternal oblivion.
Everybody dies. Not everybody can make it an intentional act of stepping
onto a pitch-black stage.
There is also a vicious cycle of increasing existential stage fright. Do
that enough, and you will find yourself permanently in the darkness, life
having passed you by. As you might expect, the universe has a sense of
humor. You can only experience living to the fullest if you are able to
get through death-like trigger moments. Shy away from these death-like
moments, and your life will actually feel like living death.
Curiously though, in this allegory of the stage, it isn't other people who are spectators of your life. Everybody is either on the stage or waiting backstage for their moment. What's out there is the universe itself, random, indifferent to your strutting. That's what separates teenagers from
adults: the realization that other people are not your audience.

The Missing Folkways of Globalization


June 16, 2010
Between individual life scripts and civilization-scale Grand
Narratives, there is an interesting unit of social analysis called the
folkway. Historian David Hackett Fischer came up with the modern
definition in 1989, in his classic, Albion's Seed: Four British Folkways in
America:
...the normative structure of values, customs and meanings that exist in any culture. This complex is not many things but one thing, with many interlocking parts... Folkways do not rise from the unconscious in even a symbolic sense, though most people do many social things without reflecting very much about them. In the modern world a folkway is apt to be a cultural artifact, the conscious instrument of human will and purpose. Often (and increasingly today) it is also the deliberate contrivance of a cultural elite.
Ever since I first encountered Fischer's ideas, I've wondered whether folkways might help us understand the social landscape of globalization. As I started thinking the idea through, it struck me that the notion of the folkway actually does the opposite. It helps explain why a force as powerful as globalization hasn't had the social impact you would expect. The phrase "global citizen" rings hollow in a way that even the officially defunct "Yugoslavian" does not. Globalization has created a good deal of
industrial and financial infrastructure, but no real social landscape,
Friedman-flat or otherwise. Why? I think the answer is that we are missing
some folkways. Why should you care? Let me explain.
Folkway Analysis
Folkways are a particularly useful unit of analysis for America, since
the sociological slate was pretty much wiped clean with the arrival of
Europeans. As Fischer shows, just four folkways, all emerging in 17th and 18th century Britain, suffice to explain much of American culture as it exists today. It is instructive to examine the American case before jumping
to globalization.
So what exactly is a folkway? It's an interrelated collection of default ways of conducting the basic, routine affairs of a society. Fischer lists the
following 23 components: speech ways, building ways, family ways,
gender ways, sex ways, child-rearing ways, naming ways, age ways, death
ways, religious ways, magic ways, learning ways, food ways, dress ways,
sport ways, work ways, time ways, wealth ways, rank ways, social ways,
order ways, power ways and freedom ways.
Even a cursory examination of this list should tell you why this is
such a powerful approach to analysis. If you were to describe any society
through these 23 categories, you would have pretty much sequenced its
genome (curious coincidence, 23 Fischer categories, 23 chromosome pairs
in the human genome). You wouldn't necessarily be able to answer every
interesting social or cultural question immediately, but descriptions of the
relevant folkways would contain the necessary data.
The four folkways examined by Fischer (the Puritans of New
England, the Jamestown-Virginia elites, the Quakers in Pennsylvania, and
migrants from northern parts of Britain to Appalachia) constitute the
proverbial 20% of ingredients that define 80% of the social and cultural
landscape of modern America. These four original folkways created the
foundations of modern American society. It is fairly easy to trace
recognizable modern American folkways, such as Red and Blue state
folkways, back to the original four.
Other folkways that came later added to the base, but did not
fundamentally alter the basic DNA of American society (one obvious sign:
the English language as default speech way). Those that dissolved
relatively easily into the 4-folkway matrix (such as German, Irish, Dutch
or Scandinavian) are barely discernible today if you don't know what to
look for. Call them mutations. Less soluble, but high-impact ones, such as
Italian, and Black (slave-descended), have turned into major subcultures
that accentuate, rather than disrupt, the four-folkway matrix; rather like
mitochondrial DNA. And truly alien DNA, such as Asian, has largely
remained confined within insular diaspora communities; intestinal fauna, so to speak. The one massive exception is the Latino community. In both size (current and potential) and cultural distance from the Anglo-Saxon core, Latinos represent the only serious threat to the dominance of the four-folkway matrix. The rising Latino population led Samuel Huntington, in his controversial article in Foreign Affairs, "The Hispanic Challenge," to raise an alarm about the threat to the American socio-cultural operating system. To complete our rather overwrought genetic analogy, this is a heart transplant, and Huntington was raising concerns about the risks of rejection (this is my charitable reading; there is also clearly some xenophobic anxiety at work in Huntington's article).
I offer these details from the American case only as illustrations of the
utility of the folkway concept. What interests me is the application of the
concept to globalization. And I am not attempting to apply this definition
merely as an academic exercise. It really is an extraordinarily solid one. It
sustains Fischer's extremely dense 2-inch-thick tome (which I hope to finish by 2012). This isn't some flippant definition made up by a shallow
quick-bucks writer. It has legs.
Globalization and Folkways
Globalization is, if Tom Friedman is to be believed, an exciting
process of massive social and cultural change. A great flattening.
Friedman's critics (who have written books with titles like The World is Curved) disagree about the specifics of the metaphoric geometry, but don't contest the idea that globalization is creating a new kind of society. I agree that globalization is creating new technological, military and economic landscapes, but I am not sure it is creating a new social landscape.
We know what the "before" looks like: an uneasy, conflict-ridden
patchwork quilt of national/civilizational societies. It is a multi-polar
world where, thanks to weapons of mass destruction, refined models of
stateless terror, and multi-national corporations binding the fates of
nations in what is starting to look like a death embrace, no one hegemon
can presume to rule the world. Nobody seriously argues anymore that "Globalization is reducible to Americanization" (in the sense of a wholesale export of the four-folkway matrix of America). That was a genuine fear in the '80s and '90s that has since faded. The Romanization of Europe in antiquity, and the Islamization of the Middle East and North
Africa in medieval times, have been the only successful examples of that
dynamic.
But it still seems reasonable to expect that this process, globalization, is destroying something and creating something equally coherent in its place. It is reasonable to expect that there are coherent new patterns of life emerging that deserve the label "globalized lifestyles," and that large groups of people somewhere are living these lifestyles. It is reasonable, in short, to expect some folkways of globalization.
Surprisingly, no candidate pattern really appears to satisfy the
definition of folkway.
With hindsight, this is not surprising. What is interesting about the list
of ways within a folkway is the sheer quantity of stuff that must be
defined, designed and matured into common use (in emergent ways of
course), in order to create a basic functioning society. Even when a
society is basically sitting there, doing nothing interesting (and by "interesting" I mean living out epic collective journeys such as the settlement of the West for America or the Meiji restoration in Japan) there
is a whole lot of activity going on.
The point here is that the activity within a folkway is not news, but
that doesn't mean nothing is happening. People are born, they grow up,
have lives, and die. All this background folkway activity frames and
contextualizes everything that happens in the foreground. The little and
big epics that we take note of, and turn into everything from personal
blogs to epic movies, are defined by their departure from, and return to,
the canvas of folkways.
That is why, despite the power of globalization, there is "no there there," to borrow Gertrude Stein's phrase. There is no canvas on which to
paint the life stories of wannabe global citizens itching to assert a social
identity that transcends tired old categories such as nationality, ethnicity,
race and religion.
This wouldn't be a problem if these venerable old folkways were in good shape. They are not. As Robert Putnam noted in Bowling Alone, old folkways in America are eroding faster than the ice caps are melting. Globalization itself, of course, is one of the causes. But it is not the only one. Folkways, like individual lives and civilizations, undergo rise and fall dynamics, and require periodic renewals. They have expiry dates.
Every traditional folkway today is an end-of-life social technology; internal stresses and entropy, as much as external shocks, are causing them to collapse. The erosion has perhaps progressed fastest in America, but is happening everywhere. I am enough of a nihilist to enjoy the crash-and-burn spectacle, but I am not enough of an anarchist to celebrate the lack of candidates to fill the vacuum.
The Usual Suspects
We've described the social "before" of globalization. What does the "after" look like? Presumably there already is (or will be) an "after," and globalization is not an endless, featureless journey of continuous unstable change. That sounds like a dark sort of fun, but I suspect humans are not actually capable of living in that sort of extreme flux. We seek the security of stable patterns of life. So we should at some point be able to point to something and proclaim, "there, that's a bit of globalized society."
I once met a 19-year-old second-generation Indian-American who, clearly uneasy in his skin, claimed that he thought of himself as a "global citizen." Is there any substance to such an identity?
How is this global citizen born? What are the distinguishing peculiarities of his "speech ways" and "marriage ways"? What does he eat for breakfast? What are his "building ways"? How does this creature differ from his poor old frog-in-the-well national-identity ancestors? If there were four dominant folkways that shaped America, how many folkways are shaping the El Dorado landscape of globalization that he claims to inhabit? One? Four? Twenty? Which of this set does our hero's story conform to? Is the Obama folkway (for want of a better word) a neo-American folkway or a global folkway?
These questions, and the difficulty of answering them, suggest that the concept of a global citizen is currently a pretty vacuous one. Fischer's point that the folkway is a "complex of interlocking parts" is a very important one. Most descriptions of globalized lifestyles fail the folkway test either because they are impoverished (they don't offer substance in all 23 categories) or are too incoherent; they lack the systematic interlocking structure.

- Multicultural societies are no more than many decrepit old folkways living in incongruous juxtaposition, and occasionally coming together in Benetton ads and anxious mutual-admiration culture fests.
- Melting pot societies are merely an outcome of some folkways dissolving into a dominant base, and others forming distinguishable subcultural flavors.
- Cyberpunk landscapes are more fantasy than fact; a few people may be living on this gritty edge, but most are not.
- Intentional communities, which date back to early utopia experiments, have the characteristic brittleness and cultural impoverishment of too-closed communities, which limits them to marginal status.
- Purely virtual communities are not even worth discussing.
- Click-and-mortar communities, which might come together virtually, have so far been just too narrow. Take a moment to browse the groups on meetup.com. How many of those interest groups do you think have the breadth and depth to anchor a folkway?

The genetic analogy helps explain why both coverage (of the 23 categories) and "complex of interlocking parts" are important. Even the best a la carte lifestyle is a bit of a mule. In Korea for instance, or so I am told, marriages are Western style but other important life events draw from traditional sources. Interesting, perhaps even useful, but not an independent folkway species capable of perpetuating itself as a distinct entity. That's because a la carte gives you coverage, but not complex interlocking. On the other hand, biker gangs have complex interlocking structures and even perpetuate themselves to some extent, but do not have complete coverage. I've been watching some biker documentaries lately, and it is interesting how their societies default back to the four-folkway base for most of their needs, and only depart from it in some areas. They really are subcultures, not cultures.
Latte Land
I don't know if there is even one coherent folkway of globalization, let alone the dozen or so that I think will be necessary at a minimum (some of you might in fact argue that we need thousands of micro-Balkan folkways, but I don't think that is a stable situation). But I have my
theories and clues.
Here's one big clue. Remember Howard Dean and the "tax-hiking, government-expanding, latte-drinking, sushi-eating, Volvo-driving, New York Times-reading, body-piercing, Hollywood-loving, left-wing freak show" culture?
Perhaps that's a folkway? It wouldn't be the first time a major folkway derived its first definition from an external source. It sounds a la carte at first sight, but there's some curious poetic resonance suggestive of
deeper patterns.
For a long time I was convinced that this was the case; that Blue
America could be extrapolated to a Blue World, and considered the
Promised Land of globalization, home to recognizable folkways. That it
might allow (say) the Bay Area, Israel, Taiwan and Bangalore to be tied
together into one latte-drinking entrepreneurial folkway for instance. And
maybe via a similar logic, we could bind all areas connected, and socially
dominated by, Walmart supply chains into a different folkway. If Latte
Land is one conceptual continent that might one day host the folkways of
globalization, Walmartia would be another candidate.
I think there's something nascent brewing there, but clearly we're talking seeds of folkways, not fully developed ones. There are tax-hiking,
latte-drinking types in Bangalore, but it is still primarily an Indian city,
just as the Bay Area, despite parts achieving an Asian majority, is still
recognizably and quintessentially American.

But there are interesting hints that suggest that even if Latte Land isn't
yet host to true globalized folkways, it is part of the geography that will
eventually be colonized by globalization. One big hint has to do with walls
and connections.
In the Age of Empires, the Chinese built the Great Wall to keep the
barbarians out, and a canal system to connect the empire. The Romans
built Hadrian's Wall across Britain to keep the barbarians out, and the
famed Roman roads to connect the insides.
Connections within, and walls around, are characteristic features of an
emerging social geography. Today the connections are fiber optic and
satellite hookups between buildings in Bangalore and the Bay Area. In
Bangalore, walled gated communities seal Latte Land off from the rest of
India, their boundaries constituting a fractal Great Wall. In California, if
you drive too far north or south of the Bay Area, the cultural change is
sudden and very dramatic. Head north and you hit hippie-pot land. Head
south and you hit a hangover from the 49ers (the Gold Rush guys, not the
sports team). In some parts of the middle, it is easier to find samosas than
burgers. Unlike in Bangalore, there are no physical walls, but there is still
a clear boundary. I don't know how the laptop farms of Taiwan are sealed off, or the entrepreneurial digital parts of Israel from the parts fighting messy 2000-year-old civilizational wars, but I bet they are.
Within the walls people are more connected to each other
economically than to their host neighborhoods. Some financial shocks will
propagate far faster from Bangalore to San Jose than from San Jose to
(say) Merced. I know at least one couple whose "marriage way" involves the longest geometrically possible long-distance relationship, a full 180 degrees of longitude apart, and maintained through frequent 17-hour flights.
Curiously, since both the insides and outsides of the new walls are
internally well-connected, though in different ways, the question of who
the barbarians are is not easy to answer. My tentative answer is that our
side of the wall is in fact the barbarian side. Our nascent folkways have
more in common with the folkways of pastoral nomads than settled
peoples. Unlike the ancient Chinese and Romans, we've built the walls to seal the settled people in. I'll argue that point another day. Trailer: the key is that "barbarians" in history haven't actually been any more barbaric than settled peoples, and the ages of their dominance haven't actually been dark ages. We may well be headed for a "digital dark age" driven by digital nomad-barbarians.
Our missing folkways, I think, are going to start showing up in Latte
Land in the next 20 years. Also in Walmartia and other emerging
globalization continents, but I don't know as much about those.
In the meantime, I am curious if any of you have candidate folkways.
Remember, it has to cover the 23 categories in complex and
interconnected ways, and there should be a recognizable elite whose
discourses are shaping it (the folkway itself can't be limited to the elite
though: the elite have always had their own globalized jet-setting
folkways; we are talking firmly middle class here). How many folkways
do you think will emerge? 0, 1, 10 or 1000? Where? How many
conceptual continents?
Random side note: This post has officially set a record for longest
gestation period. I started this essay in 2004, two years before I started
blogging. It's kinda been a holding area for a lot of globalization ideas,
about 20% of which made it into this post. I finally decided to flush it out
and evolve the thread in public view rather than continue it as a working
(very hard-working) paper.
Random side note #2: There are lots of books that are so thick, dense
and chock-full of fantastic ideas that I could never hope to review or
summarize them. In a way, this post is an alternative sort of "book review," based on plucking one really good idea from a big book. Fischer's
book is a worthwhile reading project if you are ready for some intellectual
heavy lifting.

On Going Feral
August 19, 2009
Yesterday, a colleague looked at me and deadpanned, "aren't you supposed to have a long beard?" When you remote-work for an extended period (it's been six months since my last visit to the mother ship), you can expect to hear your share of jokes and odd remarks when you do show up. Once you become a true cloudworker, a ghost in the corporate machine who only exists as a tinny voice on conference calls, perceptions change. So when you do show up, you find that people react to you with some confusion. You're not a visitor or guest, but you don't seem to truly belong either.
I hadn't planned on such a long period without visits to the home base, but the recession and a travel freeze got in the way of my regular monthly visits for a while. The anomalous situation created an accidental social-psychological experiment with me as guinea pig. What's the difference between six months and one month, you might ask? Everything. Monthly visits keep you domesticated. Six months is long enough to make you go feral. I've gone feral.

Consider the meaning of the word: the condition of a domesticated species that goes back to its wild ways. The picture above is of a feral cat. Curiously enough, this one is apparently from Virginia, where I live.
Most common domesticated animals can go feral: dogs, pigs, cats,
horses and sheep, for instance. We tend to forget, though, that the most impossibly ornery species we've managed to domesticate is ourselves, homo sapiens. Settled agriculture, urbanization, religion, the nation-state and finally, industrialization, each added one more layer of domestication. It's not for nothing that primate ethologist Desmond Morris titled one of
his books about human sociology The Human Zoo. Modern work styles
are ripping away all the layers at once. I am an atheistic, post-nationalist
immigrant from the other side of the planet, living in a neo-urban (though
not bleak-cyberpunk) landscape. I inhabit physical environments where
old communities are crumbling, and people are tentatively groping for
social structure through meetups (aside: I just started a writers' meetup,
1000 words a day, in the DC area). I am tethered to a corporation too
loosely to be a significant part of it socially. No Friday happy hours or
regular lunch-buddy for me.
I've become to the society that is my parent company what the
privateers of old used to be to the big naval powers of the 18th century. A
sort of barely-legal socio-economic quasi-outlaw. Maybe I am yielding to
self-romanticizing temptations, but there are some hard truths here.
Political scientists often use a fictional construct, "man in the state of nature," as a starting point for their conceptual models of the gradual domestication (civilization) of humans. William Golding offered one
fictional imagining of what might happen if humans went feral at an early
enough age, in Lord of the Flies. But until now, the idea of modern feral
humans has largely been a theoretical one.
Cloudworker lifestyles (mobile, home-based, unshaven, pajama-clad and Starbucks-swilling) create a psychological transformation that is very similar to what happens when animals go feral. In animals, it takes a couple of generations of breeding for the true wild nature to re-emerge. Cats, for instance, revert to a basic, hardy, stocky, short-haired, robustly interbred tabby variety. Dogs become mutts. But in humans it can happen faster, since most of our domestication is through education and socialization rather than breeding.
You might think that the true tabby-mutt human must live outside the
financial system, maybe as a wilderness survivalist or fight-club member.
Maybe engage in desperate and deadly Lord-of-the-Flies style lifestyles, all nature-red-in-tooth-and-claw. But that's actually a mistaken notion,
because that sort of officially checked-out or actively nihilistic person is
defined and motivated by the structure of human civilization. To rebel is to
be defined by what you rebel against. Criminals and anarchists are
civilized creatures. Feral populations are agnostic, rather than either
dependent on, or self-consciously independent of, codified social
structures. Feral cloudworkers use social structures where it accidentally
works for them, rather like feral cats congregating near fish markets, and
improvise ad-hoc self-support structures for the rest of their needs.
As a truly feral cloudworker, you simply end up being thrown to your
own devices. Social infrastructure no longer works for you, except by
accident. You don't get friends for free just because you have a job or
belong to a bowling league. You improvise. You find some social contact
at Starbucks. You go for long walks and learn to appreciate solitude more.
You become more closely attuned to your personal bio-rhythms. You nap
(well, I do). You have left your cubicle for the wide world, but you pay the
cost. You have to learn to survive in the social wilderness. Much of it is as
bleak as the deep open ocean, where it takes the personality of the oceanic
white-tip shark to survive.
But the contrast is most vivid when you do, on occasion, rejoin
society as a physical guest. I was surprised at how different I felt, starting
with my shoes and badge (I am barefoot or in flip-flops at home. At
work I have to wear close-toed shoes because it is a lab environment).
The regular rhythms of morning coffee-hello rituals and meeting behaviors
seem strangely alien. It all seems like a foreign country you've only read about in theoretical org charts. Names and faces drift apart, starved of nourishing, daily reinforcement, and you struggle to conjure up names of people you used to pass by in the hallways every day. Out of cc, out of mind. The logic of promotions, team staffing and budgeting seems as obscure as the rituals of Martian society. Even though you know that in
theory, you are being affected by it.

As my colleague's beard joke illustrates, you are perceived differently. You are some strange off-the-org-chart species, and people don't know what to do with you. You are disconnected from water-cooler gossip to a
significant extent, but the fact that you are clearly surviving, productive,
and effective, I suspect, makes the regular workers introspect as much as
us aliens. I imagine they wonder whether all the seemingly solid reality all
around them can really be what it seems if somebody like me can
randomly show up and disappear occasionally and still impact things as
much as their next-cubicle neighbor. Anybody with imagination who is
still desk-bound in traditional ways, I suspect, is feeling reality and its
walls, floors and ceilings dissolve around him/her, Matrix style.
On Friday, I'll return to my natural wild habitat. Will life re-domesticate me at some point? I don't know.

On Seeing Like a Cat


August 6, 2009
Cats and dogs are the most familiar among the animal archetypes
inhabiting the human imagination. They are to popular modern culture
what the fox and the hedgehog are to high culture, and what farm animals
like cows and sheep were to agrarian cultures. They also differ from foxes,
hedgehogs, sheep and cows in an important way: nearly all of us have
directly interacted with real cats and dogs. So let me begin this meditation
by introducing you to the new ribbonfarm mascot: the junkyard cat,
Skeletor, and my real, live cat, Jackson. Here they are. And no, this isn't an "aww-my-cat-is-so-cute" post. I hate those too.

The Truth about Cats and Dogs


I am a cat person, not in the sense of liking cats more (though I do),
but actually being more catlike than doglike. Humans are more complex
than either species; we are the products of the tension between our dog-like and cat-like instincts. We do both sociability and individualism in
more complicated ways than our two friends; call it hyper-dogginess plus
hyper-cattiness. That is why reductively mapping yourself exclusively to one or the other is such a useful exercise. You develop a more focused
self-awareness about who you really are.
Our language is full of dog and cat references. Dogs populate our
understanding of social dynamics: conflict, competition, dominance,
slavery, mastery, belonging and otherness:

- "Finance is a dog-eat-dog world"
- "He's the alpha-dog/underdog around here"
- "He's a pit bull"
- "Dhobi ka kutta, na ghar ka, na ghat ka" (trans: the washerman's dog belongs neither at the riverbank, nor at the house; i.e. a social misfit everywhere)
- "He follows her around like a dog"
- "He looks at his boss with dog-like devotion"

Cat references speak to individualism, play, opportunism, risk, comfort, mystery, luck and curiosity:

- "A cat may look at a king"
- "She curled up like a cat"
- "A cat has nine lives"
- "Managing this team is like herding cats"
- "Look what the cat dragged in"
- "It's a cat-and-mouse game"
- "Curiosity killed the cat"

There is a dark side to each: viciousness and deliberate cruelty (dogs), coldness and lack of empathy (cats). We also like to use the idealized cat-dog polarity to illuminate our understanding of deep conflicts: "they are going at it like cats and dogs." Curiously, though the domestic cat is a far less threatening animal than the domestic dog (wolf vs. tiger is a different story), we are able to develop contempt for certain dog-archetypes, but not for cat-archetypes. You can't really insult someone in any culture by calling him/her a cat (to my knowledge). But there is fear associated with cats (witches, black cats for bad luck) in every culture. Much of this fear, I believe, arises from the cat's clear indifference to our assumptions about our own species-superiority and intra-species status.
That point is clearly illustrated in the pair of opposites "he looks at his boss with dog-like devotion" / "a cat may look at a king." The latter is my favorite cat-proverb. It gets to the heart of what is special about the cat as an archetype: being not oblivious, but indifferent to ascriptive authority and social status. You can wear fancy robes and a crown and be declared King by all the dogs, but a cat will still look quizzically at you, trying to assess whether the intrinsic you, as opposed to the socially situated, extrinsic you, is interesting. Like the child, the cat sees through the Emperor's lack of clothes.
Our ability to impress and intimidate is mostly inherited from
ascriptive social status rather than actual competence or power. Cats call
our bluff, and scare us psychologically. Dogs validate what cats ignore.
But it is this very act of validating the unreal that actually creates an
economy of dog-power, expressed outside the dog society as the power of
collective, coordinated action. Dogs create society by believing it exists.
In the Canine-Feline Mirror
We map ourselves to these two species by picking out, exaggerating
and idealizing certain real cat and dog behaviors. In the process, we
reveal more about ourselves than either cats or dogs. "Cats are loyal to places, dogs to people" is an observation that is more true of people than either dogs or cats. Just substitute interest in the limited human sphere (the globalized world of gossipy, politicky, watercoolerized, historicized and CNNized human society; feebly ennobled as "humanism") versus the entire universe (physical reality, quarks, ketchup, ideas, garbage, container ships, art, history, humans-drawn-to-scale). There are plenty of such dichotomous observations. A particularly perceptive one is this: dog-people think dogs are smarter than cats because they learn to obey commands and do tricks; cat-people think cats are smarter for the exact same reason. Substitute interest in degrees, medals, awards, brands and titles versus interest in snowflakes and Saturn's rings. I don't mean to be derisive here: medals and titles are only unreal to cats. Remember, dogs make them real by believing they are real. They lend substance to the ephemeral through belief.
Cat-people, incidentally, can develop a pragmatic understanding of
the value of dog-society things even if deep down they are puzzled by
them. You can get that degree and title while being ironic about it. Of
course, if you never break out and go cat-like at some point, you will be a
de facto dog (check out the hilarious Onion piece a commenter on this blog pointed out a while back: "Why can't anyone tell I am wearing this suit ironically?").
But let's get to the most interesting thing about cats, an observation that led to the title of this article. My copy of The Encyclopedia of the Cat says:
It is not entirely frivolous to suggest that whereas pet
dogs tend to regard themselves as humans and part of the
human pack, the owner being the pack leader, cats regard
the humans in the household as other cats. In many ways
they behave towards people as they would towards other
kittens in the nest, grooming them, snuggling up with
them, and communicating with them in the ways that they
would use with other cats.
There is in fact an evolutionary theory that while humans deliberately
domesticated wild dogs, cats self-domesticated by figuring out that
hanging around humans led to safety and plenty.
I want to point out one implication of these two observations: cats
aren't unsociable. They just use lazy mental models for the species-society
they find themselves in: projecting themselves onto every other being they
relate to, rather than obsessing over distinctions. They only devote as
much brain power to social thinking as is necessary to get what they want.
The rest of their attention is free to look, with characteristic curiosity, at
the rest of the universe.
To summarize, dog identities are largely socially constructed, in-species (actual or adopted, which is why the reverse-pet "raised by wolves" sort of story makes sense). Cat identities are universe-constructed. Which brings us to a quote from Kant (I think).
Personal History, Identity and Perception
It was Kant, I believe, who said, "we see not what is, but who we are." We don't start out this way, but as our world-views form by accretion, each new layer is constructed out of new perceptions filtered and distorted by existing layers. As we mature, we get to the state Kant describes, where identity overwhelms perception altogether, and everything we see reinforces the inertia of who we are, sometimes leading to complete philosophical blindness. Neither cats nor dogs can resist this inevitability,
this brain-entropy, but our personalities drive us to seek different kinds of
perceptions to fuel our identity-construction.
Dogs, and dog-like people end up with socially-constructed, largely
extrinsic identities because that's what they pay attention to as they
mature: other individuals. People to be like, people to avoid being like. It
is at once a homogenizing and stratifying kind of focus; it creates out of
self-fulfilling beliefs an identity mountain capped by Ken and Barbie
dolls, with foothills populated by hopeless, upward-gazing peripheral
Others, who must either continue the climb or mutiny.
Cats and cat-like people, though, simply aren't autocentric/species-centric (anthropomorphic, canino-morphic and felino-morphic). Wherever they are on the identity mountain believed into existence by dogs, they are looking outwards, not at the mountain itself. They are driven to look at everything from quarks to black holes. In this broad engagement of reality, there isn't a whole lot of room for detailed mental models of just one species. In fact, the ideal cat uses exactly one space-saving mental (and, to dogs, wrong) model: everyone is basically kinda like me. Appropriate, considering we are one species on one insignificant speck of dust circling an average star in a humdrum galaxy. The Hitchhiker's Guide to the Galaxy, remember, has a two-word entry for Earth: "Mostly Harmless." This indiscriminate, non-autocentric curiosity is dangerous though: curiosity does kill the cat. Often, it is dogs that do the killing. We may be mostly harmless to Vogons and Zaphod Beeblebrox, but not to ourselves.

Paradoxically, this leads cat-people, through the Kantian dynamic, to develop identities with vastly more diversity than dog-people. It is quite
logical though: random-sampling a broader universe of available
perceptions must inevitably lead to path-dependent divergence, while
imitative-selective-sampling of a subset of the universe must lead to some
convergence. By looking inward at species-level interpersonal differences,
dog-people become more alike. By caricaturing themselves and everybody
else to indistinguishable stick-figure levels, cats become more
individualized and unique. The more self-aware among the cats realize
that "who I am" and "what I see" are two aspects of the same reality: the sum
total of their experiences. Their identities are at once intrinsic and
universal.
That's why the title of the article is "Seeing Like a Cat." We see not what is, but who we are. Cats become unique by feeding on a unique
history of perceptions. And that makes their perspectives unique. To see
the world like a cat is to see it from a unique angle. Equally, it is the
inability to see it from the collective perspective of dogs.
If you are a lucky cat, your unique cat perspective has value in dog
society. That brings us to Darwin.
Dogs, Cats and Darwin
To intellectualize something as colloquial as the cat-dog discourse
might seem like a pointless exercise to some. And yes, as the hilariously
mischievous parody of solemn analysis, Why Cats Paint: A Theory of
Feline Aesthetics demonstrates, it is easy to get carried away.
Yet I think there is something here as fundamental as the
fox/hedgehog argument. As I said when I started, we have both cat-like
and dog-like tendencies within us, and the two are not compatible. Both
sorts of personalities are necessary for the world to function, but you can
really only be like one or the other, and the course is set in childhood, long
before we are mature enough to consciously choose.
Where does this dichotomy come from though? Darwin, I think.

When we think in Darwinist terms, we usually pay the most attention to the "natural selection" and "survival of the fittest" bits, which dog-belief societies replicate as artificial selection and social competition. But there's
the other side of Darwin: variation. It is variation and natural selection.
Variation is the effect of being influenced by randomness. Without it, there
is no selection.
Cats create the variation, and mostly die for their efforts. The
successful (mostly by accident) cats spawn dog societies. That's why, at the very top of the identity pyramids constructed by dog-beliefs, even above the prototypical Barbie/Ken abstractions, you will find cats. Cats who didn't climb the mountain, but under whom the mountain grew.
Those unsociable messed-up-perspective neurotics who are as puzzled by
their position as the dogs who actually want it.

How to Take a Walk


August 9, 2010
It was cool and mildly breezy around 8 PM today. So I went for a
walk, and I noticed something. Though I passed a couple of hundred
people, nobody else was taking a walk. There were people returning from
work, people going places with purpose-laden bags, people running,
people going to the store, people sipping slurpies. But nobody taking a
walk. Young women working their phones, but not taking a walk. People
walking their dogs, or pushing a stroller, with the virtuous air of one
performing a chore for the benefit of another, but not themselves taking a
walk. I was the only one taking a walk. The closest activity to taking a
walk that I encountered was two people walking together and forgetting,
for a moment, to talk to each other. The moment passed. One of them said
something and they slipped back into talking rather than taking a walk.
My observation surprised me, and I tried to think back to other walks.
I take a lot of walks, so there are a lot of memories to comb through. In
my 13 years of taking walks in the United States, I could remember only
ever seeing one native-born American taking a walk. All other examples I
could remember were clearly immigrants. Middle-aged eastern European
matrons strolling. Old Chinese men walking slowly with their hands
behind their backs. Even elderly Americans don't seem to take walks the way elderly immigrants do. They walk slowly, but they look like they're
doing it for the exercise. They often look resentfully at young runners.
It is not hard to take a walk. The right shoes are the ones nearest the
door. The right clothes are the ones you happen to be wearing. You will
not sweat. You may need a jacket if it is cold, or an umbrella if it is
raining. If you pass anybody, you are not walking slowly enough for it to be taking a walk. If you need to make up a nominal purpose like "get more bananas from the store," you are not taking a walk.
Taking walks is the entry drug into the quiet, solitary heaven of
idleness (the next level up is sitting on a bench without a view). For
modern Americans, idleness is a shameful, private indulgence. If they attempt it in public, they are stricken by social anxiety. They seem to fear that the slow, solitary, and obviously purposeless amble that marks taking a walk signals social incompetence or a life unacceptably adrift. If a shopping bag, gym bag, friend or dog cannot be manufactured, nominal non-idleness must be signaled through an ostentatious "I have friends" phone call, or email-checking. If all else fails, hands must be placed defiantly in pockets, to signal a brazen challenge to anyone who dares look askance at you: "Yeah, I'm takin' a walk! You got a problem with that?"
In America, visible idleness is a luxury for the homeless, the
delinquent and immigrants. The defiantly tautological protest, "I have a life," is quintessentially American. The American life does not exist until it
is filled up.
Even a pause at a bench must be justified by a worthwhile view or a
chilled drink.
Worthwhile. Now, there's an American word. Worth-while. Worth-your-while. The time value of money. Someone recently remarked that the iPad has lowered the cost of waiting. Americans everywhere heaved a sigh
of relief, as their collective social anxiety dipped slightly. The rest of the
world groaned just a little bit.
The one American I remember seeing taking a walk was Tom Hales,
then a professor at the University of Michigan. He was teaching the
differential geometry course I was auditing that semester. One dark,
solitary Friday, while the rest of America was desperately trying to
demonstrate to itself that it had a life, I was taking a walk in an empty,
desolate part of the campus. I saw Hales taking a walk on the other side of
the street. He did not look like he was pondering Deep Matters. He merely
looked like he was taking a walk.
That year he proved the Kepler conjecture, a famous unsolved
problem dating back to 1611. A beautifully pointless problem about how
to stack balls. I like to think that Kepler must have enjoyed taking walks
too.

The Blue Tunnel


February 21, 2008

How Do You Run Away from Home?


April 11, 2012
My Big History reading binge last year got me interested in the
history of individualism as an idea. I am not entirely sure why, but it
seems to me that the right question to ask is the apparently whimsical one, "How do you run away from home?"
I don't have good answers yet. So rather than waiting for answers to
come to me in the shower, I decided to post my incomplete thoughts.
Let's start with the concept of individualism.
The standard account of the idea appears to be an ahistorical one; an "ism" that modifies other "isms" like libertarianism, existentialism and anarchism.
Fukuyama argues, fairly persuasively, that the individual as a
meaningful unit only emerged in the early second millennium AD in
Europe, as a consequence of the rise of the Church and the resultant
weakening of kinship-based social structures. This immediately suggests a
follow-on question: is the slow, 600-700-year rise of individualism an
expression of an innate drive, unleashed at some point in history, or is it an
unnatural consequence of forces that weaken collectivism and make it
increasingly difficult to sustain? Are we drifting apart or being torn apart?
Do we possess a fundamental "run away from home" drive, or are we torn away from home by larger, non-biological forces, despite a strong attachment drive?
Chronic Disease or Natural Drive?
If the former is true, individualism is a real personality trait that was
merely expensive to express before around 1300 AD. The human
condition prior to the rise of individualism could be viewed as a sort of widespread diseased state. Only the rare prince or brave runaway could
experience an individualistic lifestyle.
If the latter is true, individualism is something like an occasional
solitude-seeking impulse that has been turned into a persistent chronic
condition by modern environments. That would make individualism the
psychological equivalent of chronic physiological stress.
According to Robert Sapolsky's excellent book Why Zebras Don't Get Ulcers, chronic stress is the diseased state that results when natural and healthy acute stress responses (the kind we use to run away from lions) get turned on and never turned off. This is more than an analogy.
If individualism is a disease, it probably works by increasing chronic
stress levels.
The interesting thing about this question is that the answer will seem
like a no-brainer to you depending on your personality. To someone like
me, there is no question at all that individualism is natural and healthy. To
someone capable of forming very strong attachments, it seems equally
obvious that individualism is a disease.
The data apparently supports the latter view, since happiness and
longevity are correlated with relationships, as is physical health. Radical
individualism is physically stressful and shortens lifespans. I bet if you
looked at the data, you'd find that individualists do get ulcers more
frequently than collectivists.
But to conclude from this data that individualism is a disease is to
reduce the essence of being human to a sort of mindlessly sociable
existence within a warm cocoon called home. If individualism is a disease,
then the exploratory and restless human brain that seeks to wander alone
for the hell of it is a sort of tumor.
Our brains, with their capacity for open-ended change, and restless
seeking of change and novelty (including specifically social change and
novelty), make the question non-trivial. We can potentially reprogram
ourselves in ways that muddy the distinctions between natural and
diseased behaviors.

The social perception of individualism through history has been decidedly mixed, and we have popular narratives around both possibilities thrown at us from early childhood (think of two classic children's books: The Runaway Bunny and Oh, the Places You'll Go!).
Around the world (and particularly in the West), individualism has
superficially positive connotations. Correlations to things like creativity
and originality are emphasized.
But the social-institutional structure of the world possesses a strong immune defense against individualism everywhere. We don't realize this because mature Western-style institutions allow for a greater variety of
scripts to choose from.
This variety represents a false synthesis of individualism and
collectivism. A domestication of individualist instincts. A better synthesis
is likely to be psychological rather than sociological, since we are talking
about intrinsic drives.
The Runaway Drive
The existence of an attachment drive is not a matter for debate. It
clearly exists, and is just as clearly healthy and natural. Nobody has
suggested (to my knowledge) that the ability to form attachments and
relationships is a disease. There do exist fundamentally unsociable species
(such as tigers and polar bears) for which adult sociability could be
considered a disease, but homo sapiens is not among them.
The attachment drive breaks down into two sub-drives, getting ahead
and getting along (competition and cooperation) that both require being
attached to the group.
The question is whether a third drive, getting away, exists. This is not
the same as being an exile or outcast. Those are circumstantial and
contingent situations: self- or other-imposed punishments. I am also not
talking about running away from home as a response to toxic
communities or abusive families. That is merely a case of lower-level
survival drives in Maslow's pyramid overriding higher-level social drives.

The getting away drive is the drive to voluntarily leave a group because it is a natural thing to do. A drive that is powerful enough to
permanently overpower getting ahead and getting along drives, resulting
in a persistent state of solitary nomadism and transient sociability in the
extreme case, like that of George Clooney in Up in the Air. In his case, it
turns out to be empty bravado, a pretense covering up a yearning for
home. But I believe real (and less angsty) versions exist.
If we do possess such a drive, it presumably shows up as a weaker or
stronger trait, with some individuals remaining strongly attached and
others itching to cut themselves loose. In my post, "Seeing Like a Cat," I
argued that:
I am a cat person, not in the sense of liking cats more
(though I do), but actually being more catlike than doglike.
Humans are more complex than either species; we are the
products of the tension between our dog-like and cat-like
instincts. We do both sociability and individualism in more
complicated ways than our two friends; call it hyper-dogginess plus hyper-cattiness. That is why reductively
mapping yourself exclusively to one or the other is such a
useful exercise.
To argue for a getting away drive is to argue for the presence of a cat-like element to our nature (specifically, tiger-like unsociability, not lion-like; in the latter, individualism is exile imposed on young males. Domestic cats appear to be an in-between species).
Fukuyama does not get to the evolutionary psychology of
individualism, and appears to be agnostic towards the question. He merely
marshals evidence to show that the original human condition was a
strongly collectivist one, from which at some point a widespread pattern of
individualist behavior emerged. Since his focus is on the institutional
history of civilization, he limits his treatment to the necessary level of
institutional development and externalized trust required for individualism
to exist.

Graeber appears to believe that it is a disease. For him, identity is social identity. The individual is defined in terms of a nexus of
relationships. To be torn away from this nexus is slavery and loss of
identity. While the theory is a workable one if you are talking about actual
slavery (he treats the history of the African slave trade at considerable
length), things get murky when you get to other situations.
The Three Rs of Rootedness
Homesickness provides a good lens through which to understand
attachment drives. Diasporas and expat communities provide a good
illustration of the dynamics of both wanderlust and homesickness.
For some, the expat condition is torture. They return to some place
that feels like home every chance they get. If they cannot, they recreate
home wherever they are, as a frozen museum of memories. Home in this
sense is doggie home. It is a social idea, not a physical idea. Physical
elements of home serve as triggers for memories of social belonging.
There is a third kind of response to the diaspora state, integration into the
new environment, that is also an expression of homesickness. It is merely
a more adaptable variety that is capable of building a new home in
unfamiliar surroundings (which can be either a new stationary geography
or a moving stream). This takes effort. Many of my Indian friends who
came to America at the same time as I did are now rabid football fans.
They used to be rabid cricket fans back in India. It's just a small part of
their careful (and ongoing) effort to construct a new sense of home.
All these responses are a reaction to the pain of homesickness: return,
recreation, rerooting. The three Rs of rootedness.
It is tempting to believe tha