On the fiver

Issue 960, by Eddie Ford, 02.05.2013

Winston Churchill: A reactionary bigot


Thatcher has been compared to Winston Churchill, and quite rightly - both were
virulently anti-working class. Eddie Ford looks at Churchill’s toxic legacy

Margaret Thatcher is now an official saint of the right wing of the bourgeoisie. That was made
abundantly clear by the Tory media’s revoltingly sycophantic coverage of her funeral, which was a
state funeral in all but name. Her elevated status is illustrated by the frequent comparisons to
Winston Churchill - the latter proclaimed as the country’s greatest ever wartime leader and the
former designated the greatest ever ‘peacetime’ prime minister (leaving aside Northern Ireland and
the Falklands for now). She saved the country from disaster in the same way that Churchill rescued
us from the Nazi menace.
Now we are to get Winston Churchill’s image on every £5 banknote, along with his “blood, toil,
tears and sweat” quote to a backdrop of parliament (he previously appeared on a 1965 crown coin).
He will be replacing Elizabeth Fry, the progressively-minded social reformer and Quaker known as
the “angel of prisons”, who has been on the note since 2001. Mervyn King, the departing Bank of
England governor, even suggested that the new notes might become known as “Winstons” - perhaps
destined to become the most popular ever manufactured.
Explaining his decision, King said Churchill “holds a special place in the affections of our nation”,
for his indefatigable “energy, courage, eloquence, wit and public service are an inspiration to us
all”. Above all, King claimed, he “remains a hero of the entire free world” - helping to ensure the
“survival of those freedoms” that we “continue to enjoy today”. He was the ultimate democrat, it
seems. A saviour.
Depressingly, though predictably enough, there has not been a squeak of protest against Mervyn
King’s decision - regarded as entirely unproblematic. A stark contrast to Thatcher’s funeral, which
divided the country. At least half the population hated the woman, not just the ‘usual suspects’ on
the far left. Churchill, on the other hand, is presented - and overwhelmingly accepted - as some sort
of unifying figure.
But if the working class had a collective memory, which sadly it does not at the moment, not having
its own party, it would be strongly objecting to his appearance on the note. Why should we have to
look at his damned face every day? He was without doubt the most virulently anti-working class
representative of the British high establishment in the 20th century bar none. Like Margaret
Thatcher he was a class-war warrior to his marrow, never afraid to take on the ‘enemy within’ - the
labour movement and the organised working class. Therefore, in that sense, both Thatcher and
Churchill fully deserve to be mentioned in the same breath.

Force
Say what you will about Winston Churchill, but one thing cannot be denied: he was consistent - that
is, consistently anti-working class and reactionary, whether at home or abroad. As home secretary
in 1910, he sent in the troops against the miners at Tonypandy (the so-called Tonypandy or Rhondda
riots). Though no shots were fired and the police were far more despised - one historian describing
them as an “army of occupation” - the presence of the troops prevented the strike action from
ending early in the miners’ favour. The troops also helped ensure that strikers and miners’ leaders
would be successfully prosecuted the following year.
Churchill is still hated to this day in many parts of south Wales due to Tonypandy. In 2010 a Welsh
local council in the Vale of Glamorgan opposed the renaming of a military base after him because
he sent the troops into the Rhondda. Jackie Griffin, clerk of Llanmaes council, stated he was unable
to support such an “inappropriate name change” due to the fact that there is “still a strong feeling of
animosity” towards Winston Churchill in the community.1

When it came to the 1926 general strike, Churchill - by now chancellor of the exchequer - wanted to
do the same thing: send the troops in. As the enthusiastic editor of the British Gazette, which ran for eight
editions during the strike, he openly advocated using physical force. Machine guns should be used
on the striking miners if required. His reasoning was quite simple and not without logic. For him,
the general strike was a quasi-revolutionary venture and he therefore had no interest in a negotiated
settlement - it had to be crushed by any means necessary. “Either the country will break the general
strike”, he declared, “or the general strike will break the country”; he did not agree that the TUC
“have as much right as the government to publish their side of the case and to exhort their followers
to continue action”. They had no right to resist the government of the day. It is also worth noting
that Churchill wanted to turn the BBC into a government propaganda department - to hell with
all pretence of ‘impartiality’.
Showing exactly what he would do to defend the interests of the British ruling class, Churchill
helped create the Black and Tans - which terrorised the Irish people between 1920 and 1922. No-
one disputes that the Tans killed and terrorised on a large scale, resorting to ferocious reprisals and
‘collective punishment’. When a Tan was killed in Cork, they burnt down more than 300 buildings
in the city centre and afterwards proudly pinned pieces of burnt cork to their caps. They were also
involved in the notorious 1920 Bloody Sunday massacre, an atrocity which occurred following the
spectacular assassinations of over a dozen members of the Cairo Gang, a team of British undercover
agents operating from Dublin. In retaliation, the Auxiliary Division of the Royal Irish Constabulary
and the Tans opened fire indiscriminately on the crowd at a Gaelic football match in Croke Park,
killing 14 supporters.
The Tans’ brutality disgusted even members of the British army. General Frank Crozier resigned in
1921 in protest against them being allowed to “murder, rob, loot and burn up the innocent because
they could not catch the few guilty on the run”. The late Lord Longford wrote of the Tans torturing
captured republicans - “cutting out the tongue of one, the nose of another, the heart of another and
battering in the skull of a fourth”.
Then, of course, there were Churchill’s odious social views - notably his support for a particularly
foul brand of eugenics. The “improvement of the British breed is my aim in life”, he wrote to his
cousin, Ivor Guest, on January 19 1899. As a young politician entering parliament in 1901,
Churchill saw the mentally disabled as a threat to the vigour and virility of British society. The
stock must not be diluted. Thus as home secretary he was in favour of the confinement, segregation
and sterilisation of the “feeble-minded” and others - including “idiots”, “imbeciles” and “moral
defectives”. He proposed in 1910 that 100,000 “degenerate” Britons should be “forcibly sterilised
and others put in labour camps to halt the decline of the British race”.
As for “tramps and wastrels”, he said a year later, there “ought to be proper labour colonies where
they could be sent for considerable periods and made to realise their duty to the state”. Very liberal.
Unsurprisingly, Churchill eagerly endorsed Dr HC Sharp’s charming booklet, The sterilisation of
degenerates.2 Sharp, a physician at the Indiana Reformatory in the US, issued an apocalyptic
warning that “the degenerate class” was reproducing more quickly than the general population and
thus threatening the “purity of the race”. In 1907 Indiana passed a eugenics law making sterilisation
mandatory for those individuals in state custody deemed to be “mentally unfit” - other states
followed suit and in the end more than 65,000 individuals were forcibly sterilised (and barred from
marrying). Naturally, Churchill was impressed, writing to home office officials asking them
to investigate the possibility of introducing the “Indiana law” to Britain. He remained frustrated on
this point. The 1913 Mental Deficiency Act rejected compulsory sterilisation in favour of
confinement in special institutions. Bloody do-gooders.
With regards to international politics, Churchill was a fanatical anti-Bolshevik. Nothing else
mattered except the need to prevent the spread of communism and ruthlessly “strangle the
Bolshevik baby in its cradle” - whether that meant direct imperialist invasion or the sponsoring of
terrorism. Anything goes. Though the Soviet regime survived the imperialist assault, Churchill
ultimately succeeded in his mission by forcing civil war on the Bolsheviks - traumatising society as
a whole and by necessity turning the Bolsheviks/Communist Party into a party-state war machine.
In other words, the Bolsheviks became transmuted - going from a situation where they led a
revolution based on the working class to one where the working class had become utterly declassed:
the fate of the revolution was dependent, as Lenin ruefully said, on the decision of a few thousand
communists. By the time JV Stalin amended his Foundations of Leninism in 1924 to espouse the
idea of socialism in one country - abandoning proletarian internationalism for national socialism -
the revolution was indeed being ‘strangled’.
Anti-Semite
Just about the greatest myth peddled about Winston Churchill is that he led a great anti-fascist
crusade against the Axis powers during World War II - his finest hour. What utter baloney. The man
welcomed the coming to power of Benito Mussolini and Adolf Hitler - viewing them as valuable
bulwarks against communism. Churchill only became ‘anti-fascist’ when he felt that the British
empire was threatened by the expanding ambitions of these rivals. Defending British imperial
interests, not fighting a democratic crusade against fascism, was his aim during World War II.
Previously, Churchill had praised Mussolini to the skies - the man could do no wrong. Il Duce had
“rendered a service to the whole world” by showing the “way to combat subversive forces”. In fact,
Churchill thought, Mussolini was the “Roman genius” - the “greatest lawgiver among men”.
Speaking in Rome in 1927, he told Italy’s Fascist Party: “If I had been an Italian, I would have been
entirely with you from the beginning to the end of your victorious struggle against the bestial
appetites and passions of Leninism.”
He heaped similar praise upon Hitler too. After the Nazis came to power, Churchill proclaimed in a
1935 article that if Britain was defeated like Germany had been in 1918, he hoped “we should find a
champion as indomitable to restore our courage and lead us back to our place among the nations”.
While all manner of “formidable transformations” were occurring in Europe, Churchill continued,
corporal Hitler was “fighting his long, wearing battle for the German heart” - the story of that
struggle “cannot be read without admiration for the courage, the perseverance and the vital force
which enabled him to challenge, defy, conciliate or overcome all the authorities or resistances which
barred his path”. If only things had been different, Britain could have done a deal with fascist Italy
and Germany against the common enemy - ie, ‘international Bolshevism’.
An associated myth is that Churchill fought the war to save the Jews from Nazi genocide. Total
ahistorical nonsense, which is purely an ideological product of the post-World War II bourgeoisie -
reinvented as a ‘democratic’ and ‘anti-fascist’ class with a deep hatred of racism in any form.
Rather, Churchill was an anti-Semite - a prejudice he shared with most members of his class at the
time. Yes, he may not have bought into Hitler’s mad pseudo-science (although his penchant for
eugenics took him in that direction), but he certainly distrusted Jews - viewing them as both
exploiters and resisters to exploitation: parasitical finance capitalists and Bolsheviks/communists.
This irrational bigotry shines through in his notorious February 1920 article for the Illustrated
Sunday Herald - ‘Zionism versus Bolshevism: a struggle for the soul of the Jewish people’.3 In it,
he writes that “we owe to the Jews in the Christian revelation a system of ethics which, even if it
were entirely separated from the supernatural, would be incomparably the most precious possession
of mankind”. But at the same time, he cautions, it “may well be that this same astounding race may
at the present time be in the actual process of producing another system of morals and philosophy,
as malevolent as Christianity was benevolent” - it “almost seems as if the gospel of Christ and the
gospel of Antichrist were destined to originate among the same people”.
Whilst lauding “national Jews” - the good Jews “loyal to the land of their adoption” - he denounced
the violent schemes of the “international Jews”. For Churchill, there was no need to “exaggerate the
part played in the creation of Bolshevism and in the actual bringing about of the Russian Revolution
by these international and for the most part atheistic Jews”. With the “notable exception” of Lenin,
he fulminated, the “majority of the leading figures” in the communist movement are Jews.
Moreover, even more importantly, the “principal inspiration and driving power comes from the
Jewish leaders”. Karl Marx, Trotsky, Bela Kun, Rosa Luxemburg, Emma Goldman, etc - all part of
“this worldwide conspiracy for the overthrow of civilisation and for the reconstitution of society on
the basis of arrested development, of envious malevolence and impossible equality”. A hideous
disease.
He recommended Zionism as a partial antidote to Bolshevism - observing that “nothing could be
more significant than the fury with which Trotsky has attacked the Zionists”. The “cruel penetration
of his mind”, believed Churchill, “leaves [Trotsky] in no doubt that his schemes of a worldwide
communist state under Jewish domination are directly thwarted and hindered by this new ideal,
which directs the energies and the hopes of Jews in every land towards a simpler, a truer and a far
more attainable goal” - a home for Jews in Palestine under the “protection”, and watchful eye, of
the British crown.
The fact that we have forgotten the real Winston Churchill signals the failure of the left. Criminally,
the bourgeoisie has almost total freedom to write and rewrite history as it sees fit. It would be
dangerously complacent to think that the same thing could not happen to Margaret Thatcher, maybe
sooner rather than later. For instance, The Guardian conducted a snap poll on who should be on
banknotes to come. The favourite was Isambard Kingdom Brunel (20%), followed by Emmeline
Pankhurst (19%) - with Thatcher coming a worrying third on 14% (David Beckham and Tony Blair
came joint last on 1%).4

Frighteningly, it could happen - your grandchildren may come home one day excitedly waving a
Thatcher banknote, telling you their teacher said she saved the country from disaster. Organise now, and
fight for left unity, to make sure this never happens.
eddie.ford@weeklyworker.org.uk

Notes
1. www.bbc.co.uk/news/10294530.
2. http://tinyurl.com/csdjtag.
3. www.fpp.co.uk/bookchapters/WSC/WSCwrote1920.html.
4. The Guardian, April 26.

Doomsday Prep for the Super-Rich
Some of the wealthiest people in America—in Silicon Valley,
New York, and beyond—are getting ready for the crackup of
civilization.
By Evan Osnos January 30, 2017

An armed guard stands at the entrance of the Survival Condo Project, a former missile silo north of
Wichita, Kansas, that has been converted into luxury apartments for people worried about the
crackup of civilization.
Steve Huffman, the thirty-three-year-old co-founder and C.E.O. of Reddit, which is valued at six
hundred million dollars, was nearsighted until November, 2015, when he arranged to have laser eye
surgery. He underwent the procedure not for the sake of convenience or appearance but, rather, for a
reason he doesn’t usually talk much about: he hopes that it will improve his odds of surviving a
disaster, whether natural or man-made. “If the world ends—and not even if the world ends, but if
we have trouble—getting contacts or glasses is going to be a huge pain in the ass,” he told me
recently. “Without them, I’m fucked.”
Huffman, who lives in San Francisco, has large blue eyes, thick, sandy hair, and an air of restless
curiosity; at the University of Virginia, he was a competitive ballroom dancer, who hacked his
roommate’s Web site as a prank. He is less focussed on a specific threat—a quake on the San
Andreas, a pandemic, a dirty bomb—than he is on the aftermath, “the temporary collapse of our
government and structures,” as he puts it. “I own a couple of motorcycles. I have a bunch of guns
and ammo. Food. I figure that, with that, I can hole up in my house for some amount of time.”
Survivalism, the practice of preparing for a crackup of civilization, tends to evoke a certain picture:
the woodsman in the tinfoil hat, the hysteric with the hoard of beans, the religious doomsayer. But
in recent years survivalism has expanded to more affluent quarters, taking root in Silicon Valley and
New York City, among technology executives, hedge-fund managers, and others in their economic
cohort.
Last spring, as the Presidential campaign exposed increasingly toxic divisions in America, Antonio
García Martínez, a forty-year-old former Facebook product manager living in San Francisco,
bought five wooded acres on an island in the Pacific Northwest and brought in generators, solar
panels, and thousands of rounds of ammunition. “When society loses a healthy founding myth, it
descends into chaos,” he told me. The author of “Chaos Monkeys,” an acerbic Silicon Valley
memoir, García Martínez wanted a refuge that would be far from cities but not entirely isolated.
“All these dudes think that one guy alone could somehow withstand the roving mob,” he said. “No,
you’re going to need to form a local militia. You just need so many things to actually ride out the
apocalypse.” Once he started telling peers in the Bay Area about his “little island project,” they
came “out of the woodwork” to describe their own preparations, he said. “I think people who are
particularly attuned to the levers by which society actually works understand that we are skating on
really thin cultural ice right now.”
In private Facebook groups, wealthy survivalists swap tips on gas masks, bunkers, and locations
safe from the effects of climate change. One member, the head of an investment firm, told me, “I
keep a helicopter gassed up all the time, and I have an underground bunker with an air-filtration
system.” He said that his preparations probably put him at the “extreme” end among his peers. But
he added, “A lot of my friends do the guns and the motorcycles and the gold coins. That’s not too
rare anymore.”
Tim Chang, a forty-four-year-old managing director at Mayfield Fund, a venture-capital firm, told
me, “There’s a bunch of us in the Valley. We meet up and have these financial-hacking dinners and
talk about backup plans people are doing. It runs the gamut from a lot of people stocking up on
Bitcoin and cryptocurrency, to figuring out how to get second passports if they need it, to having
vacation homes in other countries that could be escape havens.” He said, “I’ll be candid: I’m
stockpiling now on real estate to generate passive income but also to have havens to go to.” He and
his wife, who is in technology, keep a set of bags packed for themselves and their four-year-old
daughter. He told me, “I kind of have this terror scenario: ‘Oh, my God, if there is a civil war or a
giant earthquake that cleaves off part of California, we want to be ready.’ ”
When Marvin Liao, a former Yahoo executive who is now a partner at 500 Startups, a venture-
capital firm, considered his preparations, he decided that his caches of water and food were not
enough. “What if someone comes and takes this?” he asked me. To protect his wife and daughter, he
said, “I don’t have guns, but I have a lot of other weaponry. I took classes in archery.”
For some, it’s just “brogrammer” entertainment, a kind of real-world sci-fi, with gear; for others,
like Huffman, it’s been a concern for years. “Ever since I saw the movie ‘Deep Impact,’ ” he said.
The film, released in 1998, depicts a comet striking the Atlantic, and a race to escape the tsunami.
“Everybody’s trying to get out, and they’re stuck in traffic. That scene happened to be filmed near
my high school. Every time I drove through that stretch of road, I would think, I need to own a
motorcycle because everybody else is screwed.”
Huffman has been a frequent attendee at Burning Man, the annual, clothing-optional festival in the
Nevada desert, where artists mingle with moguls. He fell in love with one of its core principles,
“radical self-reliance,” which he takes to mean “happy to help others, but not wanting to require
others.” (Among survivalists, or “preppers,” as some call themselves, FEMA, the Federal
Emergency Management Agency, stands for “Foolishly Expecting Meaningful Aid.”) Huffman has
calculated that, in the event of a disaster, he would seek out some form of community: “Being
around other people is a good thing. I also have this somewhat egotistical view that I’m a pretty
good leader. I will probably be in charge, or at least not a slave, when push comes to shove.”
Over the years, Huffman has become increasingly concerned about basic American political
stability and the risk of large-scale unrest. He said, “Some sort of institutional collapse, then you
just lose shipping—that sort of stuff.” (Prepper blogs call such a scenario W.R.O.L., “without rule
of law.”) Huffman has come to believe that contemporary life rests on a fragile consensus. “I think,
to some degree, we all collectively take it on faith that our country works, that our currency is
valuable, the peaceful transfer of power—that all of these things that we hold dear work because we
believe they work. While I do believe they’re quite resilient, and we’ve been through a lot, certainly
we’re going to go through a lot more.”
In building Reddit, a community of thousands of discussion threads, into one of the most frequently
visited sites in the world, Huffman has grown aware of the way that technology alters our relations
with one another, for better and for worse. He has witnessed how social media can magnify public
fear. “It’s easier for people to panic when they’re together,” he said, pointing out that “the Internet
has made it easier for people to be together,” yet it also alerts people to emerging risks. Long before
the financial crisis became front-page news, early signs appeared in user comments on Reddit.
“People were starting to whisper about mortgages. They were worried about student debt. They
were worried about debt in general. There was a lot of, ‘This is too good to be true. This doesn’t
smell right.’ ” He added, “There’s probably some false positives in there as well, but, in general, I
think we’re a pretty good gauge of public sentiment. When we’re talking about a faith-based
collapse, you’re going to start to see the chips in the foundation on social media first.”
How did a preoccupation with the apocalypse come to flourish in Silicon Valley, a place known, to
the point of cliché, for unstinting confidence in its ability to change the world for the better?
Those impulses are not as contradictory as they seem. Technology rewards the ability to imagine
wildly different futures, Roy Bahat, the head of Bloomberg Beta, a San Francisco-based venture-
capital firm, told me. “When you do that, it’s pretty common that you take things ad infinitum, and
that leads you to utopias and dystopias,” he said. It can inspire radical optimism—such as the
cryonics movement, which calls for freezing bodies at death in the hope that science will one day
revive them—or bleak scenarios. Tim Chang, the venture capitalist who keeps his bags packed, told
me, “My current state of mind is oscillating between optimism and sheer terror.”
In recent years, survivalism has been edging deeper into mainstream culture. In 2012, National
Geographic Channel launched “Doomsday Preppers,” a reality show featuring a series of Americans
bracing for what they called S.H.T.F. (when the “shit hits the fan”). The première drew more than
four million viewers, and, by the end of the first season, it was the most popular show in the
channel’s history. A survey commissioned by National Geographic found that forty per cent of
Americans believed that stocking up on supplies or building a bomb shelter was a wiser investment
than a 401(k). Online, the prepper discussions run from folksy (“A Mom’s Guide to Preparing for
Civil Unrest”) to grim (“How to Eat a Pine Tree to Survive”).
The reëlection of Barack Obama was a boon for the prepping industry. Conservative devotees, who
accused Obama of stoking racial tensions, restricting gun rights, and expanding the national debt,
loaded up on the types of freeze-dried cottage cheese and beef stroganoff promoted by
commentators like Glenn Beck and Sean Hannity. A network of “readiness” trade shows attracted
conventioneers with classes on suturing (practiced on a pig trotter) and photo opportunities with
survivalist stars from the TV show “Naked and Afraid.”

The living room of an apartment at the Survival Condo Project.


The fears were different in Silicon Valley. Around the same time that Huffman, on Reddit, was
watching the advance of the financial crisis, Justin Kan heard the first inklings of survivalism
among his peers. Kan co-founded Twitch, a gaming network that was later sold to Amazon for
nearly a billion dollars. “Some of my friends were, like, ‘The breakdown of society is imminent. We
should stockpile food,’ ” he said. “I tried to. But then we got a couple of bags of rice and five cans
of tomatoes. We would have been dead if there was actually a real problem.” I asked Kan what his
prepping friends had in common. “Lots of money and resources,” he said. “What are the other
things I can worry about and prepare for? It’s like insurance.”
Yishan Wong, an early Facebook employee, was the C.E.O. of Reddit from 2012 to 2014. He, too,
had eye surgery for survival purposes, eliminating his dependence, as he put it, “on a nonsustainable
external aid for perfect vision.” In an e-mail, Wong told me, “Most people just assume improbable
events don’t happen, but technical people tend to view risk very mathematically.” He continued,
“The tech preppers do not necessarily think a collapse is likely. They consider it a remote event, but
one with a very severe downside, so, given how much money they have, spending a fraction of their
net worth to hedge against this . . . is a logical thing to do.”
How many wealthy Americans are really making preparations for a catastrophe? It’s hard to know
exactly; a lot of people don’t like to talk about it. (“Anonymity is priceless,” one hedge-fund
manager told me, declining an interview.) Sometimes the topic emerges in unexpected ways. Reid
Hoffman, the co-founder of LinkedIn and a prominent investor, recalls telling a friend that he was
thinking of visiting New Zealand. “Oh, are you going to get apocalypse insurance?” the friend
asked. “I’m, like, Huh?” Hoffman told me. New Zealand, he discovered, is a favored refuge in the
event of a cataclysm. Hoffman said, “Saying you’re ‘buying a house in New Zealand’ is kind of a
wink, wink, say no more. Once you’ve done the Masonic handshake, they’ll be, like, ‘Oh, you
know, I have a broker who sells old ICBM silos, and they’re nuclear-hardened, and they kind of
look like they would be interesting to live in.’ ”
I asked Hoffman to estimate what share of fellow Silicon Valley billionaires have acquired some
level of “apocalypse insurance,” in the form of a hideaway in the U.S. or abroad. “I would guess
fifty-plus per cent,” he said, “but that’s parallel with the decision to buy a vacation home. Human
motivation is complex, and I think people can say, ‘I now have a safety blanket for this thing that
scares me.’ ” The fears vary, but many worry that, as artificial intelligence takes away a growing
share of jobs, there will be a backlash against Silicon Valley, America’s second-highest
concentration of wealth. (Southwestern Connecticut is first.) “I’ve heard this theme from a bunch of
people,” Hoffman said. “Is the country going to turn against the wealthy? Is it going to turn against
technological innovation? Is it going to turn into civil disorder?”
The C.E.O. of another large tech company told me, “It’s still not at the point where industry insiders
would turn to each other with a straight face and ask what their plans are for some apocalyptic
event.” He went on, “But, having said that, I actually think it’s logically rational and appropriately
conservative.” He noted the vulnerabilities exposed by the Russian cyberattack on the Democratic
National Committee, and also by a large-scale hack on October 21st, which disrupted the Internet in
North America and Western Europe. “Our food supply is dependent on G.P.S., logistics, and
weather forecasting,” he said, “and those systems are generally dependent on the Internet, and the
Internet is dependent on D.N.S.”—the system that manages domain names. “Go risk factor by risk
factor by risk factor, acknowledging that there are many you don’t even know about, and you ask,
‘What’s the chance of this breaking in the next decade?’ Or invert it: ‘What’s the chance that
nothing breaks in fifty years?’ ”
One measure of survivalism’s spread is that some people are starting to speak out against it. Max
Levchin, a founder of PayPal and of Affirm, a lending startup, told me, “It’s one of the few things
about Silicon Valley that I actively dislike—the sense that we are superior giants who move the
needle and, even if it’s our own failure, must be spared.”
To Levchin, prepping for survival is a moral miscalculation; he prefers to “shut down party
conversations” on the topic. “I typically ask people, ‘So you’re worried about the pitchforks. How
much money have you donated to your local homeless shelter?’ This connects the most, in my
mind, to the realities of the income gap. All the other forms of fear that people bring up are
artificial.” In his view, this is the time to invest in solutions, not escape. “At the moment, we’re
actually at a relatively benign point of the economy. When the economy heads south, you will have
a bunch of people that are in really bad shape. What do we expect then?”
On the opposite side of the country, similar awkward conversations have been unfolding in some
financial circles. Robert H. Dugger worked as a lobbyist for the financial industry before he became
a partner at the global hedge fund Tudor Investment Corporation, in 1993. After seventeen years, he
retired to focus on philanthropy and his investments. “Anyone who’s in this community knows
people who are worried that America is heading toward something like the Russian Revolution,” he
told me recently.
To manage that fear, Dugger said, he has seen two very different responses. “People know the only
real answer is, Fix the problem,” he said. “It’s a reason most of them give a lot of money to good
causes.” At the same time, though, they invest in the mechanics of escape. He recalled a dinner in
New York City after 9/11 and the bursting of the dot-com bubble: “A group of centi-millionaires
and a couple of billionaires were working through end-of-America scenarios and talking about what
they’d do. Most said they’ll fire up their planes and take their families to Western ranches or homes
in other countries.” One of the guests was skeptical, Dugger said. “He leaned forward and asked,
‘Are you taking your pilot’s family, too? And what about the maintenance guys? If revolutionaries
are kicking in doors, how many of the people in your life will you have to take with you?’ The
questioning continued. In the end, most agreed they couldn’t run.”
Élite anxiety cuts across political lines. Even financiers who supported Trump for President, hoping
that he would cut taxes and regulations, have been unnerved at the ways his insurgent campaign
seems to have hastened a collapse of respect for established institutions. Dugger said, “The media is
under attack now. They wonder, Is the court system next? Do we go from ‘fake news’ to ‘fake
evidence’? For people whose existence depends on enforceable contracts, this is life or death.”
Robert A. Johnson sees his peers’ talk of fleeing as the symptom of a deeper crisis. At fifty-nine,
Johnson has tousled silver hair and a soft-spoken, avuncular composure. He earned degrees in
electrical engineering and economics at M.I.T., got a Ph.D. in economics at Princeton, and worked
on Capitol Hill, before entering finance. He became a managing director at the hedge fund Soros
Fund Management. In 2009, after the onset of the financial crisis, he was named head of a think
tank, the Institute for New Economic Thinking.
When I visited Johnson, not long ago, at his office on Park Avenue South, he described himself as
an accidental student of civic anxiety. He grew up outside Detroit, in Grosse Pointe Park, the son of
a doctor, and he watched his father’s generation experience the fracturing of Detroit. “What I’m
seeing now in New York City is sort of like old music coming back,” he said. “These are friends of
mine. I used to live in Belle Haven, in Greenwich, Connecticut. Louis Bacon, Paul Tudor Jones, and
Ray Dalio”—hedge-fund managers—“were all within fifty yards of me. From my own career, I
would just talk to people. More and more were saying, ‘You’ve got to have a private plane. You
have to assure that the pilot’s family will be taken care of, too. They have to be on the plane.’ ”
By January, 2015, Johnson was sounding the alarm: the tensions produced by acute income
inequality were becoming so pronounced that some of the world’s wealthiest people were taking
steps to protect themselves. At the World Economic Forum in Davos, Switzerland, Johnson told the
audience, “I know hedge-fund managers all over the world who are buying airstrips and farms in
places like New Zealand because they think they need a getaway.”
Johnson wishes that the wealthy would adopt a greater “spirit of stewardship,” an openness to
policy change that could include, for instance, a more aggressive tax on inheritance. “Twenty-five
hedge-fund managers make more money than all of the kindergarten teachers in America
combined,” he said. “Being one of those twenty-five doesn’t feel good. I think they’ve developed a
heightened sensitivity.” The gap is widening further. In December, the National Bureau of
Economic Research published a new analysis, by the economists Thomas Piketty, Emmanuel Saez,
and Gabriel Zucman, which found that half of American adults have been “completely shut off from
economic growth since the 1970s.” Approximately a hundred and seventeen million people earn, on
average, the same income that they did in 1980, while the typical income for the top one per cent
has nearly tripled. That gap is comparable to the gap between average incomes in the U.S. and the
Democratic Republic of Congo, the authors wrote.
Johnson said, “If we had a more equal distribution of income, and much more money and energy
going into public school systems, parks and recreation, the arts, and health care, it could take an
awful lot of sting out of society. We’ve largely dismantled those things.”
As public institutions deteriorate, élite anxiety has emerged as a gauge of our national predicament.
“Why do people who are envied for being so powerful appear to be so afraid?” Johnson asked.
“What does that really tell us about our system?” He added, “It’s a very odd thing. You’re basically
seeing that the people who’ve been the best at reading the tea leaves—the ones with the most
resources, because that’s how they made their money—are now the ones most preparing to pull the
rip cord and jump out of the plane.”
On a cool evening in early November, I rented a car in Wichita, Kansas, and drove north from the
city through slanting sunlight, across the suburbs and out beyond the last shopping center, where the
horizon settles into farmland. After a couple of hours, just before the town of Concordia, I headed
west, down a dirt track flanked by corn and soybean fields, winding through darkness until my
lights settled on a large steel gate. A guard, dressed in camouflage, held a semiautomatic rifle.
He ushered me through, and, in the darkness, I could see the outline of a vast concrete dome, with a
metal blast door partly ajar. I was greeted by Larry Hall, the C.E.O. of the Survival Condo Project, a
fifteen-story luxury apartment complex built in an underground Atlas missile silo. The facility
housed a nuclear warhead from 1961 to 1965, when it was decommissioned. At a site conceived for
the Soviet nuclear threat, Hall has erected a defense against the fears of a new era. “It’s true
relaxation for the ultra-wealthy,” he said. “They can come out here, they know there are armed
guards outside. The kids can run around.”
Hall got the idea for the project about a decade ago, when he read that the federal government was
reinvesting in catastrophe planning, which had languished after the Cold War. During the September
11th attacks, the Bush Administration activated a “continuity of government” plan, transporting
selected federal workers by helicopter and bus to fortified locations, but, after years of disuse,
computers and other equipment in the bunkers were out of date. Bush ordered a renewed focus on
continuity plans, and FEMA launched annual government-wide exercises. (The most recent, Eagle
Horizon, in 2015, simulated hurricanes, improvised nuclear devices, earthquakes, and cyberattacks.)
“I started saying, ‘Well, wait a minute, what does the government know that we don’t know?’ ” Hall
said. In 2008, he paid three hundred thousand dollars for the silo and finished construction in
December, 2012, at a cost of nearly twenty million dollars. He created twelve private apartments:
full-floor units were advertised at three million dollars; a half-floor was half the price. He has sold
every unit, except one for himself, he said.
Most preppers don’t actually have bunkers; hardened shelters are expensive and complicated to
build. The original silo of Hall’s complex was built by the Army Corps of Engineers to withstand a
nuclear strike. The interior can support a total of seventy-five people. It has enough food and fuel
for five years off the grid; by raising tilapia in fish tanks, and hydroponic vegetables under grow
lamps, with renewable power, it could function indefinitely, Hall said. In a crisis, his SWAT-team-
style trucks (“the Pit-Bull VX, armored up to fifty-calibre”) will pick up any owner within four
hundred miles. Residents with private planes can land in Salina, about thirty miles away. In his
view, the Army Corps did the hardest work by choosing the location. “They looked at height above
sea level, the seismology of an area, how close it is to large population centers,” he said.
Hall, in his late fifties, is barrel-chested and talkative. He studied business and computers at the
Florida Institute of Technology and went on to specialize in networks and data centers for Northrop
Grumman, Harris Corporation, and other defense contractors. He now goes back and forth between
the Kansas silo and a home in the Denver suburbs, where his wife, a paralegal, lives with their
twelve-year-old son.
Hall led me through the garage, down a ramp, and into a lounge, with a stone fireplace, a dining
area, and a kitchen to one side. It had the feel of a ski condo without windows: pool table, stainless-
steel appliances, leather couches. To maximize space, Hall took ideas from cruise-ship design. We
were accompanied by Mark Menosky, an engineer who manages day-to-day operations. While they
fixed dinner—steak, baked potatoes, and salad—Hall said that the hardest part of the project was
sustaining life underground. He studied how to avoid depression (add more lights), prevent cliques
(rotate chores), and simulate life aboveground. The condo walls are fitted with L.E.D. “windows”
that show a live video of the prairie above the silo. Owners can opt instead for pine forests or other
vistas. One prospective resident from New York City wanted video of Central Park. “All four
seasons, day and night,” Menosky said. “She wanted the sounds, the taxis and the honking horns.”
Some survivalists disparage Hall for creating an exclusive refuge for the wealthy and have
threatened to seize his bunker in a crisis. Hall waved away this possibility when I raised it with him
over dinner. “You can send all the bullets you want into this place.” If necessary, his guards would
return fire, he said. “We’ve got a sniper post.”
The swimming pool at Larry Hall’s Survival Condo Project. These days, when North Korea tests a
bomb, Hall can expect an uptick in phone inquiries about space in the complex.
Photograph by Dan Winters for The New Yorker

Recently, I spoke on the phone with Tyler Allen, a real-estate developer in Lake Mary, Florida, who
told me that he paid three million dollars for one of Hall’s condos. Allen said he worries that
America faces a future of “social conflict” and government efforts to deceive the public. He
suspects that the Ebola virus was allowed to enter the country in order to weaken the population.
When I asked how friends usually respond to his ideas, he said, “The natural reaction that you get
most of the time is for them to laugh, because it scares them.” But, he added, “my credibility has
gone through the roof. Ten years ago, this just seemed crazy that all this was going to happen: the
social unrest and the cultural divide in the country, the race-baiting and the hate-mongering.” I
asked how he planned to get to Kansas from Florida in a crisis. “If a dirty bomb goes off in Miami,
everybody’s going to go in their house and congregate in bars, just glued to the TV. Well, you’ve
got forty-eight hours to get the hell out of there.”
Allen told me that, in his view, taking precautions is unfairly stigmatized. “They don’t put tinfoil on
your head if you’re the President and you go to Camp David,” he said. “But they do put tinfoil on
your head if you have the means and you take steps to protect your family should a problem occur.”
Why do our dystopian urges emerge at certain moments and not others? Doomsday—as a prophecy,
a literary genre, and a business opportunity—is never static; it evolves with our anxieties. The
earliest Puritan settlers saw in the awe-inspiring bounty of the American wilderness the prospect of
both apocalypse and paradise. When, in May of 1780, sudden darkness settled on New England,
farmers perceived it as a cataclysm heralding the return of Christ. (In fact, the darkness was caused
by enormous wildfires in Ontario.) D. H. Lawrence diagnosed a specific strain of American dread.
“Doom! Doom! Doom!” he wrote in 1923. “Something seems to whisper it in the very dark trees of
America.”
Historically, our fascination with the End has flourished at moments of political insecurity and rapid
technological change. “In the late nineteenth century, there were all sorts of utopian novels, and
each was coupled with a dystopian novel,” Richard White, a historian at Stanford University, told
me. Edward Bellamy’s “Looking Backward,” published in 1888, depicted a socialist paradise in the
year 2000, and became a sensation, inspiring “Bellamy Clubs” around the country. Conversely, Jack
London, in 1908, published “The Iron Heel,” imagining an America under a fascist oligarchy in
which “nine-tenths of one per cent” hold “seventy per cent of the total wealth.”
At the time, Americans were marvelling at engineering advances—attendees at the 1893 World’s
Fair, in Chicago, beheld new uses for electric light—but were also protesting low wages, poor
working conditions, and corporate greed. “It was very much like today,” White said. “It was a sense
that the political system had spun out of control, and was no longer able to deal with society. There
was a huge inequity in wealth, a stirring of working classes. Life spans were getting shorter. There
was a feeling that America’s advance had stopped, and the whole thing was going to break.”
Business titans grew uncomfortable. In 1889, Andrew Carnegie, who was on his way to being the
richest man in the world, worth more than four billion in today’s dollars, wrote, with concern, about
class tensions; he criticized the emergence of “rigid castes” living in “mutual ignorance” and
“mutual distrust.” John D. Rockefeller, of Standard Oil, America’s first actual billionaire, felt a
Christian duty to give back. “The novelty of being able to purchase anything one wants soon
passes,” he wrote, in 1909, “because what people most seek cannot be bought with money.”
Carnegie went on to fight illiteracy by creating nearly three thousand public libraries. Rockefeller
founded the University of Chicago. According to Joel Fleishman, the author of “The Foundation,” a
study of American philanthropy, both men dedicated themselves to “changing the systems that
produced those ills in the first place.”
During the Cold War, Armageddon became a matter for government policymakers. The Federal
Civil Defense Administration, created by Harry Truman, issued crisp instructions for surviving a
nuclear strike, including “Jump in any handy ditch or gutter” and “Never lose your head.” In 1958,
Dwight Eisenhower broke ground on Project Greek Island, a secret shelter, in the mountains of West
Virginia, large enough for every member of Congress. Hidden beneath the Greenbrier Resort, in
White Sulphur Springs, for more than thirty years, it maintained separate chambers-in-waiting for
the House and the Senate. (Congress now plans to shelter at undisclosed locations.) There was also
a secret plan to whisk away the Gettysburg Address, from the Library of Congress, and the
Declaration of Independence, from the National Archives.
But in 1961 John F. Kennedy encouraged “every citizen” to help build fallout shelters, saying, in a
televised address, “I know you would not want to do less.” In 1976, tapping into fear of inflation
and the Arab oil embargo, a far-right publisher named Kurt Saxon launched The Survivor, an
influential newsletter that celebrated forgotten pioneer skills. (Saxon claimed to have coined the
term “survivalist.”) The growing literature on decline and self-protection included “How to Prosper
During the Coming Bad Years,” a 1979 best-seller, which advised collecting gold in the form of
South African Krugerrands. The “doom boom,” as it became known, expanded under Ronald
Reagan. The sociologist Richard G. Mitchell, Jr., a professor emeritus at Oregon State University,
who spent twelve years studying survivalism, said, “During the Reagan era, we heard, for the first
time in my life, and I’m seventy-four years old, from the highest authorities in the land that
government has failed you, the collective institutional ways of solving problems and understanding
society are no good. People said, ‘O.K., it’s flawed. What do I do now?’ ”

A dental chair in the Survival Condo Project’s “medical wing,” which also contains a hospital bed
and a procedure table. Among the residents, Hall said, “we’ve got two doctors and a dentist.”
Photograph by Dan Winters for The New Yorker

The movement received another boost from the George W. Bush Administration’s mishandling of
Hurricane Katrina. Neil Strauss, a former Times reporter, who chronicled his turn to prepping in his
book “Emergency,” told me, “We see New Orleans, where our government knows a disaster is
happening, and is powerless to save its own citizens.” Strauss got interested in survivalism a year
after Katrina, when a tech entrepreneur who was taking flying lessons and hatching escape plans
introduced him to a group of like-minded “billionaire and centi-millionaire preppers.” Strauss
acquired citizenship in St. Kitts, put assets in foreign currencies, and trained to survive with
“nothing but a knife and the clothes on my back.”
These days, when North Korea tests a bomb, Hall can expect an uptick of phone inquiries about
space in the Survival Condo Project. But he points to a deeper source of demand. “Seventy per cent
of the country doesn’t like the direction that things are going,” he said. After dinner, Hall and
Menosky gave me a tour. The complex is a tall cylinder that resembles a corncob. Some levels are
dedicated to private apartments and others offer shared amenities: a seventy-five-foot-long pool, a
rock-climbing wall, an Astro-Turf “pet park,” a classroom with a line of Mac desktops, a gym, a
movie theatre, and a library. It felt compact but not claustrophobic. We visited an armory packed
with guns and ammo in case of an attack by non-members, and then a bare-walled room with a
toilet. “We can lock people up and give them an adult time-out,” he said. In general, the rules are set
by a condo association, which can vote to amend them. During a crisis, a “life-or-death situation,”
Hall said, each adult would be required to work for four hours a day, and would not be allowed to
leave without permission. “There’s controlled access in and out, and it’s governed by the board,” he
said.
The “medical wing” contains a hospital bed, a procedure table, and a dentist’s chair. Among the
residents, Hall said, “we’ve got two doctors and a dentist.” One floor up, we visited the food-
storage area, still unfinished. He hopes that, once it’s fully stocked, it will feel like a “miniature
Whole Foods,” but for now it holds mostly cans of food.
We stopped in a condo. Nine-foot ceilings, Wolf range, gas fireplace. “This guy wanted to have a
fireplace from his home state”—Connecticut—“so he shipped me the granite,” Hall said. Another
owner, with a home in Bermuda, ordered the walls of his bunker-condo painted in island pastels—
orange, green, yellow—but, in close quarters, he found it oppressive. His decorator had to come fix
it.
That night, I slept in a guest room appointed with a wet bar and handsome wood cabinets, but no
video windows. It was eerily silent, and felt like sleeping in a well-furnished submarine.
I emerged around eight the next morning to find Hall and Menosky in the common area, drinking
coffee and watching a campaign-news brief on “Fox & Friends.” It was five days before the
election, and Hall, who is a Republican, described himself as a cautious Trump supporter. “Of the
two running, I’m hoping that his business acumen will override some of his knee-jerk stuff.”
Watching Trump and Clinton rallies on television, he was struck by how large and enthusiastic
Trump’s crowds appeared. “I just don’t believe the polls,” he said.
He thinks that mainstream news organizations are biased, and he subscribes to theories that he
knows some find implausible. He surmised that “there is a deliberate move by the people in
Congress to dumb America down.” Why would Congress do that? I asked. “They don’t want people
to be smart to see what’s going on in politics,” he said. He told me he had read a prediction that
forty per cent of Congress will be arrested, because of a scheme involving the Panama Papers, the
Catholic Church, and the Clinton Foundation. “They’ve been working on this investigation for
twenty years,” he said. I asked him if he really believed that. “At first, you hear this stuff and go,
Yeah, right,” he said. But he wasn’t ruling it out.
Before I headed back to Wichita, we stopped at Hall’s latest project—a second underground
complex, in a silo twenty-five miles away. As we pulled up, a crane loomed overhead, hoisting
debris from deep below the surface. The complex will contain three times the living space of the
original, in part because the garage will be moved to a separate structure. Among other additions, it
will have a bowling alley and L.E.D. windows as large as French doors, to create a feeling of
openness.
Hall said that he was working on private bunkers for clients in Idaho and Texas, and that two
technology companies had asked him to design “a secure facility for their data center and a safe
haven for their key personnel, if something were to happen.” To accommodate demand, he has paid
for options to buy four more silos.
If a silo in Kansas is not remote or private enough, there is another option. In the first seven days
after Donald Trump’s election, 13,401 Americans registered with New Zealand’s immigration
authorities, the first official step toward seeking residency—more than seventeen times the usual
rate. The New Zealand Herald reported the surge beneath the headline “Trump Apocalypse.”

The shooting range at the Survival Condo Project. Hall said that the hardest part of the project was
sustaining life underground. He studied how to avoid depression (add more lights) and prevent
cliques (rotate chores).
Photograph by Dan Winters for The New Yorker

In fact, the influx had begun well before Trump’s victory. In the first ten months of 2016, foreigners
bought nearly fourteen hundred square miles of land in New Zealand, more than quadruple what
they bought in the same period the previous year, according to the government. American buyers
were second only to Australians. The U.S. government does not keep a tally of Americans who own
second or third homes overseas. Much as Switzerland once drew Americans with the promise of
secrecy, and Uruguay tempted them with private banks, New Zealand offers security and distance.
In the past six years, nearly a thousand foreigners have acquired residency there under programs
that mandate certain types of investment of at least a million dollars.
Jack Matthews, an American who is the chairman of MediaWorks, a large New Zealand
broadcaster, told me, “I think, in the back of people’s minds, frankly, is that, if the world really goes
to shit, New Zealand is a First World country, completely self-sufficient, if necessary—energy,
water, food. Life would deteriorate, but it would not collapse.” As someone who views American
politics from a distance, he said, “The difference between New Zealand and the U.S., to a large
extent, is that people who disagree with each other can still talk to each other about it here. It’s a
tiny little place, and there’s no anonymity. People have to actually have a degree of civility.”
Auckland is a thirteen-hour flight from San Francisco. I arrived in early December, the beginning of
New Zealand’s summer: blue skies, mid-seventies, no humidity. Top to bottom, the island chain
runs roughly the distance between Maine and Florida, with half the population of New York City.
Sheep outnumber people seven to one. In global rankings, New Zealand is in the top ten for
democracy, clean government, and security. (Its last encounter with terrorism was in 1985, when
French spies bombed a Greenpeace ship.) In a recent World Bank report, New Zealand had
supplanted Singapore as the best country in the world to do business.
The morning after I arrived, I was picked up at my hotel by Graham Wall, a cheerful real-estate
agent who specializes in what his profession describes as high-net-worth individuals, “H.N.W.I.”
Wall, whose clients include Peter Thiel, the billionaire venture capitalist, was surprised when
Americans told him they were coming precisely because of the country’s remoteness. “Kiwis used
to talk about the ‘tyranny of distance,’ ” Wall said, as we crossed town in his Mercedes convertible.
“Now the tyranny of distance is our greatest asset.”
Before my trip, I had wondered if I was going to be spending more time in luxury bunkers. But
Peter Campbell, the managing director of Triple Star Management, a New Zealand construction
firm, told me that, by and large, once his American clients arrive, they decide that underground
shelters are gratuitous. “It’s not like you need to build a bunker under your front lawn, because
you’re several thousand miles away from the White House,” he said. Americans have other
requests. “Definitely, helipads are a big one,” he said. “You can fly a private jet into Queenstown or
a private jet into Wanaka, and then you can grab a helicopter and it can take you and land you at
your property.” American clients have also sought strategic advice. “They’re asking, ‘Where in New
Zealand is not going to be long-term affected by rising sea levels?’ ”
The growing foreign appetite for New Zealand property has generated a backlash. The Campaign
Against Foreign Control of Aotearoa—the Maori name for New Zealand—opposes sales to
foreigners. In particular, the attention of American survivalists has generated resentment. In a
discussion about New Zealand on the Modern Survivalist, a prepper Web site, a commentator wrote,
“Yanks, get this in your heads. Aotearoa NZ is not your little last resort safe haven.”
An American hedge-fund manager in his forties—tall, tanned, athletic—recently bought two houses
in New Zealand and acquired local residency. He agreed to tell me about his thinking, if I would not
publish his name. Brought up on the East Coast, he said, over coffee, that he expects America to
face at least a decade of political turmoil, including racial tension, polarization, and a rapidly aging
population. “The country has turned into the New York area, the California area, and then everyone
else is wildly different in the middle,” he said. He worries that the economy will suffer if
Washington scrambles to fund Social Security and Medicare for people who need it. “Do you
default on that obligation? Or do you print more money to give to them? What does that do to the
value of the dollar? It’s not a next-year problem, but it’s not fifty years away, either.”
New Zealand’s reputation for attracting doomsayers is so well known in the hedge-fund manager’s
circle that he prefers to differentiate himself from earlier arrivals. He said, “This is no longer about
a handful of freaks worried about the world ending.” He laughed, and added, “Unless I’m one of
those freaks.”
Every year since 1947, the Bulletin of the Atomic Scientists, a magazine founded by members of the
Manhattan Project, has gathered a group of Nobel laureates and other luminaries to update the
Doomsday Clock, a symbolic gauge of our risk of wrecking civilization. In 1991, as the Cold War
was ending, the scientists set the clock to its safest point ever—seventeen minutes to “midnight.”
Since then, the direction has been inauspicious. In January, 2016, after increasing military tensions
between Russia and NATO, and the Earth’s warmest year on record, the Bulletin set the clock at
three minutes to midnight, the same level it held at the height of the Cold War. In November, after
Trump’s election, the panel convened once more to conduct its annual confidential discussion. If it
chooses to move the clock forward by one minute, that will signal a level of alarm not witnessed
since 1953, after America’s first test of the hydrogen bomb. (The result will be released January
26th.)
Fear of disaster is healthy if it spurs action to prevent it. But élite survivalism is not a step toward
prevention; it is an act of withdrawal. Philanthropy in America is still three times as large, as a share
of G.D.P., as philanthropy in the next closest country, the United Kingdom. But it is now
accompanied by a gesture of surrender, a quiet disinvestment by some of America’s most successful
and powerful people. Faced with evidence of frailty in the American project, in the institutions and
norms from which they have benefitted, some are permitting themselves to imagine failure. It is a
gilded despair.
As Huffman, of Reddit, observed, our technologies have made us more alert to risk, but have also
made us more panicky; they facilitate the tribal temptation to cocoon, to seclude ourselves from
opponents, and to fortify ourselves against our fears, instead of attacking the sources of them. Justin
Kan, the technology investor who had made a halfhearted effort to stock up on food, recalled a
recent phone call from a friend at a hedge fund. “He was telling me we should buy land in New
Zealand as a backup. He’s, like, ‘What’s the percentage chance that Trump is actually a fascist
dictator? Maybe it’s low, but the expected value of having an escape hatch is pretty high.’ ”
There are other ways to absorb the anxieties of our time. “If I had a billion dollars, I wouldn’t buy a
bunker,” Elli Kaplan, the C.E.O. of the digital health startup Neurotrack, told me. “I would reinvest
in civil society and civil innovation. My view is you figure out even smarter ways to make sure that
something terrible doesn’t happen.” Kaplan, who worked in the White House under Bill Clinton,
was appalled by Trump’s victory, but said that it galvanized her in a different way: “Even in my
deepest fear, I say, ‘Our union is stronger than this.’ ”
That view is, in the end, an article of faith—a conviction that even degraded political institutions are
the best instruments of common will, the tools for fashioning and sustaining our fragile consensus.
Believing that is a choice.
I called a Silicon Valley sage, Stewart Brand, the author and entrepreneur whom Steve Jobs credited
as an inspiration. In the sixties and seventies, Brand’s “Whole Earth Catalog” attracted a cult
following, with its mixture of hippie and techie advice. (The motto: “We are as gods and might as
well get good at it.”) Brand told me that he explored survivalism in the seventies, but not for long.
“Generally, I find the idea that ‘Oh, my God, the world’s all going to fall apart’ strange,” he said.
At seventy-seven, living on a tugboat in Sausalito, Brand is less impressed by signs of fragility than
by examples of resilience. In the past decade, the world survived, without violence, the worst
financial crisis since the Great Depression; Ebola, without cataclysm; and, in Japan, a tsunami and
nuclear meltdown, after which the country has persevered. He sees risks in escapism. As Americans
withdraw into smaller circles of experience, we jeopardize the “larger circle of empathy,” he said,
the search for solutions to shared problems. “The easy question is, How do I protect me and mine?
The more interesting question is, What if civilization actually manages continuity as well as it has
managed it for the past few centuries? What do we do if it just keeps on chugging?”
After a few days in New Zealand, I could see why one might choose to avoid either question. Under
a cerulean blue sky one morning in Auckland, I boarded a helicopter beside a thirty-eight-year-old
American named Jim Rohrstaff. After college, in Michigan, Rohrstaff worked as a golf pro, and
then in the marketing of luxury golf clubs and property. Upbeat and confident, with shining blue
eyes, he moved to New Zealand two and a half years ago, with his wife and two children, to sell
property to H.N.W.I. who want to get “far away from all the issues of the world,” he said.
Rohrstaff, who co-owns Legacy Partners, a boutique brokerage, wanted me to see Tara Iti, a new
luxury-housing development and golf club that appeals mostly to Americans. The helicopter nosed
north across the harbor and banked up the coast, across lush forests and fields beyond the city. From
above, the sea was a sparkling expanse, scalloped by the wind.
The helicopter eased down onto a lawn beside a putting green. The new luxury community will
have three thousand acres of dunes and forestland, and seven miles of coastline, for just a hundred
and twenty-five homes. As we toured the site in a Land Rover, he emphasized the seclusion: “From
the outside, you won’t see anything. That’s better for the public and better for us, for privacy.”
As we neared the sea, Rohrstaff parked the Land Rover and climbed out. In his loafers, he marched
over the dunes and led me down into the sand, until we reached a stretch of beach that extended to
the horizon without a soul in sight.
Waves roared ashore. He spread his arms, turned, and laughed. “We think it’s the place to be in the
future,” he said. For the first time in weeks—months, even—I wasn’t thinking about Trump. Or
much of anything. ♦
This article appears in the print edition of the January 30, 2017, issue, with the headline “Survival
of the Richest.”
Silicon Valley’s super rich are eyeing New
Zealand for escape plans
AMERICA’S super rich and industry elite are developing an obsession with New Zealand as a
refuge in the event of a social catastrophe.
Nick Whigham January 25, 2017 7:23pm
New Zealand is garnering a lot of interest from the world’s super rich concerned about the future
THE term “doomsday prepper” provokes images of tin foil hat-wearing, gun-toting bushmen, the
likes of which appear on the plethora of TV shows dedicated to their pessimistic worldview.
But across the United States, at the big end of town — both on Wall Street and in Silicon Valley —
there is apparently growing paranoia about the potential of impending catastrophe and many are
hatching contingency plans.
And it appears New Zealand is at the top of the list for many of the super rich who are seeking
“apocalypse insurance”.
In a truly fascinating article published this week by The New Yorker, journalist and author Evan
Osnos investigated the surprisingly extensive world of centi-millionaire and billionaire doomsday
preppers hedging against the potential of mass social unrest.
An unknown number of Silicon Valley titans are reportedly buying up property in New Zealand as a
safe haven when the proverbial s**t hits the fan.
Growing wealth inequality in the West has led to predictions of growing social unrest. Contributing
to the fear is a worry that as automation and artificial intelligence continue to swallow up jobs, there
will be an eventual backlash against Silicon Valley and the guardians of technology.
New Zealand’s remoteness, good governance and ample agriculture make it an appealing place for a
last resort retreat.
The New Yorker article details numerous plans by super rich preppers including the experience of
LinkedIn co-founder and prominent investor Reid Hoffman.
When asked to estimate how many Silicon Valley billionaires have established escape routes in the
form of remote hideaways, he said: “I would guess 50-plus per cent.”
He also recounted the moment he realised New Zealand was the desired destination after he
mentioned to a friend that he was thinking of visiting the country.
“Oh, are you going to get apocalypse insurance?” his friend asked.
“I’m, like, Huh?” Mr Hoffman told Osnos.
“Saying you’re ‘buying a house in New Zealand’ is kind of a wink, wink, say no more. Once you’ve
done the Masonic handshake, they’ll be, like, ‘Oh, you know, I have a broker who sells old ICBM
silos, and they’re nuclear-hardened, and they kind of look like they would be interesting to live in’,”
he said.
Doomsday Preppers is not just a popular TV show.
One CEO of a large tech company who wished to remain anonymous said the prepper mentality
hadn’t become entirely commonplace among Silicon Valley elites but described it as “logically
rational and appropriately conservative.”
The installation of Donald Trump in the White House seems to have exacerbated the far-flung fear
about the rise of the pitchfork-wielding masses.
But it appears as though even those who support him aren’t taking any chances. Billionaire co-
founder of PayPal and Silicon Valley venture capitalist Peter Thiel bought a 447-acre property in
New Zealand in 2015.
It is not clear what Mr Thiel — a noted eccentric who is on Mr Trump’s transition team — plans to
do with the large lakeside plot, which is his second property in the country.
An investigation by the New Zealand Herald this week into the purchase uncovered the scarcely
known fact that Mr Thiel even has New Zealand citizenship.
Peter Thiel, a vocal Trump supporter, is reported to have a New Zealand passport.
The appeal of New Zealand among wealthy doomsayers is reportedly well known among
hedge fund managers and Wall Street guys.
Robert Johnson is an economist and senior fellow and director of the “Project on Global Finance” at
the Roosevelt Institute. At the World Economic Forum in Davos, Switzerland in January 2015, he
told the audience: “I know hedge-fund managers all over the world who are buying airstrips and
farms in places like New Zealand because they think they need a getaway.”
Of course, the rising interest hasn’t gone unnoticed by New Zealand officials.
In the wake of Mr Trump’s shock election win, more than 13,000 Americans registered their intent
to emigrate to the land of the long white cloud. Those figures were recorded between November 9
and 16, exactly a week after Mr Trump won the election, and represented a 17-fold increase on
normal, something the NZ Herald reported under the headline “Trump apocalypse”.
Meanwhile other data supports the notion of a growing interest in New Zealand among rich
foreigners.
According to stuff.co.nz, they bought more than 3500 square kilometres of New Zealand in the first
10 months of 2016 — more than four times as much as they did in the same period in 2010.
Certain locals too have noticed the growing interest from rich foreigners.
In July last year, a forum discussing New Zealand on prepper website Modern Survivalist prompted
the angry comment from one user who wrote: “Yanks, get this in your heads. Aotearoa NZ is not
your little last resort safe haven for when the s**t hits the fan politacally (sic) or environmentally.
Just no.”
Why tech billionaires are buying luxury
doomsday bunkers in New Zealand
Stephen Johnson September 5, 2018
When the zombies come, when the bombs fall, or when biological warfare breaks, where will you
go?
If you’re a wealthy tech executive in Silicon Valley, odds are it’s New Zealand.
In recent years, the island nation of 4.8 million people has become a go-to spot for Americans
plotting elaborate and expensive plan Bs in the event of world disaster. It’s an investment that
begins to make sense once you reach a certain echelon of wealth.
“It’s known as the last bus stop on the planet before you hit Antarctica,” former Prime Minister John
Key told Bloomberg. “We live in a world where some people have extraordinary amounts of wealth
and there comes a point at which, when you have so much money, allocating a very tiny amount of
that for ‘Plan B’ is not as crazy as it sounds.”
Some wealthy doomsday preppers keep helicopters or private jets gassed up and ready to go, or go-
bags stuffed with gear, gold coins and medicine. Steve Huffman, the co-founder of Reddit, told The New
Yorker he keeps guns and a motorcycle at the ready. Why? A traffic scene from the movie ‘Deep
Impact’.
“Everybody’s trying to get out, and they’re stuck in traffic,” Huffman said. “That scene happened to
be filmed near my high school. Every time I drove through that stretch of road, I would think, I
need to own a motorcycle because everybody else is screwed.”
More recently, some Silicon Valley doomsday preppers have begun building elaborate bunkers in
New Zealand, an island that’s desirable for its lax regulations, remote location and status as a
neutral territory in the event of world war.
It’s become something of an industry. Some bunkers are reported to fit 300 people, costing about
$35,000 a head. But other bunkers, which are constructed in the U.S. and shipped to New Zealand
to be buried secretly, without a trace, can cost up to $8 million.
One deluxe bunker from the manufacturer Rising S. Co., which has recently supplied several Silicon
Valley preppers, comes with garden rooms, a games room and a gun range, in addition to bedrooms,
bathrooms and kitchens.
But New Zealand might soon be a less viable option for wealthy doomsday preppers. In August, the
government passed a law banning the sale of homes to non-residents, meaning anyone looking to
ride out Armageddon on the island of 4.8 million people would need to first obtain citizenship.
Not all in Silicon Valley believe it’s worth the effort, however.
“The world is so interconnected now that if anything was to happen, we would all be in pretty bad
shape, unfortunately,” Sam Altman, president of Silicon Valley startup incubator Y Combinator, told
Bloomberg. “I don’t think you can just run away and try to hide in a corner of the Earth.”
Au Revoir, Europe: What If Britain Left the
EU?, By David Charter
Brutally lucid, this road-map for a UK exit from the Union
should sharpen every political mind
Jon Cruddas, Friday 25 January 2013 20:00
Outsider: Angela Merkel, Polish PM Donald Tusk - and David Cameron
Even today a lazy orthodoxy dominates centre-left political thinking on Europe. We assumed that
the Tory European interregnum - the gap between bouts of internal Euro convulsion - wouldn't last
long. In fact, it lasted six years. David Cameron played the Eurosceptic to win the crown; quitting
the European People's Party group in the European parliament sent a clear signal to traditionalists.
Yet on gaining the top job, he embraced centre-ground modernisation intent on parking the "nasty
party" tag, including its Euroscepticism.
We now cosily assume that this phase ended on 24 October 2011 when 81 Tory MPs rebelled in
Parliament in favour of a "people's plebiscite" on Europe. A few weeks later, Cameron stumbled
into his veto against the Franco-German blueprint for a two-speed single currency. Full-throttle
rewind to right-wing scepticism was the new order of the day - a sign of Cameron's weakness in his
party, captive as he is to the old guard. The old guard is also the new guard; there is no "pro-European"
Conservative tradition to speak of anymore.
Tory modernisation has hit the wall. Cameron is trapped; content to play the same old songs, get
through the day and await some orderly transition to Boris or Gove. We witness his speech on
Europe this week, promising a referendum by 2017, and conclude that the PM is preoccupied with
internal appeasement.
Comfortable liberal-left orthodoxy might also tell us that Cameron and his party are on the wrong
side of history and consumed by a cancerous internal Euro-savagery. Divisions over the EU brought
down Thatcher and Major, alienated potential voters and so created space for our own "one nation"
story of patriotic self-interest.
So far, so conventional. The Tory travails are good for us. I pretty much sign up to these
conventions. But let's pause and think this through a little. Is it possible that we have fundamentally
misread this one? David Charter's book Au Revoir, Europe contests the fundamentals of mainstream
centre-left European thinking. What if in 2013 - during the 40th anniversary of the UK joining the
European club - we are inexorably heading out? De Gaulle stalled our application for some 12
years, the "joining phase". In so doing he nailed down the Common Agricultural Policy architecture
and a blueprint for common fisheries which loaded the dice against the UK.
From 1973, we witnessed an "integration phase" of building the single market. This ended with the
collision between Thatcher and Delors, the post-ERM fallout and the Maastricht treaty. While this
period saw the creation of the European Union, British opt-outs and the Euro, it also signalled
growing divergence and set in train the festering "alienation phase", even under New Labour.
Now we find ourselves forced toward the exit door as, driven by crisis, economic and institutional
reforms build a "real European Union" anchored around a Euro core. Britain's options shrink
between a "second-tier" membership or a "Brexit". Cameron ratchets up the possibility of the latter
with talk of an undeliverable repatriation of powers. Why might he do this?
Charter – European correspondent of The Times – confronts these issues with quite remarkable
prescience. His analysis of the gravitational pull in each of the main political parties is spot-on.
Through a careful dissection of what went wrong, he asks how and when we might exit, and what
the shake-down might look like. In short, for him, the roof will not fall in.
The origins of our gradual exit are found in the irreconcilable tension between Edward Heath's
foolish promise of no consequential loss of sovereignty and Jean Monnet's belief in
"neofunctionalism" - ever-more sharing of powers, given the benefits of such transfers. Pro-
Europeans never recovered from this basic conceit. Incrementally, this forged a "British trajectory
defined by opposition and exceptionalism".
It is this weaving-together of the original tensions around sovereignty and the moves toward
referenda that sharpens Charter's belief in inevitable exit. He posits that the end-game will be played
out with the prospect of a new treaty. Cameron's holding pattern is one of repatriation, and membership
of an outer tier. The "Fresh Start" group of Tory MPs assumes a looser renegotiation given a new
treaty. Charter studies areas ripe for such reform and the stumbling-blocks: structural funds,
financial services, social and employment law, legal jurisdiction and criminal law.
There appears little or no chance: recent history does not bode well, while our isolationist stance
means we have no allies. So, he tacitly suggests, is not the real agenda for the Fresh Start group,
with their impossibilist demands, to act as a Trojan horse for eventual exit? Is this the real sub-text
to Cameron's speech? Failure to repatriate will intensify the grounds for divorce through growing
populist disquiet.
Having carefully raised problems with alternative models that forestall full withdrawal - such as the
Norwegian or Swiss options - the author then offers an insightful sectoral analysis of the economic
implications of withdrawal, alongside the political forces that could precipitate precisely this
conclusion.
The last chapter tracks back from a hypothetical British application in 2023 to join NAFTA, the
North American free trade agreement. It charts the critical path to exit following the 2017
referendum driven by a 2015 Tory manifesto commitment, grudgingly matched by Labour. Election
defeat did for Cameron. The new Leader of the Opposition – B Johnson – rode the "no" vote. The
51.4 per cent to 48.6 per cent victory of the "outers" broke the back of the Labour government.
Britain's subsequent free-trade agreement with Brussels took 18 months to deliver while Parliament
struggled with the additional workload in trade, agriculture, fisheries and the like. The UK and
Brussels muddled along. Paradoxically, the zero-sum economic environment ensured that they were
to stay closely intertwined, irrespective of the referendum outcome.
This is a shockingly coherent book. It ascribes a logic to what, from the outside at least, appears
degenerative Tory thinking. For pro-Europeans - about whom the author states "only the scale of
their defeat remains to be settled" – it implies that with Cameron's speech, we have begun another
interregnum leading to the 2017 referendum. The honest assessment is that the mainstream of the
Conservative Party want out. We now have under five years to rebuild a pro-European case from
first principles. This book is a brutal contribution to a consideration of the options that this country
now faces.
Jon Cruddas MP is co-ordinating the Labour Party's policy review
The radical left and the crisis
Issue: 126 14th April 2010 by ISJ
A tale of two journals
This year marks the 50th anniversary of New Left Review (NLR). Not purely coincidentally, this
journal also began its regular appearance (after an abortive launch two years earlier) in 1960.1
Launched against the background of the rise of a New Left in rebellion against both Western
capitalism and Eastern Stalinism, International Socialism and NLR represent contrasting trajectories
for journals of socialist theory. Certainly since Perry Anderson first became editor in 1962, NLR has
generally maintained a prudent distance from political practice, cultivating a rigorously intellectual
style that, thanks to the talents of many of its contributors, has earned it a deserved international
reputation.
No less serious analytically, International Socialism has, also virtually since its inception, been the
journal of a revolutionary Marxist organisation, now the Socialist Workers Party. Consequently, it
has always tried to make its content as accessible as possible and has concentrated on exploring
issues whose clarification can contribute to a more effective socialist practice. As the editorial in our
first issue declared, “the job we envisage for International Socialism is to bring together original
contemporary social and political analysis that has special relevance to the waging of the class
struggle and the deepening of working-class consciousness”.2
But the two journals have affinities as well as differences. Both have sought, from a British vantage
point but an internationalist perspective, to use Marxist theory in order to illuminate the same
world. The 50th anniversary issue of NLR is a case in point, starting as it does with a major piece by
the editor, Susan Watkins, that seeks to situate the current stance of the journal within the context of
the global economic and financial crisis. Watkins has no doubt about the severity of the crisis and
the limits of the “recovery”. She describes the latter as “patently unstable: a jobless North Atlantic,
with a crippled credit system at its heart; a bubbling East, yet to recalibrate to the shrinking market
for its goods; a mountain of debt still to be settled; speculative funds at loose in the system, driving
commodity-price spikes. Finance is still booby-trapped, while turbulence has shifted east and
south”.3
Potentially serious though the economic prognosis for global capitalism may be, Watkins believes
the political backwash of the crisis, when compared to the aftermath of earlier financial crashes
such as 1873 and 1929, demonstrates the persisting strength of neoliberalism:
Perhaps the most striking feature of the 2008 crisis so far has been its combination of
economic turmoil and political stasis. After the bank and currency crashes of 1931,
governments toppled across Europe—Britain, France, Spain, Germany; even in 1873,
the Grant Administration was paralysed by corruption scandals after the railroad bust,
and the Gladstone Ministry fell. The only political casualties of 2008 have been the
Haarde regime in Iceland and the Cayman Islands authorities. As unemployment mounts
and public-spending cuts are enforced, more determined protests will hopefully emerge;
but to date, factory occupations or bossnappings have mostly been limited to demands
for due redundancy pay. That neoliberalism’s crisis should be so eerily non-agonistic, in
contrast to the bitter battles over its installation, is a sobering measure of its triumph.4
Hence, “meeting no opposition, the neoliberal programme has actually advanced through the crisis,
the bank bailouts effecting a larger expropriation than ever before”.5 The tilt towards Keynesianism
at the height of the crisis has facilitated not a real change in economic policy regime, but the
strengthening of a “regulatory liberalism” that is merely a variation on neoliberalism more broadly
understood. Watkins sees this analysis as confirming what she describes as the “lucid registration of
defeat” offered by Anderson when NLR was relaunched in 2000: “for the first time since the
Reformation, there are no longer any significant oppositions—that is, systematic rival outlooks—
within the thought-world of the West”.6 She sympathises with the plight of young radical
intellectuals today: “Flares of protest have been ephemeral; every mobilisation they have known—
alter-globo, climate change, marches against the invasion of Iraq—has ended in defeat.” The article
concludes by musing about the uncertain future of NLR itself: “Can a left intellectual project hope
to thrive in the absence of a political movement? That remains to be seen”.7
Watkins’s article is interesting because it articulates in a particularly systematic and careful way a
widespread view, namely that, confronted with a genuine (to quote Alan Greenspan) once-in-a-
century crisis of global capitalism, the radical left has fumbled the catch, missing a major political
opportunity.8 This view has been expressed outside the radical left, but it has also been one of the
issues at stake in recent debates within the SWP.
One of the most important variables in politics is time. Answering the question of whether the
radical left has missed the boat depends critically on the duration of the crisis. The Great
Depression lasted from 1929 to 1939. Watkins is right that the immediate impact of the financial
crashes of 1929 and 1931 was a surge in governmental instability (though in Britain the fall of
Ramsay MacDonald’s minority Labour government led to the formation of a Tory-dominated
coalition that dominated politics for the rest of the decade). This reflected not merely the severity of
the crisis, but also the extent to which the structures of bourgeois rule had already been weakened
by the First World War and the social and political upheavals that followed. Generally these
governmental crises marked a shift to the right in official politics—thus social democracy was
expelled from office in Germany as well as Britain.
But, as the Depression continued, intense class warfare developed, marked by the victory of
National Socialism in Germany in January 1933 and the destruction of the Austrian workers’
movement in February 1934, but also by a surge to the left—the Popular Fronts in France and
Spain, the New Deal and the sit-down strikes in the US, the mass strikes and factory occupations in
France in June 1936, the Spanish Revolution of 1936-7. Defeat for the left in France and Spain only
came towards the end of the decade, critically thanks to the restraints on workers’ combativity
imposed by the Popular Front policies of the Communist parties.
By contrast, the present economic crisis struck at a time when bourgeois democracy in the advanced
capitalist states had enjoyed several decades of relative stability after the upheavals of the 1960s
and 1970s. Indeed, the geographical scope of liberal democracy has been hugely extended—to
Southern Europe in the late 1970s and 1980s, to Central and Eastern Europe, South Africa and Latin
America in the 1990s, and to South Korea and Taiwan in the 2000s. Moreover, the terrain of
bourgeois politics has narrowed with the rise of social liberalism—the acceptance of neoliberalism
by the leading parties of social democracy. Of course, all is very far from well with bourgeois
democracy—political convergence at the top, the erosion of living standards and the welfare state,
and the corrosive influence of money on official politics have encouraged widespread popular
disenchantment. Nevertheless, the crisis hit relatively robust structures of capitalist rule that allowed
little space for the articulation of alternatives to neoliberalism, let alone to capitalism altogether.
So, if the crisis were simply a relatively short, though severe shock whose effects were largely
absorbed by the bailouts, then Watkins’s diagnosis of “economic turmoil and political stasis” would
be broadly correct. But, although she discusses different scenarios, she herself doesn’t seem to
expect a rapid return to economic “normality”. Yet a prolonged economic crisis will put pressure on
bourgeois political structures, exposing their fault lines. So let’s look more closely at the
development of the crisis.
Stages in the crisis
One of the lessons of the Great Depression is that a major economic crisis is itself a historical
phenomenon that passes through different stages—in that case, industrial slowdown prior to the
stock-market crash, the crash itself, an initial economic contraction, the banking crisis of 1931,
further contraction combined with the collapse of international trade, a partial recovery in the
middle of the decade, and then renewed recession in 1937-8 before
rearmament and then war production lifted output and employment. Similarly the present crisis has
evolved through a number of stages—the initial credit crunch of 2007-8, the financial crash of
autumn 2008, a very sharp and generalised slump in the winter of 2008-9, and then a “recovery”,
more accurately a stabilisation thanks to the vast amounts of money pumped into their financial
systems and broader economies by the leading capitalist states.
Moreover, the process is uneven. During the Great Depression, those states that were prepared
rapidly to ditch free trade (Britain after 1931) or to rearm (Germany after 1933) recovered more
quickly than others, notably the US. Today, China, thanks to a gigantic programme of state-funded
investment, is growing fast again, to the benefit of those economies that have reoriented to become
its suppliers, whether of manufactured goods (South Korea) or raw materials (Brazil, South Africa).
So what stage have we reached in the present crisis? At its immediate origins was the huge growth
in private debt that fuelled the bubble of the mid-2000s. The resulting debt crisis has migrated from
the private to the public sector. Government borrowing in the main economies has enormously
increased—thanks not simply to the bailouts but also to the fall in tax revenues and increase in
welfare spending imposed by the Great Recession. This is entirely in line with historical experience,
as Carmen Reinhart and Kenneth Rogoff point out in a major econometric study of financial crises:
The aftermath of banking crises is associated with profound declines in output and
employment…the value of government debt tends to explode: it rose by an average of
86 percent (in real terms, relative to pre-crisis debt) in the major post-World War II
episodes…the main cause of debt explosions is not the widely cited costs of bailing out
and recapitalising the banking system… In fact, the biggest driver of debt increases is
the inevitable collapse in tax revenues that governments face in the wake of deep and
prolonged output contractions.9
How an increase in public debt is addressed is essentially a political question—in other words, it
reflects the prevailing balance of class forces. The enormous government borrowing to pay for the
Second World War was dealt with gradually through a combination of economic growth and
inflation (which reduced the real value of the debt). In the present situation, where neoliberalism
continues to frame policymaking and the surviving banks have been reinvigorated thanks to the
state support they have received and the elimination of many of their rivals, the growth in the
budget deficit is defined as a major crisis that must be addressed immediately by massive cuts in
public spending.
Thus in Britain the Tories, with considerable support from the media and big business, have used
the deficit to set the political agenda for the general election campaign that should be under way by
the time this issue appears. This fits into a broader analysis that identifies a bloated public sector as
the source of Britain’s problems. Promising “swingeing” cuts, however, hasn’t proved that popular.
Moreover, there is a major debate in leading bourgeois circles (reflected in a series of exchanges
among economists on the letters page of the Financial Times) about the timing of any cuts,
reflecting the fear that premature withdrawal of state support for the world economy would
precipitate the “double-dip” recession obsessively anticipated by commentators and policymakers
alike.
The eurozone fractured
All the same, the necessity of public spending cuts is now taken for granted in official politics
across the advanced capitalist economies. Moreover, the luxury of being able, to some extent, to
choose the timing of retrenchment is not available to the weaker capitalist states. This is evident in
the eurozone, where Greece has been targeted by the financial markets, the European Commission
and the leading European Union states for fiscal profligacy, and huge cuts extracted from the social-
democratic government of George Papandreou.
But the Greek crisis has exposed the structural flaws inherent in the construction of the eurozone.
European economic and monetary union (EMU) locked the exchange-rates between the
participating states as their national currencies were replaced by the euro and gave control over
interest rates to the unelected and unaccountable European Central Bank. For the more peripheral
members, this arrangement had the great advantage that the spread between the interest rates on
their bonds and on those of the strongest European economy, Germany, narrowed sharply. In the
conditions of generally low interest rates that prevailed during the credit boom of the mid-2000s,
this facilitated a sharp increase in household debt (to 100 percent of national income in Greece,
Portugal and southern Ireland) in the smaller eurozone economies and the development of housing
bubbles in Ireland and Spain.
The increased consumption that higher borrowing made possible in southern Europe helped to
provide a market for German exports (two thirds of which go to the eurozone). Germany, which
avoided the debt explosions seen elsewhere in the eurozone, was able to regain its position as the
world’s biggest exporter in 2005 by squeezing labour costs against the background of high
unemployment, the Hartz IV “reform” of unemployment benefit forced through by the Red-Green
coalition of 1998-2005, and the shift or (often as effective) threat of a shift of production further
east. Meanwhile, labour costs grew much faster elsewhere in the eurozone. The weaker economies
ran up huge balance of payments deficits that were the obverse of Germany’s ballooning surplus. As
an important study of the eurozone crisis by Research on Money and Finance points out:
The euro and its attendant policy framework have become mechanisms ensuring
German current account surpluses that derive mostly from the eurozone. Peripheral
countries joined a monetary system that purported to create a world money, thus signing
away some of their competitiveness, while adopting policies that exacerbated the
competitiveness gap. The beneficiary of this process has been Germany, because it has a
larger economy with higher levels of productivity, and because it has been able to
squeeze its own workers harder than others. Structural current account surpluses have
been the only source of growth for the German economy during the last two decades.
The euro is a “beggar-thy-neighbour” policy for Germany, on condition that it beggars
its own workers first.10
According to Research on Money and Finance, the immediate precipitant of the eurozone crisis lay
in the general response of member states to the financial crash, namely (with the exception of
Germany) substantially to increase spending and borrowing. The surge in government debt pushed
them onto the money markets to sell their bonds. But the weaker eurozone states found the spread
between the interest rate they had to pay on their bonds and the rate on German government bonds
widened sharply. Targeted by speculators, the costs of its borrowing soaring, the Papandreou
government found itself caught in a vice. “The structural weaknesses of monetary union are
apparent in this regard. All countries have the same access to the money markets; but they do not
have the same access to credit, which is obtained at a different price by each country”.11
EMU’s deeper flaw is that it is at most a monetary union. Fiscal policy—dependent on the power to
tax and spend—remains firmly in the hands of the nation-states. But, in times of crisis, when the
state needs to step in to rescue the market, fiscal policy becomes central: the bailouts and stimuli
were underpinned by the ability of the leading capitalist states to extract resources from their
economies and to borrow on the strength of this ability. Hence the EU’s response to the financial
crash was dominated by national governments’ rescues of their banks and subsidies to their firms.12
This set-up begs the question of what happens when a member-state of the eurozone is threatened
with bankruptcy. The financial markets didn’t just force up the interest rates on the bonds of the
weaker eurozone economies: they also pushed down the euro. This made the Greek crisis a problem
for the entire eurozone. The dominant continental states, France and Germany, were divided over
how to respond: France supporting a coordinated loan to keep Greece afloat, Germany resisting.
Greece threatened to humiliate the EU by going to the International Monetary Fund for help, a bluff
that was called by Germany.
The eventual agreement on a joint IMF-eurozone rescue reflected the fact that a Greek default
would not be in the interest of the German banks, which have lent heavily to Greece and the other
weaker eurozone economies. But the debate within Angela Merkel’s chronically weak conservative-
liberal coalition in Berlin (which was accompanied by ferocious nationalist exchanges between the
German and Greek media) tilted towards the hard line taken by Wolfgang Schäuble, the finance
minister.
He proposed setting up a European Monetary Fund that could come to the rescue of eurozone
members in Greece’s plight, in exchange for a tightening up of the Stability and Growth Pact, under
which EU states are not supposed to run budget deficits greater than 3 percent of national income.
In particular penalty clauses would be inserted allowing states that broke the rules to be deprived of
access to EU cohesion funds or even to have their voting rights temporarily suspended. States could
also opt to leave the eurozone while remaining in the EU.
As the economic commentator Wolfgang Munchau puts it:
This is not about helping countries in trouble. This is about helping them to get out.
The political message of the Schäuble plan is that Greece will be the last bail out ever.
As preparations for a bail out reach an advanced stage, the German public reaction has
become progressively more hostile. If the Schäuble plan had already been in place,
Greece would already have headed to the exit. It is hard to conceive of a situation under
the plan where a country simultaneously fulfils the criteria for aid, and needs it.
Also, the Schäuble plan contains no provision that would ever be binding on Germany.
It would allow Germany to press ahead, unhindered, with its unilateral economic
strategy to eradicate the budget deficit by 2016. Even if southern European governments
were to wake up and accept the need for deep reforms, they would have a hard time
closing a competitiveness gap that is still widening in Germany’s favour. I cannot see,
therefore, how the plan will ever find political acceptance.13
In effect, the German ruling class is defending at any price its economic strategy of trying to
maintain export leadership (which it lost to China in 2010) on the basis of wage repression. This has
been locked in place domestically by a constitutional amendment passed last year that requires the
federal government to reduce its budget deficit to 0.35 percent of national income by 2016: this
blocks any shift to an alternative strategy based on stimulating the home market. The Schäuble plan
expresses Berlin’s determination to lock this strategy in place in the eurozone as well. If other
eurozone states are too weak to cut public spending and labour costs to try and keep up with
Germany, they will be ditched.
Global dilemmas
The eurozone crisis is a European version of a global conflict. The World Trade Organisation
recently congratulated itself on the fact that the initial surge in government subsidies and other
forms of protectionism in response to the slump of 2008-9 hasn’t developed on anything like the
scale of the drive of the imperialist powers to close their markets to their rivals during the 1930s.
But this ignores the fact that the most important trade conflict today is over currencies.
It’s a cliché that the most important economic relationship today is that between the US and China:
America provides a huge market for Chinese manufactured goods and in turn borrows back the
dollars it spent on these goods to help finance its balance of payments deficit. But both states have
responded to the slump by allowing the value of their currencies to fall against those of others in
order to make their exports cheaper. Since the summer of 2008, the renminbi has been pegged to the
dollar, which itself has till recently been drifting down on the foreign exchanges.
This has become an increasing source of conflict between Beijing and Washington. The Obama
administration, struggling with a sluggish economy and unemployment at 10 percent of the
workforce, has put increasing pressure on China to lift the peg and to allow the renminbi to
appreciate against the dollar. Obama has said this could create hundreds of thousands of American
jobs. In his speech at the closing session of the National People’s Congress in March, Wen Jiabao,
the Chinese Prime Minister, said, “What I don’t understand is depreciating one’s own currency, and
attempting to
pressure others to appreciate, for the purpose of increasing exports. In my view, that is
protectionism.” Wen also repeated the concern he had expressed a year earlier about the security of
Chinese investments in the US: “We are very concerned about the lack of stability in the US dollar.
I was worried last year… I am still worried this year”.14
One hundred and thirty members of the US Congress reacted by writing to Treasury Secretary Tim
Geithner demanding that he designate China as a “currency manipulator” in a report due in April.
Following a buildup of friction between Washington and Beijing over various issues, the currency
quarrel has developed an edge that no doubt reflects the growing tensions between the dominant
imperialist power and its most serious potential challenger. The Democratic economist Paul
Krugman has become particularly aggressive in campaigning for the US unilaterally to impose a 25
percent surcharge on Chinese imports, a step that probably would provoke a real trade war.15
In fact, China allowed the renminbi to appreciate against the dollar between 2005 and 2008, and
may do so again, possibly in response to the fear of rising inflation caused by the mammoth
government stimulus that Wen also expressed in his speech. Economists think a rising renminbi
probably would not seriously affect China’s export performance. But what the row indicated is that
the Chinese leadership, like the German, remains committed to a high-export strategy, though one
that is based on far lower wages and a far higher rate of accumulation. Martin Wolf of the Financial
Times highlighted the parallels:
China and Germany are, of course, very different from each other. Yet, for all their
differences, these countries share some characteristics: they are the largest exporters of
manufactures, with China now ahead of Germany; they have massive surpluses of
saving over investment; and they have huge trade surpluses… Both also believe that
their customers should keep buying, but stop irresponsible borrowing. Since their
surpluses entail others’ deficits, this position is incoherent… I am beginning to wonder
whether the open global economy is going to survive this crisis.
Surplus countries insist on continuing just as before. But they refuse to accept that their
reliance on export surpluses must rebound upon themselves, once their customers go
broke. Indeed, that is just what is happening. Meanwhile, countries that ran huge
external deficits in the past can cut the massive fiscal deficits that result from post-
bubble deleveraging [reducing borrowing] by their private sectors only via a big surge
in their net exports. If surplus countries fail to offset that shift, through expansion in
aggregate demand, the world is inevitably caught in a “beggar-my-neighbour” battle:
everybody seeks desperately to foist excess supplies on to their trading partners. That
was a big part of the catastrophe of the 1930s, too.16
Of course, the Chinese and German ruling classes retort by claiming that the roots of the problem
lie in the profligacy of the Greeks and the Americans, happily borrowing and spending their way to
disaster. But the basic point stands: every state is seeking to get out of the slump by cutting
spending and borrowing and by exporting, but this begs the question of who will buy these exports.
Wolf is quite right to draw a parallel with the 1930s. Writing at the time, the Marxist economist
John Strachey highlighted what he called “the dilemma of profits or plenty”: “The measures which
will maximise profits will minimise plenty: the measures which will maximise plenty will minimise
profits”.17 In other words, cutting wages to boost profits restricts the demand for goods and
services. One way out of this dilemma is to export, but if everyone tries to do this the result is too
many goods chasing too little money.
These tensions don’t simply highlight that what Reinhart and Rogoff call “the Second Great
Contraction” is far from over. They also underline that a major obstacle to “recovery” lies in the
conflicts among the leading capitalist powers. This was also demonstrated at the Copenhagen
Climate Change Summit in December, which is analysed in detail by Jonathan Neale in this issue.
Despite the emergence of the G20 as a forum that includes the big economies of the South, and
despite Obama’s more consensual rhetoric, the international capitalist class remains a band of
hostile brothers.
Stages in the struggle
Whatever these tensions, it’s clear who has been marked out as the fall guy. Workers and the poor,
squeezed by unemployment, wage repression and public spending cuts, are expected to pick up the
bill for the crisis. But of course this assumes that they will accept this appointed role. One reason
why the Greek crisis is so serious is the resistance it has evoked from the most
combative workers’ movement in Europe. The general strikes that have been mounted against
Papandreou’s austerity packages have fused the militancy of the organised working class with the
insurrectionary spirit of the youth revolt that swept Greece in December 2008.
This is not the place for an extensive analysis of the response by the workers’ movement
internationally to the crisis. But it is worth saying a little about the situation in Britain. Here too one
can distinguish several stages in the evolution of the class struggle since August 2007. The first, in
2007-8, was dominated by what in retrospect seems like one of the after-effects of the credit boom
—the very sharp rise in the rate of inflation that put significant pressure on living standards
throughout the world. In Britain this produced a revolt over public sector pay that reached its
highpoint in spring 2008.
This phase came to a brutal halt after the collapse of Lehman Brothers in September 2008. The left-
wing leaderships of the main unions involved—the PCS, NUT and UCU—reacted to the financial
crash by killing off industrial action over pay. In effect, the trade union bureaucracy abandoned the
field of battle. This created a vacuum that was filled in the first half of 2009 by a series of struggles
—notably the strikes by Lindsey oil construction workers and the occupations at Visteon, Vestas
and Prisme—that were marked by rank and file workers taking the initiative and adopting methods
of struggle that have been rare since the 1970s. Whatever the political ambiguities involved—above
all, the slogan of “British Jobs for British Workers” that dominated the first Lindsey strike—these
struggles, though on a relatively small scale, marked a qualitative shift.
In autumn 2009 a third phase began. This is marked by the return of the bureaucracy to the
battlefield. Very largely this is in reaction to pressures from managements squeezed by the crisis
into pressing for work reorganisations that increase the rate of exploitation. This pattern is very
clear in the big confrontations at Royal Mail and British Airways and on the railways. In higher and
further education there has been a rash of disputes provoked by the fact that in these sectors the cuts
began to be implemented well before the election. So far the picture is mixed—a serious sell out at
Royal Mail, an uncertain outcome at British Airways, victories at Tower Hamlets College and Leeds
University—but resistance is on a large enough scale to encourage the Tories to claim that Britain is
facing a “spring of discontent”.
In any case, the evolution of the struggle in Britain is very different from the passive condition of
supine acceptance portrayed by Watkins. The task of revolutionaries in this situation is to do
everything that they can to enhance the ability of rank and file workers to resist. In particular, this
means building networks of solidarity that can help sustain a particular group of workers when they
are in dispute. The emergence of the Right to Work Campaign, which held a conference of some
900 delegates in January, is an important step in this direction.
The plight of the radical left
But of course what is needed is more than simply resistance and solidarity, essential though these
are. A break has to be made in the political dominance of neoliberalism. In the early years of the
previous decade, radical left parties began to emerge throughout Europe as significant electoral
challengers. This was made possible by two factors—social liberalism, that is, the shift to the right
of social democracy, which created a vacuum on the left, and the rise of the anti-capitalist and anti-
war movements in the wake of Seattle, Genoa and Florence.
The nature and strategies of these parties have been extensively discussed in the pages of this
journal. But it is clear there has been a shakeout over the past three or four years, which has been
especially brutal in Italy, with the eclipse of Rifondazione Comunista, and Britain, where both
Respect and the Scottish Socialist Party (SSP) suffered devastating splits. How is this related to the
two enabling conditions of the radical left parties’ rise? The first—social liberalism—hasn’t gone
away, but the success of Pasok in the Greek elections of October 2009 and of the French Socialist
Party in the regional elections of March 2010 confirms that embracing neoliberalism doesn’t stop
social-democratic parties manoeuvring to capitalise on working class discontent (though the
subsequent fate of the Papandreou government shows that they remain, as ever, impotent in the face
of capital).
What about the second condition, the rise of the movements? Again, addressing this question
properly would require a longer discussion than is possible here. But Watkins is plainly wrong
baldly to assert that they “ended in defeat”—particularly in the case of climate change, where the
Copenhagen summit saw the first demonstrations of any size on the issue, in Copenhagen itself and
in London, and where the summit called by Evo Morales, the Bolivian president, in Cochabamba in
April may act as a focus for further mobilisations. It is true that in most of Europe the anti-war
movement declined rapidly after the fall of Baghdad, although in Britain it persisted at a relatively
high level for several years longer. The impasse reached by the anti-capitalist movement has already
been discussed in this journal: it was always true that, unless matched by a comparable rise in the
level of class struggle, the political radicalisation that the movement represented was unlikely to
sustain itself.18
But even if the picture of the movements is thus mixed—anti-war and anti-capitalist undeniably in
decline, climate change just beginning—the ideological radicalisation that they both reflected and
reinforced has not gone away. The renewed intellectual vitality displayed in a variety of Marxist
theoretical debates is a symptom of this. Moreover, the surviving radical left formations represent a
kind of political deposit left behind by the forward surge of the first half of the 2000s. So we aren’t
stuck at the political degree zero that Anderson asserted we were at in 2000.
One can identify two main axes of the European radical left. The first is occupied by Die Linke in
Germany, the Front de Gauche in France, SYRIZA (the Greek Coalition of the Radical Left) in
Greece, and the Bloco de Esquerda in Portugal. Their main reference point is left reformism, though
to differing degrees they shade off into the far left. The second is defined by the Nouveau Parti
Anticapitaliste (NPA) in France, the project through which the Ligue Communiste Révolutionnaire
(LCR) launched a broader party on a revolutionary programme.
All these formations represent real forces, and have made an impact in electoral politics in their
countries. This is a significant crack in the neoliberal consensus. But it is only a beginning.
Moreover, the balance between left and right in the radical left on both a European and a national
scale changes over time. Thus the French regional elections in March saw the Front de Gauche—a
coalition of the Communist Party and Jean-Luc Mélenchon’s left social-democratic Parti de Gauche
—gain a significant lead over the NPA.
One final thread in the tapestry is that the two main parties of the European revolutionary left, the
NPA and the SWP, have both recently experienced splits. At issue in both cases was whether or not
to persist with the project of building a revolutionary party. In the French case, this was the outcome
of a long struggle between right and left within the LCR, with the minority faction headed by
Christian Picquet, whose main reference point is the French republican tradition rather than
revolutionary Marxism, departing to join the Front de Gauche.19

The divisions within the SWP were of much more recent origin, arising during the crisis in Respect
but developing into an argument over perspectives. A small minority, including several former
members of the leadership (one an ex-editor of International Socialism, John Rees), refused to
acknowledge the decline of the anti-war and anti-capitalist movements. While repeating the
mainstream claims that the radical left had missed the boat, they rejected the majority’s arguments
in favour of a change of tactics in response to the economic crisis, and eventually resigned from the
SWP.
These episodes bear witness to the pains of reorientation in a complex and rapidly changing
situation. For the SWP at least, a main political conclusion is that building a revolutionary
organisation is in no sense counterposed to continuing to support and indeed to initiate broader
united fronts in the different fields of struggle. The most difficult of these is the electoral field. Here
the Trade Unionist and Socialist Coalition represents an important attempt to reassemble the
scattered forces of the radical left after the implosions of Respect and the SSP. The campaigns of its
candidates in the general election can help lay the basis of a much stronger left alternative to social
liberalism in future. Given the advance of the fascist right—the broader background to which is
explored by Richard Seymour in his article on contemporary racism—developing this alternative is
an urgent task, and not just in Britain, as the revival of the Front National in the French regional
elections shows.
But the forces of the radical left in Britain are much too weak for revolutionaries to ignore the
millions of organised workers who continue to look, however grudgingly and reluctantly, to Labour.
The slogan “Vote left where you can, vote Labour if you must” does not imply any illusion that,
should Gordon Brown manage somehow to cling on to office, his government’s policies would be
qualitatively better than those of an administration headed by David Cameron. It is rather a means
of pursuing a political dialogue with the vast numbers of militants who have yet to break with
Labourism. This dialogue will be all the more important if, as still seems most probable, the Tories
form the next government and intensify the attacks that New Labour has already mounted.

Notes
1: Birchall, 2008.
2: Quoted in Callinicos, 1977.
3: Watkins, 2010, p12.
4: Watkins, 2010, p20.
5: Watkins, 2010, p21.
6: Watkins, 2010, p23; Anderson, 2000, p17.
7: Watkins, 2010, p27.
8: See, for example, a piece by the Guardian’s Andy Beckett, and my reply: Beckett, 2009, and
Callinicos, 2009.
9: Reinhart and Rogoff, 2009, p224.
10: Lapavitsas and others, 2010, p28.
11: Lapavitsas and others, 2010, p46.
12: Callinicos, 2010, pp97-101.
13: Munchau, 2010.
14: Financial Times, 14 March 2010.
15: See Drezner, 2010, and the exchanges it provoked; and, for a different, Marxist take, Hart-Landsberg, 2010.
16: Wolf, 2010.
17: Strachey, 1935, p101.
18: Callinicos and Nineham, 2007.
19: The death of Daniel Bensaïd in January deprived the NPA of its leading Marxist theorist. Our
next issue will carry an assessment of Bensaïd by Sebastian Budgen.

References
Anderson, Perry, 2000, “Renewals”, New Left Review, II/1 (January/February),
http://newleftreview.org/?view=2092
Beckett, Andy, 2009, “Has the Left Blown Its Big Chance?”, Guardian (17 August 2009),
www.guardian.co.uk/politics/2009/aug/17/left-politics-capitalism-recession
Birchall, Ian, 2008, “A Fiftieth Birthday for Marxist Theory”, International Socialism 120 (autumn
2008), www.isj.org.uk/?id=487
Callinicos, Alex, 1977, “Editorial”, International Socialism, series 1, 100 (July 1977),
www.marxists.org/history/etol/newspape/isj/1977/no100/editorial.htm
Callinicos, Alex, 2009, “Yes, the Left Faces Many Challenges—but It’s Not All Doom and Gloom”,
Guardian (21 August 2009), www.guardian.co.uk/commentisfree/2009/aug/21/marx-politics-left-future
Callinicos, Alex, 2010, Bonfire of Illusions (Polity)
Callinicos, Alex, and Chris Nineham, 2007, “At an Impasse? Anti-Capitalism and the Social
Forums Today”, International Socialism 115 (summer 2007).
Drezner, Daniel W, 2010, “Paul Krugman, Neoconservative” (15 March 2010),
http://drezner.foreignpolicy.com/posts/2010/03/15/i_think_ive_found_a_purpose_for_the_g_8
Hart-Landsberg, Martin, 2010, “The US Economy and China: Capitalism, Class, and Crisis”,
Monthly Review 61:9 (February 2010), http://monthlyreview.org/100201hart-landsberg.php
Lapavitsas, Costas, A Kaltenbrunner, D Lindo, J Michell, JP Painceira, E Pires, J Powell, A
Stenfors, N Teles, 2010, “Eurozone Crisis: Beggar Thyself and Thy Neighbour”, Research on
Money and Finance (March 2010),
http://researchonmoneyandfinance.org/media/reports/eurocrisis/fullreport.pdf
Munchau, Wolfgang, 2010, “Shrink the Eurozone, or Create a Fiscal Union”, Financial Times (14 March 2010).
Reinhart, Carmen R, and Kenneth S Rogoff, 2009, This Time is Different: Eight Centuries of
Financial Folly (Princeton University Press).
Strachey, John, 1935, The Nature of Capitalist Crisis (Gollancz).
Watkins, Susan, 2010, “Shifting Sands”, New Left Review, II/61 (January/February 2010),
http://newleftreview.org/?page=article&view=2817
Wolf, Martin, 2010, “China and Germany Unite to Impose Global Deflation”, Financial Times (16
March 2010).
Joshua Sperber September 7, 2018

Western Civilization 101


Notwithstanding the fears of Samuel Huntington and the more overtly violent demonstrations of
self-described Western chauvinists such as the Proud Boys, the term “Western Civilization” is of
relatively recent creation. Advanced following the First World War, the concept, along with other inventions
such as the “Great Books” series, was designed to uphold the merit of a project that had just culminated in an
unprecedented industrial bloodbath. That the idea was promulgated mere decades before an even larger
industrial bloodbath suggests that its promoters ought to have taken a humbler approach in their attempt to
salvage, in fact construct, Western European history. After all, insofar as it even constitutes a coherent and
quantifiable entity, Western Civilization advanced not because of any intrinsic superiority but because of
fortuitous geographic circumstances and no small portion of simple freak luck.

It has been noted that if an informed observer had been standing atop the world in 1500 CE and was asked to
predict which power — among Western Europe, the Ottoman Empire, China, Japan, India, or Russia — would
become dominant over the following centuries, it would have been unlikely that he or she would have chosen
what had until recently been the Western European backwater. It would have been far more sensible to
instead opt for, say, Ming China or the Ottoman Empire, which was in possession of Serbia, Bosnia, Croatia,
Bulgaria, Romania, Albania, Greece, and Hungary and continually menaced, and periodically invaded, lands
further west.

Yet, as we know, Western Europe did become dominant over the next four centuries — though not
necessarily evenly or without setbacks; the so-called Sick Man of Europe defeated Britain in battle as late as
1916. Nevertheless, by WWI, Europe directly or indirectly controlled a full eighty percent of the world’s
landmass, an unprecedented degree of global domination. So how do we explain this extraordinary growth?

The Cultural Myth

For some, the answer is self-evident. The ostensible superiority of what is imagined to be Western culture
inexorably led to the dominance of Europe (and its eventual offshore offspring). There are of course
immediate problems embedded in the notion of a coherent and homogeneous European culture giving rise to
European dominance. First, such an explanation fails to address the timing of European ascension. If cultural
superiority explains Europe’s rise, why only then? And where was this superiority during the long centuries of the
Middle Ages?

More generally, “European culture” in reality consists of a myriad of practices and customs varying by place,
period, demographic, and other factors. Not only were different European countries — say England and
France — at one another’s throats for centuries, but they would have been astonished to learn that they,
notwithstanding their religious, linguistic, political, and other differences, were in fact family, united within a
single civilization. On the contrary, each identified the other not as a member of the same political and
cultural project but as a fundamental impediment to that project.

Even during the Crusades, the imagined zenith of European unity, the doge of Venice redirected the
Crusaders from an attack on the Islamic World (which included Venetian trading partners such as Egypt) to
Constantinople and Zara, Christian cities but commercial rivals to Venice. Making explicit what was already
apparent, the Venetians understood that the Crusades, which invoked Christianity to pillage and pillaged to
promote Christianity, were a racket.
Perhaps more than any other event, the Protestant Reformation, and the century of geopolitical
pandemonium and mass slaughter following it, dramatically ruptured the idea that Europe contained a shared
and coherent set of values and traditions. Rather, it exposed the profound antagonisms and divergent interests
among those who competed to define a Christianity — the great language of medieval political legitimization
— that had been turned inside out by the growing accumulation of private wealth.

If Catholicism, the religion of late Rome and feudalism, denounced the pursuit of wealth and counseled its
followers to turn the other cheek, Protestantism and particularly Calvinism, mirroring the ideological
demands of the emerging market economy, encouraged industriousness and identified wealth not as a
potential sign of damnation but of salvation. Yet Protestantism, making itself more serviceable to modern
rule, ultimately won a Pyrrhic victory. Relocating God from a clerical intermediary to the believer’s own
conscience, Protestantism ushered in a new ideological hegemony but in so doing diminished its political
relevance.

Insofar as these Christian traditions have endured in a world increasingly dominated by the state and the
market, it has been due to their ability to identify opportunities for institutional aggrandizement that
simultaneously furnish an ideological buttress to the demands (rendering unto Caesar) of the modern system.
That these demands are themselves evolving reminds us that there has been no “Western culture” but rather
ephemeral “Western cultures,” tearing themselves asunder at different places and at different times. The long
historical effect was not a single diamond refined under pressure, but flotsam of, among other things,
power, brutality, and justificatory sophistry.

Just as the concept of European culture erases enormous variation and conflict within Western Europe, it also
dismisses the pivotal exchanges between Western Europe and the greater world, famously during the
Crusades themselves, that equipped Europe to expand in the first place. Crusaders were stunned to
encounter the vastly more sophisticated denizens of the Levant and beyond living in more advanced
economies and enjoying greater standards of living with more plentiful and finer luxuries. Crusaders were
also introduced to the superior scientific knowledge of the Islamic world, whose advances in math,
astronomy, medicine, physics, chemistry, and optics (not to mention philosophy and art) helped shape the
thinking of Europeans such as Copernicus and Galileo. It is not so much that “Western Civilization”
is an oxymoron — or a “good idea,” as Gandhi famously quipped — as that the civilization in question is
not even intrinsically European.

Geography Not as Destiny

It is not culture, historians and other scholars have shown, but geography that provided the critical
precondition for European ascension. As Jared Diamond has noted, the east-west axis of Eurasia, in contrast
to the north-south axes of Africa and the Americas, facilitated both successful migration and the diffusion of
intra- and extra-European knowledge (farmers who migrated east or west, for instance, did not experience
significant climatic variation and therefore did not have to reinvent technological and agricultural wheels, in
contrast to farmers who migrated north or south).

Further, the fractured topographies of the European peninsula, in contrast to the sweeping plains of China or
the wide river valleys of the Nile or the Tigris and Euphrates, helped produce, as Paul Kennedy has observed
in his Rise and Fall of the Great Powers, centuries of political fragmentation. European Conquerors, whether
Charlemagne, the Habsburgs, Napoleon, or Hitler, consistently failed to establish unified and sustained
political rule as they were ultimately impeded by, among other natural barriers, Europe’s numerous mountain
ranges and forests.

Such obstacles not only discouraged conquest by foreign rivals but also led to highly decentralized rule
characterized by roughly comparable powers who, of necessity, continually invested in improvements to
their military technology. The eventual result was an increasingly sophisticated arms race born of the need to
keep formidable rivals at bay, conditions that were generally absent outside of the European peninsula where,
for instance, Ming China held a monopoly on cannon production inside of the empire.

Such geographic diversity also helped encourage the development of the market system. Centralized political
rule in, say, the Ottoman Empire meant that would-be traders could be taxed into bankruptcy, a penny-wise
but pound-foolish policy that disincentivized private investment. Europe’s political decentralization meant
that traders could play different lords off one another or, at the least, depart one high-tax fiefdom for a
relatively lower-tax one. Europe’s many navigable rivers further facilitated this trade, which, notwithstanding
rulers’ inclinations toward interference, ultimately provided European states with an enormous source of
perpetually expanding taxable wealth. In other words, Europe’s economic power developed in spite of the
myopia of its leaders, a closed-mindedness that Adam Smith was struggling to penetrate as late as 1776.

None of this ought to imply geographic determinism, as these environmental preconditions are necessary but
certainly not sufficient to explain the expansion of European power after 1500. Indeed, geography taken by
itself, not unlike the cultural explanation, cannot account for the specific timing of Europe’s ascension.

We also need to address contingency, and there is no greater example of it than the Ottomans’ 1453 conquest
of Constantinople. A historic catastrophe for Europe, the conquest, and with it the collapse of the
1,500-year-old Byzantine Empire, deprived Europe of its vital gateway to where all major economic activity
had previously been directed: the East.

It was within the ensuing atmosphere of loss, dread, and despair that Columbus proposed to sail to India via
the west and, after being rejected by several of Europe’s numerous rulers, was ultimately sponsored by Spain
to do so, whereupon he stumbled upon the Americas. Led by Spain and Portugal, Europe proceeded to
commit the largest and most horrific plunder in world history, enslaving and destroying indigenous peoples
and their economies — “with a bible in one hand and a rifle in the other” — and extracting enough silver and
gold to multiply European treasuries fourfold. It was this wealth, which soon migrated from the Iberian
Peninsula to England, that funded the Industrial Revolution, enabling Europe to further increase its
advantage over a globe that it would by and large rule — whether through the invocation of a Christian
civilizing mission, the White Man’s Burden, eugenics, or democracy — for the next four centuries. In other
words, apart from its geographic good fortune, Europe achieved world domination largely because it had
experienced a disastrous military defeat. Interpreting this history as an indication of superiority constitutes
the height of irony.

That we are living in an era in which there are renewed demands to extol and “defend” Western Civilization
is not only a warning that a system that is again experiencing crisis is issuing one more call to arms. It is also
a reminder that the violence with which it has historically advanced its aims is ultimately inseparable from
the values that it invokes to justify them.

Joshua Sperber lives in New York and can be reached at jsperber4@gmail.com.


Joshua Sperber January 15, 2015

The Confusion of the West


It’s only when appearance is mistaken for reality that the “Clash of Civilizations” can become a
plausible interpretation of the recent violence in France. Huntington’s concept was born moribund,
its staying power attributable to its ideological use-value rather than its explanatory power. Indeed,
Edward Said and others have demolished the racist tenets of the Western depiction of a monolithic,
historically static, and fanatical Islamic culture that never underwent Enlightenment.
First, the notion of a discrete Islamic culture or “civilization” antagonizing a similarly discrete,
albeit historically fluid, evolving, and diverse West ignores the historic interaction between these
ostensibly segregated worlds. It was the West, of course, that systematically undermined secular
nationalist figures including Mossadeq and Nasser and installed and supported tyrants including the
Shah and Saddam Hussein, whose killing and torture of communists and others were not only
directed by the CIA but also generated power vacuums then exploited by Islamists. And indeed, to
interpret, à la Bill Maher, Saudi Wahhabism, the Mujahideen, Bin Laden or the Taliban as “natural”
and organic manifestations of a monolithic “Islamic civilization” ignores the West’s central role in,
when it suits its interests, directly supporting reactionary Political Islam.
Second, the “Clash of Civilizations” fallacy ignores the heterogeneity and antagonisms within so-
called “Islamic civilization,” a construct frequently divorced from the presence of actual Muslims
from Indonesia to Los Angeles. The apparent battle-lines between a representation of “Islamic
World backwardness” and Western liberalism have hardened around the debate concerning Charlie
Hebdo’s proclaimed right to free speech in its continued mockery of Islam. The issue couldn’t be
clearer to the heralds of liberal idealism, as the Islamists are guilty of having inadequate reverence
for the core Western value of free speech (although liberals tend to forget that freedom of speech
concerns freedom from governmental, versus private, interference). Indeed, I even saw a
commercial the other day for the TV program “Madam Secretary” in which the Secretary of State
tells us that “human rights” is the US’s core value, so it must be true.
But while critics and scholars, including Noam Chomsky, cogently demonstrate that “human rights”
is indeed not a core value of the US, which selectively flouts such rights as it sees fit, it is also
possible to concede that human rights, including freedom of speech, in fact is a core US value and
then inquire what it really consists of. For a right’s significance ultimately lies in the power
conferred upon those who grant it. Can you imagine what confusion you would have caused by
telling people a thousand years ago that they can now say what they want free from government
interference? It’s only the total normalization of the modern state that prevents us from recognizing
its granting of rights for what it is: a presumption of terrible power. The right conferred is
always defined by the “exceptional” circumstances legitimizing its withdrawal, and the only
difference here between the West and Political Islam is in how those circumstances are determined.
The US, of course, downed foreign planes and scoured the world for Edward Snowden, a leaker of
state secrets and thereby accused of treason, an executable offense. Notably, the intensity of the hunt
for Snowden existed irrespective of the actual damage the leaks did to the somewhat nebulous, if
not religious, notion of “state security.” The mistake, as many have noted and as Slavoj Žižek and
others continue to make, is to view so-called Islam (or properly Political Islam) as a religion (whose
constructed and modern tradition and authority Political Islam invokes) rather than as the political
movement it in fact represents. That is, the incommensurate debate between the West and Political
Islam represents not a language of politics misunderstanding a language of religion but a language
of politics misidentifying a competing language of politics. And these are politics shaped less by
timeless and placeless metaphysics than by the wreckage of Gaza, Fallujah, and Kabul viewed through
the eyes of an already stigmatized minority further alienated by an enduring economic-cum-political
crisis. Indeed, these languages are not fundamentally different insofar as they both reflect the needs
of power in the modern world, regardless of whether this power is devoted to preserving the
legitimacy of the nation-state’s monopoly of violence or establishing the legitimacy of a religion-
invoking reactionary movement attempting to monopolize power itself.
I’m not advancing a liberal PC argument suggesting that we ignore or relativize the reactionary
violence of Islamist movements. But criticizing such movements without identifying the nationalist
Western doubles that they are pitted against mystifies not only both phenomena but also the world
system that shapes their motives and conduct. Indeed, defenders of Western free speech and human
rights (what does it mean to march with Hollande and Netanyahu?) do not merely explain away the
West’s internal contradictions — from right-wing terrorism to the Espionage Act to prohibitions
against both “crossing the line” in US academia and Holocaust denial in Europe. They also imagine
that their “objective truth claims” exist apart from the antagonisms of their relations with others and
are thereby just as incidental to power as was the 19th-century colonists’ obsessive cataloging of
the “human rights” abuses of those they sought to dominate. They are not, as the (bourgeois) ideal
of free speech is inseparable from the West’s ambitions for and language of power, whether in
legitimizing the US nation-state itself or its battles against less evolved Others who “irrationally”
and “primitively” reject Western ideals.
And, to be fair, why would anyone want to join mainstream Western society, which is toxic even
during the “good times”? Finding themselves on the narrow edge of a collapsing economic order
and political center, the Islamists murderously lash out, providing wrong answers to wrong
questions that ultimately mirror, rather than fundamentally challenge, the confusion of the West.
Joshua Sperber has written on libertarianism, labor, and the left and can be reached at
jsperber4@gmail.com
Joshua Sperber April 2, 2013

The Internet, Capitalism, and the State

Robert McChesney’s Digital Disconnect (New Press, 2013) is an informed and engaging account of
the internet’s history and likely future within the context of corporate-dominated U.S. society. Yet
while the book is a useful catalog of the disturbing and sometimes bizarre attributes of today’s
internet, its focus on the internet’s relationship to commercialism and advertising – as opposed to
labor – as well as its pluralist conception of a “corrupted” state hijacked by corporations precludes a
more thorough and critical analysis.
Commercialism on the internet, as in other arenas, has undoubtedly become more intense and
intrusive. McChesney traces this evolution by looking at the internet from its days as the military-created
ARPANET and, later, the National Science Foundation Network to the early 1990s, when a strong anti-commercial
online culture defended a free and open public sphere, to its more recent exponential growth and
privatization. To be sure, McChesney shows the eventual oligopolistic corporate dominance of the
internet was hardly predetermined (Google currently governs 70 percent of searches, Amazon sells
70-80 percent of books online, and the top 50 out of 773,000 websites, according to Matthew
Hindman, account for 41 percent of all internet traffic, with the top seven dominating). Indeed,
McChesney recounts how the traditional media monopolies were horrified by the seemingly
intractable obstacles to profit posed by the early internet: its unique elimination of barriers to entry
(anyone could start a website); the difficulty in forcing users to pay for ubiquitous online content;
the apparent impossibility of enforcing copyrights due to the ease of copying and distributing
content; and the difficulty in ensuring that users would watch advertisements when they had infinite
alternatives.
In short, the internet, for a moment at least, eliminated scarcity, which McChesney notes is a
precondition for profit. Faced with this apparently existential threat, and facilitated by Bill Clinton’s
1996 Telecommunications Act, which enabled media cross-ownership and thereby paved the way for
the reemergence of the old monopolies in a new sphere, media giants like Disney, GE, Time Warner,
and Viacom went on a dot-com buying spree. In a coordinated effort to generate scarcity, the major
media owners have since sought to establish “walled gardens” like Facebook, in which entry costs
(e.g. fees, or personal data in this case) are effectively extorted via the isolation and inconvenience
(some jobs require Facebook membership) of exclusion. Seeking “‘enhanced surplus extraction
effect’ – that is, the increased ability to fleece those walled within… the giants are vying to be
digital company stores in a national or global company town.”
Media conglomerates (and the state) have additionally manufactured scarcity by radically extending
copyright coverage. McChesney notes that, libertarian mythologies aside, the market for non-
exclusionary or nonrivalrous goods could not function without government intervention
(notwithstanding Napster-founder Sean Parker’s memorable observation that the music industry had
become a water-seller in a downpour, advising record producers to sell “umbrellas” instead). While
the original aim of copyright protection was to encourage production through ensuring incentives,
present-day media corporations, McChesney continues, benefit from what are in effect
“government monopoly protection licenses” in perpetuity, halting production, competition, and
creativity while generating artificially high prices for consumers. Nothing since 1920 has been
added to the public domain, as media companies, rather than the artists they claim to protect, are
guaranteed “rent” via copyright-cum-monopoly protections decades beyond the life of the artist.
Advertising on the internet also initially presented an obstacle to both websites in need of funding
and advertisers seeking ways to sell to users. Whereas three television networks were originally
able to exert relative leverage on advertisers with few other options, the internet’s profusion of
websites has decisively shifted the advantage to the advertisers, forcing a surfeit of revenue-hungry
sites to compete with one another over relatively scarce funding. Within this highly competitive
context, websites are working to attract profitable advertising by using cookies to monitor visitors’
site visits and activities, collecting user data that sites sell to advertisers who then target users with
highly personalized – and more effective – ads.
Through “targeted advertising,” “persuasion profiling,” “sentiment analysis,” and “commercializing
friendship” (a specialty of Facebook, which uses users’ “likes” to sell products to one’s “friends”),
online advertising has radically expanded the intensity and intimacy with which media consumers
are commodified. As Bruce Schneier notes, “Google has great customer service. Problem is, you’re
not the customer.” Advertisers are, and the massive market for users’ personal data is only matched
by the advanced and insidious technology that extracts it. Traditional standards of privacy have
been demolished as Skype contains technology to “silently copy” our conversations while
smartphones track us and communicate our location and personal details to third parties whether we
know it or not. Needless to say, the government – otherwise neutral or “corrupted” in McChesney’s
account – has collected incalculable amounts of personal data, stored in its colossal Utah database for
indeterminate future use. And whereas the Stasi was famously overwhelmed by its abundance of
collected data, this government is developing far more sophisticated processing technologies,
making it an understatement to note that the police state is here and it is locked in.
Notwithstanding the book’s clearly written tour of numerous issues characterizing today’s internet,
from the effective demise of net neutrality via Smartphones to the multiplication of cloud
computing, McChesney’s account is diminished by a dubious conception of the state that leads to an
inadequate analysis of capitalism and, thereby, a flawed prescription. McChesney sees the state that
developed the internet in neutral terms – as opposed to the rapacious corporations who seek to take
its helm – without noting that the internet was designed to distribute and maintain data in the event
of nuclear war. That is, at its earliest stage, the internet represented the state’s irrepressible drive to
sustain a system of power that among other things produced the conditions for a global holocaust.
The state soon after presented – as it did with 19th-century land grants to the railroads – the internet
to the market, whose privatization would generate the tax revenue that the state could never create
on its own. It is unclear why McChesney believes that the state needed to be “corrupted” –
Congress “is under the thumb of big money” – in order to make this self-serving decision. The
internet has never existed apart from state exigencies; and though these exigencies can be varied and fluid, it
takes a liberal leap of faith to assume that the well-being of its subjects is one of them.

By contrast, Alexander Galloway’s Protocol, focusing on the military origins of the internet, shows that, as
Eugene Thacker writes in the introduction, “control has existed from the beginning.” Rejecting the
ubiquitous metaphor of the internet as a “network,” Galloway shows how the internet’s governing protocols
(Transmission Control Protocol and Internet Protocol) distribute information horizontally among different
computers while, at the same time, the internet’s Domain Name System governs internet addresses through
vertically regulating this horizontal information. By eschewing the prevailing “network” metaphor in favor
of a more literal and concrete description of the internet’s vertical-horizontal system of control, Galloway is
able to describe a standardizing internet code that, among other things, problematizes popular notions of
internet “connectivity,” “collectivity,” and “participation.”
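
Galloway’s vertical-horizontal architecture can be made concrete with a minimal sketch in Python (an illustration of the general mechanism, not an example drawn from Protocol itself; “example.com” is simply a placeholder host): even the most “horizontal” exchange between two machines begins with a query to DNS’s vertical hierarchy of root, top-level, and authoritative name servers.

    import socket

    # Before any "horizontal" peer-to-peer TCP connection can be opened, the
    # hostname must first be resolved through DNS's "vertical" hierarchy of
    # root, top-level, and authoritative servers; getaddrinfo performs that
    # resolution via the system's configured resolver.
    info = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
    ip = info[0][4][0]
    print("DNS resolved example.com to", ip)

    # Only once the centralized naming system has answered can the nominally
    # decentralized connection proceed.
    with socket.create_connection((ip, 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(conn.recv(64))

Whoever administers the resolution step decides, in effect, who can be reached at all, which is Galloway’s point about vertical regulation in miniature.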

Specifically, Galloway shows how the benefits of connectivity, collectivity, and participation are inseparable
from their opposites; new possibilities for action have simultaneously produced new capacities for control.
For instance, Galloway recounts how the communications company Verio permanently disconnected the
activist troupe The Yes Men from their server and thereby their website following the activists’ anti-corporate
prank targeting Dow Chemical regarding the Bhopal disaster. The benefits of connectivity are inseparable
from a new dependency and vulnerability created by state and corporate power’s capacity to disconnect
whomever it chooses. This capacity to disconnect users from the internet, at least on the part of the state, was
intrinsic to the medium long before corporations came onto the scene.

And while McChesney cogently discusses capital’s zero-sum game with labor, this understanding does not
adequately inform his proposal of a government voucher system as a means to subsidize journalism.
Defining journalism – which is undoubtedly experiencing crisis – as a public good, McChesney proposes
that taxpayers be allowed to allocate 200 dollars per year to the online non-profit journalism site of their
choice, comparing his plan to government funding for public schools while invoking the legacy of Jefferson
and Madison’s support for newspaper subsidies to make his case.

McChesney paraphrases Paul Krugman’s discussion of Michal Kalecki to argue that government job
programs would be opposed by business merely because “if the public realizes that the government has the
resources to establish full employment, the realization would undermine the notion that the central duty of
government is to create a climate in which business has confidence in the system and therefore eventually
invests to create jobs.” Here McChesney psychologizes the economy in suggesting that it is people’s
attitudes that prevent government job creation, not the fact that government jobs decrease unemployment and
thereby increase the cost of labor. In proposing effective government subsidization of labor, McChesney
ignores the manner in which we arrived at our current moment. Capital, faced with reduced avenues for
profit, decided that US labor is too costly, and it will only invest in it again when that cost is reduced or
“corrected.” If the government slows this correction through adding public jobs, the private sector will likely
continue to sit on its capital, depriving the state of its tax revenue, et cetera. McChesney’s assertion that
“inequality” has “corrupted” the political system obscures the fact that it is the system that produces
inequality, as well as “special interests,” in the first place; so why seek to return to an earlier state, when we
know where accumulation eventually goes? And if half of the government wants to eliminate PBS and “Big
Bird,” what likelihood is there that this same government would support a massive job subsidy plan? And if
through some miracle this proposal were adopted, what would stop the perennial and relentless government
backlash from rolling it back – especially when the next recession comes around?

The FCC indeed rejected McChesney’s plan for being too “radical,” but the point is that if we are trying to
generate systemic change it is not nearly radical enough, as it is based on some of the same premises that are
part and parcel of the media propaganda McChesney so skillfully criticizes. Indeed, McChesney believes that
markets have a place in the “good society” – notwithstanding their inexorable drive toward expansion – and
asserts that a lack of economic growth threatens democracy. But is it not the uncritical obedience to
economic growth that has us subordinating life to the market in the first place? And is not the very fact that
journalism is collapsing due to its unprofitability a valid reason for rejecting a system that insists on the
primacy of profitability? Why fight a reactive and rearguard battle to establish what would be a precarious
niche instead of rejecting capitalism – the system of not merely corporations but private property and profit –
itself?

Whereas McChesney compares his voucher proposal to the “public good” that is public education, this again
reflects an undeservedly optimistic view of government. Or, insofar as public education is a “public good” it
is one that meets the needs of not “society” but society under the state, as it houses youth, indoctrinates them
in an individualist and nationalist ethos, rewards punctuality and obedience, and grades and divides them in
accordance with society’s brutally unequal division of labor. Education, as it currently exists, is an incubator
of inequality and should hardly be invoked as a model for resource distribution. Moreover, as evidenced in
Chicago, New York, and the rest of the country, public education is on the chopping block, as charter schools
believe that they can train students just as well as the government and turn a buck while they’re at it. Even
the young U.S. government’s decision to subsidize newspaper postage, which McChesney extols, was
inseparable from the nascent state’s desire to cultivate a national consciousness within a diffused federalist
system. Again, the “public good” here is inseparable from the good of the state.

McChesney’s focus on internet commercialism and personal data collection is important, but this
consumerist orientation leads him to overlook one of the biggest stories of the internet under capitalism: the
proliferation of unpaid social production. The internet has masterfully channeled users’ “free time” toward
“hobbies” that produce websites’ content. Facebook and dating sites are of course mainly composed of users’
photos, personal descriptions, commentary, and musings. Similarly, the owners of Yelp have made a fortune
via the unpaid restaurant reviews of its millions of contributors, who simultaneously serve to discipline
restaurant workers (for free!) via consumerist ideology. Online newspapers and sports sites have become far
more interesting by soliciting readers’ comments, whose frequent thoughtfulness, wit, and learnedness often
provide more compelling reading than the original content. So not only are we being advertised to and
having our personal data taken while we’re on the internet, we’re also working for free to make sure that
there’s an internet at all. Through such personally intensive and infinite productivity, the internet has opened
whole new spatial and temporal arenas for profit, not only through the radically increased shadow work of
maintaining our online networks and bringing work home with us – which McChesney briefly addresses
– but also through redefining how we conceptualize our relationship to social reality itself.

Notably, the further transformation of ourselves into permanent commodified profit-producers by the internet
is not connected to telecommunication “special interests” corrupting the government. It is a mere symptom
of a capitalist system that benefits, and is in turn entirely supported by, the state.

Joshua Sperber lives in Brooklyn and can be reached at jsperber4@yahoo.com


What is a President? The CEO of Capitalism
by Joshua Sperber

Ongoing left debates regarding Bernie Sanders’ presidential campaign are frequently characterized
by a shared premise. Whether arguing, for instance, that Sanders is dismissive of race or countering
that his emphasis on economics necessarily entails anti-racism, both sides tend to assume that
Sanders would be able to meaningfully advance his politics if he were to become president. That is,
both sides generally presuppose the liberal notion of pluralism, which conceives of a neutral and
malleable state that can be shaped and reshaped by those who govern it.
The history of the presidency illustrates a very different story, one in which the political party and
personal inclinations of presidents (let alone candidates) are generally irrelevant to how they wield
power. Presidents – whether Constitutional Law professor/community organizers or religious
zealots with MBAs – historically have advanced the objective interests of the nation-state,
prioritizing its international power and the profitability of its economy above all other
considerations. Notwithstanding cogent left criticisms of Sanders, the key question is not whether
Sanders is a phony but what, if elected president, he will in fact be sworn to do. In other words,
what are presidents?
The Constitution was of course designed to replace the Articles of Confederation, whose
preservation of revolutionary anti-monarchism (“The Spirit of 1776”) resulted in what the framers
came to fear as a dangerously weak state. The decentralized Articles did not have an executive and
instead placed power in the legislature (the “People’s Branch”) and the states. Not only did such
decentralization preclude national coherence but it also prevented the national government from
raising taxes and thereby armies, leaving it, among other things, unequipped to suppress mass
debtor insurrections.
Encouraging state legislatures to eliminate debts through inflating state currencies and issuing “stay
laws,” debtor insurrections horrified leaders who argued that revolutionary liberty had gone “too
far.” Indeed, debtors’ repudiation of property rights (sometimes destroying debt records directly)
reflected the growing power of Hamilton and Madison’s dreaded (if not oxymoronic) “majority
faction,” which according to Madison threatened not merely the small creditor class but the
“permanent and aggregate interests of the community” as well.
Significantly, the Framers discussed the threat of foreign invasion and the threat of domestic
insurrection in the same vein. But while the former would clearly challenge the national character of
the state, the latter – conducted by citizens after all – would not. That is, Madison and Hamilton’s
nation-state is not a clean slate of pluralistically competing factions but has instead always been
intrinsically defined by the general interests and demands – if not the personal economic interests of
the founders – of the propertied class. Aggregating concrete competing interests into an imagined
national community, the framers established antagonistic property relations as the cornerstone of the
nation-state and, more specifically, guaranteed that the propertied few would be protected from the
property-less many. Accordingly, the Framers designed a government that “multiplied” and
“diffused” factions while “filtering” the “violent passions” of the masses through “insulated” and
“responsible” “elites” in order to obstruct the majority’s inevitable “rage for paper money, for
abolition of debts, for an equal division of property, or for any other improper or wicked project….”
Steward of the State
The Constitution not only centralized power but also eliminated the legislature’s dominance by
establishing a bicameral Congress and a “separation of powers” that enabled the executive to
become supreme. Article II granted the president a powerful veto, and its provision for unity and
relative vagueness provided the executive with the tools for the “energy,” “decision, activity,
secrecy, and dispatch” deemed necessary for “strong government.” Aghast at the power of the
Constitution in general and the new executive in particular, Patrick Henry warned that the “tyranny
of Philadelphia” would come to resemble the tyranny of King George.
Predictably, George Washington exploited Article II’s vagueness, invoking the “take care” clause to
crush the Whiskey Rebellion and capitalizing on the omission of Article I’s qualifier “herein granted
shall be vested in” to issue the Neutrality Proclamation. But it was not until Thomas Jefferson’s
presidency that the objective character of the presidency became manifestly clear. It is indeed an
emblematic irony of U.S. history that while the Jeffersonians won most of the early presidential
elections, continental and international imperial pressure to expand led them to frequently
implement Hamiltonian policies once in office. While Washington and Adams (one also thinks of
the Alien and Sedition Acts) expressed Hamiltonian political orientations, Jefferson personified a
diametrically opposed U.S. political tradition. Whereas Hamilton was a loose constructionist who
advocated for a large national government and a strong executive that would pursue manufacturing
following the British model of development, Jefferson was a strict constructionist who advocated
for a small national government and weak executive that would pursue agrarianism following the
French model of development. Yet, in spite of his lifelong principles, Jefferson in significant
respects presided like a Hamiltonian, violating his strict constructionism via the Louisiana Purchase
and the Fourth Amendment via his aggressive, albeit unsuccessful, Embargo Act.
Andrew Jackson continued this pattern, expanding the power of the executive as well as the national
government notwithstanding his previous advocacy of small government and states’ rights. Beyond
his unprecedentedly aggressive use of the veto (Jackson was the first president to use the veto on
policies he merely disliked instead of those deemed unconstitutional), Jackson threatened to use
military force against South Carolina if it did not yield to the national government during the
Nullification Crisis. And it is notable that when Jackson did support states’ rights after Georgia
violated the Supreme Court’s ruling in Worcester v. Georgia, it was in the name of expelling the
Southeast’s Native Americans in order to clear the land for profitable exploitation through African
American slave labor. That is, Jackson supported the states as long as they were pursuing nation-
building rather than their own parochial interests.
And though the growth of the executive was neither even nor always linear, its long-term evolution
has been characterized more than anything else by massive and bipartisan aggrandizement. Even
periodic setbacks, such as the Congressional backlash against Nixon’s “imperial presidency,”
proved to be ephemeral. Reagan merely danced around the War Powers Resolution in his illegal
funding of the Contras, while Obama circumvented the WPR by declaring that his war on Libya
wasn’t in fact a war. By the time of the George W. Bush Administration, the executive – usurping
the Congress via signing statements and the courts via military tribunals, among countless other
encroachments – had unprecedentedly expanded its power. Contrary to liberal mythology, Bush was
hardly an anomaly, as his response to 9/11 built upon Clinton’s attack on civil liberties following the
Oklahoma City bombing, just as Obama’s “kill lists,” surveillance, and drone warfare have
expanded Bush’s apparently permanent state of exception.
Manager of Capitalism
It is important to note that this expansion of executive power did not occur in a vacuum. On the
contrary, executive aggrandizement has more often than not correlated to emergencies in general
and capitalist crises in particular. As “steward” of the system, to use Theodore Roosevelt’s
appellation, the modern president is devoted not only to expanding the power of the state vis-à-vis
international competitors but also to maintaining the conditions for the capitalist economy by means of
which it, in large measure, competes. Jackson aimed to open new arenas for capitalist accumulation
not only through the primitive accumulation of Indian removal and chattel slavery but also through
eliminating corrupt, monopolistic, and ossified economic institutions such as the Charles River
Bridge Company and Biddle’s Bank.
Jackson’s incipient capitalism had become a mature and complex system producing enormous
social and political problems by the turn of the century. In turn, Theodore Roosevelt radically
expanded presidential power by inverting Jefferson’s interpretation of the Constitution: while
Jefferson claimed that the president can only do what the Constitution explicitly permitted,
Roosevelt claimed that the president could do anything that the Constitution did not explicitly
forbid. As such, Roosevelt intervened in the Coal Strike of 1902 and threatened to seize and run the
mines after failing to initiate arbitration meetings, while the Hepburn Act saw the U.S. issuing price
controls for the first time.
Although progressives applauded the executive’s reinvention as a “trust-busting” “referee” after
decades of pro-business policies, the presidency had in fact remained consistent in its relationship to
capitalism. When nascent capitalism required primitive accumulation and (selective) laissez-faire,
Jackson gave the system what it needed; when rampaging capitalism threatened to destroy its own
social and economic bases during the Gilded Age, Theodore Roosevelt did the same.
Before (if at all) considering the interests of the people that he nominally represents, the president
must ensure that they constitute a ready and exploitable workforce in the case of economic
expansion or that they do not threaten the state’s social and political stability in the case of
depression. Indeed, the president (though typically not more myopic business leaders) has
frequently recognized the danger of killing the golden goose during capitalist crises, a point made
explicitly by that giant of the liberal imagination, FDR. As recounted by Neil Smith in The
Endgame of Globalization, FDR explained his rationale for the New Deal to business leaders: “‘I
was convinced we’d have a revolution’ in the US ‘and I decided to be its leader and prevent it. I’m a
rich man too,’ he continued, ‘and have run with your kind of people. I decided a half loaf was better
than none – a half for me and a half for you and no revolution.'” Such cynical calculations allow us
to reconcile the “good FDR” of the New Deal with the “bad FDR” who interned Japanese-
Americans and firebombed Tokyo, Dresden, and other urban centers.
Notwithstanding the limitations of the New Deal (which among other things emphasized selective
social redistribution at the expense of preserving mass exploitation), the Keynesian rescue package
had run out of gas by 1973. Amid renewed global competition and the increase in oil prices, profit
contracted, but for the first time since the postwar “Golden Age of Capitalism” had begun, spending no
longer mitigated the effects of the glut. According to Tony Judt, Labour Prime Minister James Callaghan had
“glumly explained to his colleagues, ‘We used to think that you could just spend your way out of a
recession…I tell you, in all candour, that that option no longer exists.’”
It was within this context that laissez-faire, now refashioned as neoliberalism, rose from the dead, as it
provided the apparent solutions (e.g., privatization, tax cuts, and deregulation) that Keynesianism could not.
Put differently, capitalism generated a second wind not only by moving investment from industry to finance
but also by cannibalizing the apparatus that had helped rescue it from its previous crisis. The growing chasm
separating postwar liberal politics from the post-1970s new economics gave rise to “new” liberals including
Clinton, Blair, Schroeder, Obama, and Hollande, who, operating within an increasingly limited range of
action, attempted to manage liberalism’s strategic retreat. In so doing, liberal politicians have frequently
compensated for their exhausted economic programs by embracing cultural issues, a strategy that has been
termed, “Let them eat marriage.” While liberals accurately note that the monstrous right would be “even
worse,” their warning is nevertheless dishonest insofar as it ignores that liberals are wedded to the political-
economic system whose noxious effects produce such reactionaries in the first place.
Lest we conclude that this is a case of the domestic political cart leading the economic horse, it is crucial to
reiterate that the collapse of economic liberalism has been a global phenomenon, whether expressed through
Bill Clinton’s declaration that “the era of big government is over,” François Mitterrand’s assertion that “The
French are starting to understand that it is business that creates wealth, determines our standard of living and
establishes our place in the global rankings,” or anti-austerity Syriza’s ongoing implementation of austerity.
That is, assuming that it would be desirable, the New Deal is unlikely to return (although a new world war or
some other catastrophe can indeed press the “restart” button on capitalist development, assuming there’s
anyone left to exploit). Given the enormous global economic and structural constraints delimiting the
presidency, it is possible to argue that Barack Obama, demonstrating prodigious “activity,” has done a
remarkable job in advancing his domestic and international agendas. Rather than being “weak” or a “sell-
out,” Obama very well might be, as liberals stress, the best we can hope for – a possibility that more than
anything else radically indicts the system itself.
Obama’s political victories on Iran, Cuba, healthcare, and gay marriage should not be compared to his
failures. They should instead be compared to his other, far more reactionary, achievements including
Afghanistan, Libya, Yemen, Pakistan, the Trans-Pacific Partnership trade treaty, mass surveillance, and the
prosecution of whistleblowers, policies regularly conducted with Hamiltonian “energy,” “decision,”
“secrecy,” and “dispatch.” These latter policies neither contradict nor are inconsistent with Obama’s liberal
successes. Their common denominator is the presidential articulation of the primacy of the nation-state – and
thereby capital accumulation – above all other concerns. The voters’ concerns are considered only when they
are serviceable to these paramount interests.
Given the enormous powerlessness of the voter, it is unsurprising that the injunction “hope” so often
accompanies political campaigns. Bill Clinton was “The Man from Hope,” Obama campaigned on “Hope,”
and, overseas, Syriza promised that “Hope is Coming.” Selecting who will rule without any ability to control
the content of that rule, the voter casts the ballot as an act of faith. Investing political and emotional energy
into nothing more than the good name of the system (election nights are always exercises in flag-waving
celebration of a system that lets us choose our rulers), voters incorrectly argue that voting is better than doing
nothing and condemn those who abstain. Yet, the disillusioned are not to blame for forces that they have no
control over. And if the disillusioned do become interested in challenging the abuses of everyday life, it will
not be through voting but through criticizing the system that voting acclaims. The opposite of hope is not
despair. It is power.
Joshua Sperber January 2, 2014

Is BDS the Left’s "Save Darfur"?


In the mid-2000s the “Save Darfur Coalition,” backed by Christian and Jewish groups as well as the
US House of Representatives, sought military intervention in western Sudan in order to “save”
African Muslims from what the campaign characterized as genocide. Mahmood Mamdani criticized
the movement by comparing the conflict in Darfur to the US war in Iraq, asking why
Americans would focus on the former and not the latter. Mamdani also questioned why US activists
would encourage intervention in Darfur while ignoring the far larger humanitarian crisis in the
Congo. “Save Darfur” supporters such as Nicholas Kristof responded that the campaign was urging
action where action was possible. “Save Darfur,” Mamdani concluded, reflected the politics of the
War on Terror – and white saviors’ need to side with so-called “good (African) Muslims” against
“bad (Arab) Muslims” – more than any intelligible concern about mass killings per se.
While the violence in Israel and the occupied territories is far milder and more unilateral than what
occurred in Darfur, among many other differences, there are several parallels between the organized
responses to Darfur and Israel’s occupation. BDS, responding to some (but not other) Palestinian
calls for boycott, divestment, and sanctions of Israel, is of course not calling for a military
intervention in Israel, though in attempting to apply sanctions to a sovereign country it is using the
“soft weapon” of war that states use to bend other states to their will. Indeed, the Left often
criticizes sanctions (e.g. on Iraq) not only for their targeting of innocents but
also for their counterproductiveness. While sanctions weaken societies, they also predictably often
cause governments to consolidate their power rather than forfeit their sovereignty by capitulating to
international pressure. Moreover, for sanctions to become something other than a mere moral
stance, they require states to adopt and impose them, if only via the UN and international law, which
are contingent on the Security Council. But does BDS really imagine that the US and UK are
motivated by moral concerns and not their own power?
BDS is also similar to Save Darfur in that it is characterized by a fundamentally opportunistic
stance. It focuses on Israeli discrimination against Arabs in Israel and Israel’s ongoing brutalities in
the occupied territories, not countless other areas of state oppression let alone the inherent violence
of the global system itself. BDS argues that this game of whack-a-mole is a practical necessity and,
setting up a false dichotomy, suggests that targeting Israel is no doubt better (for those with Left
politics) “than doing nothing.”
BDS supporters have responded to critics’ complaints that Israel is being “singled out” by noting
that Israel is already singled out by the US’s preferential treatment, which provides it with
billions per year and diplomatic immunity in the Security Council. But, given that Israeli behavior
is contingent on this US support, why does the campaign not focus on the source of it, that is, the
US itself? If the Left is powerless against the sponsor, why is power against the client necessarily a
good thing? Moreover, why is the Left powerless against the US in the first place?
For many BDS supporters, subscribing to a liberal pluralist conception of domestic politics, it is too
impractical to sway the US on Israel due to the outsized influence of the so-called Israel Lobby,
which ostensibly coerces the US to do what it otherwise would not (notwithstanding the Lobby’s
recent failure regarding Iran). However, for those who reject the notion that domestic lobbies drive
US foreign policy – holding that the US is not “neutral” but instead, like all states, follows a realpolitik
designed to advance its own power – such liberal pluralism is misleading (the US has also given
great sums over the years to Colombia and Turkey, countries that do not have significant lobbies in
DC; the top recipient of US support for 2011, by the way, was Afghanistan, which should also
problematize the very concept of superpower “support”).
While BDS supporters have emphasized that Israel should be singled out because of the US’s massive
support, this ignores the reasons for that support: most of it is earmarked for the purchase of
US military equipment, and its purely economic component is only a seventh of that given to
Egypt, reflecting the US’s determination not to appease the Israel Lobby but to maintain stable allies
bordering the Suez Canal. He who controls the Mediterranean controls the world, and he who
controls the Suez controls the Mediterranean. If Israel didn’t exist, it would have to be invented.
Prioritizing an ostensibly “practical” “do something” approach over understanding root causes,
BDS takes sides in identity swapping while leaving structural conditions in place. After all, many
Western Leftists agreed after the Second World War that the solution to the intractable vulnerability
of a historically persecuted people was to grant them a state of their own. Indeed, it is only when the
international state system is assumed as a natural given that the answer to being a victim is to
become a perpetrator, as if the violence inside the newly independent South Sudan were an exemplar
rather than a warning to those who would mistake nationalism for freedom.
While BDS regularly invokes the legacy of the international boycott of South Africa in
delegitimizing Apartheid, it should be noted that South African poverty has worsened since the end
of white rule, with a major difference being that since this impoverishment is “colorblind” the Left
(or anybody else) is no longer urgently concerned about it. That is, BDS is not only opportunistic
geographically but also conceptually; the causes of exploitation and violence as such are not being
condemned. Reflective of the identity politics characteristic of the Age of Obama, BDS’s efforts to
replace Israeli nationalism with Palestinian nationalism, and thereby Israeli landlords with
Palestinian landlords, are fully compatible with the exclusion and impoverishment of people in
Palestine and Israel.
A criticism of BDS does not suggest that those who are concerned about human welfare and dignity
should ignore Israel’s behavior or not work to end the Occupation. It does suggest, however, the
dangers in proffering apparent remedies that merely take sides within, and thus reinforce, the
international system that is at the root of the problem.
Joshua Sperber lives in New York and can be reached at jsperber4@yahoo.com
Vol. 35 No. 16 · 29 August 2013

Vanity and Venality


Susan Watkins is the editor of New Left Review.
• Un New Deal pour l’Europe by Michel Aglietta and Thomas Brand
Odile Jacob, 305 pp, £20.00, March 2013, ISBN 978 2 7381 2902 4
• Gekaufte Zeit: Die vertagte Krise des demokratischen Kapitalismus by Wolfgang Streeck
Suhrkamp, 271 pp, £20.00, March 2013, ISBN 978 3 518 58592 4
• The Crisis of the European Union: A Response by Jürgen Habermas, translated by Ciaran
Cronin
Polity, 120 pp, £16.99, April 2012, ISBN 978 0 7456 6242 8
• For Europe!: Manifesto for a Postnational Revolution in Europe by Daniel Cohn-Bendit and
Guy Verhofstadt
CreateSpace, 152 pp, £9.90, September 2012, ISBN 978 1 4792 6188 8
• German Europe by Ulrich Beck, translated by Rodney Livingstone
Polity, 98 pp, £16.99, March 2013, ISBN 978 0 7456 6539 9
• The Future of Europe: Towards a Two-Speed EU? by Jean-Claude Piris
Cambridge, 166 pp, £17.99, December 2011, ISBN 978 1 107 66256 8
• Au Revoir, Europe: What if Britain Left the EU? by David Charter
Biteback, 334 pp, £14.99, December 2012, ISBN 978 1 84954 121 3

All quiet on the euro front? Seen from Berlin, it looks as though the continent is now under control
at last, after the macro-financial warfare of the last three years. A new authority, the Troika, is
policing the countries that got themselves into trouble; governments are constitutionally bound to
the principles of good housekeeping. Further measures will be needed for the banks – but all in
good time. The euro has survived; order has been restored. The new status quo is already a
significant achievement.
Seen from the besieged parliaments of Athens and Madrid, from the shuttered shops and boarded-up
homes in Lisbon and Dublin, the single currency has turned into a monetary choke-lead, forcing a
swathe of economies – more than half the Eurozone’s population – into perpetual recession. The
Greek economy has shrunk by a fifth, wages have fallen by 50 per cent and two-thirds of the young
are out of work. In Spain, it is now commonplace for three generations to survive on a single salary
or a grandparent’s pension; unemployment is running at 26 per cent, wages go unpaid and the rate
for casual labour is down to €2 an hour. Italy has been in recession for the past two years, after a
decade of economic stagnation, and 42 per cent of the young are without a job. In Portugal, tens of
thousands of small family businesses, the backbone of the economy, have shut down; more than
half of those out of work are not entitled to unemployment benefits. As in Ireland, the
twentysomethings are looking for work abroad, a return to the patterns of emigration that helped
lock their countries into conservatism and underdevelopment for so long. Why has the crisis taken
such a severe form in Europe?
Part of the answer lies in the flawed construction of the European Union itself. Though Americans
have been hard hit by the great recession, the US political system has not been shaken. In contrast to
most European incumbents, Obama sailed through his re-election. Only in isolated pockets like
Detroit has elected government been replaced by technocrats. In Europe, private and public debt
levels were generally lower before the financial crisis struck. But the polity of the European Union
is a makeshift, designed in the 1950s to foster an industrial association embracing two large
countries, France and Germany, with a population of about fifty million each, and their three small
neighbours. It was then expanded, in piecemeal fashion, to incorporate nearly thirty states, two-thirds
of which adopted a shared currency at the height of the globalisation boom – a project aimed in part
at preventing a significantly larger, reunified Germany from dominating the rest.
The EU’s hybrid constitution includes, among much else, a decision-making European Council
(summit meetings of the heads of the 28 governments); an overarching secretariat, the European
Commission, with thirty-odd departments (directorates-general) and its own bureaucracy; a
Parliament that discusses Commission proposals; and a supreme court to rule on any disputes. The
blueprints for the euro that were drawn up in the 1990s added a further layer of confusion, for they
bore no intelligible relation to any of the above.

‘The euro is essentially a foreign currency for every Eurozone country,’ the French economist
Michel Aglietta and his co-author, Thomas Brand, write in Un New Deal pour l’Europe. ‘It binds
them to rigidly fixed exchange rates, regardless of their underlying economic realities, and strips
them of monetary autonomy.’ For Aglietta, a currency is essentially a social contract: behind it
stands a sovereign guarantor with the power to tax its citizens in return for the public goods and
services it provides for them. The euro had no such support; it bears ‘a promise of sovereignty’ that
has not been kept. Un New Deal pour l’Europe contrasts the scheme for the euro embodied in the
1992 Maastricht Treaty with the 1970 Werner Plan for monetary union, a Franco-German attempt to
protect the European economies from the buffeting of floating exchange rates at a time when the US
was pulling out of the Bretton Woods system.
The earlier project had envisaged the six EEC member states defining a collective fiscal policy
marked by a strong social dimension. The single currency agreed at Maastricht lacked the backing
of a taxable citizenry; its guiding principle was price stability, guaranteed by an independent
European Central Bank that would operate ‘in splendid isolation’, underwritten at arm’s length by
the member states. The idea was that the euro would give free rein to financial liberalisation across
the continent; market efficiency would see to it that savings were reinvested in an optimal manner,
ensuring a general convergence of the Eurozone economies. Aglietta and Brand attribute the
difference between the two plans to the intellectual sea-change of the intervening decades – the
triumph of monetarism and rational-choice theory. Europe, they write, is now a prisoner of its
leaders’ decision to embed the flawed concept of an inflation-targeting central bank in ‘institutional
marble’.
Just as important was the international context. A single currency might have worked for the core
group of closely aligned economies – France, Germany, the Benelux countries – envisaged in the
Werner Plan. Instead, the architecture of the Eurozone, concocted in response to the fall of the
Berlin Wall, became fatally entangled with the project of EU enlargement. As it took shape from the
mid-1990s, the single currency became available to any country that could claim to meet the
minimal convergence criteria, in a spirit of geopolitical expansionism strongly backed by
Washington and London. The result, as the vanity of the leading continental powers combined with
the venality of the smaller ones, was a heterogeneous group of 17 economies, with divergent
dynamics, tied to a uniform exchange rate and enjoying a shared credit rating. Rather than helping
them converge, the common currency exacerbated the underlying differences between them.
Domestic manufacturing in the Mediterranean countries was squeezed by Chinese imports at the
lower end – textiles, ceramics, leather goods – while German companies gained an increasing
market share at the upper end: cars, chemicals, machinery. At the same time, the easy credit of the
globalisation bubble created the illusion that Europe was equalising upwards, as southern
consumption was fuelled by northern banks’ cross-border lending.
When the crisis came in September 2008, the EU governments loyally toed the G20 line, pledging
public funds to save the banks and shore up the economies. The 2009 Vienna Initiative underwrote
the exposure of big Austrian and German banks in Central and Eastern Europe with government and
IMF funds. By the start of 2010 the bank rescues, combined with recessions exacerbated by burst
property bubbles, had widened deficits and piled up government debts. The ratings agencies began
targeting the most indebted states – Greece, then Ireland, Portugal, Italy and Spain. Speculation on
an exit from the Eurozone or a collapse of the currency helped drive the rates of government
borrowing to unaffordable levels.
What followed was a thirty-month tug of war between the financial markets and the Obama
administration, on the one hand, and Berlin and the ECB on the other, in which Germany
grudgingly agreed to guarantee the debts of other member states on condition it was allowed to
dictate the outlines of their national budgets. ‘No guarantees without control,’ as Merkel put it. In
effect, Germany was to stand in for Aglietta’s absent sovereign power. On every occasion that panic
came to a head – in May and November 2010 with the Greek and Irish bailouts; summer 2011 with
tremors over Spanish and Italian bonds; November and December 2011 with the ousting of
Papandreou and Berlusconi, followed by Cameron’s veto of the Fiscal Compact treaty; and summer
2012 with the Greek elections and the spectre of a Spanish banking collapse – Berlin acquiesced to
the demands of the US Treasury.
Merkel’s one attempt to forge an independent path, the October 2010 Deauville agreement to force
Greece’s creditors to write off some of their lending, was swiftly quashed. Washington was willing
to go along with German austerity policies – Obama himself phoned Zapatero in May 2010 to
harangue him about the need for Spanish spending cuts – as long as the chains of debt leading back
to Wall Street were guaranteed. In September 2011 the US treasury secretary flew to Poland,
gatecrashing a meeting of EU finance ministers to press his agenda. The list included emergency
bailout loans, ECB bond purchases, bank funding, quantitative easing, eurobonds and Eurozone
equivalents to US bank resolution and deposit insurance mechanisms. The US Treasury line was
backed by the German SPD and Greens, the financial press and the world media.
In May 2010 the European Council agreed to set up a temporary €440 billion European Financial
Stability Facility (EFSF), later supplemented by a permanent €500 billion European Stability
Mechanism (ESM). Underwritten by the Eurozone powers (with the help of Goldman Sachs, BNP
Paribas, Société Générale and RBS), these entities would borrow money on the markets to provide
loans to any country requesting help in meeting the billion-euro interest payments on its national
debt, on condition that the country agreed to an externally administered programme of fiscal
austerity and structural reforms. For the financial markets, however, what mattered was not Berlin-
inspired economic policies but an open-ended guarantee by the Central Bank. This involved a
change of the guard at the ECB, which had to abandon its founding no-bailout mandate. At Merkel’s
insistence, ideological cover was provided by the Fiscal Compact treaty, committing member states
to inscribe balanced-budget rules – a structural deficit limit of 0.5 per cent of GDP – in their constitutions. Once this was agreed in December 2011,
the ECB dispensed a trillion euros to Eurozone banks in super-cheap long-term credit. Even this
was not enough; it was only in September 2012, when Mario Draghi, the president of the ECB,
announced that the bank was ready to buy unlimited quantities of member states’ bonds – again
with strict conditions attached – that the markets’ bets against the euro were taken off the table and
the furore over Europe’s monetary policy could subside.
*
The EU that has emerged from this epic battle is significantly more autocratic, German-dominated
and right-wing, while lacking any compensatory charm. The catastrophists, it’s true, have been
proved mistaken. Far from disintegrating, the Eurozone has continued to expand: Latvia is adopting
the euro in 2014, as Estonia did in 2011. Croatia joined the EU – or rather its ‘periphery’, as two
sardonic Croats put it in the Guardian – this summer. But the EU hasn’t simply muddled through
either. Instead, driven by the financial markets, with the US Treasury and the German Chancellery
tugging at the tiller, it has lurched into a new phase of unification, characterised by the same skewed
mix of centripetal and centrifugal logic that has shaped its course since Maastricht: asymmetrical
integration, combined with inegalitarian enlargement.

At supranational level, the ‘controls’ demanded by Berlin have produced an ad hoc economic
directorate with no legitimation beyond the emergency itself. The Troika – it has no official name –
was scrambled together in April 2010 to take over direction of the Greek economy, as the condition
for its first EFSF loan. Composed of functionaries from the European Commission, the ECB and
the IMF, it now governs Portugal, Ireland, Cyprus and Greece, and has been permanently inscribed
in the European Stability Mechanism. The Troika issues Memoranda of Understanding on the same
model as the IMF, which dictate every detail of the member states’ legislative programmes: ‘The
government will ensure that the legislation’ – for cuts in health and education, public sector
redundancies, reductions in the state pension – ‘is presented to Parliament in Quarter 3 and agreed
by Parliament in Quarter 4’; ‘the government will present a Privatisation Plan to Parliament and
ensure it is speedily passed’; even, ‘the government will consult ex ante on the adoption of policies
not included in this Memorandum.’
The Troika’s record of economic management has been abysmal. Greek GDP was forecast to fall by
5 per cent from 2009 to 2012; it dropped by 17 per cent and is still falling. Unemployment was
supposed to peak at 15 per cent in 2012; it passed 25 per cent and is still rising. A V-shaped
recovery was forecast for 2012, with Greek debt falling to sustainable levels; instead, the debt
burden is larger than ever and the programme has been renewed. No one has been held to account
for this debacle. Further rounds of cuts are scheduled for 2013, without any economic rationale.
Another 15,000 public sector workers have to be sacked to meet the requirements of this summer’s
quarterly review; the entire staff of the Greek broadcasting corporation has been dismissed. The
number of doctors by headcount must fall by another 10 per cent this year, as in 2012; hospital costs
are to be cut by another 5 per cent, after 8 per cent in 2012, and the Troika wants to see a substantial
further reduction in hospital beds.
The most aggressive component of the Troika is the European Commission’s Directorate for
Economic and Financial Affairs. Its public face is the beefy blond Olli Rehn, usually photographed
haranguing Mediterranean lawmakers in viceregal style. In his native Finland he has been compared
to Bobrikov, the hated tsarist governor-general who tyrannised the country in the early years of the
20th century until a patriot shot him dead. Rehn’s understanding of his job was revealed in the
comment he made as he lashed out against the IMF’s mild criticism of the Greek programme: he
informed the Financial Times that the Troika should march ‘in together, out together’, on the model
of the Nato powers in the Balkan war.
Like many European commissioners, Rehn had been summarily rejected by his own electorate.
Educated in the US and at St Antony’s College, Oxford, he entered the Helsinki Parliament as a 29-
year-old in 1991 and was quickly seconded to the office of Esko Aho, the Centre Party prime
minister. Aho’s government was detested for the harsh spending cuts it imposed, exacerbating the
already severe early-1990s recession. When the party was consigned to the opposition benches in
the 1995 election, Rehn made his way to Brussels, where he landed the plum job of chef de cabinet
for Finland’s European commissioner, into whose shoes he stepped in 2004. (Aho became executive
vice-president of Nokia.) His first brief was EU Enlargement; Romania and Bulgaria were whisked
into the fold in 2007, with evidence of massive political and economic corruption brushed aside. An
ardent disciple of Merkel’s finance minister, Wolfgang Schäuble, and his hardline stance on
budgetary and labour market discipline, Rehn was promoted to Economic and Financial Affairs just
as the Greek crisis was erupting in 2010.
Since then, the European Council has successively extended the Commission’s remit for ‘economic
surveillance and enforcement’. First came the European Semester (2010), a new process in which
Brussels sets annual targets for all member states, whose budgets must be submitted to Rehn’s
office before they can go before the parliaments. Countries considered to be ‘at risk’ are subject to
the EU’s ‘excessive deficit procedure’ and face fines of up to 0.2 per cent of GDP. A series of
overlapping intergovernmental agreements (the Euro Plus Pact in 2011, the Fiscal Compact in 2012)
and EU regulations (known in its miserable jargon as the six-pack and two-pack) gave the
Commission greater powers to intervene if any state was not already following its strictures on
lowering unit labour costs, increasing labour market flexibility and making the requisite budget
cuts. Within the Commission, the Economic Directorate has been given more sway: Rehn must be
consulted about other commissioners’ initiatives if they affect government spending.
*
The new powers of the European Commission and the Troika mark a real diminution of democratic
control. Before the crisis, the EU had left major decisions on taxation, pensions, unemployment pay,
public spending, health and education in the hands of national governments, considering them
sensitive enough to require parliamentary legitimation. Now they are effectively subject to the
diktat of EU functionaries. The constitutional niceties have been preserved, so far, in that
parliamentary majorities have been found to vote the emergency measures through. But in countries
where unemployment and economic misery are running high, the MPs are supported by a minority
of voters. In Greece, barely 30 per cent of the total electorate cast a vote for the winning New
Democracy-Pasok-Dimar coalition in June 2012; those who did so were mostly pensioners and rural
voters, worried for their savings, while in the big cities and among under-55s, a majority voted for
the anti-Memorandum Syriza. In Spain, the governing People’s Party is down to 23 per cent in the
polls, the centre left Socialists even lower, and Madrid’s budget strictures are fiercely contested by
Catalonia. In Italy more than half the voters opted for Eurosceptic parties in February 2013 and
Mario Monti, the EU establishment favourite, crashed out with 10 per cent.

Is electoral democracy compatible with the type of economic policies the EU – backed at a distance
by Washington and Wall Street – wants to impose? This is the question posed by the Cologne-based
sociologist Wolfgang Streeck in Gekaufte Zeit, a book that is provoking debate in Germany. Streeck
argues that since Western economic growth rates began falling in the 1970s, it has been increasingly
hard for politicians to square the requirements of profitability and electoral success; attempts to do
so (‘buying time’) have resulted in public spending deficits and private debt. The crisis has brought
the conflict of interests between the financial markets and the popular will to a head: investors drive
up bond yields at the ‘risk’ of an election. The outcome in Europe will be either one or the other,
capitalist or democratic, Streeck argues; given the balance of forces, the former appears the more likely
to prevail. Citizens will have nothing at their disposal but words – and cobblestones.
Brussels’s response to the curtailment of democracy has been to propose a ‘commensurate increase’
in the role of the European Parliament, to lend democratic legitimacy to the Commission’s
expanded powers; but the Parliament is constitutionally incapable of that. Its electoral process can’t
do what voters expect of parliamentary elections – i.e., determine the make-up of the ensuing
government. The Parliament’s first incarnation, the Common Assembly, was established in 1952 as
a gathering of MPs from the national parliaments to provide a democratic sounding board for the
High Authority of the European Coal and Steel Community, forerunner of the Commission. The
Assembly was granted power of dismissal over the Authority and could vote down its budget. From
the start, however, both institutions saw their relationship as one of close co-operation: unanimity
would strengthen their bargaining power vis-à-vis the national governments, represented in the
Council of Ministers and later in the European Council, where real power came to reside.
De Gaulle had mocked the idea of direct elections to a consultative body, but in the 1970s Giscard
gave it the green light, in exchange for the small states’ conceding a greater role to
intergovernmental summitry. The first Europe-wide elections were held in 1979, but the
Parliament’s function was still advisory. The MEPs were not lawmakers; their task was to issue a
majority opinion on the directives drafted by the Commission and agreed by the Council, with the
details hammered out in closed committees. The creeping extension of the Parliament’s ‘co-
decision’ capabilities over the past twenty years relates to its capacity to propose amendments to the
Commission’s wording, which the Council can override. The cosy relationship continues: the vast
majority of co-decision directives are agreed beforehand in informal ‘trialogues’ between
representatives of the Commission, the Council and the Parliament. The condition for MEPs’
success in getting an amendment adopted is its acceptability to the other institutions, not its
importance to European voters.
Most of the MEPs’ work is done in the twenty-odd committees set up to cover specific policy areas:
foreign affairs, agriculture, transport, justice, the EU budget. Committee appointments are
controlled by the leaderships of the party groups – the centre right European People’s Party (EPP),
centre left Socialists and Democrats (S&D), liberal Alliance of Liberals and Democrats for Europe
(ALDE) – and allocated on a proportional basis. At the monthly Strasbourg plenaries, the groups
issue voting cues to guide their members through the bewildering sequence of resolutions:
regulations for EU airports, animal feed, cellphone registration procedures and so on. The EPP and
S&D groups control two-thirds of the seats, so agreement between their leaders ensures a majority
of plenary votes. The Commission has had nothing but praise for the speed with which the
Parliament’s committees and party groups put their stamp on the draconian measures for the
Eurozone. In an inter-institutional tweak that is supposed to lend the European Commission a
democratic lustre, each of the party groups will nominate a candidate for the Commission
presidency, to succeed the hapless Barroso in the run-up to the Parliament’s elections in May 2014;
names being touted include Donald Tusk, Guy Verhofstadt, José Luis Zapatero, Pascal Lamy,
Martin Schulz and Barroso himself. If turnout continues to fall as it has since 1979, the winner
could end up with the support of less than 10 per cent of European voters. In sum, the European
Parliament appears unreformable.
*
A substantial section of Europe’s intelligentsia has rallied to defend the new round of market-driven
integration as the best of all possible outcomes. Jürgen Habermas devotes The Crisis of the
European Union: A Response to demonstrating that the balance of power ‘has shifted dramatically
within the organisational structure in favour of the European citizens’. Although the citizens
themselves are regrettably apathetic about it, post-national democracy is well on its way to being
realised through the European Parliament; the mass media should do more to help them appreciate
its significance. In a recent essay taking issue with Streeck, Habermas argued that failure to offer
full support for the emergency Eurozone measures amounts to a capitulation to right-wing populism
that ‘repeats the errors of 1914’. He hopes the German elections will produce a truly grand coalition
– CDU, SPD, FDP, Greens – to push through the supranational blueprints for fiscal and political
union. ‘Only the Federal Republic of Germany is capable of undertaking such a difficult initiative,’
he concludes, with a flourish of the sort of provincial arrogance that used to be a British prerogative
but has become common in the German media. What has been obliterated here is France. (The
former leader of the Greens, Joschka Fischer, has claimed that ‘Germany is and has been the driving
force behind European integration.’)

For Europe!, a manifesto co-authored by the German Green Daniel Cohn-Bendit and the Belgian
Liberal Guy Verhofstadt, has even wilder pretensions. ‘Only the European Union’ is able to
‘guarantee the social rights of all European citizens and to eradicate poverty’; ‘only Europe’ can
solve the problems of globalisation, climate change and social injustice; the ‘shining example’ of
Europe has ‘inspired other continents to go down the path of regional co-operation’; ‘no continent is
better equipped to renounce its violent past and strive for a more peaceful world.’ Cohn-Bendit and
Verhofstadt out-catastrophise Habermas: if the single currency fails, so does the European Union –
‘two thousand years of history risk being wiped out.’ For Europe! is a hymn to discipline, which
emerges – surprisingly – as green-liberalism’s central theme. A ‘strong’ authority is required to
‘enforce compliance’; ‘discipline is vital to the Eurozone.’ Asked by a Libération journalist whether
the European Stability Mechanism was not ‘a technocratic dictatorship’, Verhofstadt preferred to
call it a ‘transitional stage’ – after all, nation-states had existed for centuries before universal
suffrage.
The Munich sociologist Ulrich Beck’s German Europe at first strikes a refreshingly critical note. It
opens with his incredulity on hearing a radio newsreader announce in late February 2012: ‘Today
the German Bundestag will decide the fate of Greece.’ For Beck, the new inter-state hierarchy in the
Eurozone has no democratic legitimacy: it is entirely a product of economic power. Spain, Greece
and Italy are being subordinated to austerity policies prescribed by Berlin and designed with the
German electorate in mind, and as a result entire regions are being ‘plunged into social decline’.
Debtor nations are becoming the new EU underclass, their democratic rights reduced to acceptance
or exit. Merkel may have had her leading role thrust upon her but she has made the most of it. The
test for Eurozone measures is whether or not they promote Germany’s national interest and
Merkel’s domestic position. The upshot is ‘brutal neoliberalism for the outside world, consensus
with a social democratic tinge at home’.
Beck relates Berlin’s ‘swaggering’ universalism, its ‘arrogant conviction’ that Germany has the right
to determine the national interests of other countries, to the former West Germany’s takeover of the
GDR. Its ‘know-it-all attitude’ and ‘quasi-imperialist sense of superiority’ to East Germans is now
the template for Eurozone crisis management, with the critical difference that solidarity has no place
here. But the confidence of ‘Europe’s schoolmasters’ in Gerhard Schröder’s 2003 neoliberal
package is misplaced, Beck argues, for its effect in Germany has been to create a universalised
precariat: of the new jobs, 7.4 million are ‘mini-jobs’ at €400 per month, three million are
temporary positions and one million are agency jobs. German growth has come mainly from exports, not
least to the southern Eurozone. Yet Beck’s final prescriptions bear no relation to his diagnosis. His
enthusiasm for a non-accountable Eurozone economic directorate is no more qualified than that of
Habermas, Cohn-Bendit or Verhofstadt. Like the others he believes that it must be defended against
complaints that it is ‘above the law’ on the grounds that it is necessary in order to save the EU order.
*
Counterintuitively perhaps, German debates have concentrated on the politics and sociology of the
European crisis, while the most imaginative economic alternatives have come from France. Michel
Aglietta and Jean-Luc Gréau offer proposals for democratically constituted Eurozone budgetary
federations, while in Les Dettes illégitimes François Chesnais mines the experience of the Latin
American debt crises for useful lessons, drawing especially on Ecuador’s successful ‘debt audit’,
which examined in detail what obligations had been accrued in the name of the state and which
ones might legitimately be repudiated. Aglietta has also outlined a step-by-step path that Greece
could take to a new currency, via a structured default and devaluation, while still remaining within
the European Union. The price would be high, but no higher than the price the Greeks have had to
pay anyway, and relief would have been in prospect by now. (As for the effect of a Greek default on
Spanish and Italian bonds, that price has been paid as well.) In the June issue of the Cambridge
Journal of Economics, Jacques Mazier and Pascal Petit envisage a multiple European monetary
system: a single external euro, which would float against the other currencies on international
markets, but coexist with non-convertible national euros, which in turn would have fixed but
adjustable intra-European parities – a variant of what China envisages as a middle stage for the
convertibility of the yuan.
Whatever their merits, in a Europe run from Berlin, Brussels and Frankfurt, the political forces to
champion such ideas are lacking. Yet German dominance during the Eurocrisis has depended above
all on French compliance: active collaboration under Sarkozy, passive absence of opposition under
Hollande. There is something anomalous about the neutralisation of France as an actor on the
European stage, and the brittle character of German hegemony must stem in part from it. The
conventional explanation is that the French economy is too weighed down by its statist legacies for
the Elysée’s word to carry much authority, but the figures don’t bear this out. France has now
overtaken the UK, after a swifter recovery from the crisis. Its public debt, including bank rescues, is
lower than Britain’s and its manufacturing sector is in better shape. Unemployment is worse, but
average household income is higher, inequality lower and infrastructure and healthcare in another
league. France faces the same global problems as the other advanced economies, but the reason it
has ceased to play a leading role in Europe must lie elsewhere – perhaps in a sclerotic political
system and intellectual entrancement with Atlantic liberalism, as well as the cross-border
entanglements of BNP Paribas and Société Générale.

For the longer term there is no shortage of proposals for European ‘economic union’ and ‘political
union’, headings that cover a wide variety of arrangements, many of which have been under
discussion for years. All the schemes work on the assumption that economic policy should aim at
reduced public spending and low-cost labour markets, as the failsafe recipe for stability and growth;
all see political union as a means by which to apply this policy; nearly all gesture towards the
European Parliament as the mechanism for lending it ‘legitimation’. Where the proposals differ is in
the respective weight they give to intergovernmental bodies as opposed to supranational ones, and
in their minimalist or maximalist versions. Decisions will depend not only on the balance of forces
between the states – Germany, historically a champion of the small states’ supranational agenda, has
shifted towards intergovernmentalism in the course of the crisis – but also on external shocks, as the
current negotiations over banking union demonstrate. Supranational oversight of the national
banking sectors by the ECB is relatively uncontroversial; but Berlin is leading the resistance to the
Commission’s proposals for EU-wide deposit insurance (backed by the ESM funds) and a
supranational authority which would have the power to take over bankrupt German banks. Still,
another financial tremor could see an emergency mechanism cobbled together which, like the
Troika, would become part of the EU system.
Minimalist intergovernmental versions of economic and political union – sketched in Van
Rompuy’s European Council report in June 2012 and currently backed by Merkel – lean towards
‘integrated frameworks’, whereby the national governments agree to co-ordinate budgetary and
economic policies (deficit limits, ‘competitiveness’, pensions etc), monitored by the Commission. A
maximalist intergovernmental version, favoured by Schäuble, envisages explicit policy co-
ordination for the Eurozone alone, perhaps upgrading the Eurogroup finance ministers’ meetings to
quasi-cabinet status. Supranational or ‘federalist’ versions of political and economic union focus on
strengthening the Commission as a proto-government, with the European Parliament given a higher
profile. Maximalist supranational variants include Barroso’s September 2012 report to the
Parliament and the Commission’s November 2012 ‘Blueprint for a Deep and Genuine Economic
and Monetary Union’, which set out longer-term plans for an autonomous Eurozone budget and a
eurobond market, conditional on tight central controls over national spending.
The problem is Europe’s citizens. Substantial new powers for supranational or intergovernmental
bodies would require another treaty, which would entail referendum campaigns over ratification in
at least two member states – effectively, an invitation to voters to make their dissatisfaction known.
In Spain, the number expressing ‘mistrust’ of the EU has risen from 23 to 72 per cent over the past
five years; in Germany, France, Austria and the Netherlands the figure is around 60 per cent. Fewer
than a third of Europeans now ‘trust’ the EU; a majority give unemployment and the state of the
economy as their chief concerns.
Still, much could be done without involving the voters, as Jean-Claude Piris points out in The
Future of Europe: Towards a Two-Speed EU? Piris was the EU’s chief lawyer for two decades,
responsible for the technical drafts of the Maastricht, Amsterdam, Nice, Constitutional and Lisbon
treaties before his retirement in 2010. He is a stern judge of his own handiwork: expansion has
robbed the EU of its coherence and identity; the Parliament has failed to win voters’ confidence; the
Commission is intellectually weak, the Council hampered by unanimity requirements; voter
disaffection rules out a much-needed institutional fresh start. Instead, Piris argues, Article 136 of the
Treaty on the Functioning of the European Union gives the Eurozone countries ample scope for
fiscal and economic co-ordination; a core group could use a political declaration to provide
themselves with a coherent identity and future project.
*
What will become of the EU countries outside a more tightly co-ordinated Eurozone? In Au Revoir,
Europe: What if Britain Left the EU? David Charter, a Times journalist, argues that the combined
dynamics of growing Euroscepticism in the UK and increasing integration in the Eurozone mean
that London will either have to negotiate a form of second-tier membership – some have proposed
making a virtue of a looser outer ring, which could include Turkey and the Balkan states as well as
Britain – or quit the EU altogether. One can see why many in Europe might welcome that
possibility. Britain has loyally fulfilled De Gaulle’s prediction that it would serve as a Trojan horse
for US interests in Europe. Cameron has lately spared no efforts in defending London’s derivative
traders – mostly subsidiaries of US banks – from EU regulation, let alone taxes, while backing
savage austerity programmes and urging Germany to step up to the mark.
Charter’s history of the UK’s relationship with Europe is a useful reminder that much of what
people loathe about the EU has been the result of British intervention. Polls say a majority would
favour a single market – Thatcher’s dream – without the mass of EU regulations, but the latter are a
precondition for the former. By the 1980s, every advanced industrial economy had built up its own
web of health and safety rules, not without reason: specifications for product labelling; safety
requirements for electrical appliances; protocols for food standards and abattoir inspection agencies;
restrictions on toxic substances, such as lead paint on children’s toys. National import tariffs could
be removed by fiat, but to create a single market these ‘non-tariff barriers to trade’ needed to be
brought into alignment, sector by sector, across a growing number of national economies. Naturally
the Brussels negotiating committees charged with the task became a target for corporate lobbyists.
Then, from 2006, the EU’s bid to become the global leader in environmental regulation – eagerly
backed by Blair, who hoped to greenwash his reputation – helped to produce a plethora of further
edicts, on everything from plastics to lightbulbs.

Au Revoir, Europe offers a brisk cost-benefit analysis of what leaving the EU would mean for
Britain. Charter’s findings on the economic effects broadly concur with those of the Economist, in
its anti-exit intervention of December 2012. On the plausible assumption that the UK could
negotiate a bilateral trade deal with the EU, the medium-term effects would be negligible; Charter
effectively demolishes Clegg’s claim that 3.5 million UK jobs ‘depend’ on membership of the EU.
The short-term impact on investment would be more dramatic, perhaps comparable to the financial
crisis, when foreign investment fell from £196 billion to £46 billion between 2007 and 2010. Over
the medium term, investment would recover in tandem with growth, once post-EU arrangements
were on a solid footing. As for the UK contribution to the EU budget, €11.3 billion is barely a blip
in the public accounts: 1 per cent of total government spending.
Leaving the EU would allow the UK to adopt a tightly controlled visa system for other Europeans;
EU nationals make up 40 per cent of net UK immigrants and currently have free entry for up to
three months, with indefinite leave to stay if they have a job or are self-employed. But Brits would
be subject to equivalent barriers on trying to enter the EU; around a million are resident in other EU
countries, above all Spain, where they can collect their UK pensions through the local post office
and enjoy free healthcare. Post-exit, the repatriation of Costa Brava grandparents would add to UK
social service costs; demographic recalibration would be further ratcheted towards the elderly and
dependent by the exclusion of fit young Poles and Romanians. For the Economist, cheap immigrant
labour is a principal reason for remaining in the EU (along with being useful to Washington and
having a voice in financial sector regulation); Charter suggests that costs and benefits in this area
cancel each other out.
Au Revoir, Europe was published before Cameron promised to hold an in-out referendum in 2017,
but Charter anticipates something like it. He sketches an exit trajectory unfolding over the next ten
years. In the 2015 election, all three parties include an in-out referendum pledge in their manifestos,
a reluctant Edward Miliband having been convinced by his advisers that Labour can’t afford to be
the only party not to offer voters a choice. Labour scrapes in and duly holds the referendum. Boris
Johnson, who has replaced Cameron as leader of the opposition, leads an energetic Out campaign,
trumping the lacklustre Ins. On a 54 per cent turnout, the UK votes by 51.4 to 48.6 per cent to leave
the EU. Miliband’s lame-duck government limps on to conclude a UK-EU free trade agreement,
with London now paying substantial fees for access to the single market and agreeing to take
account of EU legislation in drafting its own laws. Geography and trade dictate that Britain is still
closely entwined with its ex-partners; exiting the EU doesn’t mean adieu to Brussels.
How plausible is this scenario? The Labour leader’s gratitude for White House guidance on the in-
out referendum – popular consultation was curtly dismissed by Obama as ‘unhelpful’ – ensures that
it won’t be featuring in Labour’s election campaign. The Liberal Democrats had no qualms about
dropping their own referendum commitment for the 2010 coalition agreement. So the referendum
will depend on a clear-cut Tory victory in 2015, which at present looks against the odds. The
population drain to the South-East has left constituency boundaries heavily weighted in Labour’s
favour, and in 2010 UKIP denied the Tories a dozen marginal seats by skimming off a few
thousand votes in each. A Conservative pollster has predicted a ninety-seat majority for Labour,
which sounds far-fetched, especially given low turnout and tactical voting – Tories warning that a
vote for UKIP means a vote for Miliband, who has also, to his credit, mortally offended Rupert
Murdoch. But a Lib-Lab government would rule a referendum out.
If one were to be held, though, the likelihood would be a vote for the status quo. UKIP’s rise is not
due to a sudden, post-crisis surge of Euroscepticism in England (Scotland will have none of it) but
to the collapse of support for the three main parties. For the first 15 years of its existence, UKIP
struggled to get 3 per cent in national polls. Its breakthrough came in the 2004 European Parliament
elections, fought on the basis of PR. As the others slumped to unprecedented lows – Labour brought
down by Iraq, the Tories still wandering in the post-Thatcher wilderness – UKIP garnered 16 per
cent of the vote and 12 MEPs, a sixth of the UK cohort, whose ample salaries, perks, office staff
and resources could be diverted to local party-building. In the 2009 European elections, Labour’s
rout – barely 15 per cent of the vote – put UKIP, on 17 per cent, in second place to the Tories.
Since 2010, the Liberal Democrats’ embrace of spending cuts and student fees has created a tri-
partisan consensus, leaving UKIP the best-known receptacle for the protest vote. Meanwhile the
English political mainstream has shifted so far to the right that UKIP’s domestic policies – limited
visas for immigration, grammar schools, sacking teachers and local government workers – are only
mild exaggerations of what the others proclaim. Academy schools, pursued with equal zeal by
Labour and Tory education secretaries, select or exclude pupils on the basis of management diktat,
not even bothering with an exam. Phil Woolas, Labour’s immigration minister, pledged his party to
a ‘war’ on undocumented immigrants; his 2010 election leaflet, attacking a Liberal Democrat rival
who had been wooing the local imams, was reminiscent of the Tories’ famous 1960s slogan, ‘If you
want a nigger for a neighbour, vote Labour.’
Opinion polls currently indicate 41 to 54 per cent for EU exit, 24 to 38 per cent for staying in and 8
to 30 per cent don’t knows. But these are soft figures, measuring off the cuff political views rather
than hard-headed calls on material personal advantage. The transatlantic bully pulpit will make
much of short-term financial upheaval and higher interest rates; fear will favour the status quo. In
the 1975 referendum, a much more anti-European population voted two to one to remain in the
Common Market. A pioneering mood of national confidence, a willingness to strike out into the
unknown, is lacking in England as much as it is in Scotland. The Trojan horse will be staying put.
Europe’s luck is out on that front, too.

As for the immediate future, the Berlin-Brussels axis can probably continue to manage the crisis on
Germany’s terms as long as its worst effects are confined to the small peripheral economies –
Greece, Cyprus, Portugal and Ireland. The latest figures indicate a fragile 0.3 per cent improvement
in second-quarter Eurozone growth compared to 2012, sustained almost entirely by Germany and
France. But if the world economy were to take a turn for the worse, Spain and Italy would pose
problems on another scale. The ECB’s bond-buying programme would oblige their already
discredited politicians to submit to Troika rule. Yet the financial market turmoil that greeted Ben
Bernanke’s murmur this summer about a cut in quantitative easing was a reminder that the lull
won’t last for ever. Five years of zero interest rates and $14 trillion from the Federal Reserve have
produced only stuttering American growth. China, with falling exports, teeters on the brink of a
bank and local government debt disaster. Europe is vulnerable from both directions: higher interest
rates will raise the risk of default by its states and banks, while German exports depend increasingly
on China’s construction boom. This summer the Bundesbank revised down its forecasts for 2014.
The austerity regime has yet to be tested in its homeland.

Letters
Vol. 35 No. 18 · 26 September 2013
What Susan Watkins misses in her account of the state of the European Union is that for German and French
intellectuals Europe remains a political project first and an economic one second (LRB, 29 August). In the UK, we
have just two considerations: are there material benefits for us and, if so, are they worth the loss of direct
sovereignty that membership of the EU entails? The stakes are much higher on the other side of the Channel. This
is why Watkins cannot understand how Ulrich Beck can be so trenchant in his critique of what has gone wrong
and yet end up defending the autocratic measures which have impoverished the southern states at the behest of
‘Merkiavelli’. But as Beck explains, ‘Europe is an alliance of former world cultures and great powers, which are
bent on finding an escape route from their own warlike past.’ Like Jürgen Habermas and Daniel Cohn-Bendit,
Beck believes it is because of the EU that Europeans no longer fight wars against each other. The project of the
single currency went so badly wrong because it was an economic answer to a political question. But it was also a
political imperative to save it.

Julian Preece
Swansea University

Vol. 35 No. 19 · 10 October 2013

True, as Susan Watkins writes, the Troika and a swaggering Berlin behind it have increasingly taken legislative
control out of the hands of elected – or, as is increasingly the case, appointed – governments (LRB, 29 August).
However, she risks writing domestic political elites out of the story altogether. ‘Screw the Troika’ is a common
enough refrain at the anti-austerity rallies we attend in Portugal, but it is a message that concedes too easily the
government’s insistence that blame lies solely with Brussels and Berlin and downplays just how happy Portuguese
elites are to see their own neoliberal vision take shape. The Troika’s insistence on cutting public expenditure has
given Portuguese neoliberals an opportunity to implement their dream policies. Indeed, the government’s austerity
drive has exceeded any external demand. ‘We are going beyond the Troika memorandum,’ Prime Minister Passos
Coelho said of the 2012 budget, a relentless attack on public servants that continues today. The government has
been called ‘more troikista than the Troika’. The externalisation of blame, on the part of apparatchiks and
protesters alike, risks placing domestic culprits beyond critical scrutiny altogether.

Tor Krever and Teresa Almeida Cravo
Coimbra
Robert Hunziker September 7, 2018

Billionaires Plan Escape From Apocalypse

Well, the harsh truth about the integrity and fortitude of billionaires is finally out in the open for all
to see, and the results are repugnant: billionaires are gutless, chicken-hearted cowards. The proof of
the pudding is that several Silicon Valley billionaires have purchased massive underground bunkers
built in Murchison, Texas and shipped to New Zealand, where the bunkers are buried in secret
underground nests.
All of which raises this question: what’s with capitalism/capitalists? They feast on and love steady,
easy, orderly avenues (markets) to riches, but as soon as things turn sour, they tuck their tails
between their legs and hightail it out of Dodge.
History proves it time and again. For example, FDR rescued capitalism – literally rescued it – from
certain demise by instituting social welfare programs for the citizenry while capitalists fled and/or
jumped off buildings.
Then, during the 2008 financial meltdown, capitalists were found curled up in the corners of rooms
as all hell broke loose. Taxpayers, “Everyday Joes,” had to bail them out with $700B in public funds,
and even more after that. All public funds! Taxpayers, average Americans, bailed them out!
Capitalists can’t take the heat as well as the gritty American industrial workers who ended up bailing
them out of the “jam of the century.” As explained by Allen Sinai, chief global economist for
Decision Economics, Inc., discussing Milton ‘laissez-faire’ Friedman’s free-market dogma vis-à-vis
the 2008 economic meltdown: “The free market is not geared to take care of the casualties, because
there’s no profit motive.”
The chicken-hearts from Silicon Valley already have Gulfstream G550s ($70M each) readied at a
Nevada airstrip for the quickie escape journey to NZ.
Escape, from what?
Well, of course, the 99%, you silly!
This revelation comes from Robert Vicino, founder of the Vivos Project, a builder of massive
underground bunkers who claims the Silicon Valley elites developed detailed plans to flee to New
Zealand while attending the World Economic Forum in Davos, Switzerland, where the world’s
richest, biggest kahunas meet every year to take victory laps and discuss future biz battles.
According to Vicino: “They foresaw a revolution or a change where society is going to go after the
1 per centers. In other words, them.” (Source: Olivia Carville, Wealthy Americans Have Stepped up
Investment in New Zealand. Parliament Votes to Ban Foreigners From Buying Bolt-Hole Homes,
Bloomberg LP, Sept 5, 2018)
Maybe a copy of Jean-Paul Marat’s Chains of Slavery (publ. 1774) was passed out as bedtime
reading material at Davos. Surely the verbal flailing of “princes/aristocrats,” in extremely harsh,
mean-spirited judgments, by a radical political theorist like Marat must have dealt them a sleepless
night or two.
And maybe the nighttime reading packet also included Victor Hugo’s Ninety-Three (publ. 1874),
which discussed the beheading of King Louis XVI, the Terror, and the monarchist revolt that was
brutally suppressed by the new Republic. Heads rolled!
Whatever the source or reason, something happened to flush these yellow-bellied, lily-livered
fraidy-cats out into the open – or rather deep underground, where they’ll live in constant fear that
the 99% will hunt them down. Lest they forget, when mobs ruled France, thousands of aristocrats
lost their heads to the guillotine – not a fun prospect, and one that likely has today’s billionaires
shaking in their Lucchese boots. Otherwise, why the elaborate plans?
The sorriest part of this story has everything to do with the “privilege” embedded in the glory of
capitalism, which ratchets upwards in lockstep with vast monetary accumulation. The formula
seems to be that the more money one has, the more “privileges” one is entitled to at the expense of
everybody else – the same as in the 18th century France of Hugo’s Ninety-Three, wherein an
aristocrat’s carriage ran over (and killed) the child of a poor family, but the aristocrat, without
hesitating, kept going lickety-split to his estate to hide behind locked gates: another example of a
yellow-bellied, lily-livered chicken that can’t take the heat when times get rough.
Evidently, money doesn’t equate to toughness: a principled billionaire would stay and help in times
of trouble, not run for cover. Billionaires’ plans to escape as soon as things turn rough are a sure
sign of weak-kneed, namby-pamby pansies on the premises!
Victor Hugo comprehended the French Revolution very well. The aristocrats, whilst prancing about
in goofy ruffled clothes, sprayed with perfume and riding in golden carriages, turned up their noses
at the masses, looking down upon them with contempt. No help for the starving. In turn, their
hubris turned the hungry masses into a machine of mass murder as they roamed the streets in search
of some dignity – similar to America’s bereft industrial cities today, where loss of dignity turns
people into something different: nothing to lose, anger, elect Trump, get really mad, do crazy
things. In 18th century Paris they killed the rich, with abandon.
Actually, billionaires today have much more to consider than a French Revolution-style uprising
that targets the one percent (them). There are other factors at work, like nuclear war, a killer
germ, or climate Armageddon, when every street turns dangerous with gangs trolling for food and
gasoline and ammo and rich people.
As a refuge from all of those nasty problems, New Zealand allows émigrés to buy residency via
investor visas. As a result, super-wealthy Americans have poured a fortune into the country and whenever
possible acquire palatial estates. For example, James Cameron of Titanic fame bought a mansion at
Lake Pounui.
Sotheby’s claims several well-heeled Americans have bought multimillion-dollar properties in the
Queenstown area over the past two years. Maybe these billionaires are privy to information of a
pending alert or upcoming disaster. Who knows?
Peter Thiel, the PayPal billionaire, renowned super-super-super libertarian, and unapologetic
Trumpster love-fester, achieved New Zealand citizenship in only 12 days, buying not only his
citizenship but also a $13.8M estate in Wanaka, a lakeside community.
According to a phone interview with John Key, the former PM of New Zealand: “If you’re the sort
of person that says I’m going to have an alternative plan when Armageddon strikes, then you would
pick the farthest location and the safest environment – and that equals New Zealand if you Google
it… It’s known as the last bus stop on the planet before you hit Antarctica. I’ve had a lot of people
say to me that they would like to own a property in New Zealand if the world goes to hell in a
handbasket.” (Ibid)
Hence, when Armageddon hits, all of the billionaires will huddle together in New Zealand.
That’s really creepy!
Paul Street September 7, 2018

Climate of Class Rule: Common(s)er Revolt or Common Ruin

Freeman and slave, patrician and plebeian, lord and serf, guild-master and journeyman,
in a word, oppressor and oppressed, stood in constant opposition to one another, carried
on an uninterrupted, now hidden, now open fight, a fight that each time ended, either in
a revolutionary reconstitution of society at large, or in the common ruin of the
contending classes.

– Karl Marx and Frederick Engels, 1848

“A Level of Criminality Almost Hard to Describe”


The great orange dumpster fire Donald J. Trump’s part in the reigning United States media-politics
horror show is to distract the populace from the lethal pillaging of the commons behind the
scenes. The leading left thinker Noam Chomsky put it very well in an interview last March:
“Trump’s role is to ensure that the media and that public attention are always
concentrated on him. So every time you turn on a television set, it’s Trump; open the
front page of the newspaper: Trump…. So every day there’s one insane thing after
another and then, you know, he makes some crazy lie….and the media looks at it and
says, “No, [not true]”…But meanwhile he’s onto something else and then you go to
that…”

“And while this show is going on in public, in the background the wrecking crew is
working…systematically dismantling every aspect of government that works for the
benefit of the population. …In the case of global warming, it’s almost indescribable.
Not only has the U.S. pulled out – uniquely alone in the world – from the international
efforts to do at least something about it. But, beyond that…the Trump Administration is
going out of its way to increase the threat. Listen to his State of the Union Address, the
only phrase about global climate was to talk about ‘our beautiful clean coal,’ the worst
polluter there is…The new budget that’s coming out …sharply cuts research and
support for any kind of renewable energy: more subsidies and support for the most
polluting, destructive things.”
“And, it’s not just Trump, it’s the entire Republican leadership. So, if you look at the
2016 election, at the primaries, every single candidate, not a single exception, either
denied that global warming is taking place or said ‘Maybe it is but we shouldn’t do
anything about it,’ which I think is worse. They were called the moderates, like [John]
Kasich. If you look at Trump himself, or say Rex Tillerson, Secretary of State, they
know perfectly well that humans are causing global warming. In fact, Trump has golf
courses all over; he hasn’t built a wall in Mexico yet but he’s building walls around his
golf courses to make sure that the sea level doesn’t destroy them.”

“Rex Tillerson, the CEO of ExxonMobil – since the 1970s scientists at ExxonMobil
have been – we now know, they been made public, forced to be made public – they’ve
been producing severe warnings to the leadership about the effect of the use of
petroleum on destroying the environment. So they all know about it but they’re not
doing anything about it, which is a level of criminality that is almost hard to find words
to describe. I mean, here are, you know, educated well-off rich people, upper elite, who
know that what they’re doing is destroying the prospects for human – organized human
life – and do it anyway because they make more profits tomorrow. Can you think of an
analog for that in human history? I really can’t” (emphasis added).

“Not a Wake-up Call Anymore”


Jump a half-year ahead to the late summer and early fall of 2018. Fully 17 of the 18 warmest
years since modern record-keeping began have occurred since 2001. Numerous record-setting heat
and related deadly weather (wildfires, droughts, rains, flood, mudslides etc.) events have occurred,
as predicted in the (supposedly controversial) climate models produced by scientists who have been
trying to warn the world for many years about the eco-cidal consequences of burning fossil fuels on
a mass scale. One headline I recall this summer announced that 2018 was the year in which global
warming went from being a future “threat” to a lived “menace.” As the New York Times’ climate
correspondent Somini Sengupta wrote last August:
“This summer of fire and swelter looks a lot like the future that scientists have been
warning about in the era of climate change… In California, firefighters are racing to
control what has become the largest fire in state history. Harvests of staple grains like
wheat and corn are expected to dip this year, in some cases sharply, in countries
as different as Sweden and El Salvador. In Europe, nuclear power plants have had to
shut down because the river water that cools the reactors was too warm. Heat waves on
four continents have brought electricity grids crashing…And dozens of heat-related
deaths in Japan this summer offered a foretaste of what researchers warn could be big
increases in mortality from extreme heat.”

“ ‘It’s not a wake-up call anymore,’ Cynthia Rosenzweig, who runs the climate impacts
group at the NASA Goddard Institute for Space Studies, said of global warming and its
human toll. ‘It’s now absolutely happening to millions of people around the world.’”

“For many scientists, this is the year they started living climate change rather than just
studying it. ‘What we’re seeing today is making me, frankly, calibrate not only what
my children will be living but what I will be living, what I am currently living,’
said Kim Cobb, a professor of earth and atmospheric science at the Georgia Institute of
Technology…”
As I started writing this essay, Tropical Storm Gordon was gaining strength in the overheated Gulf
of Mexico, where the temperature was 87 degrees Fahrenheit – too warm for a swimming pool.
Extreme weather and its collateral damage are only the tip of the melting iceberg, semi-metaphorically
speaking. The real climatological shit hits the eco-exterminist fan when we can’t grow enough food,
find enough water, and keep ourselves cool enough to survive – and when global warming
combines with collapsing social and technical infrastructure to bring pandemics that wipe out much
of an increasingly thirsty, under-nourished, and over-heated human race.
“Heat waves are bound to get more intense and more frequent as emissions rise… On the horizon,”
Sengupta warns, “is a future of cascading system failures threatening basic necessities like food
supply and electricity.”
Some countries are already hitting the existential wall. El Salvador’s farmers suffered a
disastrous corn harvest this summer as temperatures spiked to a record 107 degrees Fahrenheit,
and some regions went without rain for six weeks.
A hint of the dark future comes from northern Europe: “Wheat production in many countries of the
European Union is set to decline this year. In Britain,” Sengupta reports, “wheat yields are projected
to hit a five-year low. German farmers say their grain harvests are likely to be lower than normal.
And in Sweden, record-high temperatures have left fields parched and farmers scrambling to find
fodder for their livestock.”
Blaming Arrogant and Stupid Humanity
The culprit? A long and widely read New York Times Magazine essay written by Nathaniel Rich last
July was titled “Losing Earth.” Rich blames the human species for its childish failure to properly
heed the alarms of its intellectual adults – scientists. Rich indicts a “human nature” flawed by an
inability to “sacrific[e] present convenience to forestall a penalty imposed on future generations.”
He condemns homo sapiens’ tendency to “obsess over the present, worry about the medium-term
and cast the long term out of our minds, as we might spit out a poison.” The offender, according to
Nathaniel Rich, is We the People. We simply aren’t wired to plan responsibly beyond the present
moment of immediate gratification.
A recent Truthdig essay by the leading left thinker Chris Hedges is titled “Saying Goodbye to Planet
Earth.” Surveying the damning evidence and likely future path of human-generated environmental
ruin, Hedges concludes that “the refusal of our species to significantly curb the carbon emissions
and pollutants that might cause human extinction” has brought “human-induced change to the
ecosystem” that, “will probably make the biosphere inhospitable to most forms of life.” The enemy
is arrogant humanity itself, perpetrator of “the Anthropocene” – the reckless alteration of Earth
systems by homo sapiens and its carbon-intensive industrialized lifestyle.
Humanity as a Whole or Capital?
Other thinkers of an eco-Marxian bent, myself included, narrow the diagnosis. They historicize the
climate crisis, situating it in the specific historical context of capitalism. The concept of “the
Anthropocene” has rich geological validity and holds welcome political relevance in countering the
carbon-industrial complex’s denial of humanity’s responsibility for contemporary climate change,
they note. Still, they counsel, we must guard against lapsing into the historically misleading,
fatalistic, and often class-blind use of “Anthro,” projecting the currently and historically recent age
of capital onto the broad 100,000-year swath of human activity on and in nature. As the Green
Marxist environmental sociologist and geographer Jason Moore reminded radio interviewer Sasha
Lilley a few years ago, “It was not humanity as a whole that created …large-scale industry and
the massive textile factories of Manchester in the 19th century or Detroit in the last century or
Shenzhen today. It was capital.”
Indeed, it was not humanity as a whole that built the Dakota Access Pipeline (DAPL) in 2015 and
2016. It was capital, corralled in the accounts of Energy Transfer Partners, under the supervision of
a reckless, eco-cidal and profit-mad billionaire named Kelcy Warren, who funded the DAPL with
billions of dollars from across the world’s leading financial institutions.
It was not humanity as a whole that hid evidence of Greenhouse Gassing’s deadly impact on human
prospects. It was capital on various levels, but most particularly in the form of Exxon-Mobil, which
(in the greatest climate and environmental crime in history) buried the findings of its very own
cutting-edge scientists in the 1970s and 1980s – an offence that, as Chomsky says, “is almost
hard to find words to describe.”
Moore and other left analysts argue with good reason that it is more appropriate to understand
humanity’s Earth-altering assault on livable ecology as the “Capitalocene.” It is just a relatively
small slice of human history – roughly the last half-millennium, give or take a century or so – during
which human society has been socially and institutionally wired by a specific form of class rule to
assault livable ecology relentlessly, on an ultimately geocidal scale.
It is only during the relatively brief period of history when capitalism has ruled the world system
(since 1600 or thereabouts by some calculations, earlier and later by others) that human social
organization has developed the inner, accumulation-, commodification-, “productivity”-, and
growth-mad compulsion to transform Earth systems – with profitability and “productivity”
dependent upon the relentless appropriation of “cheap nature” (cheap food, cheap energy, cheap
raw materials and cheap human labor power). Moore maintains that “humanity’s” destruction of
livable ecology is explained by changes that capitalism’s addictive and interrelated pursuits of profit
and empire imposed on its behavior within “the web of life.”
It is capitalism and its quarterly earnings obsession with short-term profits, not Rich’s “human
nature,” that constantly plunders and poisons the commons and trumps long-term planning for the
common good.
In terms of measurable material consequences, it is true, the real destructive and Earth-altering
impact of capitalism dates not from the beginning of capitalism but from more recent history. The
original “Anthropocene” argument pegged the major changes to the onset of the Industrial
Revolution around 1800 but recent Earth science findings point to 1945 and the post-WWII era of
US-led global monopoly-capitalist economic expansion as the real material onset of the
Anthropocene/ Capitalocene. That is something to keep in mind when reading the often brilliant
left-environmentalist Naomi Klein, who tends, as Sam Gindin has noted, to hedge her descriptions
of “capitalism’s” disastrous environmental impact by particularly criticizing the profits system of
the post-1970s neoliberal era and not capitalism per se.
Still, the social, historical, political and class-historical DNA of the eco-cidal capitalist disease
crystalized during Europe’s transition from feudalism to capitalism in the wake of the Black Death.
This matters for those who want to avert catastrophe. There is no desirable remedy without a proper
historical diagnosis. Those who want to avert a new Black Death on a planetary scale need to
confront the imperial world system that emerged in feudalism’s aftermath – capitalism as such.
We cannot afford denial and evasion about eco-exterminist systems of class rule any more than we
can afford denial and evasion of human beings’ impact (under the command of capital) on life
systems.
“We Think We’re Not Part of the Biosphere”
These differences aside, there’s a wonderful line in Chris Hedges’ “Goodbye Earth” essay. Hedges
features the chilling (no irony intended) insight of astrophysics professor Adam Frank, who
theorizes that “If you develop an industrial civilization like ours, the route is going to be the same,”
Frank says. “You’re going to have a hard time not triggering climate change” (Frank mind-
expansively guesses that this drama has already been played out on other planets in a universe that
is now known to include millions of potentially life-supporting worlds).
“We think we’re not a part of the biosphere—that we’re above it—that we’re special,” Frank told
Hedges. But “We’re not special. We’re the experiment that the biosphere is running now.”
That’s exactly right, however one understands and periodizes “the Anthropocene.” Perhaps the
greatest mistake privileged humans ever made was to follow Descartes and other leading Western
thinkers (none more stridently perhaps than Sir Francis Bacon) in advocating the species’
supposedly noble mission of becoming “like masters and owners of nature.”
We are no such thing. Pretending that we are somehow above “Nature” is species suicide.
Think of the title of Nathaniel Rich’s article: “Losing Earth.” Does it not remind one of the old
global McCarthyite charges that “We [the United States] Lost China” (and/or North Korea and
Cuba and Vietnam)?
I can lose my cell phone or my fountain pen. You can lose your thermos. But we can’t “lose Earth”
or “China,” because they were never ours to possess in the first place. Who in the name of God (or
whatever other higher power one wants to cite) told Nathaniel Rich that “Earth” was “ours” to
“lose”? Was Rich’s title not the height of anthro-centric arrogance?
Super-Rich Folks Looking to Escape “The Event”
How do the “masters of the universe” – the members of the world’s “unelected dictatorship of
money” (Edward S. Herman and David Peterson’s excellent phrase) – feel about the dire threats
posed to human existence by the system that has generated their outsized opulence and power?
Perhaps they fantasize that their fortunes will permit them to somehow escape to other worlds or
(less fantastically) to insulate themselves with armed guards and special resource stockpiles on this
one.
In an essay bearing the apt title “Survival of the Richest,” the leading academic
“media theorist” and “digital economics” professor, lecturer, and documentarian Douglas Rushkoff
tells of an invitation he accepted last year to “a super-deluxe private resort to deliver a keynote
speech to what I assumed would be a hundred or so investment bankers. It was,” Rushkoff writes,
“by far the largest fee I had ever been offered for a talk—about half my annual professor’s salary—
all to deliver some insight on the subject of ‘the future of technology.’” When Rushkoff arrived he
found that his real assignment was to help five ultra-rich financial parasites figure out how they and
their families might survive the coming collapse of a world they themselves had (quite
unmentionably) helped wire for disaster:
“…I was ushered into what I thought was the green room. But instead of being wired
with a microphone or taken to a stage, I just sat there at a plain round table as my
audience was brought to me: five super-wealthy guys—yes, all men—from the upper
echelon of the hedge fund world. After a bit of small talk, I realized they had no interest
in the information I had prepared about the future of technology. They had come with
questions of their own.”

“They started out innocuously enough. Ethereum or bitcoin? Is quantum computing a
real thing? Slowly but surely, however, they edged into their real topics of concern…
Which region will be less impacted by the coming climate crisis: New Zealand or
Alaska? Is Google really building Ray Kurzweil a home for his brain, and will his
consciousness live through the transition, or will it die and be reborn as a whole new
one? Finally, the CEO of a brokerage house explained that he had nearly completed
building his own underground bunker system and asked, ‘How do I maintain authority
over my security force after the event?’”

“‘The Event.’ That was their euphemism for the environmental collapse, social unrest,
nuclear explosion, unstoppable virus, or Mr. Robot hack that takes everything down.”

“This single question occupied us for the rest of the hour. They knew armed guards
would be required to protect their compounds from the angry mobs. But how would
they pay the guards once money was worthless? What would stop the guards from
choosing their own leader? The billionaires considered using special combination locks
on the food supply that only they knew. Or making guards wear disciplinary collars of
some kind in return for their survival. Or maybe building robots to serve as guards and
workers—if that technology could be developed in time.”

“That’s when it hit me: At least as far as these gentlemen were concerned, this was a
talk about the future of technology. Taking their cue from Elon Musk colonizing Mars,
Peter Thiel reversing the aging process, or Sam Altman and Ray Kurzweil uploading
their minds into supercomputers, they were preparing for a digital future that had a
whole lot less to do with making the world a better place than it did with transcending
the human condition altogether and insulating themselves from a very real and present
danger of climate change, rising sea levels, mass migrations, global pandemics, nativist
panic, and resource depletion. For them, the future of technology is really about just one
thing: escape.”

The Deadly “Buffer of Wealth”


It is important, I think, to note that the climate crisis hits disadvantaged populations first and the rich
and powerful last. One problem “our species” faces is that class rule tends to delay a civilization’s
capacity to perceive threats to its continued existence until the full consequences of the
civilization’s deadly practices are felt by those who have been protected by class privilege from
environmental harm. By the time the ruling class gets it, things have gone too far.
This is one of the timeworn paths to societal ruin discussed in a paper published five years ago by
mathematician Safa Motesharrei, atmospheric scientist Eugenia Kalnay and political scientist Jorge
Rivas in the journal Ecological Economics. Reviewing past societal collapses, they reflected on a
potential current global scenario in which:
“[T]he Elites—due to their wealth—do not suffer the detrimental effects of the
environmental collapse until much later than the Commoners. This buffer of wealth
allows Elites to continue ‘business as usual’ despite the impending catastrophe. It …
explain[s] how historical collapses were allowed to occur by elites who appear to be
oblivious to the catastrophic trajectory (most clearly apparent in the Roman and Mayan
cases). This buffer effect is further reinforced by the long, apparently sustainable
trajectory prior to the beginning of the collapse. While some members of society might
raise the alarm that the system is moving towards an impending collapse and therefore
advocate structural changes to society in order to avoid it, Elites and their supporters,
who opposed making these changes, could point to the long sustainable trajectory ‘so
far’ in support of doing nothing.”

Is this not the state of “humanity” under the command of capital today, with many millions of
disproportionately poor and powerless people already suffering from climate disruption while the
wealthy few continue to enjoy lives of unimaginable, environmentally shielded opulence atop a
recklessly fossil-fueled planet so vastly unequal that the world’s eight richest people possess as
much wealth between them as the bottom half of the species?
It’s “the rich,” not humanity in general, that “are destroying the Earth,” as Herve Kempf noted in
the title and text of an important book eleven years ago. At the same time however, it is in fact up to
“our species,” yes, humanity, to save itself and other Earthly life forms by engaging in a great mass
uprising against those who have plundered and poisoned the commons for private profit. (If there’s
another intelligent life form out there that survived the transition to high-tech modernity and
developed the capacity to save other species in the galaxy, now would be the time for them to travel
through time and space to lend us a hand. I’m not holding my breath for that!) The best bet we have,
my fellow world citizens and common(s)ers, is eco-socialist people’s revolution here on the
planet itself.
Democratic Audit UK 22/08/2018

How democratic are the UK’s political parties and party system?
For our 2018 Audit of UK Democracy, Patrick Dunleavy and Sean Kippin examine how
democratic the UK’s party system and political parties are. Parties often attract criticism from
those outside their ranks, but they have multiple, complex roles to play in any liberal democratic
society. The UK’s system has many strengths, but also key weaknesses, where meaningful reform
could realistically take place.

What does democracy require for political parties and a party system?
Parties (and now other forms of ‘election fighting organisation’, like referendum
campaigns) are diverse, so four kinds of democratic evaluation criteria are needed:

(i) Structuring competition and engagement

• The party system should provide citizens with a framework for simplifying and
organising political ideas and discourses, providing coherent packages of policy
proposals, so as to sustain vigorous and effective electoral competition between
rival teams.
• Parties should provide enduring brands, able to sustain the engagement and trust
of most citizens over long periods. Because they endure through time, parties
should behave responsibly, knowing that citizens can hold them effectively to
account in future.
• Main parties should help to recruit, socialise, select and promote talented
individuals into elected public office, ranging from local council to national
government levels.
• Party groups inside elected legislatures (such as MPs or councillors), and elites
and members in the party’s extra-parliamentary organisation, should help to
sustain viable and accountable leadership teams. They should also be important
channels for the scrutiny of public policies and the elected leadership’s conduct
in office and behaviour in the public interest.

(ii) Representing civil society

• The party system should be reasonably inclusive, covering a broad range of
interests and views in civil society. Parties should not exclude or discriminate
against people on the basis of gender, ethnicity or other characteristics.
• Citizens should be able to form and grow new political parties easily, without
encountering onerous or artificial official barriers privileging existing,
established or incumbent parties.
• Party activities should be regulated independently by impartial officials and
agencies, so as to prevent self-serving protection of existing incumbents.

(iii) Internal party democracy and transparency

• Long-established parties inevitably accumulate discretionary political power in
the exercise of their functions. This creates some citizen dependencies upon
them and always has ‘oligopolistic’ effects in restricting political competition
(for example, concentrating funding and advertising/campaign capabilities in
main parties). To compensate, the internal leadership of parties and their
processes for setting policies should be responsive to a wide membership, one
that is open and easy to join.
• Leadership selection and the setting of main policies should operate
democratically and transparently to members and other groupings inside the
party (such as party MPs or members of legislatures). Independent regulation
should ensure that parties stick both to their rule books and to public interest
practices.

(iv) Political finance

• Parties should be able to raise substantial political funding of their own, but
subject to independent regulation to ensure that effective electoral competition is
not undermined by inequities of funding.
• Individuals, organisations or interests providing large donations to parties or
other ‘election fighting organisations’ (such as referendum campaigns) must not
gain enhanced or differential influence over public policies, or the allocation of
social prestige (such as honours).
• All donations must be fully transparent, and without payments from ‘front’
organisations or foreign sources. The size of individual contributions should be
capped where they raise doubts of undue influence.

Recent developments: the party system


Political parties in the UK are normally stable organisations. Their vote shares and party
membership levels typically alter only moderately from one period to the next. But since 2014,
party fortunes have changed radically in the UK, particularly in England and Scotland. In 2017 the
top two parties secured more than four-fifths of votes in the UK (Figure 2), whereas in England
(their ‘home ground’) their share was 73% only two years earlier (Figure 1). With the Brexit
referendum won for ‘Leave’ in 2016 and its party leadership in chaos without its former leader
Nigel Farage, the UK Independence Party’s (UKIP’s) support in England in 2017 plummeted to 2%
– whereas two years earlier they commanded one in seven English votes at the general election (and
their opinion poll ratings were higher). Already in 2015, the Liberal Democrats’ vote share had
fallen sharply to just 8% in England (and lower elsewhere), around a third of its 2010 level – as the
electors punished them for their 2010–15 ‘austerity’ coalition government with the Tories. In 2017
their support still languished, although in local council elections in 2017 and 2018 they secured
around one in six votes.
Yet the most fundamental difference in the UK party systems between the elections arose from the
Brexit referendum in June 2016. Figure 1 below shows that in 2015 the competition space of British
politics was still essentially one-dimensional – so that parties could still be organised on a classical
left-right dimension, with the left standing for more public-sector spending and egalitarian policies,
and the right standing for free-market solutions, less welfare spending and stronger policies on
restricting immigration. There was a pro- and anti-European Union dimension in British politics in
2015 but only UKIP, with their advocacy of EU withdrawal, placed it centre stage. For the rest the
issue was sublimated, with the Cameron-led Conservatives and Miliband-led Labour both offering
very similar and quite consensual-seeming ‘European’ policy positions. Inside the Tories, although
strong currents of Euroscepticism were beginning to predominate again behind the scenes, this issue
hardly featured in Cameron’s 2015 campaign.
Figure 1: The party system in England, in the May 2015 general election

Source: P. Dunleavy, 2017 Lecture.


Notes: The positions of party ‘circles’ show their approximate left/right position; the size of the circles indicates
their vote shares in England. Parties with names underlined won seats.

Figure 2 shows that by 2017, a year after the shock June 2016 referendum vote for ‘Leave’, the
space of party competition was clearly two-dimensional, with the left-right ideological spectrum
now cross-cut slantwise by a three-fold cleavage between:
• Strong Eurosceptics committed to implementing the ‘Leave’ vote, whatever the
consequences, perhaps even walking away from the EU with a ‘no deal’ outcome – shown in
the purple-shaded area.
• Strong ‘Remainers’ committed to retaining the closest possible relationship with or full
customs union and single market access to the EU, and perhaps to holding a second
referendum for the public to approve the detailed outcome of withdrawal negotiations –
shown in the green shaded area. Significant sections of public and elite opinion here were
also willing to see the 2016 vote reversed if possible.
• In between, in the unshaded area, lie the largest blocs of elite and public opinion, committed
to implementing the ‘Leave’ vote so that ‘Brexit means Brexit’ as May insisted, but also
seeking the best possible compromise outcome for the UK in retaining links to the EU while
yet not having to accept ‘freedom of movement’ of EU citizens into the UK, or any EU
policies, or jurisdiction by the European Court of Justice.

Figure 2: The UK’s changed party system at the 2017 general election and the subsequent
Brexit negotiations phase

Source: P. Dunleavy, 2017 Lecture.


Notes: The positions of party circles show their approximate left/right position; the size shows their vote shares at the
2017 general election. The dotted line around the Liberal Democrats indicates their approximate level of support in
2017 and 2018 local elections (16%, calculated using the BBC’s national equivalent votes share measure).

These pro- and anti-Brexit lines of cleavage affect both the main parties. There are more
Conservative ultra-Leavers and more Labour strong Remainers, but both the top two parties are
internally divided into the three groups above. Only the Liberal Democrats, Scottish National Party
(SNP), and the Greens came out fully for remaining in the EU or as close as possible, while the
now-diminished UKIP was equally clearly for leaving ‘come what may’. The divisions within the
main parties meant that although Theresa May called the snap 2017 election supposedly to
strengthen her bargaining hand in negotiations with Brussels, in fact the EU withdrawal issue was
again handled in a ‘sub voce’ manner by both Conservatives and especially Labour – whose policy
position concentrated on domestic issues and remained deliberately very vague on European issues.
A succession of parliamentary votes on Brexit legislation in 2017 and 2018 has so far only
confirmed the picture in Figure 2, with Labour’s position varying quite markedly depending on the
detailed wording of each vote. Significant numbers of Conservatives have voted against the May
government’s ‘shaky compromise’ strategies at various stages, while many Labour MPs in strong
Leave-voting constituencies have supported the government against their party line on occasion
(while others, particularly London MPs, have rebelled for pro-EU amendments). Jeremy Corbyn
has especially kept Labour’s policy line so subtly modulated as to be almost invisible outside
Parliament itself.
So British party politics has never in recent history been so complex, and party labels have rarely
been of so little use in predicting how people stand on the dominant issue facing the UK. At the same
time the successive ‘suicide’ decisions of the Liberal Democrats (in 2010–15, by backing the
Cameron-Clegg coalition government and implementing austerity policies for five years) and of
UKIP (by losing Nigel Farage as leader at the height of the party’s Brexit success, and being unable
to replace him in any coherent way) have boosted the Conservative-Labour dominance of the
political process. The apparent two-party predominance broadly endured in opinion polls into mid-
2018, raising questions about whether the UK (or at least England) has decisively fallen back in
love with ‘two-party’ competition. Or will multi-partism survive (as it clearly has at local level) and
grow back once the stress of Brexit decisions eases?

Recent developments: inside the parties


Labour: In the extended 2017 election campaign Jeremy Corbyn reversed a 20 percentage point
deficit in the opinion polls at the outset, thanks to a growth in younger supporters and sophisticated
online campaigning. Aided by May’s campaign misfiring, his leadership produced an unprecedented
10 percentage point growth in Labour’s vote share over six weeks.
This performance cemented Corbyn’s leadership and the policy changes that he had implemented,
shifting the Labour Party decisively leftwards in opposition to austerity cuts, and contemplating re-
extending public ownership to the railways, water and perhaps other industries. He
maintained support for implementing the 2016 Brexit vote, while successfully masking or finessing
this stance with pro-Remain supporters (not least amongst the young). His triumph came after two
torpid years. In summer 2015 Corbyn was only just allowed to stand for the leadership at all by the
naïve generosity of some centrist MPs in getting him 15% of the Parliamentary Labour Party (PLP)
signatures. His runaway victory, with over three-fifths support amongst the party’s newly enlarged
membership, was greeted with horror by the PLP’s centre-right, but showed how astonishingly out
of touch most Labour MPs had become with their activists. In summer 2016 Corbyn’s perceived failure
to campaign overtly enough for Remain was the trigger for four-fifths of his shadow cabinet to
resign, triggering another leadership election. Yet the attempted coup was almost farcically
mishandled. No viable alternative candidate had been identified in advance, and an attempt to make
Corbyn re-gather nominations from 15% of MPs before he could stand again also failed. He
subsequently romped home with 62% support from members, against a lacklustre and previously
unknown centrist candidate, Owen Smith.
At long last the PLP had to accept his leadership, and Corbyn and his MPs held their nerve when
May called a snap election. They gave her the two-thirds consent of the Commons that she needed
under the Fixed-term Parliaments Act, despite Labour lagging badly in the polls.
defining a Labour manifesto then worked well, producing a popular document with few hostages to
fortune. And in the aftermath of the narrow 2017 defeat, Corbyn steered a rule change through the
party’s National Executive lowering the PLP nominations bar to 10% of MPs, so ensuring that a
future left candidacy for the leadership should be feasible. Most of the new MPs in 2017 are
Corbynites, the shadow cabinet has worked well (despite Labour’s evasiveness on Brexit), and
Labour’s poll ratings have broadly tied with the government’s into summer 2018. The alleged
influence of Momentum, a parallel movement of Labour supporters, has not so far produced clear
evidence of far-left ‘entryism’, and threats to sitting MPs from the left have been relatively few.
On another front, Corbyn has faced strong and vocal criticism from UK Jewish organisations that
Labour has failed to crack down on anti-semitism within its ranks. An official Labour report found
that the problem was small-scale, and the NEC subsequently took action to strengthen disciplinary
penalties for members breaching the party’s code of conduct – whose most prominent casualty was
former London mayor Ken Livingstone, who resigned from the party in spring 2018 over the issue.
the issue. The party’s vulnerability to attack here reflects three factors: the re-growth of the Labour
left (who condemn the illegal permanent Israeli occupation of territories seized after the 1967 war);
Corbyn’s identification with this position; and Labour’s remodelling of itself as a multi-ethnic urban
party. The PLP has demanded a stronger definition of anti-semitism in the code of conduct.
However, the party’s defenders argue that the pro-Israel lobby in the UK systematically categorises
every criticism of that state as anti-semitism, normally without any evidence of prejudice against
Jews as a race having been expressed – in order to close down criticism of Israeli repressive actions
against Palestinians.
Conservatives: The party under Theresa May also increased their 2017 vote share, reaping a
dividend from UKIP’s collapse. Yet this was not enough to retain a Commons majority against the
Labour surge, nor to save May’s legitimacy with her party for ‘wasting’ David Cameron’s (small)
2015 majority. May became a party leader and prime minister on notice, with an expectation that at
some point she would be superseded, either by resigning or by a leadership contest being triggered.
Her original accession in 2016 (with only an aborted election, from which all other candidates fell
away) turned into a liability when May proved an uncharismatic (allegedly ‘robotic’) performer on
the campaign trail. And her two top aides were widely blamed for mishandling a 2017 manifesto
pledge on taxing the elderly to fund social care, resulting in their subsequent speedy departure.
May also faced a difficult task of party management over the party’s Brexit strategy, which constantly
plagued her during her first two years in office. She ensured that Brexiteers formed a third of her
cabinet, gave them some key negotiating roles (notably David Davis, supposedly in charge of
negotiations) and brought her main erstwhile rival for the leadership, Boris Johnson, into the cabinet
in the (deliberately?) inappropriate role of foreign secretary. In July 2018, she forced a long-delayed
confrontation over the UK’s Brexit negotiating position with the Brexiteers in the cabinet at a
Chequers awayday, only to see Johnson and Davis both resign two days later and a guerrilla war
escalate in Parliament with her large group of Brexiteer MPs.
The Conservatives’ key problem is that both wings of the party have suffered cataclysmic defeats in
intra-party battles in living memory, which were so fundamental for both sides that maintaining the
Tories’ famous capacity to coalesce under pressure has become very difficult. For the right, the
1990 ejection of Margaret Thatcher from the leadership by the pro-European centre-left created a
‘stab-in-the-back’ myth that fuelled a bitter Euroscepticism that grew and became more intense over
nearly three decades. For the centre-left, the Brexit Leave vote became a symmetrical disaster,
causing the consequent ejection of David Cameron (and his Chancellor/heir apparent George
Osborne) from Downing Street. The Tory right’s role here was one Remainers find equally hard to
forgive – reversing as it does 43 years of centre-left policies on the EU.

Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis
Current strengths

• Britain’s party system is stable, and the main parties generally provide coherent platforms consistent with their ‘brand’ and ‘image’, despite the party cleavages caused by the Brexit issue (see above).
• Britain’s political parties continue to attract competent and talented individuals to run for office.
• Entry conditions vary somewhat by party, but it is not difficult or arduous to join and influence the UK’s political parties. Labour initially opened up the choice of their top two leadership positions to a wider electorate using their existing trade union networks and a £3 ‘supporter’ scheme (in 2015), but later reverted to full members only voting, after tensions with the party’s MPs.
• All the main parties (except perhaps UKIP) have recruited across ethnic boundaries, helping to foster the integration of black and ethnic minority groups into the mainstream of UK politics.
• Labour has involved a wider set of ‘supporters’ in its affairs and used digital campaigning more. And the separate group Momentum has helped channel back disillusioned, left-leaning people who had left the party under Blair and Brown, and younger people, into ‘parallel’ Labour involvements through both ‘clicktivist’ and more ‘old school’ activism.
• The UK’s main political parties are not over-reliant on state subsidies and can generally finance themselves through private membership fees, individual and corporate donations, or (in Labour’s case) trade union funding.
• In the restricted areas where it can regulate the parties, the Electoral Commission is independent from day-to-day partisan interference.

Current weaknesses

• Party membership in the UK has increased from a low base in 2010, but it is still low. Around 950,000 people are party members, out of a population of 65.6 million, with Labour and the SNP both showing strong recent growth. Conservative membership is now perhaps the most elderly of all the parties and remains small relative to Labour’s renewed mass membership.
• Plurality rule elections privilege established major parties with strong ‘safe seat’ bastions of support, at the expense of new entrants. The most active political competition thus tends to be focused on a minority of around 120 marginal seats, with policies tailored to appeal to the voters therein.
• It is fairly simple to form new political parties in the UK, but funding nomination fees for Westminster elections is still costly. And in plurality rule elections new parties with millions of votes may still win no seats, as happened to UKIP in 2015. At local level, some one-party dominant areas also produce councils with no opposition councillors at all.
• Labour has had long-running difficulties with allegations of anti-semitism amongst some party members in recent years (see above). Some critics argue that the Conservatives have failed to tackle Islamophobia within their ranks.
• Most mechanisms of internal democracy have accorded little influence to party memberships beyond choosing the winner in leadership elections. Jeremy Corbyn claims to be counteracting this and listening more to his members. However, in consequence, Labour struggled to delineate the relationship between MPs in the parliamentary party and the enlarged membership (whom MPs saw as not answerable to voters, and who may not reflect Labour voters’ views well). These tensions eased in 2017.
• There are large inequities in political finance available to parties, with some key aspects left unregulated. These may distort political (if not electoral) competition. Majority governments can alter party funding rules in directly partisan and adversarial ways (see below).
• The ‘professionalisation of politics’ is widely seen as having ‘squeezed out’ people with a developed background outside of politics (but see below).

Future opportunities

• Before the 2016 Brexit vote the UK seemed to be historically evolving towards multi-party politics, a trend that also found expression in elections beyond Westminster and English local government. New and ‘outsider’ parties strengthened anti-oligopoly tendencies. Since then, however, public opinion has shown a renewed emphasis upon top two-party competition.
• Some strong ‘new party’ trends have emerged towards broadening involvements using digital means and extended outreach/lowered barriers to membership within Labour and the SNP. These developments could strengthen party ties with civil society, reversing years of weakening. Alternatively these effects may ebb away again (see below).
• Digital changes also open up new ways in which parties can connect to supporters beyond their formal memberships and increase their links to and engagement with a wider range of voters. Parties now generally conduct their leadership elections using an online system which makes it easier to register a preference. Other matters of internal party business and campaigns could soon be affected, potentially including setting policy.
• The advent of far greater ‘citizen vigilance’ operating via the web and social media like Twitter and Facebook creates a new and far more intensive ‘public gaze’ scrutinising parties’ internal operations. Tools such as ‘voting advice’ apps or the Democratic Dashboard also allow voters to access reliable information about elections and democracy in their area – information that neither government nor the top parties has so far been either able or willing to provide.
• All the UK’s different legislatures (Westminster, and the devolved assemblies/parliaments in Scotland, Wales, Northern Ireland, and London) have now sustained coalition governments of different political stripes and at different periods, and each has operated stably. Therefore, the UK’s adversarial political culture does not rule out cross-party cooperation where electoral outcomes make it necessary.

Future threats

• Critics argue that the cross-cutting of both the top two parties by Brexit positions shown in Figure 2 above means that party labels and identities are no longer effectively structuring (but instead obscuring) the dominant issues in UK politics.
• In multi-party conditions, plurality rule elections for Westminster may operate in ever more eccentric or dramatic ways, as with the SNP’s 2015 landslide in Scotland almost obliterating every other party’s MPs there. The SNP’s strong support in 2014–16 threatened to create a ‘dominant party system’ in Scotland, where party alternation in government ceases for a long period. However, this prospect soon receded with both Tory and Labour revivals north of the border.
• The growth of political populism and identity divisions post-EU referendum has ‘hollowed out’ the centre ground of British politics, with the Liberal Democrats unable to regain their earlier momentum.
• Moves by governing political parties to alter laws, rules and regulations so as to skew future political competition and disadvantage their rivals can set dangerous precedents that degrade the quality of democracy. The Conservative government’s changes to electoral registration and redrawing of constituency boundaries may all have such effects, even if implemented in non-partisan ways.

Changes in the Scottish party system


In contrast to England, and to a large extent Wales, politics in Scotland has long operated
across two ideological dimensions, with left/right cleavages cross-cut by another issue of equal
(sometimes greater) salience: should Scotland stay in the UK, or not? And how much power should
be devolved to Edinburgh? Following the extraordinary mobilisation around the 2014 independence
referendum (which was narrowly lost by 55 to 45%) this line of cleavage greatly benefited the SNP
(and the Scottish Greens in a much smaller way). It tended to undermine and push together the other
four parties, all of which campaigned to keep the union with the UK.
Despite their ’Indy’ referendum defeat the SNP’s enhanced membership and morale meant that by
the time of the 2015 general election they gained a pre-eminence as the ‘voice for Scotland’ against
the prospect of a clear majority Tory UK government, as shown in Figure 3a. Gaining half of all
Scottish votes in 2015, they won all but three of the country’s 59 seats, leaving Labour’s traditional
dominance of Scottish representation in the UK Parliament shattered with just one MP, the same
number gained by the Conservatives and Liberal Democrats. For a time, it looked as if the SNP
would exert a hard-to-challenge dominance in Scottish politics, controlling as they did both the
Scottish government in Edinburgh, a majority of all MSPs and almost all Scottish representation at
Westminster, against a multiply-divided opposition.
Figure 3: The Scottish party system at the 2015 and 2017 general elections
Source: Dunleavy, LSE Lecture Notes for course Gv311.

Notes: The size of each party circle indicates its rough size and salience in the party system, and its approximate
position in two-dimensional space. The numbers in each circle show that party’s vote share percentage in Scotland.
Parties with names underlined won seats.

In the 2016 Scottish Parliament elections, however, the SNP as incumbents lost a little ground in
votes (down to 42%, and 63 of 129 seats), while the Tories jumped nearly 11% to become the main
opposition on 23% support, and Labour fell back badly to third. The Liberal Democrats were
unchanged, but the Greens moved from 2 to 6 seats, becoming critical for the SNP staying in power.
Nicola Sturgeon looked to have four more years as First Minister, and when Scotland voted by 62 to
38% not to leave the European Union, her allies quickly raised the prospect of holding a second
referendum on independence far more speedily than anyone had previously envisaged – not least to
resist a Westminster ‘land grab’ for EU powers that the SNP argued could permanently reset the
devolution settlement in the UK’s favour.
By 2017, however, public support for any second independence referendum amongst Scottish voters
was clearly a minority view. The new Scottish Conservative leader, Ruth Davidson, moved her
party’s position decisively towards the political centre, endorsed more devolution of powers to
Scotland, and sharpened criticisms of the SNP’s government at Holyrood. The Tories perhaps
attracted more support from pro-union Labour and Liberal Democrat voters as the most viable
unionist opposition.
In addition, during the June 2017 election campaign Jeremy Corbyn’s UK national leadership also
shifted Labour’s image leftwards, and brought the party back in line with Scotland’s left-leaning
political spectrum. The party also backed more powers for Scotland and slightly blurred its rejection
of independence (for instance, no longer making support for independence inconsistent with Labour
membership). These changes caused a significant swing back to a multi-party system, shown in
Figure 3b above. The later easy victory of Corbynite Richard Leonard as Scottish Labour leader
consolidated these changes, although he has yet to make much of a mark with voters at large.
The SNP could not sustain its 2015 majority vote share, losing a quarter of its support. Its seats
were slashed back from 56 to 35, just under three-fifths of the total of Scotland’s 59 MPs. The scale
and speed of these seat reversals was damaging. It was not until spring 2018 that the SNP dared to
publicly re-launch the idea of an Indy 2 referendum, at some point after Brexit had occurred,
perhaps in 2020 or 2021. The danger of Scotland becoming a ‘dominant party system’ – where the
same party is a serial winner against a fragmented opposition incapable of co-operating to defeat it
– had clearly receded after 2016.

Structuring competition and party ‘brands’


We noted above that the main alternative dimension in England has been the pro- and anti-EU one,
increasingly overlapping in UKIP’s campaigning with anti-immigrant sentiments. The right-wing
press have also explicitly played to anti-immigrant views, notably in their Brexit coverage, but
officially the Tories have not played along. However, Theresa May’s insistence on maintaining the
net immigration target of below 100,000 people a year, which was set under the Cameron
government when she was Home Secretary, and which has never been even vaguely approached by
actual, much higher migration levels, undoubtedly reflects a sub voce Conservative appeal on the
same lines. Attitudes towards immigration are far more aligned with existing left-right cleavages,
especially as Labour has developed towards being more of an urban/multicultural party, less
dominated by its working class/trade union lineage.
Both the top two British parties have had chronic difficulties in organising around the
EU/immigration aspect of politics, maintaining an agreed strategy of not vocally campaigning on
immigration, lest it stir up ethnic tensions. As we saw above, Labour has become progressively
more pro-EU since Brexit (echoing more the strongly European stances under previous leaders) and
the Conservative MPs (if not their leadership) have become more anti-EU and pro-Brexit.
The enduring quality of parties’ appeals is borne out by recent research showing that strong party
supporters place themselves ideologically at the same place as the parties they identify with.
Supporters tend to accurately perceive their own party’s position, but to see opposing parties as
more ‘extreme’ than they are. On the centre-left in 2017 there were multiple overlaps of party
supporters’ views amongst Labour, the Greens and Liberal Democrats, while on the right the
Conservatives and UKIP overlapped in some anti-EU positions. Yet in mid-terms, between general
elections, around two-fifths of those backing major parties told IPSOS-MORI they did not know
what they stood for.
So are main parties failing to communicate their brands in a sustained and consistent manner? A
potential explanation may lie with the various processes of party ‘modernisation’ that took place
over recent years, with each of the three main parties attempting to ‘move to the centre’. The shifts
to a more ‘managerialist’ politics of detail that occurred before Corbyn, the EU referendum and
May’s realignment of the Tories may have left many voters less clear what each party advocates.
But the reconfiguration of British party politics since 2016 now suggests that a realignment of the
party system may be in train, with UKIP potentially eliminated altogether, to the Tories’ great
benefit.

Electing party leaders, or not


For a brief period in the 2010s, all the parties enacted protracted processes in which their mass
memberships would elect the party leaders, albeit from fields of contenders that were initially
defined by MPs. Yet some of these arrangements now look as if they are likely to change or fall into
abeyance. Jeremy Corbyn’s two commanding party leadership election wins in 2015 and 2016 set
him up to almost succeed as a campaigner in the 2017 general election, and the changes lowering
the share of MPs needed for nomination (noted above) may guarantee that Labour’s internal
elections remain critical for the party in future.
However, in the other two leading parties, the members’ voice has recently been de-activated and
leadership competition denied. In June 2016, following Cameron’s shock resignation, complex
politicking amongst Tory MPs meant that Boris Johnson did not even make the nomination stage
and Michael Gove was ignominiously eliminated at the ‘winnowing out’ second ballot of Tory MPs.
The clear frontrunner Theresa May was left facing only the relatively unknown Brexiteer Andrea
Leadsom in a run-off vote by party members that would in theory take all summer long. Leadsom
withdrew, making May the unelected but initially unquestioned leader. Effectively the Tory MPs’ fix
denied their party members any chance to vote.
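To make the winnowing mechanics concrete, here is a minimal Python sketch of the MPs' stage: successive ballots are held, the last-placed candidate is eliminated each round, withdrawn candidates simply vanish from later tallies, and the process stops when two names remain for the members' run-off. The tallies follow widely reported figures from the 2016 contest, but the code is purely illustrative and not any official party rule-book.

# Minimal sketch (illustrative only) of the Conservative MPs' exhaustive
# ballot: eliminate the last-placed candidate each round until two remain.
def winnow_to_two(ballot_rounds):
    survivors = None
    for tallies in ballot_rounds:
        survivors = set(tallies)  # anyone absent from this round has withdrawn
        if len(survivors) > 2:
            survivors.discard(min(tallies, key=tallies.get))  # drop last place
    return survivors

ballots_2016 = [
    {"May": 165, "Leadsom": 66, "Gove": 48, "Crabb": 34, "Fox": 16},
    {"May": 199, "Leadsom": 84, "Gove": 46},  # Fox eliminated, Crabb withdrew
]
print(winnow_to_two(ballots_2016))  # {'May', 'Leadsom'}
# In reality Leadsom then withdrew, so the members' run-off never took place.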
However, May’s subsequent huge problems as party leader, and her lack of success as a campaigner
at the 2017 general election, may mean that the next Tory leadership contest will have to be run by the
book and involve members after all. The complex politics of precipitating a new contest without
seeming to be ‘disloyal’ put many alternative leaders off in 2017–18, especially while May could be
left to bear the burden of the Brexit negotiations. But as time wears on, the pressure for a resolution
of her perceived ‘caretaker only’ leadership tenure will intensify.
The second party where members effectively lost a vote was the Liberal Democrats. When they
came to elect a new leader after their 2015 general election losses their party had only eight MPs
left in the Commons to choose from. Tim Farron took the helm in 2015 but made little impact. In
2017 he stood down and the elderly returning MP Vince Cable was the only candidate to replace
him. By mid-2018 Cable had largely failed to improve the party's lowly opinion poll ratings, perhaps reflecting his own close involvement in the 2010–15 coalition government. The party's deputy
leader, Jo Swinson, may be the party’s best hope of remaking its image in time for a 2021 general
election, by passing the leadership baton to a new gender and generation.

Internal democracy for policy-making


All the parties have moved to greater transparency and openness in their affairs, and have different
arrangements for intra-party democracy to periodically set aspects of party policy. Labour’s
widening of membership and election of the party’s National Executive Committee by members is
the most radical innovation, and has created a left majority under Corbyn.
The remaining parties still operate more orthodox arrangements. In theory, Liberal Democrats have
the most internally democratic party, with the federal party and party conference enjoying a pre-
eminent role in policy formation. Yet in the coalition period the exigencies of the party being in
government seemed to easily negate this nominal influence (as has long been argued to be the case
in the top two parties). Conservative Party members have relatively little formal influence over
party policy, with key decisions made largely in Cabinet or Shadow Cabinet, and to a lesser degree
by the national party machine. At local level, members have more influence, but they rarely
challenge sitting MPs. UKIP’s members are not empowered by their party’s constitution, which
declares that motions at conference will only be considered as ‘advisory’, rather than binding. The
Green Party probably allows its membership the greatest degree of influence over internal policy,
but in local government has had to tighten up in the few areas where it has exercised power (such as
in Brighton).

Recruiting political elites


The main political parties regularly supply a steady stream of individuals to run for political office, who can be socialised, selected and promoted into their structures. The impression has gained ground that increasingly only candidates with professional, back-office backgrounds are being chosen. In fact, such ‘politics professionals’ make up fewer than one in six MPs, a far smaller share than popular accounts envisage. However, it is true that: ‘MPs who worked full-time in politics before
being elected dominate the top frontbench positions, whilst colleagues whose political experience
consisted of being a local councillor tended to remain backbenchers’. So politics professionals
within the top parties do tend to dominate media and policy debates.
In terms of wider social diversity, the 2017 parliament is in some ways (notably gender and
ethnicity) the most diverse and representative ever. Yet as Campbell et al noted in 2015 (when the
same claim was made): ‘To put the progress made in perspective, the UK would need to elect 130
more women and double the current number of black and ethnic minority MPs to make its
parliament descriptively representative of the population it serves.’ The share of women MPs rose by just two percentage points in 2017. The problem is that research continues to show that all the main parties’ memberships are disproportionately white, male, middle-aged and middle-class, with the problem most severe for the Conservatives. Against this background, achieving sustained and rapid improvements in the recruitment of diverse prospective candidates is difficult.

Representing civil society


The standard theme of now dated textbook discussions is that the major political parties are
declining in their ability to recruit members, and thereby becoming ‘cartel parties’ dependent for
their lifeblood upon large donors (such as very rich individuals for all parties, or trade unions with
large membership blocs for Labour), or upon state subsidies to parties. Yet Figure 4 shows that this
narrative of continuous decline has not been accurate for British parties as a whole in the 21st
century.
Figure 4: The membership levels of UK political parties, 2002–18
Source: Lukas Audickas, Noel Dempsey and Richard Keen, Membership of Political Parties, House of Commons Library Briefing Paper SN05125, 1 May 2018, p. 8.

Notes: The vertical axis shows thousands of members, from annual accounts submitted to the Electoral Commission, data from parties' head offices and, in the case of the Conservatives, media estimates. The Labour party
membership numbers of 2015 and 2016 include full party members and affiliated supporters, but not ‘registered
supporters’ (who paid only £3). Dotted lines show estimates based on media reports.

The last four years in Figure 4 show soaring numbers of members for the SNP since the
independence referendum, and for the Labour party since the introduction of easier membership rules, low-cost fees and the post-general election changes. Some observers point out that, with 522,000 individual
members, a Corbyn-led Labour has gained perhaps £8m in annual fees and so may be able to reduce
its dependence on affiliated trade unions’ block fee payments – a goal that eluded all previous
Labour leaders. The Conservatives also moved against the unions again. The Trade Union Act 2016
introduced an ‘opt-in’ requirement for political levies for new members of trade unions, replacing
the previous opt-out provision. This may (gradually) hit Labour’s union income in future years, or it
may be mitigated by improvements in union communication practices.
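As a rough sanity check on the 'perhaps £8m' membership-income estimate above, the short Python sketch below multiplies the 522,000 members by an assumed average fee; the £15 figure is a hypothetical placeholder, since actual subscription rates vary across standard, reduced and other categories.

# Back-of-the-envelope check of the membership-income claim in the text.
members = 522_000
assumed_average_fee = 15.0  # £ per member per year (hypothetical placeholder)
print(f"Implied annual fee income: £{members * assumed_average_fee / 1e6:.1f}m")
# -> roughly £7.8m, consistent with the 'perhaps £8m' estimate above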
All these changes mean that parties now draw very different proportions of their income from
membership subscriptions. Figure 5 shows that the Greens and SNP are the parties for whom
membership fees count most as a source of income, with the Conservatives at the bottom and the Liberal Democrats close behind them. Labour, Plaid Cymru and UKIP form an intermediate group.
Figure 5: Income from membership revenues as a percentage of total income

Source: Party annual accounts submitted to the Electoral Commission

In some European countries, a recent rejuvenation of party politics has taken two contrasting forms.
Some new parties committed to a different kind of 'close to civil society' politics emerged on the left (like Podemos in Spain and Syriza in Greece). More often, though, populist anti-EU/anti-
immigration parties grew markedly on the radical right. Some observers even discern the ‘death of
representative politics’ in such changes. But in the UK the highly insulating plurality rule voting
system at Westminster has asymmetrically protected the top two UK parties, with the UKIP wave
artificially excluded from Parliament on the right in 2015. And left-of-centre movements have
happened not in new parties but within the ranks of Labour (in England) and the SNP (in Scotland).
These latter changes have proved resilient so far, but they may still not endure if either party
experiences setbacks in future.
Political finance
The core foundations of the UK’s party funding system lie in electoral law. Two key provisions are:
(i) the imposition of very restrictive local campaign finance limits on parties and candidates; and (ii)
the outlawing of any paid-for broadcast advertising by parties in favour of state-funded and strictly
regulated party election broadcasts (set by votes won last time). Opposition parties also have the
benefit of a degree of state funding (called ‘Short money’ and again related to votes received) but
this is only available to those parties with at least one MP. The bulk of the funds so far has gone to
fund the leaders’ offices of Labour, the SNP and Liberal Democrats.
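The general shape of the Short money formula – a sum per seat won plus a sum per block of 200 votes cast – can be sketched as follows. The rates used here are hypothetical placeholders, not the actual, annually uprated figures.

# Sketch of the Short money formula's structure (rates are placeholders).
RATE_PER_SEAT = 18_000    # hypothetical £ per seat won
RATE_PER_200_VOTES = 36   # hypothetical £ per 200 votes received

def short_money(seats, votes):
    if seats < 1:
        return 0.0        # parties with no MPs receive nothing
    return seats * RATE_PER_SEAT + (votes / 200) * RATE_PER_200_VOTES

# An illustrative party with 30 seats and 2.5 million votes:
print(f"£{short_money(30, 2_500_000):,.0f}")  # £990,000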
Political finance nonetheless still matters immensely in UK politics because two types of spending
are completely uncontrolled, namely: (iii) supra-local campaigning and advertising in the press,
billboards, social media and other generic formats; and (iv) general campaign and organisational
spending by parties, which is crucial to parties’ abilities to set agendas and create media coverage
‘opportunities’, especially outside the narrowly defined and more media-regulated election periods
themselves.
In terms of private donations, Figure 6 shows that the Conservative Party gained just over half of the
total across the 2013–17 period, mostly from very rich people. Labour, meanwhile, received a
smaller 32%, partly from mass membership and trade union fees, with some large individual
donations also. The Liberal Democrats, in government until 2015, also gained some large gifts – as
did UKIP.
Figure 6: Donations to political parties, 2013–17 (£ millions)

Party           2013    2014    2015    2016    2017    Total   % of all donations 2013–17
Conservatives   15.9    29.2    33.2    17.5    37.1    132.9   50.5
Labour          13.3    18.7    21.5    13.9    16.1     83.5   32
Lib Dems         3.9     8.3     6.7     6.4     6.3     31.6   12
UKIP             0.67    1.2     3.3     1.6     0.65     7.4    2.8
SNP              0.04    3.8     1.2     0.14    0.87     6.5    2.5
Source: Electoral Commission

Notes: Percentages may not sum to 100% due to rounding.

Donating to parties is supposedly transparent. All gifts must be declared and sources made clear,
and funding is regulated by the Electoral Commission. But unlike in many liberal democracies, there are no caps on the size of UK donations, although donations from overseas have been clamped
down on. Critics argue that ‘the fact that political parties are sustained by just a handful of
individuals makes unfair influence a very real possibility even if the reality is a system that is more
corruptible than corrupt.’ Close analysis also shows a strong link between donations to political
parties and membership of the House of Lords, now almost entirely in the gift of party leaders, despite supposedly stronger rules on 'good conduct' in public life introduced after the scandals of around 2009. In the past, Conservative and Labour leaders have both been very reluctant to give up the lubricating role of the honours system in sustaining their funding hegemony and easing internal
party management. The Tories (and, to a lesser extent, the Liberal Democrats) continue to take full
advantage of this. However, Corbyn has made only two Lords appointments, and the SNP will take
no seats there. Meanwhile the Liberal Democrats have far and away the highest ratio of peerages
and knighthoods amongst their past MPs of any UK political party.
Although party finance regulation is impartially implemented in a day-to-day manner, there is little
to stop a government with a majority from legislating radically to change party finance rules in
‘sectarian’ ways that maximise their own individual party interests and directly damage opponents.
In the UK’s ‘unfixed’ constitution, only elite self-restraint, Tory party misgivings or perhaps House
of Lords changes (which made a difference to the anti-union law in 2016) can prevent directly
partisan manipulation of the opposition’s finances.

Conclusions
The conventional wisdom of ‘parties in decline’ does not now fit the recent history of the UK well,
with some membership levels growing, and others fairly stable. Some ‘new party’ trends emerged
(for a while) within Labour and the SNP, utilising different, more digital ways of mobilising and
stronger links to parts of civil society. Internal party elections of most key candidates (not leaders)
are generally stronger now than in earlier decades (except within UKIP). So parties are not yet just
the self-serving ‘cartels’ that critics often allege.
Yet many problems remain. The Brexit divide cuts across party lines in an acute way, producing
deliberate vagueness in what each of the two top parties say to voters on this crucial issue. The
provisions for party members to elect leaders were left unused in the Conservative Party in 2016,
and for a time created almost insupportable strains within Labour under Corbyn. The problem of a
‘club ethos’ uniting MPs in the main parties was evident in the over-protection that the Westminster
election system grants Conservatives, Labour and now the SNP; in the very partial regulation of
political financing and the (only weakly regulated) effective ‘sale’ of honours; in the ability of
governments to legislate in sectarian ways to weaken their opposition parties; in weak internal
democracy controls or influence over parties’ policy stances and manifestos; and in the sheer scale
of parliamentary party remoteness from membership views that can arise.

About the authors


Patrick Dunleavy is Professor of Political Science and Public Policy at the LSE, co-director of
Democratic Audit and Chair of the Public Policy Group.
Sean Kippin is a PhD candidate and Associate Lecturer at the University of the West of Scotland
and a former editor of Democratic Audit.
Comments

1. Tim Williamson
I can't fail to be impressed with such a knowledgeable and articulate academic paper. My problem is that it generally does not seem to relate to me as a citizen who feels he has no control over UK politics – effectively disenfranchised. So just a couple of points:
Firstly, your criterion – “Parties should be able to raise substantial political funding of their own” – doesn't seem to be in the least democratic. Why should the rich have a party that is so well resourced, while the poor do not? It's rigged. Funding is crucial to campaigning. How much money could a poor person contribute to a party that would represent them? I doubt they could manage £10. So why aren't donations capped at £10?
Secondly, under “threats”, why is there no mention of the massive effects of micro-targeting and other techniques in social media campaigns? How can this not be brainwashing? It needs really drastic action.
2. Sean Swan
With the greatest respect to both Sean and Prof Dunleavy, please either use the term ‘Great Britain’ rather than ‘UK’, or include the DUP and Sinn Fein.
Shadow Banking

Mike Whitney, November 28, 2012
Regulators are worried about the explosive growth of shadow banking, and they should be. Shadow
banks were at the heart of the last financial crisis and they’ll be at the heart of the next financial
crisis as well. There’s no doubt about it. It’s simply impossible to maintain a system where
unregulated, non-bank financial institutions are able to create their own money (credit) without
oversight or supervision. The money they create–via off-balance sheets operations, securitization,
repo or other unmonitored mega-leveraging activities–feeds into the economy, creates artificial
demand, lowers unemployment, and fuels growth. But when the cycle slams into reverse (and debts
are no longer serviced on time), then thinly-capitalised shadow banks begin to default one-by-one,
creating a daisy-chain of counterparty bankruptcies that push stocks into a nosedive while the
economy slips into a long-term slump.
Sound familiar?
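The 'mega-leveraging' involved can be made concrete with textbook repo arithmetic (illustrative figures, not data from this article): the smaller the haircut a lender demands, the more assets a given sliver of capital can ultimately fund.

# With a haircut h, each $1 of capital can fund roughly 1/h of assets,
# because every repo raises (1 - h) of an asset's value in cash that can
# be recycled into buying more collateral.
def max_leverage(haircut):
    return 1.0 / haircut

for h in (0.10, 0.05, 0.02):
    print(f"haircut {h:.0%}: assets of up to {max_leverage(h):.0f}x capital")
# When haircuts jump in a panic, the same arithmetic runs in reverse and
# the credit pyramid must shrink just as fast.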
The reason the global economy is still in a shambles more than four years after Lehman Brothers collapsed is that this deeply flawed system – which had previously generated 40 percent of the credit in the US economy – was still in rebuilding mode. But now, according to a new report by the Financial
Stability Board, shadow banking has made a comeback and is bigger than ever. The FSB found that
assets held by shadow banks have swollen to $67 trillion, a sum that’s nearly as large as global GDP
($69.97 trillion) and greater than the $62 trillion that was in the system prior to the Crash of ’08.
The more shadow banking grows, the greater the probability of another financial crisis.
So what is shadow banking and how does it work?
Here’s how Investopedia defines the term:
“The financial intermediaries involved in facilitating the creation of credit across the global financial system, but whose members are not subject to regulatory oversight. The shadow banking system also refers to unregulated activities by regulated institutions.

Examples of intermediaries not subject to regulation include hedge funds, unlisted derivatives and other unlisted instruments. Examples of unregulated activities by regulated institutions include credit default swaps.

The shadow banking system has escaped regulation primarily because it did not accept traditional bank deposits. As a result, many of the institutions and instruments were able to employ higher market, credit and liquidity risks, and did not have capital requirements commensurate with those risks. Subsequent to the subprime meltdown in 2008, the activities of the shadow banking system came under increasing scrutiny and regulations.” (Investopedia)

Shadow banking may have “come under increasing scrutiny”, but not a damn thing has been done
to fix the problems. The banks and their lobbyists have beaten back all the sensible reforms that
would have made the system safer. Instead, we're back at square one, where credit is expanding by leaps and bounds through what Pimco's Paul McCulley called “a whole alphabet soup of levered up non-bank investment conduits, vehicles and structures”. What we are seeing, in essence, is the
privatizing of money creation. Privately-owned financial institutions of every stripe are increasing
the amount of credit in the system even though the underlying collateral they’re using may be
dodgy and even though they may not have sufficient capital to honor claims if there’s a run on the
system.
Let’s explain: When a bank issues a mortgage, it is required to hold a certain amount of capital
against the loan in case of default. But if the bank securitizes the mortgage, that is, it chops the
mortgage up into tranches, pools it with other mortgages, and sells it as a bond (mortgage backed
security), then the bank is no longer required to hold capital against the asset. In other words, the
bank has created money (credit) out of thin air. This is the ultimate goal of banking: to maximize profits off zilch capital.
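The incentive is easy to quantify. A minimal sketch, assuming Basel I-style parameters – an 8 percent capital charge on risk-weighted assets and a 50 percent risk weight for residential mortgages, both assumptions rather than figures from the article – shows how securitization releases capital:

# Capital-relief arithmetic under assumed Basel I-style parameters.
CAPITAL_CHARGE = 0.08         # assumed capital charge on risk-weighted assets
MORTGAGE_RISK_WEIGHT = 0.50   # assumed risk weight for residential mortgages

def capital_required(mortgages_held):
    return mortgages_held * MORTGAGE_RISK_WEIGHT * CAPITAL_CHARGE

book = 100_000_000  # $100m of mortgages kept on the balance sheet
print(f"Capital if held on book: ${capital_required(book):,.0f}")  # $4,000,000
print("Capital if securitized and sold: $0")
# Selling the pooled loans moves them off the balance sheet, freeing the
# same $4m of capital to back a fresh $100m of lending.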
So how is this different than counterfeiting?
There’s no difference at all. The banks are creating “near money” or what Marx called “fictitious
capital” without sufficient resources, without supervision, and without any regard for the damage
they may inflict on the real economy when their Ponzi scam blows up. What matters is profits;
everything else is secondary.
We live in an economy where the central bank no longer controls the money supply. Interest rates play only a small part in this new paradigm, where risk-oriented speculators can boost broad money by many orders of magnitude merely by increasing their debt levels. This new phenomenon has intensified systemic instability and caused incalculable harm to the real economy. Keep in mind that ground zero in the financial crisis was a shadow bank called the Reserve Primary Fund. That's
where the trouble really began.
In 2008, the Reserve Primary Fund (which had lent Lehman $785 million and received short-term
notes called commercial paper) was unable to keep up with the withdrawals of clients who were
concerned about the fund’s financial health. The sudden erosion of trust triggered a run on the
money markets which sent equities plunging. Here’s how Bloomberg sums it up:
“On Tuesday, Sept. 16, the run on Reserve Primary continued. Between the time of
Lehman’s Chapter 11 announcement and 3 p.m. on Tuesday, investors asked for $39.9
billion, more than half of the fund’s assets, according to Crane Data.

“Reserve’s trustees instructed employees to sell the Lehman debt, according to the SEC.

“They couldn’t find a buyer.

“At 4 p.m., the trustees determined that the $785 million investment was worth nothing.
With all the withdrawals from the fund, the value of a single share dipped to 97 cents.

“Legg Mason, Janus Capital Group Inc., Northern Trust Corp., Evergreen and Bank of
America Corp.’s Columbia Management investment unit were all able to inject cash into
their funds to shore up losses or buy assets from them. Putnam closed its Prime Money
Market Fund on Sept. 18 and later sold its assets to Pittsburgh-based Federated
Investors.
“At least 20 money fund managers were forced to seek financial support or sell holdings
to maintain their $1 net asset value, according to documents on the SEC Web Site.”
(“Sleep-At-Night-Money Lost in Lehman Lesson Missing $63 Billion”, Bloomberg)

The news that Reserve Primary had “broken the buck” sparked a panic that quickly spread to markets across the world, sending stocks into freefall. Reserve Primary was the proximate cause of the financial crisis and the global crash, not subprime mortgages and not Lehman Brothers. This
fact is obfuscated by the media to conceal the inherent dangers of the shadow system, a system that
is just as rickety and crisis-prone today as it was in September 2008.
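The Bloomberg figures above are enough to reconstruct, in stylized form, how the buck was broken. The sketch below assumes a pre-run fund of roughly $64 billion (implied by $39.9 billion being 'more than half of the fund's assets'); redemptions paid out at the fixed $1.00 price then concentrate the $785 million write-down on the remaining shareholders.

# Stylized reconstruction of the 97-cent share price (fund size assumed).
fund_assets = 64.0e9    # assumed pre-run assets
redemptions = 39.9e9    # paid out at the fixed $1.00 share price
lehman_loss = 0.785e9   # the Lehman paper written down to zero

remaining_assets = fund_assets - redemptions - lehman_loss
remaining_shares = fund_assets - redemptions   # shares redeemed at $1 each
print(f"NAV per share: ${remaining_assets / remaining_shares:.2f}")  # ~$0.97
# Early redeemers escape at par; the whole loss lands on whoever stays.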
Although there are ways to make shadow banking safer, the banks and their lobbyists have resisted
any change to the current system. Recently, the banks delivered a stunning defeat to Securities and
Exchange Commission chairwoman Mary Schapiro who had been pushing for minor changes to
money market accounts that would have made this critical area of the shadow system safer and less
susceptible to bank runs. Schapiro’s drubbing at the hands of an all-powerful financial services
industry sent shockwaves through Washington, where even diehard friends of Wall Street – like Ben Bernanke and Treasury Secretary Timothy Geithner – sat up and took notice. They have since joined
the fight to implement modest regulations on an out-of-control money market system which
threatens to crash the financial system for the second time in less than a decade.
Keep in mind that the changes Geithner, Bernanke and Schapiro seek are meager by any standard.
They would involve “a floating net asset value, or share price, instead of their current fixed price,”
or more capital to back up the investments in the money market fund (just 3 percent) in case there’s
a panic and investors want to withdraw their money quickly. That sounds reasonable, doesn’t it?
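Against the same stylized Reserve Primary numbers used earlier, even that modest buffer would have covered the Lehman write-down comfortably:

# The loss was ~1.2% of assumed pre-run assets, well inside a 3% buffer.
loss_share = 0.785e9 / 64.0e9
print(f"Loss: {loss_share:.1%} of assets, versus a 3.0% capital buffer")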
Even so, the banks have rejected any change at all. They believe they have the right to deceive
investors about the risks involved in keeping their money in uninsured money market accounts.
They don’t think they should have to keep enough capital on hand to cover withdrawals in the event
of a bank run. They’ve decided that profits outweigh social responsibility or systemic stability.
So far, Wall Street has fended off all attempts at regulatory reform. The banks and their allies in
Congress have made mincemeat of Dodd-Frank, the reform bill that was supposed to prevent
another financial crisis. Here’s how Matt Taibbi summed it up in a recent article in Rolling Stone:
“At 2,300 pages, the new law ostensibly rewrote the rules for Wall Street. It was going
to put an end to predatory lending in the mortgage markets, crack down on hidden fees
and penalties in credit contracts, and create a powerful new Consumer Financial
Protection Bureau to safeguard ordinary consumers. Big banks would be banned from
gambling with taxpayer money, and a new set of rules would limit speculators from
making the kind of crazy-ass bets that cause wild spikes in the price of food and energy.
There would be no more AIGs, and the world would never again face a financial
apocalypse when a bank like Lehman Brothers went bankrupt.

Most importantly, even if any of that fiendish crap ever did happen again, Dodd-Frank
guaranteed we wouldn’t be expected to pay for it. “The American people will never
again be asked to foot the bill for Wall Street’s mistakes,” Obama promised. “There will
be no more taxpayer-funded bailouts. Period.”

Two years later, Dodd-Frank is groaning on its deathbed. The giant reform bill turned
out to be like the fish reeled in by Hemingway’s Old Man – no sooner caught than set
upon by sharks that strip it to nothing long before it ever reaches the shore.” (“How
Wall Street Killed Financial Reform”, Matt Taibbi, Rolling Stone)

Congress, the White House and the SEC are all responsible for the fragile state of the financial system and for the fact that shadow banking has not been brought under regulatory oversight. This mess should have been cleaned up a long time ago; instead, shadow banking is experiencing a growth spurt, adding trillions to the money supply and pushing the system closer to disaster. It's shocking.
MIKE WHITNEY lives in Washington state. He is a contributor to Hopeless: Barack Obama and
the Politics of Illusion (AK Press). Hopeless is also available in a Kindle edition. He can be
reached at fergiewhitney@msn.com.
