
From: http://idlewords.com/talks/sase_panel.htm
The Moral Economy of Tech
by Maciej Ceglowski
This is the text version of remarks I gave on June 26, 2016, at a panel on the
Moral Economy of Tech at the SASE conference in Berkeley. The other panel
participants were Kieran Healy (whose remarks are here), Stuart Russell and
AnnaLee Saxenian. We were each asked to speak for ten minutes, to an audience
of social scientists.
I am only a small minnow in the technology ocean, but since it is my natural
habitat, I want to make an effort to describe it to you.
As computer programmers, our formative intellectual experience is working with
deterministic systems that have been designed by other human beings. These can
be very complex, but the complexity is not the kind we find in the natural world.
It is ultimately always tractable. Find the right abstractions, and the puzzle box
opens before you.
The feeling of competence, control and delight in discovering a clever twist that
solves a difficult problem is what makes being a computer programmer
sometimes enjoyable.
But as anyone who's worked with tech people knows, this intellectual background
can also lead to arrogance. People who excel at software design become
convinced that they have a unique ability to understand any kind of system at all,
from first principles, without prior training, thanks to their superior powers of
analysis. Success in the artificially constructed world of software design promotes
a dangerous confidence.
Today we are embarked on a great project to make computers a part of everyday
life. As Marc Andreessen memorably frames it, "software is eating the world".
And those of us writing the software expect to be greeted as liberators.
Our intentions are simple and clear. First we will instrument, then we will
analyze, then we will optimize. And you will thank us.
But the real world is a stubborn place. It is complex in ways that resist
abstraction and modeling. It notices and reacts to our attempts to affect it. Nor
can we hope to examine it objectively from the outside, any more than we can
step out of our own skin.

The connected world we're building may resemble a computer system, but really
it's just the regular old world from before, with a bunch of microphones and
keyboards and flat screens sticking out of it. And it has the same old problems.
Approaching the world as a software problem is a category error that has led us
into some terrible habits of mind.
BAD MENTAL HABITS
First, programmers are trained to seek maximal and global solutions. Why solve a
specific problem in one place when you can fix the general problem for
everybody, and for all time? We don't think of this as hubris, but as a laudable
economy of effort. And the startup funding culture of big risk, big reward
encourages this grandiose mode of thinking. There is powerful social pressure to
avoid incremental change, particularly any change that would require working
with people outside tech and treating them as intellectual equals.
Second, treating the world as a software project gives us a rationale for being
selfish. The old adage has it that if you are given ten minutes to cut down a tree,
you should spend the first five sharpening your axe. We are used to the idea of
bootstrapping ourselves into a position of maximum leverage before tackling a
problem.
In the real world, this has led to a pathology where the tech sector maximizes its
own comfort. You don't have to go far to see this. Hop on BART after the
conference and take a look at Oakland, or take a stroll through downtown San
Francisco and try to persuade yourself you're in the heart of a boom that has
lasted for forty years. You'll see a residential theme park for tech workers,
surrounded by areas of poverty and misery that have seen no benefit and ample
harm from our presence. We pretend that by maximizing our convenience and
productivity, we're hastening the day when we finally make life better for all those
other people.
Third, treating the world as software promotes fantasies of control. And the best
kind of control is control without responsibility. Our unique position as authors
of software used by millions gives us power, but we don't accept that this should
make us accountable. We're programmers; who else is going to write the
software that runs the world? To put it plainly, we are surprised that people seem
to get mad at us for trying to help.
Fortunately we are smart people and have found a way out of this predicament.
Instead of relying on algorithms, which we can be accused of manipulating for
our benefit, we have turned to machine learning, an ingenious way of disclaiming
responsibility for anything. Machine learning is like money laundering for bias.
It's a clean, mathematical apparatus that gives the status quo the aura of logical
inevitability. The numbers don't lie.
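The laundering is mechanical: a model fit to biased decisions learns exactly those decisions and returns them as a tidy score. A toy sketch in Python (my illustration, with invented group names and numbers, not anything from the talk):

```python
# A toy sketch: "train" a model on biased historical hiring decisions
# and watch it hand the bias back as an objective-looking number.
# The groups and rates below are hypothetical.

# Invented history: group A was hired 90% of the time, group B 30%.
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """Learn the hire rate per group -- the bias simply becomes the model."""
    counts = {}
    for group, hired in records:
        hires, total = counts.get(group, (0, 0))
        counts[group] = (hires + hired, total + 1)
    return {g: hires / total for g, (hires, total) in counts.items()}

model = train(history)
print(sorted(model.items()))  # [('A', 0.9), ('B', 0.3)]
```

Nothing in the output announces that it encodes a prejudice; it is just a number, and the numbers don't lie.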
Of course, people obsessed with control have to eventually confront the fact of
their own extinction. The response of the tech world to death has been
enthusiastic. We are going to fix it. Google Ventures, for example, is seriously
funding research into immortality. Their head VC will call you a "deathist" for
pointing out that this is delusional.
Such fantasies of control come with a dark side. Witness the current anxieties
about an artificial superintelligence, or Elon Musk's apparently sincere belief that
we're living in a simulation. For a computer programmer, that's the ultimate loss
of control. Instead of writing the software, you are the software.
We obsess over these fake problems while creating some real ones.
In our attempt to feed the world to software, techies have built the greatest
surveillance apparatus the world has ever seen. Unlike earlier efforts, this one is
fully mechanized and in a large sense autonomous. Its power is latent, lying in
the vast amounts of permanently stored personal data about entire populations.
We started out collecting this information by accident, as part of our project to
automate everything, but soon realized that it had economic value. We could use
it to make the process self-funding. And so mechanized surveillance has become
the economic basis of the modern tech industry.
SURVEILLANCE CAPITALISM
Surveillance capitalism has some of the features of a zero-sum game. The actual
value of the data collected is not clear, but it is definitely an advantage to collect
more than your rivals do. Because human beings develop an immune response to
new forms of tracking and manipulation, the only way to stay successful is to
keep finding novel ways to peer into people's private lives. And because much of
the surveillance economy is funded by speculators, there is an incentive to try
flashy things that will capture the speculators' imagination, and attract their
money.
This creates a ratcheting effect where the behavior of ever more people is tracked
ever more closely, and the collected information retained, in the hopes that
further dollars can be squeezed out of it.
Just like industrialized manufacturing changed the relationship between labor
and capital, surveillance capitalism is changing the relationship between private
citizens and the entities doing the tracking. Our old ideas about individual
privacy and consent no longer hold in a world where personal data is harvested
on an industrial scale.
Those who benefit from the death of privacy attempt to frame our subjugation in
terms of freedom, just like early factory owners talked about the sanctity of
contract law. They insisted that a worker should have the right to agree to
anything, from sixteen-hour days to unsafe working conditions, as if factory
owners and workers were on an equal footing.
Companies that perform surveillance are attempting the same mental trick. They
assert that we freely share our data in return for valuable services. But opting out
of surveillance capitalism is like opting out of electricity, or cooked foods; you
are free to do it in theory. In practice, it will upend your life.
Many of you had to obtain a US visa to attend this conference. The customs
service announced yesterday it wants to start asking people for their social media
profiles. Imagine trying to attend your next conference without a LinkedIn
profile, and explaining to the American authorities why you are so suspiciously
off the grid.
The reality is, opting out of surveillance capitalism means opting out of much of
modern life.
We're used to talking about the private and public sector in the real economy, but
in the surveillance economy this boundary doesn't exist. Much of the day-to-day
work of surveillance is done by telecommunications firms, which have a close
relationship with government. The techniques and software of surveillance are
freely shared between practitioners on both sides. All of the major players in the
surveillance economy cooperate with their own country's intelligence agencies,
and are spied on (very effectively) by all the others.
As a technologist, this state of affairs gives me the feeling of living in a forest that
is filling up with dry, dead wood. The very personal, very potent information
we're gathering about people never goes away, only accumulates. I don't want to
see the fire come, but at the same time, I can't figure out a way to persuade other
people of the great danger.
So I try to spin scenarios.
THE INEVITABLE LIST OF SCARY SCENARIOS
One of the candidates running for President this year has promised to deport
eleven million undocumented immigrants living in the United States, as well as
block Muslims from entering the country altogether. Try to imagine this policy
enacted using the tools of modern technology. The FBI would subpoena Facebook
for information on every user born abroad. Email and phone conversations would
be monitored to check for the use of Arabic or Spanish, and sentiment analysis
applied to see if the participants sounded "nervous". Social networks, phone
metadata, and cell phone tracking would lead police to nests of hiding
immigrants.
We could do a really good job deporting people if we put our minds to it.
Or consider the other candidate running for President, the one we consider the
sane alternative, who has been a longtime promoter of a system of extrajudicial
murder that uses blanket surveillance of cell phone traffic, email, and social
media to create lists of people to be tracked and killed with autonomous aircraft.
The system presumably includes points of human control (we don't know because
it's secret), but there's no reason in principle it could not be automated. Get into
the wrong person's car in Yemen, and you lose your life.
That this toolchain for eliminating enemies of the state is only allowed to operate
in poor, remote places is a comfort to those of us who live elsewhere, but you can
imagine scenarios where a mass panic would broaden its scope.
Or imagine what the British surveillance state, already the worst in Europe, is
going to look like in two years, when it's no longer bound by the protections of
European law, and economic crisis has driven the country further into
xenophobia.
Or take an example from my home country, Poland. Abortion has been illegal in
Poland for some time, but the governing party wants to tighten restrictions on
abortion by investigating every miscarriage as a potential crime. Women will
basically be murder suspects if they lose their baby. Imagine government agents
combing your Twitter account, fitness tracker logs, credit card receipts and
private communications for signs of potential pregnancy, with the results
reported to the police to proactively protect your unborn baby.
We tend to imagine dystopian scenarios as ones in which a repressive government
uses technology against its people. But what scares me in these scenarios is that
each one would have broad social support, possibly majority support. Democratic
societies sometimes adopt terrible policies.
When we talk about the moral economy of tech, we must confront the fact that we
have created a powerful tool of social control. Those who run the surveillance
apparatus understand its capabilities in a way the average citizen does not. My
greatest fear is seeing the full might of the surveillance apparatus unleashed
against a despised minority, in a democratic country.
What we've done as technologists is leave a loaded gun lying around, in the hopes
that no one will ever pick it up and use it.
CONCLUSION
The first step towards a better tech economy is humility and recognition of limits.
It's time to hold technology politically accountable for its promises. I am very
suspicious of attempts to change the world that can't first work on a local scale. If
after decades we can't improve quality of life in places where the tech elite
actually lives, why would we possibly make life better anywhere else?
We should not listen to people who promise to make Mars safe for human
habitation, until we have seen them make Oakland safe for human habitation. We
should be skeptical of promises to revolutionize transportation from people who
can't fix BART, or have never taken BART. And if Google offers to make us
immortal, we should check first to make sure we'll have someplace to live.
Techies will complain that trivial problems of life in the Bay Area are hard
because they involve politics. But they should involve politics. Politics is the thing
we do to keep ourselves from murdering each other. In a world where everyone
uses computers and software, we need to exercise democratic control over that
software.
Second, the surveillance economy is way too dangerous. Even if you trust
everyone spying on you right now, the data they're collecting will eventually be
stolen or bought by people who scare you. We have no ability to secure large data
collections over time.
The goal should be not to make the apparatus of surveillance politically
accountable (though that is a great goal), but to dismantle it. Just like we don't let
countries build reactors that produce plutonium, no matter how sincere their
promises not to misuse it, we should not allow people to create and indefinitely
store databases of personal information. The risks are too high.
I think a workable compromise will be to allow all kinds of surveillance, but limit
what anyone is allowed to store or sell.
More broadly, we have to stop treating computer technology as something
unprecedented in human history. Not every year is Year Zero. This is not the first
time an enthusiastic group of nerds has decided to treat the rest of the world as a
science experiment. Earlier attempts to create a rationalist Utopia failed for
interesting reasons, and since we bought those lessons at a great price, it would
be a shame not to learn them.
There is also prior art in attempts at achieving immortality, limitless wealth, and
Galactic domination. We even know what happens if you try to keep dossiers on
an entire country.
If we're going to try all these things again, let's at least learn from our past, so we
can fail in interesting new ways, instead of failing in the same exasperating ways
as last time.
