
05. émile durkheim on suicide & society

So, the fact that we have society at all is kind of amazing. Think about it: People with
different interests, different amounts of money, members of different subcultures, races,
and sexual orientations, somehow all manage to hold together, in this thing we call
society. A thing that, at least kind of, works. But it doesn't just hold together.

Society has to somehow endure periods of intense change without falling apart. Political
change, technological change, population growth, economic crises – all these things can
be massively disruptive. Sometimes we might even worry that the fabric of society won't
be able to take the stress. And it’s these questions of how society holds together, and how
to understand when it goes wrong, that Émile Durkheim, one of the founders of
sociology, tried to answer.

You know who knows a thing or two about social disruptions? France. Émile Durkheim
lived in France from 1858 to 1917, which means that he lived almost his entire life under
France's Third Republic, founded in 1871. But, despite being the third republic, it was the
first stable republic in France's history.

Between 1800 and 1871, France was governed by two republics, two monarchies, and
two empires. But the turmoil wasn't just political. France was also dealing with major
economic, technological, and cultural changes, as industrialization took hold, and the
traditional authority of the Catholic Church weakened.

Given all this, it should be no surprise that Durkheim was concerned with the question of
what kept societies together – so that he could make sure his own society didn't fall apart again.
And this was the task of sociology, as he understood it. Sociology was to be a truly
scientific study of society.
With it, we could understand society's normal and abnormal functioning, we could diagnose
how it was changing, and we could deal with the consequences. To Durkheim, sociology
was to society what biology and medicine were to the human body. He actually thought
of society as a kind of organism, made up of different parts, which all had to function
well together in order for that organism to be healthy.

This basic understanding of society in terms of structures that fit together, and which
function either well or poorly, makes Durkheim the founder of the structural functionalist
paradigm that we discussed in episode 2. Now, if sociology was to be a true science, then
it needed well-defined methods. And Durkheim focused a lot of his effort on this
problem.

He was committed to sociology as an empirical endeavor. And his ambitious book, called
“Suicide,” is really the first piece of sociological work to use statistical methods as its
primary mode of argument. Durkheim was also the first in the field to think in terms that
we now consider standard in sociology. Like, thinking about the problem of
operationalizing variables, and puzzling over how intangible concepts, like social
integration or solidarity, can be reflected in things that we can actually measure.

And beyond this question of method lies an even bigger question: If sociology is a
science, then what does it study? Durkheim thought that any science needed a well-
defined object of study. And the object for Durkheim was the social fact.

In his book, “Rules of Sociological Method,” he defines social facts as “consist[ing] of
manners of acting, thinking and feeling external to the individual, which are invested with
a coercive power by virtue of which they exercise control over him.”

OK, there are three things to highlight in this definition. First is the fact that it’s really
broad. Social facts include everything from political systems, to beliefs about right and
wrong, to suicide rates, to holiday celebrations, and architectural styles.
Second, notice that social facts are external to the individual. This might seem a little
confusing; I mean, how can a way of thinking be external to a person? But what
Durkheim means here is that social facts have a life outside of you or me. For instance, if
you give gifts at Christmas, think for a second about why. That’s not something that you
came up with on your own. Giving gifts at Christmas wasn’t your idea. It’s a social fact,
with an existence that’s external to you. If you don’t celebrate Christmas, the strength of
Christmas as a social fact in the US means

you’ve probably already experienced the third thing I want to highlight: The idea that
social facts are powerful, and coercive, and they can make you do things you otherwise
wouldn’t. Don’t believe me?

Let’s go to the Thought Bubble. Imagine a hypothetical family at a hypothetical
Christmas. None of them want gifts, and all of them have better things to do than spend
money buying gifts for anyone else. In fact, none of them are even that committed to
celebrating Christmas at all. And yet, come Christmas morning there’s a pile of presents
under the tree. And there’s a tree there in the first place! Why? Well, maybe no one was
willing to say that they didn't want a gift. Or maybe they all said that, but they each
bought gifts anyway, because they were afraid that the others would too.

The point is, the specific explanation for the behavior in this family doesn't really matter.
What's important is that we can see here the power of a social fact, even in a situation
where no one directly involved believes in it! If that's not an external coercive power, I
don't know what is. But this doesn’t just happen with gift giving at Christmas.

Social facts include all kinds of things. They help dictate how you interact with your
neighbors and how you relate to society. Social facts and their coercive power represent a
form of social cohesion, which points us back to our original question: how societies
hold together and how they can go wrong.
Durkheim's answer to the question of social cohesion is what he called the common or
collective consciousness. The common consciousness is basically the collection of all the
beliefs, morals, and ideas that are the social facts in a given society. And, like with gift-
giving at Christmas, these beliefs aren’t necessarily held by everyone.

They’re just the beliefs that hold coercive power. They’re the ideas that people give life
to, in their interactions with one another. So, common consciousness holds a society
together. But what are the problems? What is social dysfunction? For Durkheim, if
society is an organism, then dysfunction must be thought of as a disease.

Now, you might think that something like crime would be a social dysfunction. But, by
Durkheim’s thinking, crime can't be a disease, because every society has it. So, you might
not like crime, but some amount of crime is normal. In the same way, you might wish
you didn't have to sleep, but that doesn't make sleeping a disease. It's just a normal part of
the way the human body works. And just like sleep, Durkheim argued that crime serves a
purpose. For example, he said that crime helps strengthen the common consciousness. To
him, crime and punishment were a kind of public lesson in right and wrong: When
someone is judged and punished, that shows us both society's morals and how strong
those morals are.

Crime can also point to possible changes in the common consciousness. When Rosa
Parks refused to give up her seat and move to the back of the bus, she committed a crime.
But her crime set off a city-wide bus boycott that resulted in the law being struck down.

So crime in and of itself isn’t necessarily a dysfunction. But just as sleeping 18 hours a
day, every day, might be a sign of disease, a level of crime that becomes excessive
would eventually stop serving these functions, and the society
could no longer function normally. And that’s what social dysfunction is for Durkheim:
something that impedes the normal functioning of society. Since Durkheim is a structural
functionalist, social dysfunctions always have larger structural causes – they’re created
by some underlying problem with the social organism.

Durkheim applied this idea in his famous book on suicide. Now, it might be strange to
think of suicide as social at all, but Durkheim argued that there was actually a very strong
link between societal structure and people taking their own lives. And he found this link
in a dysfunctional aspect of his society: namely, in a lack of social integration.

When Durkheim looked at the statistics on suicide in Europe over the 19th century, he
saw a massive increase, one that coincided with the shift from traditional to modern
society. Durkheim argued that traditional societies – like, those of feudal Europe – were
highly socially integrated. People knew their place in society, what that place meant, and
how they related to other people. But modern society, over the preceding century, had
suffered from a loss of social integration. The decreasing importance of religion, and of
other traditional ways of thinking, resulted in a smaller, weaker common consciousness
and a less intense communal life. As a result, people were less strongly bound to their
society. They didn’t necessarily feel they had a place in it and couldn’t understand how
they fit.

This, Durkheim argued, resulted in a dramatically increased suicide rate. Now, suicide is
certainly a personal act, motivated by personal feelings or psychological conditions. But
Durkheim showed how these personal feelings were not purely personal, and that they
were influenced by the structure of society. In this case, he argued that the values holding
society together were being pulled apart, and so people lost their sense of place. Feelings
of isolation or meaninglessness could be traced back to large social changes. And
Durkheim, in diagnosing the problem, believed he had a solution.

If a high suicide rate was a disease, Durkheim’s prescription was to strengthen social
organizations – especially those based around the workplace, because that’s where people
were spending more and more of their time. He figured that these organizations – sort of
like workers’ guilds – could help provide people with that sense of place that they were
lacking. Now, many sociologists today see that Durkheim’s work on suicide was
undermined by the poor quality of statistics at the time.

But still, he used those statistics in new ways, as evidence and tests for theories of
society. And you can see in his research how Durkheim tried to answer big questions.
Society is composed of social facts, and bound together by common consciousness. This
normal functioning can evolve, but it can also be disrupted by rapid change. And that,
Durkheim believed, is where sociology steps in. By studying society scientifically, and
understanding social facts, sociologists can diagnose the disease and prescribe the cure.

Today you learned about Émile Durkheim, and some of his major ideas. We talked about
social facts and how he framed sociology as a science. We introduced the idea of
common consciousness and tried to understand how it binds society together. And we
looked at suicide as an example of how Durkheim applied his concepts to a specific
social problem. But there are lots of other ways to understand the purpose of sociology,
and we'll see a very different understanding next week as we continue our whirlwind
tour of the founding theorists with a look at Karl Marx.

06. karl marx & conflict theory

You’ve probably heard of Karl Marx. He's remembered as the father of divisive political
movements, and his name is sometimes still thrown around in American politics as a kind
of slur. But I don't want to talk about that. I want to talk about Marx the philosopher.
Marx the scholar. In the 19th century, a time defined by radical inequality and rapid
technological and political change in Europe, Marx was concerned with one question:
What does it mean to be free? Starting from this question, Marx developed an entire
theory of history. And in doing so, he laid the foundation for the paradigm of conflict
theory in sociology, ultimately pushing the discipline to look at questions of power,
inequality, and how these things can drive societal change.

If Durkheim was concerned with social solidarity, with how society hangs together, Marx
was concerned with freedom. The question that Marx asked was "how can people be
free?" Because humans aren’t just naturally free. When you think about it, we're actually
incredibly constrained. Our physical bodies have all kinds of needs we have to meet in
order to survive, and they’re needs that we're not really adapted to meet. Like, if you take
a hummingbird and put it in the middle of a forest somewhere, it'll just go on about its
day, collecting nectar and living its life. But if you drop a person in the middle of the
woods, they’ll probably starve.

Compared to other animals, Marx thought, we're incredibly poorly adapted to the natural
world. In fact, the only way for us to survive in nature is to change it, working together to
remake it to fit our needs. This is labor, he said, and we must labor cooperatively in order
to survive. As we labor, we change the world around us, and gradually free ourselves
from our natural constraints. But what Marx saw was that just as we freed ourselves from
these natural constraints, we entangled ourselves in new social constraints. Let's go to the
Thought Bubble to explore this some more. Think about it like this. Ten thousand years
ago, basically everybody spent all day trying to get food.

In this "primitive communism," as Marx called it, people were strongly bound by natural
constraints, but socially very equal. Now compare that to the Middle Ages when, under
feudalism, you have an entire class of people, the nobility, who never spent any time
worrying about where their next meal would come from. But you also have the peasantry,
who still worked constantly, making food. In fact, they spent a lot of their time making
food for the nobility.
People were producing more than they needed to survive, but instead of that surplus
being equally distributed, society was set up so that some people simply didn't need to
labor at all, while others had to work harder. That's not a natural constraint anymore,
that's a social one. Working together allowed us to transcend our natural constraints,
Marx argued, but the way labor is organized leads to massive inequalities.

So, central to the question of freedom for Marx is the question of labor, how it's
organized and who it benefits, and how this organization changes over time. This focus
on labor gave rise to the perspective created by Marx and his longtime collaborator
Friedrich Engels – a perspective known as historical materialism.

Historical materialism is historical because it looks at change over time, and it's
materialism because it is concerned with these questions of material reality – that is, how
production is organized, and who has things like food, or money, and who doesn't. Now,
it's not that Marx didn't care about other things, like politics or religion.

But he felt that they were secondary to the production and control of resources. And I
don't mean secondary as in less important; I mean secondary because he thought that if
you wanted to understand those things, you had to understand the material reality they
were based on first. In this view, the economy – that is, the organization of labor and
resources in a society – was the foundation, and everything else – politics, culture,
religion, even families – was what Marx called the superstructure, which was built on top
of material reality.

So when Marx studied history, he didn't focus on wars and power struggles between
states. Instead, he saw historical development in terms of modes of production and
economic classes. Now, “modes of production” might sound like they’re about how stuff
is made, but Marx understood them as stages of history.
Primitive communism, feudalism, and capitalism are all modes of production. And modes
of production are all defined by a combination of forces of production and relations of
production. Forces of production are basically the technical, scientific, and material parts
of the economy – tools, buildings, material resources, technology, and the human labor
that makes them go.

In modern capitalism, the forces of production include things like factories, oil, and the
internal combustion engine. But they also include cultural or social technologies, like the
idea of the assembly line and mass production. The relations of production, meanwhile,
define how people organize themselves around labor. Do people work for wages, or does
everyone produce and sell their own goods? How does ownership or property work? Is
trade a central part of the economy? These are all questions about the relations of
production. And these questions are important because, if you think in terms of social
constraints and surplus, the relations of production specify how the surplus is taken from
the people who produce it, and who gets to decide how the surplus is used. And, in
capitalism, these relations aren’t all that clear-cut. For one thing, we don't have legally
defined classes.

In feudalism, being a lord or a peasant was a legal matter. If a peasant didn’t work, their
lord could legally punish them. But under capitalism there aren't any legal rules about
who labors and who doesn't. If you skip work you don’t get tossed in jail, you just get
fired.

But Marx was a historical materialist, so in his view, even in feudalism, classes weren’t
really defined by laws, they were actually defined by their place in the relations of
production. And when Marx looked at industrial capitalism
taking shape around him, he saw two main classes: the working class (or proletariat) and
the capitalists (or the bourgeoisie). The proletariat are defined by the fact that they don’t
own or control the means of production – that is, the materials you need to use in order to
labor and produce goods. One way of thinking about the means of production is as the
inanimate part – the actual, physical stuff – that makes up the forces of production. So
this includes everything from the land to stand on while you work, to the raw materials
you need, like trees, and coal, and iron ore, to the tools and machines you use. To
simplify things dramatically, the proletariat are defined by the fact that, while they work
in the factories and use resources to make things, they don’t own the factories or the
things they make.

The bourgeoisie are defined by the fact that they do own the factories and the things that
are made in them. They control the means of production and the products that come from
them. It’s this difference in who controls the means of production, Marx said, that leads
to exploitation in capitalism, in the form of wage labor. If the proletariat lack access to
the means of production, he argued, then they only have one thing they can sell: their
labor. And they must sell their labor. If they don't, they starve.

Now you might argue that, hey, they're being paid, right? Well, Marx would counter that
they’re only being paid enough to live on, if barely. However, Marx would also argue
that they're being paid less than the worth of what they produce. And it is that difference
– between the value of the wage and the value of what’s produced – which is the source
of surplus in capitalism. You know this surplus as profit. And the bourgeoisie get to
decide what to do with the profits. Because of this, Marx believed that the bourgeoisie
will always be looking to make profits as large as possible, both by driving down wages
and by driving up productivity. And this leads to one of the big problems with capitalism:
crises.

Specifically, crises of overproduction. Other modes of production had crises, too, but
they were caused by not having enough. In capitalism, for the first time in history, there
were crises of having too much. We reached a point where the forces of production were
so developed that we could produce far more than we needed. But the vast majority of
people couldn’t afford to buy any of it. And so we had crises where the economy
collapsed, despite the fact that there was more than enough to go around. Crises of
overproduction are an example of what Marx saw in every mode of production: the
contradiction between the forces of production and the relations of production.

Marx understood history as a series of advances in the forces of production – like, greater
coordination among capitalists, more technological complexity, and more organizational
innovation. But eventually, he said, those advances always stall, as the forces of
production run up against the limits created by the relations of production.

For example, in the early days of capitalism, the relations of production included things
like private ownership of property, competition among capitalists, and wage labor. And
these things allowed for explosive economic growth. But eventually, these very same
things became limitations on the forces of production – stuff like factories, technology,
and human labor.

That’s because capitalists drove wages down in pursuit of profit, and they competed with
each other, leading to a lack of coordination in the economy. So you wound up with a
population that couldn’t afford to buy anything, while at the same time being offered way
more goods than it would ever need. And, with the economy in shambles, there's no way
for the forces to keep developing – there’s no money to invest in new factories or new
technologies.

So the relations of production that created economic growth became precisely the things
that caused crises. Marx saw this as an impasse that all modes of production eventually
meet. So how do you get a society to move past it? Marx said, the way forward was class
conflict. History is a matter of struggling classes, he said, each aligned with either the
forces or relations of production. The bourgeoisie are aligned with the relations of
production, he said, because these relations are what allow them to extract surplus from
the workers. So they're quite happy with the situation as it stands.
But the proletariat want change. They want the further development of the forces of
production – of which their labor makes up a large part – and they want a complete
change in the relations of production. They want an end to exploitation and they want the
surplus to benefit them. After all, it was their labor that created the surplus. In short, they
want revolution.

And so this is Marx's model of history: a series of modes of production, composed of
forces and relations of production. These forces and relations develop together until they
eventually come into conflict, leading to a revolution by the oppressed class and the
institution of a totally new set of relations, where the workers benefit from the efforts of
their labor. Plenty of theorists followed in Marx’s wake, taking his idea of historical
materialism and expanding it to better deal with some of the areas that Marx had left out.

Particularly interesting here is the work of the Italian theorist Antonio Gramsci, who
wrote in the years preceding World War II. One of the big questions implicit in Marx’s
theory is just how the bourgeoisie manages to stay in power so effectively. And Gramsci
answered this with the theory of hegemony. He argued that the ruling class stays in
power, in part, through hegemonic culture, a dominant set of ideas that are all-pervasive
and taken for granted in a society. While they’re not necessarily right or wrong, these
ideas shape everyone's understanding of the social world, blinding us to the realities of
things like economic exploitation.

But hegemonic ideas don’t need to be economic ones. They could just as easily be beliefs
about gender, or race. And this points to possibly Marx’s biggest impact. While Marx’s
model of history is specific to economic conflict, we can see in it the essence of the
broader sociological paradigm of conflict theory. Conflict theory is the basic idea of
looking at power dynamics and analyzing the ways in which struggles over power drive
societal change, as all kinds of groups, not just workers and owners, fight for control over
resources.
Marx’s ideas gave rise to a host of conflict theories in sociology, including Race-Conflict
Theory, Gender-Conflict Theory, and Intersectional Theory. These theories give us ways
to understand power, control, and freedom in modern society, and we’re going to be
looking at them over the next couple of weeks. But for today, you learned about Karl
Marx, historical materialism and Marx’s basic perspective on history. You also learned
about modes of production, their development, and how they fit into Marx’s overall
theory of historical development, along with class struggle and revolution. And finally,
we saw how Marx’s ideas gave rise to Gramsci’s idea of hegemony, and to conflict
theories more generally.

07. dubois & race conflict

Two bachelor's degrees. PhD from Harvard University. Two-year fellowship to study in
Berlin. Professor of sociology and history at two different universities. Author of
countless books. Activist and co-founder of a key civil rights organization. Editor and co-
founder of a magazine. And a poet to boot. Pretty good resume, yeah? What if I make it a
bit more impressive? That PhD from Harvard? First Harvard PhD granted to an African
American. The civil rights organization? The NAACP. That magazine? The Crisis, the
longest running black publication in the United States, in print since 1910. This resume
belongs to William Edward Burghardt DuBois, whom you might know better as W.E.B.
Dubois. He was one of the earliest American sociologists, as well as one of the first
proponents of race-conflict theory. And his studies of the lives of African Americans
during the Jim Crow era of American history – and the oppression they faced – are the
cornerstones of how sociologists study race.

W.E.B. Dubois was born in a small town in Massachusetts in 1868. 1868 – that’s five
years after the Emancipation Proclamation. Three years after the end of the American
Civil War. And the same year that the 14th amendment was passed. At this time, race was
considered a biological construct. Slavery, and later the Jim Crow laws – laws in the
South that enforced racial segregation – were framed as natural consequences of the
supposed, natural inferiority of Blacks to Whites.

We, of course, now know that this was not just wrong, but deeply harmful. And more
than that – the idea that race itself is a purely biological, immutable quality is also
understood today as being simply untrue. Instead, race is thought of as a socially
constructed category of people, who share biological traits that society has deemed
important. Yes, human beings vary a lot in how we look – our skin color, our facial
features, our body shapes,
our hair texture. But those visual markers only become a “race” when members of society
decide that specific markers constitute a specific racial group.

This is why the concept of race often changes, across cultures and times. For example,
when Dubois was alive, Irish and Italian Americans weren’t considered ‘white,’ either.
But today, try telling some Boston Southie guy or an Italian grandma from Pittsburgh that
they’re not white. See what they say. Did something change about Irish and Italian
Americans biologically?

Of course not. It’s how society saw them that changed. And it’s that last bit – what race a
person is seen as, and how they’re treated as a result – that ends up being a huge
determinant of a person’s social outcomes. Dubois began to consider his race as a part of
his identity when he moved to the South to go to college, and then spent several years in
Europe.

He saw how differently black people were treated in different places, and was
disillusioned by how Americans treated him based on his skin color. He can describe
this disillusionment much better than I can: “One ever feels his twoness,” he wrote, “an
American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals.”
This quote reveals a really critical underlying thread in much of Dubois’ work – the idea
of double-consciousness.

Dubois argued that there are two competing identities as a Black American – seeing one’s
self as an American and seeing one’s self as a Black person while living in white-centric
America. Living as a member of a non-dominant race, he said, creates a fracture in your
sense of identity within that society. These feelings are what fueled Dubois’ work, which
focused on the disparities and conflicts between people of different races – what we now
call race-conflict theory.

Today, questions of race and identity are studied by sociologists who work on racial
identity theory, which looks at how individuals come to identify as a certain race. Dubois
didn’t only research racial identity, though – he also looked at the everyday lives of black
and white Americans and wrote extensively about how and why their lives differed so
drastically in post-slavery America. Let’s go to the Thought Bubble to look at one of
Dubois’ early studies of these disparities. In 1896, the University of Pennsylvania hired
Dubois to do a survey on Black communities in Philadelphia.

His work eventually became ‘The Philadelphia Negro,’ the first published study of the
living conditions of African Americans. Dubois went knocking on doors, asking people
questions about themselves and their families. And there were an awful lot of doors. All
told, Dubois collected data on 9,675 African Americans. He focused on one specific ward
of Philly – the 7th ward, a historically Black neighborhood that attracted families of all
classes, from doctors and teachers, to the poor and destitute. He sat in thousands of
parlors, asking questions about age, gender, education, literacy, occupations, earnings,
crime, and documented the ways in which African-Americans differed from Philly’s
white residents.

For example, the Black population turned out to be much younger than the White
population and had a higher proportion of women. It also had lower literacy rates, higher
rates of poverty and crime, and a higher concentration of workers in the service industry
than in manufacturing or trade. Mortality rates were higher, as was the frequency
of illness. But here’s what made Dubois’ report especially unique: He concluded that
much of the dysfunction within Black communities came from their inferior access to
things like education and more lucrative jobs.

The reason that the black population had higher rates of death and illness, he said, was
because of occupational hazards, and poverty, and less access to health care. It’s hard to
express just how radical Dubois’s conclusions were at the time. The problems in black
communities were not due to racial inferiority, Dubois argued, but to racial prejudice.
And that was completely different from how many Americans thought at the time.

So, race doesn’t exist in a vacuum. It doesn’t just imbue you with certain essential
qualities. Instead, race matters because of the power that society gives it. For another
example, let’s stick with Philly and use the labor unions there in the 1890s. Because of
prejudice against Black workers, and beliefs about their abilities and morals, trade labor
unions didn’t allow Black workers to join.

And because they couldn’t join unions, many Black workers couldn’t get manufacturing
or trade work – which paid much better than service work. And because they couldn’t get
these jobs, Black communities had more men out of work, higher rates of poverty, and
more criminal behavior. Which then allowed the white workers and unions to justify their
decision to not allow black workers into their union. The prevailing beliefs about race
and racism ultimately reinforced themselves. This is what’s now known as racial
formation theory, a theory formalized by modern sociologists Michael Omi and Howard
Winant.

Racial formation refers to the process through which social, political, and economic
forces influence how a society defines racial categories – and how those racial categories
in turn end up shaping those forces. Omi and Winant argue that the concept of race came
about as a tool to justify and maintain the economic and political power held by those of
European descent.

Another modern look at these issues can be seen in the work of sociologist William Julius
Wilson. He explores why Black and White Americans tend to have such different
outcomes in terms of income, education, and more. And he argues that class, not race, is
the determining factor for many Black Americans. But the reasons that these class gaps
exist to begin with, come from the structural disadvantages that date back to Dubois’
time. Dubois continued to research the ways in which prejudice, segregation, and lack of
access to education and jobs were holding back African Americans.

A strong advocate of education and of challenging Jim Crow laws, he clashed with
another leading black intellectual of the time, Booker T. Washington, who advocated
compromise with the predominantly white political system. Over time, Dubois grew
frustrated with the limits of scholarship in effecting change, so he turned to direct
activism and political writing.

In 1909, he co-founded the National Association for the Advancement of Colored People
or the NAACP, and was the editor and intellectual driving force behind its magazine, The
Crisis. The NAACP fought against lynching, segregation of schools, voting
disenfranchisement, and much more. It used journalism as one of its most powerful
tools, publishing the records of thousands of lynchings over a thirty year period.

And it used lawsuits, targeting voter disenfranchisement and school segregation in
decade-long court battles. And, after Dubois’ time, it went on to become part of many of
the landmark moments in the fight for civil rights, including the Brown v. Board of
Education case, the Montgomery Bus Boycott, and the March on Washington.

Modern sociologists continue Dubois’ work on racial politics, asking the question: How
is race intertwined with political power, and the institutional structures within a society?
Sociologist Eduardo Bonilla-Silva, for example, argues that we now have what he calls
“racism without racists.” What he means is: Explicitly racist views have become less
socially acceptable, so fewer people are willing to say that they don’t think Black and
White Americans should have equal rights. But, as Bonilla-Silva points out, that doesn’t
mean racism is a thing of the past. Instead, he says, structural racism – the kind that’s
entrenched in political and legal structures – still holds back the progress of racial
minorities.

Take, for example, the fact that the median wealth of White Americans is 13 times higher
than the median wealth of Black Americans. Now, you could look at that and say, well,
Black people just aren’t as good at saving as White people. After all, it’s not like there’s
anything legally preventing them from making or saving more money.

But that completely ignores the ways in which wealth builds up over generations. Past
generations of Black Americans were unable to build wealth, because they had far less
access to higher incomes, banking services, and housing. These ideas about how the
structures of power interact with race may have their origins in Dubois’ work,
but they continue today. And so do his studies of racial resistance.
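To see why wealth that builds up over generations matters so much, here’s a minimal sketch with purely hypothetical numbers (the savings amount and growth rate are made up for illustration, not drawn from any real data). The point is the shape of the result: when one group is shut out of wealth-building for a few generations, the gap that remains is a multiple, not a fixed sum, even after access is equalized.

```python
# Illustrative sketch, hypothetical numbers: how unequal access to
# wealth-building compounds across generations.

def wealth_after(generations_with_access, savings_per_gen=10_000, growth=1.5):
    """Wealth passed down when each generation with access adds some savings,
    and inherited wealth grows (through housing, interest, business equity)
    before being handed to the next generation."""
    wealth = 0
    for _ in range(generations_with_access):
        wealth = (wealth + savings_per_gen) * growth
    return wealth

# Group A could build wealth for 5 generations; Group B only for the last 2.
group_a = wealth_after(5)   # 197,812.50
group_b = wealth_after(2)   # 37,500.00
print(round(group_a / group_b))  # → 5: a multiple-sized gap, like the real 13x figure
```

Even with identical behavior in every generation that has access, the group excluded earlier ends up several times poorer, which is the structural point Bonilla-Silva and others are making.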

Researchers of racial resistance ask: How do different racial groups challenge and change
the structures of power? Sometimes racial resistance is easy to see in society. Think the
Civil Rights movement of the 1950s and 60s, or Black Lives Matter today. But
sociologists can also look at more subtle forms of resistance, too, like resistance against
racial ideas and stereotypes. For example, sociologist Patricia Hill Collins has written
about the different relationships that black and white women have had with marriage and
staying home to raise a family. One of the key issues of the feminist movement of the
1960s and 70s was the exclusion of women from the workforce.

Entering the workforce was seen as a form of resistance. But Black women have, for
most of American history, been forced to work, or needed to work to help support their
families. For them, Collins argues, joining the workforce is not resistance. Instead,
staying at home to care for their families can be an act of resistance against society’s
expectations for Black women. All of these modern fields of study within race-conflict
theory – racial identity, racial formation, racial politics, and racial resistance – they all
have their origins in the work of one sociologist: W.E.B. Dubois. Today we talked about
W.E.B. Dubois, one of the founders of sociological thought and the founder of race-
conflict theory. We talked about race, and how our understanding and definition of race
have changed over time.

We talked about Dubois’ idea of ‘double-consciousness’ and how it relates to the modern
day field of racial identity. We introduced the idea of racial formation and used Dubois’
survey of African Americans in Philadelphia to look at how economic, political, and
social structures affect how we perceive different races – and vice versa.

And finally, we looked at the activist side of Dubois’ life as co-founder of the NAACP
and editor of the Crisis, and discussed how modern day sociologists study racial politics
and racial resistance. Next time, we’ll take a look at some of the sociologists who were at
the forefront of a different type of conflict theory: gender-conflict theory.

08. harriet martineau & gender conflict theory

Where my ladies at? Seriously, we’ve spent a lot of time learning about the origins of
sociology, and all of the founders we’ve talked about so far have been men. That’s
because, when sociology was becoming an academic discipline, women didn’t have the
same access to education. In fact, it was considered improper in the 19th century for
women to write articles and give talks to the public. And this continued for decades, with
some of the top universities not allowing female students until the 1970s. Which sucks.
But it also raises an important question: Why do women and men get treated differently?
This is a question that sociologists can answer! Or, well, we can at least try to answer it.
Gender-conflict theory applies the principles of conflict theory to the relations among
genders. Specifically, it looks at how social structures perpetuate gendered inequalities.
Now, the functionalist approach has historically held that gender inequalities are a natural
result of each gender taking on the tasks they’re best suited for. But many modern
sociologists don’t share this view. Economic and political power structures that reinforce
traditional gender roles often cause more dysfunction than function. Restricting access to
education by gender is a great example of this dysfunction: Denying women access to
quality education makes our society worse by squashing half the world’s potential!
Sociology’s understanding of society wouldn’t be complete without the women and
feminists who started the conversation about gender as an academic field of study. First
stop: sociology’s forgotten founder, Harriet Martineau.

Harriet Martineau was the first female sociologist, born in 1802 in England. Unlike Marx
or Durkheim or Weber, who are hailed as the forefathers of sociology and get entire
chapters devoted to their theories, Martineau typically gets, at most, a couple of sentences
in a textbook. Martineau started out kind of like the Crash Course of her time – bringing
research to the masses in easily digestible bites. She wrote a best-selling series called
Illustrations of Political Economy, which used fables and a literary style of writing to
bring the economic principles of Adam Smith to the general public.

She was a favorite of many of the leading intellectuals of the time. Even Queen Victoria
loved Martineau’s writing so much that she invited Martineau to her coronation. But
this was just the start. Martineau decamped for the United States and spent two years
travelling the country, observing social practices. She went from North to South, from
small towns to Washington DC, sitting in on sessions of Congress, a Supreme Court
session, and a meeting with former President James Madison.
She then captured her observations in two books, Society in America and How to
Observe Morals and Manners. The first was a set of three volumes that identified
principles that Americans professed to hold dear, like democracy, justice, and freedom.
Then she documented the social patterns that she observed in America, and contrasted the
values that Americans thought they held, with the values that were actually enshrined in
their economic and political systems. Martineau’s observations included some of the first
academic observations of American gender roles, and she dedicated much of the third
volume to the study of marriage, female occupations, and the health of women. Despite
the title of her second book – How to Observe Morals and Manners – it was not a guide
to etiquette.

It was a treatise on research methodology, describing how to do cross-cultural studies of
morals and moral behavior. Martineau talked about interviewing, sampling, bias in
observation, the problem of generalizing from individuals to a whole society – many of
the hallmarks of modern research. She wrote about class, religion, suicide, nationalism,
domestic life, gender, crime, health – and this was all before Marx, before Durkheim,
before Weber. And her English translations of Comte’s work on positivist sociology were
so good that Comte himself told her: “I feel sure that your name will be linked with
mine.” But of course, Comte was wrong. Soon after her death, Martineau’s work was
forgotten.

It wasn’t until the 1970s, when feminist scholars began to revisit her work, that the full
extent of her influence on sociology began to be realized. That’s right, feminist scholars.
Now, I know for many people feminism is a
loaded term. And I want to make sure we’re clear about the historical and sociological
context for feminism as I’m using it here. Here, we’re defining feminism as the support
for social equality among genders. This is in opposition to patriarchy, a form of social
organization in which institutional structures, like access to political power, property
rights, and occupations, are dominated by men.
So feminism isn’t just associated with activism; it’s also a scholarly term. Feminist theory
is one school of thought in the study of gender. And over time, feminism has gone
through many different forms, often categorized as waves.

Let’s go to the Thought Bubble to look at what’s known as feminism’s first wave. In the
19th and early 20th century, the first wave of feminism focused on women’s suffrage –
or, the right to vote – and other legal inequalities. That’s because, in the 19th century, all
property and money that a married woman had legally belonged to her husband. Imagine that. Not
being able to earn a salary that was your own, not being able to own land, not being able
to write a will. And on top of that, you can’t vote, which makes it a little hard to change
these things. It was these issues that prompted the start of the Women’s Rights
Movement, which began with a meeting of 300 like-minded women – and
a few men – in Seneca Falls, New York in 1848.

Early feminists Elizabeth Cady Stanton and Lucretia Mott organized the meeting to put
forth a manifesto on women’s rights, which became known as the Declaration of
Sentiments. This convention was the spark that set off the women’s suffrage movement
in the United States. It took many years of activism – court cases, speeches, protests, and
hunger strikes – until women finally won the right to vote in 1920. The first wave of
feminism didn’t only affect legal issues. It was also where many of the ideas about
societal roles of gender first got their start. Take Charlotte Perkins Gilman, for example.
You might recognize her as the author of the short story “The Yellow Wallpaper.”

But she was also a sociologist and social activist. Early in the 20th century, she published
papers and books on society’s assumptions about gender, focusing on marriage,
childbearing, and the assumed roles of women as housekeepers and men as breadwinners.
She wrote: “There is no female mind. The brain is not an organ of sex.
Might as well speak of a female liver.” Notice how she worded that – the brain is not an
organ of sex. Sex refers to biological distinctions between females, males, and intersex
individuals. But gender refers to the personality traits
and social roles that society attaches to different sexes. Think about it this way: Do men
and women act the same way across all cultures and time periods? If gender arose only
from biological differences between men and women, we would expect to see all cultures
defining femininity and masculinity in the same ways.

But we don’t. From the work of anthropologist Margaret Mead in the 1930s, to the
research done today by economists Uri Gneezy and John List, scientists have found that
gender roles change among societies, and over time. And this idea – the idea that gender
has societal origins – has formed the backbone of the second wave of feminism. Books
like The Second Sex by Simone de Beauvoir and The Feminine Mystique by Betty
Friedan argued against the idea that women were a lesser sex who should be relegated to
taking care of children and the home.

The second wave of feminism focused on female participation in the labor force, equal
pay, reproductive rights, sexual violence, educational equality, and divorce. This was the
era of Title IX, the legalization of contraception and abortion, no-fault divorce laws, and
the Equal Pay Act. But it was also an era of divisiveness within the feminist movement,
with many feeling that women in positions of power focused on issues most relevant to
white, upper middle class women. These divisions led to what’s known as the third wave
of feminism, starting in the 1990s, which has focused on broadening the definition of
feminism to encompass issues of race, class, sexuality, and other forms of disadvantage.
The ideas evoked by the third wave are nicely represented by author and feminist bell
hooks:

In her book “Ain’t I a Woman,” hooks writes: “The process begins with the individual
woman’s acceptance that American women, without exception, are socialized to be racist,
classist and sexist, in varying degrees ....” That’s a heavy statement. Most people don’t
think of themselves as racist or sexist. But one of the underlying ideas behind third wave
feminism is the acknowledgement of the structures of power that create inequality across
gender, race, class, and other dimensions of disadvantage. There’s a term that’s used a lot
in modern day feminism, which maybe you’ve heard used recently: intersectionality. So
what is intersectionality? You add a little race-conflict theory in with gender-conflict
theory, and a smidge of Marx’s theories about class conflict – and you get
intersectionality, the analysis of how race, class, and gender interact to create systems of
disadvantage that are interdependent.

The term intersectionality was coined by race and gender theorist Kimberlé Williams
Crenshaw. She wrote that the experience of being a black woman couldn’t be understood
just by understanding the experience of a black person, or the experience of a woman
independently. Instead, you have to look at how these identities intersect. How you – yes,
you, in particular, you – see society and see yourself is gonna be wrapped up in the
identities you have. I, as a cisgender white woman, will have a different experience in the
world as a result of my own interlocking identities.

And when it comes to our understanding of gender in this societal mix, we have to thank
Harriet Martineau, whose work was one starting point from which the waves of feminism
unfolded. Today we learned about Harriet Martineau and gender-conflict theory. We also
explored the three waves of feminism, as well as intersectionality. Next time, we’ll look
at another important figure in sociology: Max Weber.

09. max weber & modernity


Take a second and imagine life just over five hundred years ago. Say you’re in Europe, at
the tail end of the Middle Ages. If you had to name the biggest change between then and
now, what would you choose? Maybe the internet, or industrialization, or the incredible
advances in health and medicine. Maybe you'd think back to Marx or Durkheim and say
that it was the shift from feudalism to capitalism. These are all good answers. But Max
Weber had a different one. The most important change wasn’t technical, or economic, or
political. The biggest change, he said – the one that best distinguishes the modern world
from the traditional one – was a difference in the way we think.

Like most of the theorists we've studied so far, Max Weber lived at the end of the 19th
century, and the turbulent changes of that time influenced him as it did all the others. He
lived during the formation of the first German national state, and watching this process
first-hand made him concerned with understanding modern society. So in his work,
Weber examined some of the defining characteristics of the modern world, which
sociologists have now spent over a hundred years studying and arguing about: He focused
on ideas like rationalization, bureaucracy and social stratification.

And when he saw where modernity was heading, he was really worried. But to
understand what he was so worried about, you need to understand how we got to
modernity in the first place. And for Weber, the real defining features of the transition
from traditionalism to modernity were ways of thinking. Not modes of production or
social integration, but ideas.

To think traditionally is to take the basic set-up of the world as given. In other words,
traditionalism sees the world as having a basic order, and that order is the way things
ought to be. We can see this very clearly in feudalism and divine-right monarchies: The
monarch is understood as having been anointed by God, and you owe them your
allegiance regardless of whether they're good or bad at their job. The question of whether
or not they deserve the position never even comes up. But if traditionalism takes things
for granted, modernity doesn't. In modernity, everything is up for grabs. What Weber saw
when he looked at history was that societies, and people, were becoming more rational.
They were undergoing a process of rationalization.
And Weber's definition of rationality included three specific things: calculability,
methodical behavior, and reflexivity. Calculability means that, if we know the inputs, we
can know the outputs. Just think of a bowl made in a factory versus one you make
yourself: Every single bowl comes out of the factory exactly the same, whereas no two
bowls you make by hand are gonna quite match. Now, the reason that we know the
outputs, if we know the inputs, is because there’s methodical behavior involved – a
procedure to follow. In the factory, the method is in the machines.

So the results are going to be the same, no matter who’s pulling the levers. Finally,
thinking rationally for Weber meant thinking about what you're doing, in other words,
thinking reflexively. You're constantly looking for new ways to improve the process, for
new and better and more efficient ways to make bowls. So, traditional society is the
society of individual artisans, each with their own process, which is how it’s always been
done. But modern society is the society of explicit instructions and standardized,
methodical procedures, which are always being reflected on and improved.

So what caused this massive shift in how people think? What kicked off this process of
rationalization? Weber gave here what might seem like an unlikely answer: religion. In
his book The Protestant Ethic and the Spirit of Capitalism, Weber argued that the
transition from traditionalism to modernity began with the Protestant Reformation.

For hundreds of years, the Catholic Church dominated Medieval Europe until, in 1517, a
German priest named Martin Luther denounced corruption in the church. This set off a
series of new religious movements that radically opposed Catholic dogma. This was the
Protestant Reformation. And it’s in these new movements that Weber saw the origin of
modern rationality. Catholicism, after all, was the basis for the traditional worldview.

Everything in medieval life – from the structure of the social order to the way you farmed
– was the way it was, because God willed it. By contrast, in Lutheranism, you still have a
divinely sanctioned place in the world, but for the first time, the question of how well you
are performing your role became important. This idea – of personal responsibility –
opened the way for another important figure in Weber’s view of history: John Calvin.

Calvin didn’t believe that God could possibly be concerned with anything that one
measly little human might do. Instead, Calvin believed in predestination, the idea that
your fate, whether you’re saved or damned, has already been set by God, from the
beginning of the universe. And there's nothing you can do to change it: you're either one
of the “elect” or you’re not. So, how do you know? Through what Calvinists called a
“proof of election.”

And here’s where personal responsibility really comes in: The proof that you were saved
was to be found in how you lived your life. So the point of your life was no longer that it
was divinely appointed – it became a matter of how well, or how much, you work. And,
by extension of this logic, success itself became proof of election: If you’re financially
successful, then that was a sign that you were blessed by God!

Suddenly you didn’t just work until your needs were met, as you did in traditional
society. Now the work was an end in itself, and you worked to accumulate as much
wealth as possible, because wealth proved you were saved. This is the sociological
consequence of the Protestant Reformation that Weber studied and understood: It
transformed a communal, traditional society into an individualistic, capitalist society –
one that was focused on economic success.

And I don’t know if you’ve noticed, but modern capitalism is nothing if not rational.
Think about it: You must work methodically in your calling. You must constantly be
reflecting on your work, in order to work more efficiently and productively. And you use
profit as a calculable measure of your success. So rationalization gets its start in religion,
around questions of how we work. But Weber spent his career showing how all of society
came to be organized rationally.
In fact, if you've ever been to the DMV, you've seen what Weber argued was one of the
biggest impacts of rationalization in society: the rise of bureaucracy. Bureaucracy is a
key part of the transition from the traditional to the modern state, and Weber identified
six traits that make it both extremely rational and very efficient: It’s composed of a
hierarchy of positions with an extremely clear chain of command. This hierarchy is made
up of a variety of very specialized roles and is held together by formal, written
communications.

The people in the bureaucracy accomplish their work with technical competence,
according to detailed rules and regulations, and they do it without regard to the particular
people they're serving – in other words, they do it impersonally. And I can’t think of a
better example of all these traits than the bureaucracy you see at the DMV.

The workers do their jobs competently and according to the rules, and they treat you just
like they would treat anyone else, without regard for your personal characteristics, that
is, impersonally. But it isn’t only the way the state works that changed; the way the state
is obeyed changed, too. In a traditional society, Weber believed that the
state ruled through traditional legitimacy: people followed the king because that's how
it had always been done.

But the modern state works differently: It rules through a combination of what we call
legal-rational and charismatic legitimacy. Legal-rational legitimacy is essentially a belief
in the system itself. You follow the rules because they are the rules.

So, the DMV employee doesn't ask for a certain piece of paperwork because that's how
it's always been done. He does it because that's how the procedure instructs him to do it.
If the manual were rewritten tomorrow, he’d do it that way instead. But there's a problem
with legal-rational legitimacy, and with bureaucracy in general: If it's all about following
the rules, well, somebody needs to make the rules. That’s where charismatic legitimacy
comes in.

You follow the commands of a charismatic leader because of the extraordinary
characteristics of that person. So the modern state is an apparatus of rules which are
ultimately directed by a group of charismatic leaders. In the US, for instance, when we go
to the polls every four years, we're making a choice about who’s going to direct the
bureaucracy, and we make that choice based on the characteristics of the people running.
And here’s another one of Weber’s major contributions to sociology: The idea that the
people who run to be leaders of our bureaucracies, do so with the support of political
parties. For Weber, political parties were a key example of social stratification, or the
way that people in society are divided according to the power they hold. Weber didn’t
think that society was divided purely based on economic classes, or your relationship to
the means of production, like Marx did.

He argued that the system was more complicated, and consisted of three elements. Like
Marx, he included class, but he didn’t think that classes had unifying interests. Weber
also included political parties, defined broadly as groups that seek social power. And
finally, he included status groups, defined by social honor, which includes things like
respect and prestige. All three of these things – class, power, and status – affected a
person’s place in society. More importantly, each of these elements of social
stratification could vary independently. So there could be a poor priest, say, who is high
in social prestige, but of a low class. Or a lottery winner, who is of a high class but low in
status. Or a bureaucrat in a political party, who has some measure of political power, but
not necessarily money or status. And then there are those who can turn their fame, or
status, into political power.
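The independence of Weber’s three dimensions can be made concrete with a small sketch. The scores here are hypothetical, invented only to encode the transcript’s own examples; the point is simply that no one dimension determines the others, unlike in Marx’s class-only model.

```python
from dataclasses import dataclass

# Illustrative sketch of Weber's three dimensions of social stratification.
# Scores are made-up placeholders on a 0-10 scale, not a real measurement.

@dataclass
class Position:
    wealth: int  # class: economic resources
    power: int   # party: social and political power
    status: int  # status group: social honor and prestige

# The transcript's three examples, each high on one dimension but not the others:
priest = Position(wealth=1, power=2, status=9)          # low class, high prestige
lottery_winner = Position(wealth=9, power=1, status=2)  # high class, low status
party_official = Position(wealth=3, power=8, status=4)  # power without money or fame
```

Because the three fields vary freely, any combination is representable, which is exactly what Weber meant by stratification being more complicated than economic class alone.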

Unlike Marx, Weber didn’t take a particularly critical stand on stratification in society.
But that doesn’t mean he didn’t see its problems. For Weber, rationalization was the
defining feature of the modern age, and he was deeply worried about it. Remember,
rationalization is about three things: calculability, methodical behavior, and reflexivity.
But it's really easy to lose reflexivity – to stop reflecting on your work or your role – and
instead become locked in a calculated routine that becomes meaningless and unthinking.
Weber worried that the systems that rationalization built would leave behind the ideas that
built them, and that they’d simply roll on forever, meaninglessly, under their own
momentum.

He worried that we'll become locked in what he called an "iron cage" of bureaucratic
capitalism, from which we can’t escape; our lives will become nothing but a series of
interactions based on rationalized rules with no personal meaning behind them. This
worry about meaning, and the concern for ideas and how they shape our reality, is one
the big influences that Weber handed down to future sociologists.

On the micro level, these ideas were picked up by what’s known as the symbolic
interactionist paradigm and theorists like Erving Goffman. Meanwhile, theorists like
Talcott Parsons and Jürgen Habermas took on the more macro version of these questions,
looking at processes like rationalization and bureaucratization and culture more generally.
We’ll talk more about culture, and what it is, next time.

But for now, you learned about Max Weber and his understanding of the modern world.
We talked about rationalization and the transition from traditional to modern society. We
discussed bureaucracy, legitimacy, and social stratification in the modern state. And we
saw why Weber was so worried about the modern world.

10. symbols, values & norms

You’re about to cross a street. What do you do? If there are no cars coming, do you stay
at the crosswalk, waiting for the light to change? Or do you just go for it? Do you look
left first before you cross, or do you look right? Or maybe you just dart across the street,
shouting, ‘Hey I’m walking here!’ No matter what you do in this situation, what you do is
going to depend on culture. Now you may be thinking, how can something like crossing
the street be a cultural phenomenon? Isn’t culture, like, opera and galas and fancy art
openings with tiny hors d’oeuvres?

Or maybe you think culture is bigger than all that, that culture is your heritage, traditions
that have been passed down for generations, like Quinceañeras, Bar Mitzvahs, or Sweet
Sixteen parties. The fact is, all of these things – street-crossing, fine arts, and traditional
rites of passage – they are all part of culture.

Culture is the way that non-material objects – like thoughts, actions, language, and values
– come together with material objects to form a way of life. So you can basically break
culture down into two main components: things and ideas. When you’re crossing the
road, you can see markers of your culture in the things around you – the street signs, the
width of the road, the speed and style of the cars. This is material culture, the culture of
things. Books, buildings, food, clothing, transportation. It can be everything from iconic
monuments like the Statue of Liberty to something as simple as a crosswalk sign that
counts down how many seconds you have to cross the street.

But a lot of the culture that’s packed into crossing the street is non-material, too. We
interpret the color red to mean stop – because our culture has assigned red as a symbol for
stop and green for go. And if you grew up in a country where cars drive on the right side
of the road, your parents probably taught you to look left first before crossing.

This is non-material culture, the culture of ideas. It’s made up of the intangible creations
of human society – values, symbols, customs, ideals. Instead of the Statue of Liberty, it’s
the idea of liberty and what it means to be free. For our purposes as sociologists, we’ll
mainly be focusing on this second type of culture and its three main elements: symbols,
values and beliefs, and norms. Symbols include anything that carries a specific meaning
that’s recognized by people who share a culture. Like a stop sign. Or a gesture. If I do this
[holds up one hand, palm out, then just 1 finger], you probably know that I mean: hold on
a sec. Non-verbal gestures like this are a form of language, which is itself a symbolic
system that people within a culture can use to communicate.

Language is more than just the words you speak or write – and it’s not just a matter of
English or French or Arabic. The type of language you use in one cultural setting may be
entirely different than what you’d use in another. Take how you talk to people online.
New linguistic styles have sprung up that convey meaning to other people online,
because internet culture. See, there’s one right there! If you’re internet fluent, me saying
‘because’ and then a noun makes perfect sense, as a way of glossing over a complicated
explanation. But if you’re not familiar with that particular language, it just seems like bad
grammar. Whether it’s written, spoken or non-verbal, language allows us to share the
things that make up our culture, a process known as cultural transmission.

And one view of language is that it not only lets us communicate with each other, but that
it also affects how people within a culture see the world around them. This theory, known
as the Sapir-Whorf hypothesis, argues that a person’s thoughts and actions are influenced
by the cultural lens created by the language they speak.

Let’s go to the Thought Bubble to see an example of the Sapir-Whorf hypothesis in
action. What gender is the moon? For English speakers, this question might just conjure
images of the man in the moon, but in many languages, nouns have genders. And in some
languages, the moon is feminine, like the Spanish ‘la luna’.

But in others, the moon is masculine, like the German ‘der Mond.’ And this affects how
Spanish and German people perceive the moon! In one study, Spanish and German
people were asked to rate objects – which were gendered in their language – with
reference to certain traits.
Like, is the moon beautiful? Is the moon rugged? Is the moon forceful? The study found
that for those whose language used a masculine article, objects were more strongly
associated with stereotypically masculine traits, like forcefulness.

Another study found that when a name was assigned to an object, and the name matched
the gender of the word for it, it was easier for people to remember the name. Like, “Maria
Moon” tended to be remembered more readily by Spanish-speakers than by German
speakers.

Now, I should mention that the Sapir-Whorf hypothesis is one that researchers are
divided on. Benjamin Lee Whorf – the American linguist who helped shape this theory –
did his original research on indigenous languages like Hopi and Inuit. And since then,
anthropologists have argued that some of his findings don’t hold up.

For example, Whorf famously claimed that because the Hopi language describes time
differently, the Hopi people think of time differently. But anthropological evidence about
the Hopi people suggests otherwise. And Whorf’s study led to a strange, and false,
stereotype that Hopi people, quote, “have no sense of time.” Sociology is an evolving
field, and academic disagreements like this are just one reason that we study language
and how it shapes our society.

But if language helps us communicate, shape, and pass on culture, the next element of
culture is what helps us organize culture into moral categories. Values are the cultural
standards that people use to decide what’s good or bad, what’s right or wrong. They serve
as the ideals and guidelines that we live by. Beliefs, by contrast, are more explicit than
values – beliefs are specific ideas about what people think is true about the world. So for
example, an American value is democracy, while a common belief is that a good political
system is one where everyone has the opportunity to vote.
Different cultures have different values, and these values can help explain why we see
different social structures around the world. Western countries like the United States tend
to value individualism and stress the importance of each person’s own needs, whereas
Eastern countries like China tend to value collectivism and stress the importance
of groups over individuals. These different values are part of why you’re more likely to
see young adults in the US living separately from their parents and more likely to see
multi-generational households in China.

Cultural values and beliefs can also help form the guidelines for behavior within that
culture. These guidelines are what we call norms, or the rules and expectations that guide
behavior within a society. So giving up your seat for an elderly person? Great. Picking
your nose in public? Gross. These are two ways of talking about norms. A norm simply
relates to what we think is “normal” – whether something is culturally accepted or not.
And we have three main types of norms! The first are what we call folkways.
Folkways are the informal little rules that kind of go without saying. It’s not illegal to
violate a folkway, but if you do, there might be ramifications – or what we call negative
sanctions. Like, if you walk onto an elevator and stand facing the back wall instead of the
door.

You won’t get in trouble, but other people are gonna give you some weird looks. And
sometimes, breaking a folkway can be a good thing, and score you some positive
sanctions from certain parts of society. Like, your mom might ground you for getting a
lip ring, but your friends might think it’s really cool. Another type of norm is mores,
which are more official than folkways and tend to be codified, or formalized, as the stated
rules and laws of a society.

When mores are broken, you almost always get a negative sanction – and they’re usually
more severe than just strange looks. Standing backward in the elevator might make you
the office weirdo, but you’ll probably get fired if you come into work topless, because
there are strict rules about what kinds of clothing – or lack thereof – are
appropriate for the workplace. Hawaiian shirts – probably not. No shirt? You’re fired.
But mores aren’t universal.

You may get fired for showing up without a shirt at work, but men can lie on the beach
shirtless, or walk down the street with no problem. For women, these norms are different.
In the United States, cultural norms about women’s bodies and sexuality mean that it’s
illegal for women to go topless in public. But then in parts of Europe, social norms are
more lax about nudity, and it’s not uncommon for women to also be shirtless at the
beach. The last type of norm is the most serious of the three: taboo. Taboos are the
norms that are crucial to a society’s moral center, involving behaviors that are always
negatively sanctioned.

Taboo behaviors are never okay, no matter the circumstance, and they violate your very
sense of decency. So, killing a person: taboo or not? Your first instinct might be to say,
yes, killing is awful. But, while most cultures agree that life is sacred, and murder should
be illegal, it’s not always considered wrong. Most societies say it’s okay to kill in
times of war or in self defense. So what is a taboo? Cannibalism, incest, and child
molestation are common examples of behavior we see as taboo.

Yes, you can kill someone in self-defense, but if you pull a Hannibal Lecter and eat that
person, you’re going to jail, whether it started as self-defense or not. So don’t do that.
Ever. Norms like these and many others help societies function well, but norms can also
be a kind of constraint, a social control that holds people back.

Some norms can be bad, like ones that encourage unhealthy behavior like smoking or
binge drinking. But not all norms have clearly defined moral distinctions – like the way a
culture’s emphasis on competition pushes people toward success, but also discourages
cooperation. And that’s the tricky thing about culture. Most of the time you don’t notice
the cultural forces that are shaping your thoughts and actions, because they just seem
normal.
That’s why sociologists study culture! We can’t notice whether our values and our norms
are good or bad unless we step back and look at them with the analytical eye of a
sociologist. Today we learned what culture is and the difference between material and
non-material culture. We learned about three things that make up culture: symbols, values
and beliefs, and norms.

We looked at how language influences culture through the Sapir-Whorf hypothesis and
discussed the three types of norms – folkways, mores, and taboos – which govern our
daily life.

11. cultures, subcultures, and countercultures

How many cultures are there in the world? We’ve talked a lot about the things that make
a culture a culture – things like norms and symbols and languages. But we haven’t really
discussed how you lump all those little things together and say, yes, these are the things
that belong together – these things are culture A, and these other things are culture B. So,
what are the rules of culture? Well, culture isn’t just about nationality, or the language
you speak. You and another person can live in the same country and speak the same
language, and still have totally different cultural backgrounds. Within a single country,
even within a single city, you see lots of different cultures, and each person’s cultural
background will be a mishmash of many different influences. So, there really isn’t – and
never will be – a single, agreed-upon number of cultures that exist in the world. But that
doesn’t mean we can’t recognize a culture, and understand cultural patterns and cultural
change, and think about how different cultures contribute to the functioning of society.

Are you more likely to spend your free time at a football game, or at a modern art
gallery? Do you watch NCIS or True Detective? Do you wear JC Penney or J Crew?
These distinctions – and many more like them – are just one way of distinguishing
between cultural patterns, in terms of social class. Because, yes, class affects culture,
and vice versa. So one way of looking at culture is by examining distinctions between
low culture and high culture.

And OK, yeah, those are kinda gross sounding terms. But I want to be clear: High culture
does not mean better culture. In fact, so-called low culture is also known as popular
culture, which is exactly what it sounds like: Low or popular culture includes the cultural
behaviors and ideas that are popular with most people in a society. High culture,
meanwhile, refers to cultural patterns that distinguish a society’s elite. You can sort of
think of low culture versus high culture as the People’s Choice Awards versus the Oscars.

The Hunger Games probably weren’t gonna be winning Best Picture at the Oscars. But
they were massive blockbusters, and the original movie was voted the best movie of 2012
by the People’s Choice Awards. By contrast, the winner of Best Picture at the Oscars that
same year was The Artist, a black and white silent film produced by a French production
company. Very different movies, very different types of culture. Now, you can also look
at how different types of cultural patterns work together. The Hunger Games and The
Artist may appeal to different segments of society, but ultimately, they both fit into
mainstream American media culture.

Mainstream culture includes the cultural patterns that are broadly in line with a society’s
cultural ideals and values. And within any society, there are also subcultures – cultural
patterns that set apart a segment of a society’s population. Take, for example, hipsters!
They make up a cultural group that formed around the idea of rejecting what was once
considered “cool,” in favor of a different type of cultural expression. Yeah, your beard
and your fixed-gear bike, or your bleach blonde hair and your thick-framed glasses –
they’re all part of the material culture that signifies membership in your own specific
subculture.
But, who decides what’s mainstream and what’s a sub-culture? I mean, the whole hipster
thing has gone pretty mainstream at this point. Typically, cultural groups with the most
power and societal influence get labelled the norm, and people with less power get
relegated to sub-groups. The US is a great example of this. In large part because of our
history as a country of immigrants, the US is often thought of as a “melting pot,” a place
where many cultures come together to form a single combined culture. But how accurate
is that? After all, each subculture is unique – and they don’t necessarily blend together
into one big cohesive culture just because we share a country.

And more importantly, some cultures are valued more than others in the US. For
example, everyone gets Christmas off from school, because Christian culture holds a
privileged role in American society. That might not seem fair, if you’re a member of a
sub-culture that isn’t folded into mainstream culture. So, it's not really a melting pot if
one flavor is overpowering all the other flavors. And this brings me to another subject:
How we judge other cultures, and subcultures. Humans are judgmental. We just are. And
we’re extra judgmental when we see someone who acts differently than how we think
people should act. Ethnocentrism is the practice of judging one culture by the standards
of another. In recent decades, there’s been growing recognition that Eurocentrism – or the
preference for European cultural patterns – has influenced how history has been recorded,
and how we interpret the lives and ways of people from other cultures.

So what if, rather than trying to melt all the cultures into one, we recognize each
individual flavor? One way to do this is by focusing research on cultures that have
historically gotten less attention. For example, afrocentrism is a school of thought that re-
centers historical and sociological study on the contributions of Africans and African-
Americans. Another option is expanding and equalizing your focus. Instead of looking at
behavior through the lens of your own culture, you can look at it through the lens of
multiculturalism – a perspective that, rather than seeing society as a homogenous culture,
recognizes cultural diversity while advocating for equal standing for all cultural
traditions.
In this view, America is less a “melting pot” and more like a multicultural society. Still,
the ways in which cultures and subcultures fit together – if at all – can vary, depending on
your school of thought as a sociologist. For example, from a structural functionalist
perspective, cultures form to provide order and cohesiveness in a society. So in that view,
a melting pot of cultures is a good thing. But a conflict theorist might see the interactions
of sub-cultures differently. Prioritizing one sub-culture over another can create social
inequalities and disenfranchise those who belong to cultures that are at odds with the
mainstream.

It’s hard to encourage individual cultural identities without promoting divisiveness. In the
US at least, it’s a constant struggle. But sometimes, sub-groups can be more than simply
different from mainstream culture – they can be in active opposition to it. This is what we
call a counter-culture. Counter-cultures push back on mainstream culture in an attempt to
change how a society functions. Let’s go to the Thought Bubble to take a trip back to one
of the biggest counter-cultural periods of the 20th century: the 1960s. In the United
States, the 1960s were rife with countercultures.

It was a time of beatniks, and hippies, of protests against the Vietnam war, and of
protests for civil rights and women’s liberation. These movements were often led by
young people and were seen as a rebellion against the culture and values of older
generations. This was the era of free love, where people embraced relationships outside
of the traditionally heterosexual and monogamous cultural norms. Drug use – especially
the use of psychedelic drugs – was heavily associated with this sub-culture and was
celebrated in its popular culture – think Lucy in the Sky with Diamonds or the Beat
authors’ books about acid trips. But this counter-culture was also a push back
politically against mainstream culture.

Many cornerstones of the politics of the American left have their origins in the counter-
culture of the 1960s: anti-war, pro-environmentalism, pro-civil rights, feminism,
LGBTQ equality. From the Stonewall riots to the Vietnam war protests, ‘60s counter-
culture was where many of these issues first reached the public consciousness.

So, counter-cultures can often act as catalysts for cultural change, especially if they get
big enough to gain mainstream support. But cultures change all the time, with or without
the pushback from sub-cultures and counter-cultures. And different parts of cultures
change at different speeds. Sometimes we have what’s called a cultural lag, where some
cultural elements change more slowly than others.

Take how education works, for example. In the US, we get the summer off from school.
This is a holdover from when this was a more agricultural country, and children needed
to take time off during harvest. Today, there’s no real reason for summer vacation, other
than that’s what we’ve always done. So how does cultural change happen?

Sometimes, people invent new things that change culture. Cell phones, for example, have
revolutionized not just how we make phone calls, but how we socialize and
communicate. And inventions don’t just have to be material. Ideas, like about money or
voting systems, can also be invented and change a culture. People also discover new
things.

When European explorers first discovered tomatoes in Central America in the 1500s and
brought them back to Europe, they completely changed the culture of food. What would
pizza be without tomatoes?! A third cause of cultural change comes from cultural
diffusion, which is how cultural traits spread from one culture to another.

Just about everything we think of as classic “American” culture is actually borrowed and
transformed from another culture. Burgers and fries? German and Belgian, respectively.
The American cowboy? An update on the Mexican vaquero. The ideals of liberty and
justice for all enshrined in our founding documents? Heavily influenced by French
philosophers like Rousseau and Voltaire, and British philosophers like Hobbes and
Locke, as well as by the Iroquois Confederacy and its ideas of representative democracy.

Whether we’re talking about material culture or symbolic culture, we’re seeing more and
more aspects of culture shared across nations and across oceans. As symbolic
interactionists see it, all of society is about the shared reality – the shared culture – that
we create. As borders get thinner, the group of people who share a culture gets larger.

Whether it’s the hot dogs we get from Germany or the jazz and hip hop coming from
African traditions, more and more cultures overlap as technology and globalization make
our world just a little bit smaller. And as our society becomes more global, the questions
raised by two of our camps of sociology, structural functionalism and conflict theory,
become even more pressing. Are the structural functionalists right?

Does having a shared culture provide points of similarity that encourage cooperation and
help societies function? Or does conflict theory have it right? Does culture divide us, and
benefit some members of society more than others? In the end, they’re both kind of right.
There will always be different ways of thinking and doing and living within a society –
but culture is the tie that binds us together. Today, we learned about different types of
culture, like low culture and high culture. We looked at different ways of categorizing
cultures into sub-cultures. We contrasted two different ways of looking at cultural
diversity: ethno-centrism and multi-culturalism.

We discussed the role of counter cultures and explored how cultural change happens.
And lastly, we looked at a structural functionalist and a conflict theory perspective on
what cultures mean for society.

12. how we got here


Until about 12,000 years ago, the largest group of people ever assembled, the most
humans ever gathered in one place, was probably a crowd of about 100, tops. And there
were somewhere between one and ten million people on the entire planet back then.
Today, we have football stadiums that can fit a hundred people a thousand times over.

The city of Shanghai has a population of over 24 million. And there are almost 7.5 billion
people on Earth! How the heck did we get from there to here? That might sound like a
history question, and it is, partly. But it's also a sociology question. Because, if we want
to understand how we got from small groups huddled around a fire to cities of millions,
we need to understand what society is and how societies change as their populations
grow. And we need to understand how different kinds of societies shape the people who
live in them. Pretty much any question you can ask about society, you can answer with
the help of sociology.

As long as there have been humans, there have been societies. We're social animals, and
even when there were mere handfuls of us, we grouped together, forming the first
societies. Now, society can mean lots of different things: A few families who spend all
their time hunting deer and picking mushrooms can be a society. But so were the 70
million people of the Roman Empire. And so are the 1.2 billion people living in India
today. So we need a definition that's going to include all of these things. And
conveniently enough, we have one: a society is simply a group of people who share a
culture and a territory.

That's a good definition, but it doesn't really tell us much about the different kinds of
societies, or how we get from one kind to another. For that, we turn to the work of
American sociologist Gerhard Lenski. Lenski focused on technology as the main source
of societal change, through a process he called sociocultural evolution: the changes that
occur as a society gains new technology. Lenski then broke up human history into five
different types of societies, defined by the technology they used and the social
organizations that the technology helped create and sustain.
If you look back to early human history, say about 30-40 thousand years ago, you find a
lot of what Lenski called hunting and gathering societies. In these societies, people made
use of extremely basic tools to help them hunt animals and gather wild plants for food.
Now, if you think about how much you eat in a day, and imagine trying to gather up that
much food every day, it should be pretty clear that this is no easy task. So food was the
major concern in these societies – some of which, by the way, still exist today.

People in hunting and gathering societies spend almost all their time trying to make sure
they have enough food. And they’re nomadic, following migrating animals and wild
harvests, so they don’t build permanent settlements. So, by their very nature, these
societies tend to be small; hunting and gathering can't support a group of more than 25 to
40 people effectively. And in order for hunting and gathering to support even that,
everyone has to work to find food, and everyone has to share their resources in order to
ensure the survival of the group. This means that these societies have very low inequality.

For the vast majority of human history, every single person lived in hunting and gathering
societies, up until about 12,000 years ago, when the domestication of plants and animals
led to new kinds of society: horticultural and pastoral societies. Pastoral societies are
based around the domestication of animals and are also nomadic, moving from place to
place to keep their herds fed. Horticultural societies, on the other hand, are based on
cultivating plants. So, with horticultural societies we see the first human settlements, as
groups began to stay put, to remain close to reliable sources of food.

And we also see, for the first time, the accumulation of material surplus – that is, more
resources than are needed to feed the population. This is incredibly important because,
having a surplus allows a society to grow. And it also means that not everyone needs to
work on getting food and simply surviving. This, in turn, leads to the first real instances
of specialization in society, with separate political, religious, and military roles coming
about.
We also get real social inequality for the first time. And this same dynamic accelerates as
we move into agrarian society, as permanent settlements emerge based around
agricultural production. Starting about 5,000 years ago – with better farming techniques
like the animal-drawn plow – we get more food production and an even bigger material
surplus. From this came larger populations and larger settlements, with even more
specialization and even more inequality.

Remember serfs and nobles? Feudalism was an agrarian society. And you know what else
happens when societies reach this point? The family starts to become less important. In
other kinds of societies, things like education are handled almost entirely by the family.
But as societies grow and become more complex, those functions start to be taken up by
larger social institutions, like the church or schools. And now we finally start approaching
present day America, with industrial societies. These societies get their start with the
industrial revolution around 1750, as production began to shift from human and animal
power to machine power. This had a massive impact on food production, with new
technologies like the tractor and the combine producing huge surpluses that could support
even larger populations with even more specialization.

But the industrial revolution also marked a fundamental change in the organization of
society itself. Societies far larger than anything seen before meant a greater need to assert
centralized control over everything – from the production of goods, to transportation, to
agricultural production – in order to keep things running smoothly.

For the first time, human society moved away from a subsistence-based economy. As
mass production became possible, a capital-based economy emerged. As the surplus grew
and specialization increased, so did inequality, with factory workers spending 12-hour
days on one end, and incredibly wealthy “captains of industry” making enormous profits
on the other. It’s no coincidence that, soon after the industrial revolution, Marxism and
conflict theory emerged.
And the decreasing importance of the family continued as well, as more institutions
stepped into traditional family roles. Industrial societies were the first to have universal
public education, for instance. And, for the first time, the majority of health care and
caregiving were institutionalized, done outside the home in hospitals. The need to keep
production organized also meant an increasingly urbanized population.

Because, it’s easier to control the resources you need if they’re centralized. So people
moved from the countryside to urban centers, where the industrial jobs were. And all of
this keeps going in Lenski’s scheme of things, with specialization and technological
innovation continuing, until the development of the computer, a technology that gave
rise to the postindustrial society. In postindustrial societies, we still see specialization,
increased urbanization, and technological advances. But the defining change is that
postindustrial societies shift away from an economy based on raw materials and
manufacturing, to an economy based on information, services, and technology.

This is how we got here. If you look at the most dynamic sectors of the US economy, you
see massive wealth being created in tech, finance, and service industries, but a steady
decline in manufacturing. That said, it's not as though Americans don't buy stuff. Apps
can do a lot of things, but they can't (yet) conjure a car out of the ether for you. So this is
a good chance to point out that these different types of society aren't isolated from each
other. You can't have a postindustrial society without having industrial societies
elsewhere to supply it with goods. This points again to increasing inequality – not just
within one society, but across societies.

So, in Lenski's understanding, societal change is driven by technological change. But, it’s
worth pointing out that not all of these changes are beneficial. Pollution, global warming,
and large-scale warfare are new problems that technology has brought us. And,
technology doesn’t solve fundamental societal problems. It has the potential to reorganize
society, but technology can’t tell us how to have peaceful or just societies.
In fact, just looking at Lenski's classifications, you can see that advancing technology
also advances inequality, making society increasingly unequal. So, we can't limit our
discussion of society to just looking at technology. But that’s okay, because the
sociocultural changes that Lenski talks about can also be understood using the work of
some old friends: Marx, Weber, and Durkheim. Marx, for example, might seem pretty
similar to Lenski at first:

If you think back to his theory of historical materialism, he certainly seems to put a strong
focus on technology and the economy as the driving forces of history. Remember? He
saw that changes in the forces of production are important in pushing the change from
one mode of production to another. But for Marx, you only get large-scale social change
through class struggle, which culminates in a revolution, overthrowing the old relations
of production and replacing them with an entirely new set.

So in Marx’s view, the transition between Lenski’s stages requires technological change,
but it also requires revolution. And we can also use Marx’s understanding of conflict to
compare Lenski’s stages with each other. In hunting and gathering societies, for example,
conflict and inequality are leveled by the lack of surplus and the need to share resources.
But that’s not the case in postindustrial society.

Max Weber, for his part, seems further away from Lenski than Marx, focusing not on
technology or revolution, but on ideas. The major transition that Weber talked about was
the shift from traditional to modern society, which he argued was really a matter of
rationalization. Now, it's not that Weber didn't appreciate the importance of technology.
But he argued that the transition from agrarian to industrial society, for instance, began
with a shift in ideas – like new techniques in accounting and ways of approaching social
organization.
And it was these ideas, combined with advances in technology, that produced the overall
change. So in this view, both ideas and technology were crucial for the emergence of
modern capitalism. And Durkheim, finally, took a different tack from either Marx or
Weber. He approached the transitions that Lenski talked about from the perspective of a
society’s underlying social structure. Specifically, Durkheim saw the history of society as
a long term change in solidarity, a change in what held societies together.

He argued that hunting and gathering societies were held together by similarity, what he
called mechanical solidarity. Durkheim argued that everyone in these societies had the
same skills and lived in basically the same way. But that changed as society developed
and specialization increased. With more specialization, people became more
differentiated, taking on different jobs, learning different skills, and living in different
ways. But, Durkheim argued, people also became more tightly integrated, because they
became more interdependent.

Factory workers needed farmers to make food so that they could eat, and farmers needed
factory workers to make their tools and other goods. Durkheim called this
interdependence organic solidarity. And so Lenski’s sociocultural evolution is, for
Durkheim, the story of a long transition from mechanical to organic solidarity.
Ultimately, all of these ways of looking at society and its changes, from the point of view
of technology, or conflict and revolution, or ideas, or underlying social structure, are
important for understanding what society is and how it works.

Each one of these perspectives sees things that the others miss, and each one is important
for the discipline of sociology. Today we learned about society, what it is and how it
changes. We talked about Gerhard Lenski's classification of societies into five types, and
the technological changes that turn one into another. We returned to Marx and Weber,
and talked about how they understood societal change. And we also talked about
Durkheim's understanding of society and how social solidarity can be mechanical or
organic.
13. social development

Have you ever met a friend’s parents and realized that your friend was basically a mini-
me of their mom? Not just because they both have brown hair or a pointy nose. It’s how
they talk, the way they both like making silly puns, their attitudes and beliefs. Now the
question is: How much of that similarity is genetic, and how much is just a function of
the fact that your friend grew up with their mom, and pretty much learned how to be a
human being by watching her? This is the age-old question: nature or nurture? Nature is
the part of human behavior that’s biologically determined and instinctive. When a baby
latches onto your finger and won’t let go, and it’s basically the cutest thing in the whole
world, it’s not because they learned to do that – it’s natural. A lot of human behavior,
however, isn’t instinctive – it comes instead from how you’re nurtured.

The nurture part of behavior is based on the people and environment you’re raised in.
And it’s this second part – the social environment that determines human behavior – that
sociologists tend to investigate and have many different theories about.

To a big extent, we develop our personalities and learn about our society and culture
through a social process – one known as socialization. Sounds legit, right? But what
happens if you don’t have people around you? Social isolation affects our emotional and
cognitive development, a lot.

To get a glimpse into how and why this is, let’s go to the Thought Bubble to look at
sociologist Kingsley Davis’s case study of Anna. In the winter of 1938, a social worker
investigated a report of child neglect on a small Pennsylvania farm and found, hidden in a
storage shed, a five-year-old girl. That five-year-old girl was Anna.
She was unwanted by the family she was born into and was passed from house to house
among neighbors and strangers for the first six months of her life. Eventually, she ended
up being kept in a shed with no human contact other than to receive food.

Kingsley Davis observed Anna for years after her rescue and wrote about the effects of
this upbringing on her development. When Anna was first rescued, she was unable to
speak or smile, and was completely unresponsive to human interaction.

Even after years of education and medical attention, her mental development at age eight
was less than that of a typical two-year-old. This is a story with both a sad beginning and
a sad ending. Anna died of a blood disorder at the age of 10. And Davis’s study of how
isolation affects young children was only one of many that have shown how a lack of
socialization affects children’s ability to develop language skills, social skills, and
emotional stability.

There are lots of different theories about how we develop personalities, cognitive skills,
and moral behavior, many of which come from our siblings in social science:
psychologists. Take Sigmund Freud. You’ve heard of him: Austrian guy? Liked cigars?
Invented the field of psychoanalysis? One of his main theories was about how
personalities develop.

He thought we were born with something called an id. You can think of the id as your
most basic, unconscious drive – a desire for food, comfort, attention. All a baby knows is
it wants THAT and it will scream until it gets it. But then we develop the ego and
superego to balance the id.

Ego is the voice of reason, your conscious efforts to rein in the pleasure-seeking id. And
your superego is made up of the cultural values and norms that you internalize and use to
guide your decisions. So if the id is the devil on your shoulder, the superego is the angel
on the other shoulder, and the ego is the mediator who intervenes when the angel and
devil start fighting.

Now, a lot of Freud’s work hasn’t stood the test of time, but his theories about how
society affects our development have influenced pretty much everyone who has
researched the human personality. This includes Swiss psychologist Jean Piaget, who
spent much of his career in the early 1900s studying cognitive development.

While researching ways to measure children’s intelligence, Piaget noticed that kids of
similar ages tended to make similar mistakes. And this, to Piaget, suggested that there
were four different stages of cognitive development.

First Stage: TOUCH EVERYTHING! Babies learn about the world by grabbing things
and sticking them in their mouths. This curious, slobbery interaction with the world is
what Piaget called the sensorimotor stage – the level of development where all
knowledge is based on what you can perceive with your senses.

Around age 2, a child enters the next stage, known as the preoperational stage. At this
point, kids have learned to use language and begin to ask questions to learn about the
world, rather than just grabbing stuff. Now they can think about the world and use their
imaginations – which leads to playing pretend and an understanding of symbols. But
thinking about the world is pretty much limited to how THEY think about the world.

Kids in the preoperational stage are pretty egocentric; if they love playing with trains
and you ask them what their dad’s favorite thing to do is, they’ll probably say that he
loves playing with trains too. It’s not until they reach the concrete operational stage,
around 6 or 7, that they develop the ability to take in other people’s perspectives, and
begin to make cause-and-effect connections between events in their surroundings.
And in the formal operational stage, at about age 12, Piaget said, kids begin to think in
the abstract and use logic and critical thinking. Now, American psychologist Lawrence
Kohlberg later expanded on Piaget’s model of cognitive development to incorporate
stages of moral development.

Essentially, kids’ sense of what is “right” begins in what Kohlberg called the pre-
conventional stage, where right is just what feels good to them personally. Next, they
move to the conventional stage, where what’s right is what society and the people around
them tells them is right.

And then finally, children end up in the post-conventional stage, where they begin to
consider more abstract ethical concepts than just right or wrong. So at a young age, a
child doesn’t realize that grabbing the candy bar they want at the store is wrong – they
just want it.

But then, a combination of societal norms and being scolded by their parents convinces
them that stealing is wrong, no matter how much they want the candy bar. And over time,
they learn that morals have gray areas; stealing is wrong if it’s just for fun, but might be
considered less wrong if you’re stealing to feed your family.

Eventually, children reach a point where they’re able to think about things like freedom
and justice, and realize that societal norms about what’s right may not always line up
with these principles. Sure, laws against stealing candy may be just, but what about laws
that say only certain people can get married? Just because something is a law, is it right?

How you feel about that question may depend on your socialization. And on your gender.
Carol Gilligan, an American psychologist who started out as a research assistant and
collaborator of Kohlberg’s, explored how girls and boys experience these stages
differently.
She realized that Kohlberg’s original studies only had male subjects – which may have
biased his findings. When she expanded the research to look at both male and female
children, she found that boys tended to emphasize formal rules to define right and wrong
– what she called a justice perspective.

Whereas girls tended to emphasize the role of interpersonal reasoning in moral decisions
– what she called a care and responsibility perspective.

Gilligan argued that these differences stem from cultural conditioning that girls receive to
fulfill ideals of femininity. She thought that we socialize girls to be more nurturing and
empathetic, and that influences their moral interpretation of behavior. The next theory of
social development I want to focus on is from American sociologist George Herbert
Mead, who was one of the founders of the sociological paradigm we talked about a few
episodes ago, known as symbolic interactionism.

His work focused on how we develop a “self.” What makes up the you that is inherently
you? Are you born with some inherent spark of you-ness? According to Mead, no!
Instead, he believed that we figure out who we are through other people. All social
interactions require you to see yourself as someone else might see you – something Mead
described as “taking on the role of others.”

In the first stage of development, according to Mead’s model, we learn through imitation
– we watch how others behave and try to behave like them. You see your mom smile at
your neighbor, so you smile too. And Mead observed that as kids got older, they moved
on to a new stage – play. Rather than just imitating your mom, you might
play at being a mom, taking care of a doll. Assuming the role of “mommy” or “daddy” is
a kid imagining the world from their parent’s perspective.

The next stage of development is the game stage, where children learn to take on multiple
roles in a single situation. What does that have to do with games? Well, games use rules
and norms, and require kids to take on a role themselves, and develop that role in reaction
to the roles that others take on. Team sports are a great example of this.

When you’re playing soccer, you need to not only know what you’re going to do, but also
what your teammates and your opponents will do. If you were ever the kid who ended up
running the wrong way on the soccer field because you didn’t realize the ball had
switched possession, you know how important it is to anticipate what other people do.

The last stage, in Mead’s model, occurs when we learn how to take on multiple roles in
multiple situations. In this phase, we weigh our self and our actions not against one
specific role, but against a ‘generalized other’ – basically, a manifestation of all of our
culture’s norms and expectations.

Now, you might have noticed that all these theories focus on childhood. So, does that
mean that your personality is set once you hit 18? No, definitely not. As anyone over 18
will tell you, you keep growing well past high school.

And that’s why yet another theorist, German-born psychologist Erik Erikson, came up
with his own eight-stage theory of development, that goes all the way from infancy to old
age. He based these stages on the key challenge of each period of life. When you’re a
toddler, for example, your biggest challenge is getting what you want – or as Erikson puts
it, gaining autonomy, which helps you build skills and confidence in your abilities.

But once you’re a young adult, you’ve got plenty of autonomy. Now a bigger challenge is
developing intimate relationships. Falling in love, finding friends – there’s a reason
that’s the focus of every 20-something sitcom. And his list goes on. Every life stage from
when you’re born to when you die features different expectations that inform what we see
as markers of social development.
Moving out, getting married, having kids – they’re all societal markers of social
development as an adult. But whether you feel like one or not, adulthood will come for us
all – and it’s your socialization that will determine how exactly you perform the role of
“adult.” Next week, we’ll talk about the different agents of socialization that shape who
we really end up being.

Today we learned about social development, starting with the role of nature and nurture
in influencing a person’s development. We talked about social isolation and the
importance of care and human interaction in early years for proper emotional and mental
development. Then, we talked about five theories of development: Freud’s Id, Ego, and
Superego; Piaget’s Four Stages of Cognitive Development; Kohlberg and Gilligan’s
theories of moral development; Mead’s theory of self; and Erik Erikson’s life stage
theory.

14. socialization

What do you, as you’re watching me right now, have in common with a toddler who’s
being read a bedtime story? I’ll give you a clue. It’s also something you have in common
with the kids in The Breakfast Club. As well as with a soldier going through boot camp.
Give up? You’re all being socialized. It’s also the title of the episode. You probably saw
that. Each of us is surrounded by people, and those people become a part of how we act
and what we value.

This is known as socialization: the social process through which we develop our
personalities and human potential and learn about our society and culture. Last time, we
talked about the HOW of socialization, how we learn about the social world. And no
matter which of the many theories out there that you like best, the answer seems to be
that we’re socialized by interacting with other people.

But which people? What we didn’t talk about last week was the WHO of socialization:
Who do we learn about the social world from? What people, and what institutions, have
made you who you are today?

Socialization is a life-long process, and it begins in our families. Mom, Dad,
grandparents, siblings – whoever you’re living with is pretty much your entire social
world when you’re very young. And that’s important, because your family is the source
of what’s known as primary socialization –your first experiences with language, values,
beliefs, behaviors, and norms of your society.

Parents and guardians are your first teachers of everything – from the small stuff like how
to brush your teeth to the big stuff like sex, religion, the law, and politics. The games they
play with you, the books they read, the toys they buy for you, all provide you with what
French sociologist Pierre Bourdieu called cultural capital – the non-financial assets that
help people succeed in the world. Some of this cultural capital may seem fairly innocuous
– I mean, is reading Goodnight Moon really making that big of an impact on a toddler?
Yes, actually.

It teaches the “value” of reading as much as it helps the child begin to recognize written
language. The presence of books in the home is associated with children doing well in
school. Another important form of socialization that starts in the home is gender
socialization, learning the psychological and social traits associated with a person’s sex.

Gender socialization starts from the moment that parents decide on a gendered name and
when nurses put a pink or a blue hat on the baby. Other group memberships, like race and
class, are important parts of initial socialization. Race socialization is the process through
which children learn the behaviors, values, and attitudes associated with racial groups.
Racial discrimination is partly the result of what parents teach their children about
members of other races.

And class socialization teaches the norms, values, traits, and behaviors you develop based
on the social class you’re in. This may help explain why more middle- and upper-class
children go to college. Not only can their families afford to send them, but these children
are expected to attend. They grow up in a home that normalizes college attendance. Now,
gender, race, and class socialization are all examples of anticipatory socialization – that’s
the social process where people learn to take on the values and standards of groups that
they plan to join.

Small children anticipate becoming adults, for example, and they learn to play the part by
watching their parents. Gender socialization teaches boys to “be a man” and girls to “be a
woman”. But children also learn through secondary socialization – that’s the process
through which children become socialized outside the home, within society at large. This
often starts with school. Schools are often kids’ first introduction to things like
bureaucracies, as well as systems of rules that require them to be in certain places at
certain times, or act in ways that may be different from what they learned at home.

Not only do schools teach us the three r’s – reading, ‘riting and ‘rithmetic – but they
come with what sociologists call a hidden curriculum – that is, an education in norms,
values, and beliefs that are passed along through schooling.

Take, for example, a spelling bee. Its main goal is to teach literacy and encourage kids to
learn how to spell. But something as seemingly benign as a spelling bee can have many
hidden lessons that stick with kids, too. For example, it teaches them that doing better
than their peers is rewarding – and it enforces the idea that the world has winners and
losers. Another hidden curriculum of school in general is to expose kids to a variety of
people.
When your only socialization is your family, you just get one perspective on race, class,
religion, politics, et cetera. But once you go out into the world, you meet many people
from many backgrounds, teaching you about race and ethnicity, social class, disability,
gender and sexuality, and more. School becomes not just a classroom for academic
subjects, but also for learning about different kinds of people. And, of course, schools are
also where kids are exposed to one of the most defining aspects of school-age life: peer
groups.

Peer groups are social groups whose members have interests, social position, and usually
age in common. As you get older, your peer group has a massive impact on the
socialization process. Let’s go to the Thought Bubble to see just how big that impact
can be. In the late 1950s, American sociologist James Coleman began studying teenagers
– how they interacted and how their social lives affected their education. He interviewed
teens in 11 high schools in the Midwest, asking them questions about what social group
they identified with and who else they considered members of their group.

Based on these interviews, Coleman identified four main social categories. And, uh, the
names of these categories will probably sound familiar to you: They were nerds, jocks,
leading crowd, and burnouts. Basically, he discovered the 1950s version of The Breakfast
Club. And with these social categories came social prescriptions – behaviors that were
expected of people in those groups.

Coleman found that certain things were important to the members of certain groups, like
being a good dancer or smoking or having money or getting good grades. He also tested
the students’ IQs and assessed their grades. And surprise! It turned out that who you hung
out with affected how well you did in school.

In some of the schools, getting good grades was considered an important criterion for the
“leading group” – aka the popular kids – but in other schools, it wasn’t. And in the
schools where good grades were not a sign of popularity, students who scored high on IQ
tests actually did worse on their exams than similarly smart students at schools where
good grades made you popular. Thanks, Thought Bubble! Now, Coleman’s study might
seem like common sense – of course you and your friends are gonna be pretty similar.
Don’t we choose to be friends with people who are like us?

Well, not entirely. Coleman’s study showed that we don’t just pick peer groups that fit
into our existing traits – instead, peer groups help mold what traits we end up with. OK,
so far, we have family, schools, and peers as the main forces that influence someone’s
socialization. But what about me? Yes, me, Nicole Sweeney. Am I part of your
socialization?

Or more precisely, are YouTube videos considered a form of socialization? Short answer:
yes! Long answer: The media you consume are absolutely a part of your socialization.
TV and the internet are huge parts of Americans’ lives. And how we consume our media
is affected by social traits, like class, race, and age. A teenager or twenty-something in
2017 is much more likely to watch online media, like Netflix or YouTube, than television.
And low-income Americans watch much more TV than their higher-income counterparts.
The media we consume also impact us dramatically.

The American Academy of Pediatrics, for example, has said there are connections
between excessive television viewing in early childhood and cognitive, language, and
social-emotional delays. But TV can also influence the attitudes of viewers, especially
young ones. For example, studies have found that kids exposed to Sesame Street in
randomized controlled trial settings reported more positive attitudes toward people of
different races – most likely a result of the program’s wide variety of characters from
different racial and ethnic backgrounds. TV also affects us well beyond childhood. One
recent study found that MTV’s “16 and Pregnant” may have acted as a cautionary tale,
helping to change teen girls’ attitudes toward birth control and contributing to declining
rates of teen pregnancy.
So far, the types of socialization we’ve talked about have been fairly subtle — but there
are also more intense types of socialization. Total institutions are places where people are
completely cut off from the outside world, and face strict rules for how they must behave.
First coined by sociologist Erving Goffman, the term “total institution” refers to places
like the military, prisons, boarding schools, or psychiatric institutions that control all
aspects of their residents’ lives – how they dress, how they speak, where they eat, where
they sleep. And in a total institution, residents undergo resocialization, where their
environment is carefully controlled to encourage them to develop a new set of norms,
values, or beliefs.

They do this by, basically, breaking down your existing identity and then using rewards
and punishment to build up a whole new you. Think about every boot camp movie
you’ve ever seen. All soldiers are given the same haircut and uniform, expected to reply
to questions in the same way, put through the same grueling exercises, and humiliated by
the same officer.

This process re-socializes the soldiers to put extreme value on their identity within the
group, making them more willing to value self-sacrifice if their unit is in danger. So
whether you’re GI Jane training for a reconnaissance team or Molly Ringwald trying to
maintain her queen-bee status in the leading crowd, the you that you are has been
powerfully shaped by people and institutions. Now, think back on your own life – who
has been the biggest influence on YOUR socialization? Who do you think that you
yourself have influenced? Hard questions to answer, maybe, but definitely worthwhile –
and hopefully a little easier now that you’ve learned how sociologists think about it.

Today, we learned about five different types of socialization. We talked about
anticipatory socialization from your family, like gender norms, that prepare children for
entering society. We discussed the “hidden curriculum” in schools. We learned about
peer groups through a look at James Coleman’s study of teenage social groups.
We explored the role of media in socialization. And finally, we talked about total
institutions and how they can act as a form of re-socialization.

15. Social Interaction & Performance

You're daydreaming in class when the teacher calls on you and asks you a question. You
don't know the answer, so you look desperately around the room for help. Finally, one of
your classmates whispers it to you. So you say the answer, and the moment of terror is
over.

You go back to daydreaming, the teacher goes back to teaching, and everyone's happy. A
lot of stuff just happened there. Stuff that raises many questions. Like, why are you
worried about giving the right answer? Why are you worried about answering the
question at all?

And why does your classmate help you out, when it could get them in trouble? If you
want the right answers to these questions, we need to talk about social interaction. And
we also need to talk about reality. Because, according to some sociological theories, the
reality of your social world – in your classroom and beyond – is basically a huge, life-
long stage play.

Social interaction is simply the process by which people act and react in relation to
others. Whenever people converse, or yell, or fight, or play sports, that’s social
interaction. And any place you find social interaction, you're going to find social
structure.
Social structure consists of the relationships among people and groups. And this
structure gives direction to, and sets limits on, our behavior. Because our relationships
establish certain expectations of everyone involved, depending on the social setting.

This is really obvious in a classroom: The teacher teaches and the students learn, because
that’s the expectation for that relationship, in that setting. But if you run into a teacher,
say, at the mall, you both behave differently – and probably awkwardly – because the
expectations for your interaction in that social setting have changed.

Now, this still doesn't tell us why these relationships work the way they do. But it does
tell us where to look. If our interactions are a matter of expectations, then we need to
understand how those expectations are set, and for that we need to talk about social
status.

Status is a position that a person occupies in a society or social group. It's part of their
identity, and it defines their relationships with other people. So, the status of “teacher”
defines how a teacher should relate to their students.

But statuses aren't just professions: gender, race, and sexual orientation are all social
statuses, as are being a father, or a child, or a citizen. And all the statuses held by a single
person make up that person's status set.

That status set can tell us a lot about a person, because statuses exist in a hierarchy, with
some statuses being more valued than others. So if I tell you that someone is a white
middle-aged male CEO, then you can make some pretty reasonable guesses about his
education, wealth, and the power he holds in society.

And you've probably noticed that there are different kinds of statuses; for example
"white," "middle-aged," and "male" are pretty different from the status of "CEO." The
first three are all ascribed statuses. Ascribed statuses are those in which a person has no
choice; they're either assigned at birth or assigned involuntarily later in life.

Race, for instance, is an ascribed status assigned at birth, while the ascribed status of
“middle-aged” happens at a point later in life. CEO, on the other hand, is an achieved
status – it’s earned, accomplished, or obtained with at least some effort on the person’s
part. Professions, then, are achieved statuses. So is being a student, or a parent.

Beyond this difference, there’s also the fact that some statuses are more important than
others. A master status is the status others are most likely to use to identify you. This can
be achieved, like “professor,” or ascribed, like “cancer patient.” And as that example
shows you, a master status doesn’t need to be positive or desirable. In fact, it doesn’t
even need to be important to the person who holds it.

It just needs to be important to other people, who use the status as their primary way of
locating that person in the social hierarchy. Also, statuses tend to clump together in
certain ways. Most CEOs are college educated, for example, but they aren’t always.

And a mismatch or contradiction between statuses is called a status inconsistency. When
we talk about PhD students working as baristas, that’s exactly what we’re pointing to:
there’s a status inconsistency between PhD and barista.

At least in the industrialized world, service workers aren't "supposed" to be highly
educated. Now that's all very interesting, you might say, but we still haven't said that
much about social interaction. And you're right: status gets us started, but if we want to
get into how people behave, then we need to talk about roles.

If status is a social position, then roles are the sets of behaviors, obligations, and
privileges that go with that status. So a person holds a status, but they perform a role.
Keep that word in mind: Perform. Now, since a person can have multiple statuses, they
can have multiple roles too.

But a single status often has multiple roles that go with it. For example, a teacher's role in
the classroom is to teach and lead students. But in the faculty lounge, the status of teacher
has another role: acting as a colleague to other teachers, or as an employee to the
principal – roles that require a whole different bunch of behaviors than those found in the
classroom.

But all of the roles attached to the single status of “teacher” make up that status' role set.
All statuses have role sets. And various role sets can sometimes demand contradictory
behaviors of the person who holds that set. When the roles attached to different statuses
create clashing demands, that’s known as role conflict. Parents who work, for instance,
often need to decide between the demands of their jobs and the demands of their families,
which can lead to role conflict.

And even the roles within a single status can create contradiction, in what we call role
strain. A student who has responsibilities for class, but also for basketball, and orchestra,
and the yearbook committee, experiences role strain as they try to balance the competing
obligations of these roles, all within the context of their status as a student.

Now sometimes, whether it’s because of conflict, strain, or other reasons, people just
disengage from a certain role, in a process called role exit. This can be voluntary, like
quitting your job, or involuntary, like getting dumped.

In either case, it's rarely as simple as just walking out the door, because roles are a part of
who we are. So exiting a role can be traumatic, especially without preparation, or if the
exit isn't by choice. Now, we've been talking about roles as though they’re prescriptive,
or that they totally determine our behavior.
But they don't! Roles are guidelines, expectations that we have for ourselves and that
others place on us. We may or may not internalize those expectations, but even if we do,
our behavior still isn’t completely controlled. But why do statuses come bundled with
roles in the first place?

Why can't I just not perform my role? The answer is complicated, but part of it is that,
well, reality itself is socially constructed. I mean, there's nothing in the laws of physics
that says that some people are teachers, and that those people get to ask questions, and
students have to answer them.

But that doesn't mean these statuses aren't real and don't have real roles attached to them.
One good way of thinking about this is known as the Thomas Theorem, developed by
early 20th century American sociologists William Thomas and Dorothy Thomas.

It states, "If people define situations as real, they are real in their consequences." In other
words, statuses and roles matter, because we say they do. The perception creates the
reality. So the reason you can't just not perform your role is that, even if you don't think it
matters, everyone else does think it matters!

So the student who refuses to answer a question gets in trouble, while the teacher who
refuses to teach and just hangs out drinking wine with their feet up on their desk gets
fired. If you have the status of a teacher, people expect, even demand, that you do the
things teachers are expected to do.

How you feel about your status doesn’t really enter into it. And we know who's a teacher
and who's a student based on our background assumptions, our experiences, and the
socialization that teaches us about norms in various situations.

So, this is how your reality becomes socially constructed – you, and everyone around
you, uses assumptions and experiences to define what’s real. By interacting with the
people around you, and expecting certain behaviors in the context of roles, you actually
create the social reality that shapes those interactions that you’re having.

The fact that this happens in interaction is really important, because your social reality is
not just about you. It's about everyone you're interacting with, and their expectations, too.
It's about maintaining a performance.

And this idea of performance is really central to a sociological understanding of how
people interact. It’s the key to what’s known as the dramaturgical analysis of social
interaction. This approach, pioneered by Canadian-American sociologist Erving
Goffman, understands social interaction as if it were a play performed on stage for an
audience.

By Goffman’s thinking, people literally perform roles for each other, and the point of
social interaction is always – at least partly – to maintain a successful interaction that’s in
line with expectations. That is, to satisfy the audience.

In order to do this, people need to carefully control the information others receive about
them, in a process called impression management. Like, if you're out on a first date,
you’re not gonna talk about how your last relationship ended, because you don't want to
create a bad impression.

But impression management isn't merely a matter of what you say and don't say. It's also
a matter of what you wear and what you do. That is to say, it's a matter of what Goffman
referred to as props and nonverbal communication.

Props, as you know, are just objects that performers use to help them make a certain
impression: So if you want to look professional, you wear a suit. If you want to look
studious, make sure you're reading a book. And the setting can be a prop too: Being the
one standing at the front of the classroom is like 50% of what it takes to look like a
teacher.

And nonverbal communication includes body language – like standing up straight in
order to look respectable, and maintaining or averting eye contact – as well as gestures,
like waving hello to your friend. Together, props and nonverbal communication are both
examples of what Goffman called sign vehicles: things we use to help convey
impressions to people we interact with.

Those vehicles are important aspects of the performance, but really the most fundamental
distinction is the one between what’s part of the performance and what isn't – in other
words, what the audience sees, and what they don't. Goffman called this frontstage and
backstage. Frontstage is where the audience is and where the performance happens,
while backstage is where the performer can drop the performance and prepare.

Often the things we do backstage would totally ruin the performance we're trying to
maintain frontstage. A teacher cursing floridly while grading papers would be considered
backstage: important preparation for teaching is happening, but if any of her students –
that is, the audience – saw her, it would totally ruin the performance, because it defies
expectations of how teachers are supposed to act.

And not all performances are one-person shows. The students, for instance, are all on
what Goffman calls a team; they’re all working together to give a performance
collectively for the teacher. This doesn't mean they're all friends, or that they even like
each other.

It just means that they all need to work together to pull off the show of being a good,
attentive class. This is why your classmate whispers the answer to you: They’re helping
you maintain the class’s performance of attentiveness by acting as a teammate.
And the teacher goes on teaching. It's important to understand that, in Goffman’s
analysis, the performances that everyone does all the time aren't necessarily adversarial:
The students perform for the teacher, and the teacher performs for the students, but
everyone involved wants the performance to go smoothly. You may not ever win an
Oscar.

But according to dramaturgical analysis, your social interactions are where your statuses,
roles, and all of the expectations that they entail, come together for you to give, literally,
the performance of your life. And that performance is the stuff of social reality.

Today we learned about social interaction. We talked about statuses, how you come to
have them, and how they can conflict. Then we talked about how statuses impact your
behavior by determining what roles you have. We explored why those roles matter by
talking about the socially constructed nature of reality.

Finally, we learned about the theory of dramaturgical analysis and how we can
understand social interaction in terms of theatrical performance.

16. Social Groups

“If all your friends jumped off a bridge, would you jump too?" It’s the lament of many an
exasperated parent, but it’s also a kind of profound sociological question. Because, when
you're talking to your parents, the answer's always no.

But, with the right group of friends, you might be quite happy to take a dive in the water.
The thing is, you're a different person when you're a part of a group, and you're a
different person in different groups. A family, a group of friends out for a swim, a
business meeting, and a choir are different kinds of groups.
And the same person can be a member of all of them. So if we want to understand how
these groups are different, and even how they're similar, we need to talk about what social
groups are, and why they matter, both to the people who are a part of them, and to the
people who aren't.

The choir, the meeting, the friends, and the family are all examples of social groups. A
social group is simply a collection of people who have something in common and who
believe that what they have in common is significant.

In other words, a group is partly defined by the fact that its members feel like they're part
of a group. This is obviously a pretty broad definition. But it does have its limits, and you
can see these limits if you compare social groups to aggregates and categories. An
aggregate is a set of individuals who happen to be in the same place at the same time.

All the people passing through Grand Central Station at 1:00 on a Friday afternoon are an
aggregate, but they aren't a group, because they don't share a sense of belonging.
Categories, meanwhile, consist of one particular kind of person across time and space.
They’re sets of people who share similar characteristics.

Racial categories are a simple example. So the sense of feeling like you belong to a group
is a defining feature of a group. But it also helps you differentiate kinds of groups,
specifically between primary and secondary groups. Primary groups are small and tightly
knit, bound by a very strong sense of belonging. Family and friendship groups are
primary groups. They’re mutually supportive places where members can turn for
emotional, social, and financial help. And as far as group members are concerned, the
group is an end-in-itself. It exists to be a group, not for any other purpose. Secondary
groups, however, are the reverse. These are large and impersonal groups, whose
members are bound primarily by a shared goal or activity, rather than by strong emotional
ties.
A company is a good example of a secondary group: Employees are often loosely or
formally connected to one another through their jobs, and they tend to know little about
each other. So there’s a sense of belonging there, but it's much more limited. That's not to
say that coworkers never have emotional relationships.

In fact, secondary groups can become primary groups over time, as a set of coworkers
spends time together and becomes a primary group of friends. And while a gang of
friends and a company clearly have a lot of differences, they also have at least one major
similarity: They're both voluntary – if you belong to that group, it’s because you choose
to join.

But there are also plenty of involuntary groups, in which membership is assigned.
Prisoners in a prison are members of an involuntary group, as are conscripted soldiers.
Now that we understand a little bit about what groups are, we can start to study how they
work – beginning with group dynamics, or the way that individuals affect groups, and
groups affect individuals.

If we want to think about how individuals affect groups, a good place to start is with
leadership. Not all groups have formally assigned leaders, but even groups that don't,
often have de facto leaders, like parents in a family. A leader is just someone who
influences other people in the group. And there are generally two types of leadership:

An instrumental leader is focused on a group's goals, giving orders and making plans in
order to achieve those goals. An expressive leader, by contrast, is looking to increase
harmony and minimize conflict within the group. They aren't focused on any particular
goal, they’re just trying to promote the wellbeing of the group’s members.

And just as leaders may differ in what they’re trying to do, so too can they go about doing
it in different ways. I’m talking here about leadership styles, of which we have three.
Authoritarian leaders lead by giving orders and setting down rules which they expect the
group to follow.

Such a leader earns respect, and can be effective in a crisis, but at the expense of affection
from group members. Democratic leaders on the other hand, lead by trying to reach a
consensus. Instead of issuing orders, they consider all viewpoints to try and reach a
decision. Such leaders are less effective during a crisis, but, because of the variety of
different viewpoints they consider, they often find more creative solutions to problems.

And they’re more likely to receive affection from their group’s members. Finally, laissez-
faire leaders do the least leading. They’re extremely permissive, and mostly leave the
group to function on its own. This means lots of freedom, but it’s the least effective style
at promoting group solidarity and least effective in times of crisis.

So, leadership is one way that individuals affect groups, but groups also affect
individuals. You can see this especially clearly in group conformity, where members of a
group hew to the group’s norms and standards.

Basically, group conformity is the reason that you do jump off the bridge with your
friends. And this has been demonstrated in some fascinating experimental results. Let’s
go to the Thought Bubble to learn about perhaps the most famous – or infamous –
experiment on conformity.

The Milgram Experiment was run by American psychologist Stanley Milgram in the early 1960s,
and it was presented as an experiment in punishment and learning, with two participants.
One participant was the teacher, who read aloud a series of word pairs and then asked the
other participant, the student, seated in another room, to recall them.
The student was strapped to a chair and wired up with electrodes. For each wrong answer,
the experimenter, who was standing beside the teacher, instructed the teacher to deliver a
painful electric shock to the student.

With each wrong answer, the intensity increased, from an unpleasant few volts up to 450
volts, a potentially deadly shock. But the experiment was not about punishment or
learning. The student was actually an actor, a confederate of the experimenter, and the
shocks were not real. The experiment was designed to test how far the teacher would go
in conforming to authority.

At some point in the experiment, the confederate would feign extreme pain and beg the
teacher to stop. Then he fell silent. If at any point the teacher refused to issue the shock,
the experimenter would insist that he continue.

In the end, 65% of participants went all the way, administering the presumably deadly
450 volt shock. And this is usually given as proof that people tend to follow orders, but
there’s a lot more to it than that.

If the experimenter gave direct orders to the teacher, like “You must continue, you have
no other choice,” that resulted in non-compliance. That’s when the teacher was more
likely to refuse. The prods that did produce compliance were the ones that appealed,
instead, to the value of the experiment – the ones that said administering the shocks was
necessary for the experiment to be successful and worthwhile.

So in this instance, the value of the experiment, of science, was a strongly held group
value, and it helped convince the subjects to continue, even though they might not have
wanted to.
This idea of group values points us to another important concept in understanding
conformity: the idea of groupthink. Groupthink is the narrowing of thought in a group, by
which its members come to believe that there is only one possible correct answer.

Moreover, in a groupthink mentality, to even suggest alternatives is a sign of disloyalty to
the group. Another way of understanding group conformity is to think about reference
groups. Reference groups are groups we use as standards to judge ourselves and others.

What’s "normal" for you is determined partly by your reference groups. In-groups are
reference groups that you feel loyalty to, and that you identify with. But you can compare
yourself to out-groups, too, which are groups that you feel antagonism toward, and which
you don't identify with.

And another aspect of a social group that can affect its impacts and dynamics is its size.
And here, the general rule is: the larger the group, the more stable, but less intimate, it is.
A group of two people is obviously the smallest and most intimate kind of group, but it’s
also the least stable.

Because, if one person leaves, there’s no group anymore. Larger groups are more stable,
and if there are disagreements among members, other members are around who can
mediate between them. But big groups also are prone to coalitions forming within them,
which can result in one faction aligning against another. The size of a group matters in
other ways, too, for instance in terms of social diversity.

Larger homogeneous groups tend to turn inward, concentrating relationships within the
group instead of relying on intergroup contacts. By contrast, heterogeneous groups, or
groups that have more diversity within them, turn outward, with their members more
likely to interact with outsiders.
Finally, it’s worth pointing out that social groups aren’t just separate clumps of people.
There's another way to understand groups, in terms of social networks.

This perspective sees people as nodes that are all socially interconnected. You can
imagine a "circle of friends" who are all connected to each other in different ways, some
with strong connections in a clique or subgroup, while some are connected by much
weaker ties.

And you can follow the ties between all of the nodes outward, to friends-of-friends and
acquaintances who exist on the periphery of the network. Networks are important,
because even their weak ties can be useful. Think of the last time you were networking,
following every connection you had to, say, land a job interview.
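The "nodes and ties" picture can be made concrete with a small toy graph. This is purely an illustrative sketch: the people, tie strengths, and structure below are invented for the example, not taken from any real network. It shows how following ties outward, including a single weak tie, can reach people far beyond your immediate circle.

```python
# Toy model of a social network as a graph (all names and ties are invented).
from collections import deque

# Each edge connects two people; the label marks the tie as strong or weak.
ties = {
    ("Ana", "Ben"): "strong",   # a tightly knit clique of close friends
    ("Ben", "Cal"): "strong",
    ("Ana", "Cal"): "strong",
    ("Cal", "Dee"): "weak",     # an acquaintance bridging to another cluster
    ("Dee", "Eli"): "strong",
}

def neighbors(person):
    """Everyone directly tied to `person`, regardless of tie strength."""
    out = set()
    for a, b in ties:
        if a == person:
            out.add(b)
        elif b == person:
            out.add(a)
    return out

def reachable(person):
    """Everyone reachable by following ties outward (breadth-first search)."""
    seen, queue = {person}, deque([person])
    while queue:
        current = queue.popleft()
        for nxt in neighbors(current):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {person}

# Ana reaches Dee and Eli only through Cal's single weak tie; without it,
# her network would stop at her strong-tie clique.
print(sorted(reachable("Ana")))  # → ['Ben', 'Cal', 'Dee', 'Eli']
```

The point of the sketch is the one weak edge: delete it, and Ana's reachable set shrinks to her clique, which is why even weak ties matter when you're networking.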

Regardless of whether you think about groups as networks and ties, or as bounded sets,
it's clear that they have important impacts on people, both inside and outside. If you just
looked at society as a bunch of individuals, you’d miss all the ways that groups impact
our lives – by acting as reference groups, by influencing our decisions through
group conformity, and much more.

And groups are important for how society itself is organized. So next time, we're gonna
talk about one big part of that: formal organizations and bureaucracy. For now, we’ve
learned about social groups. We talked about what social groups are and the different
kinds of groups.

Then we discussed group dynamics: how individuals affect groups and how groups affect
individuals. We learned about leadership, group conformity, reference groups, and the
impacts of group size. And finally, we talked about groups as networks and why
networks matter.

17. Formal Organizations

Every year we consume immense quantities of goods that make their way from one side
of the planet to the other. But have you ever wondered how this enormous flow of goods
is organized, so that stuff made in China is available in stores in rural Montana? It
happens because formal organizations make it happen.

Our world is structured by formal organizations, and life as we now know it can’t exist
without them. But it's not all rainbows and butterflies and incredibly extensive trade
networks, because within these organizations lurk some pretty substantial dangers.

OK so when I talk about formal organizations, what does that mean? Like, I gotta wear a
cocktail dress or a black tie? And is there, I dunno, a casual organization that I can be part
of instead? Where people just wear flip flops and maybe go floating?

Well, sociologists think of formal organizations as groups that are organized to achieve
goals efficiently. Now, that’s obviously incredibly broad. That’s why formal
organizations are so diverse – they include everything from the IRS to Google to your
local PTA.

But you can understand these kinds of groups a bit better, by thinking about the three
main types of formal organizations: Utilitarian organizations, for example, serve some
function for their members. Businesses pay their employees, and schools teach their
students – and also pay their teachers.

Normative organizations, sometimes called voluntary associations, are organizations that
people join as volunteers. They're called "normative" because people join them to pursue
some goal that they think is morally worthwhile. This includes charities like the Red
Cross, but also political parties and religious organizations. Finally, coercive
organizations are ones where you don’t have a say in whether you’re a member or not.

People are coerced into joining these organizations, often as punishment – as in prisons –
or treatment – say, through involuntary commitment into a psychiatric hospital.

Now, these are all modern-day examples, but formal organizations have been around
basically forever: They built the pyramids, they collected taxes across the Roman Empire,
and they helped organize monasteries and convents.

But, there is a major difference between those formal organizations and modern ones.
And that difference is between traditional and rational worldviews. This goes back to our
friend Max Weber, who you (hopefully) remember from episode 9.

A traditional world view takes the basic set-up of the world as given: The way previous
generations did things – their values and techniques – is thought to be basically the right
way. A rational world view, on the other hand, sees everything as up for grabs, and tries
to find the most efficient way to accomplish a given task through critical thinking and
calculation.

It's the transition from traditional to rational worldviews, or the rationalization of society,
that ushered in modernity for Weber, and we can see this rationality at work in an
especially pervasive kind of formal organization: the bureaucracy. A bureaucracy is an
organization that’s been rationally constructed to do things efficiently.

And according to Weber, every bureaucracy has six main things in common, which we
introduced in episode 9, but to recap: Its members each have specialized roles that fit
together in a hierarchy – that is, a clear chain of command linked by formal, written
communications.
Members of a bureaucracy also complete their work with technical competence, treating
colleagues and customers without regard for their individual, personal traits: That is,
everyone’s treated impersonally. And all of this functions according to detailed rules and
regulations, literally "by the book." These six traits make bureaucracies extremely
efficient at what they do.

But the very things that make bureaucracies effective also cause their share of problems.
One problem is just that, sometimes, bureaucracies are not actually that efficient. Since
bureaucracies are strictly hierarchical and rule-based, those rules can sometimes get in the
way.

This is what people mean when they talk about bureaucratic "red tape." And that focus on
rules leads to another kind of inefficiency: bureaucratic ritualism. In this case, the rules
become a kind of end in themselves, ultimately interfering with the organization's goals.

Faith in the rules can become just as damaging as faith in tradition ever was. And a
group’s goals can even shift so much that it comes down with what’s known as
bureaucratic inertia. This is where an organization’s ultimate
goal becomes just to perpetuate itself, to keep existing.

This can be an intentional choice, made by those who have the most to lose if the
organization were to disappear. But it can also happen just because its members believe
that the organization does good work. And it's not just that bureaucracies aren't always
efficient, or that their goals change: Their hierarchies create another problem,
the problem of oligarchy, or the rule of the many by the few.

Now, of course a hierarchical organization is going to be an oligarchy. After all, in a
hierarchy, the people at the top make decisions for everyone else to carry out. This is
partly what Weber thought helped make bureaucracies so efficient. But in a lot of
bureaucracies, like democratic governments, the people at the top of the hierarchy are
elected, or are appointed by people who are elected.

So even if they're giving orders, it should still be considered a democratic organization,
right? Well, here's where another sociologist, Robert Michels, saw a problem. He called it
the iron law of oligarchy. He argued that regardless of how democratic a bureaucracy is
in theory, in practice, it always tends toward pure oligarchy.

The people at the top may be elected, but because of their position of power, they're
actually insulated from the people who elected them. And even if the people in power are
voted out, those who replace them just become like the ones they replaced, because of the
way the system is structured.

Finally, what Weber saw as the worst danger of bureaucracy was something called
bureaucratic alienation. Bureaucracies are supposed to run with machine-like efficiency,
consistency, and calculability. But people aren’t machines.

So, bureaucracies can be dehumanizing to those who work in them and those they serve.
When a clerk at the DMV can’t help you because you didn’t follow some obscure rule,
they’re doing their job right. And if they exercise their personal, reasoned judgement and
ignore the rule, they’re doing their job wrong.

To be a good bureaucrat is not to think for yourself, but to be a good cog in the machine.
Despite all these problems, we still see rationalization all over the place, including in the
workplace, through what’s known as scientific management.

Scientific management is a system devised by American engineer Frederick Taylor, who,
in the early 1900s, sought to make industry work more efficiently. He came up with a
process in which management closely observes and systematizes how workers do their
jobs, so that those jobs could be refined over time, making them more efficient.
Taylor’s ideas were incredibly influential, and in time, companies across the US adopted
them. So, from governments to corporations, the bureaucratic model of formal
organizations has taken hold to be a regular part of modern life.

But in addition to the problems that bureaucracy can cause, American formal
organizations in particular have faced many challenges over the last century, due to
changes in what's called the organizational environment.

Simply put, that’s just the environment in which organizations exist and operate. The
organizational environment includes things like technology, political and economic
trends, and population patterns. And in the US, one of the main challenges that formal
organizations have faced is the growing recognition that many of them were either racist
or sexist or both.

Following the civil rights and feminist movements in the 1960s and 70s, hiring practices
that excluded people who weren't white men came under increasing fire. While
excluding job candidates on the basis of race or gender might seem to go against Weber's
idea of technical competence, it's important to understand that that's not how those doing
the hiring saw it.

Often, they believed that those who were excluded were simply incapable of doing the
work. Until they were forced to do otherwise by changes in the organizational
environment, many were quite happy to go on excluding large swaths of their talent
pools.

More recently, another challenge has come in the form of big economic shifts, and the
changing nature of work itself. As the US transitions from an industrial to postindustrial
economy, there are fewer manufacturing jobs, and more jobs based around creating and
processing information. Frederick Taylor’s vision of work – as a series of discrete, rote
tasks – might have been a good way to improve assembly lines.
But applying it to things like programming, design, or marketing just hasn’t worked. As a
result, we’ve seen a lot of important changes in the organizations that house these
newer kinds of jobs:

Workers have more creative freedom, and more flexibility around when and where they
work, and organizational hierarchies have flattened. Compared to old style bureaucracies,
average employees in the information industry have a much more direct connection to
their leadership. But not everything has changed in the American workplace.

While big changes have taken place in high-skill jobs that require creative work, lower-
skilled jobs, which are now mostly in the service sector, are just as amenable to Taylor's
scientific management as ever. And rationalization is alive and well in many other
aspects of society, too.

If anything, it continues to spread. Sociologist George Ritzer has called this the
McDonaldization of Society: the process by which the principles of the fast food
restaurant have come to dominate the whole of society. Few things, after all, are more
hyper-rational than fast food.

The whole industry is based on the principles of efficiency, predictability, uniformity, and
control. The point of fast food is that you get your food quickly, and just as you expect it.
And that’s because the way the food is made is precisely controlled.

And Ritzer argues that these principles are having a growing impact in our society at
large. In education, for instance, we see more emphasis on standardized tests and tightly
controlled curricula that run students through the system in 4 years flat. All of this brings
us to what Ritzer calls the irrationality of rationality. As we saw with bureaucratic
ritualism and alienation, rational systems are often unreasonable, impersonal.
The reliance on rules and procedures can leave no room for independent judgment,
basically denying people their status as independent thinking beings. So, rational formal
organizations are a necessary part of our world. They’re how goods from all over the
world end up on shelves here in Montana.

But they're also a part of modern life that deserves our close attention. Today we learned
about formal organizations. We talked about the historical process of rationalization and
its impact on organizations in the form of bureaucracy. We discussed how organizations
change in response to their organizational environment.

Finally, we talked about the negative consequences of rationalization in organizations.

18. Deviance

A person holding up a convenience store and a pacifist at a protest might seem like polar
opposites. But they actually have something in common. So do an American vegan
preparing a meal at home, and a white-collar criminal committing tax fraud, and a
runaway slave.

They're all social deviants. We've spent a lot of time so far talking about how society fits
together, and how it functions. But we can’t cover that in any meaningful way without
also talking about the people who don't fit. We have to talk about who’s normal and
who’s deviant...and how they get to be that way.

Now, you might think that calling pacifists and vegans and runaway slaves deviant
is...rude, but in sociology, deviance isn't an insult. Deviance simply means being non-
normative. Different. So while this does include some things that we might think of as
bad or harmful, like, crime, it also includes things we might just think of as outside the
mainstream.

So if eating a burger is a traditional "all-American" cultural activity, then being vegan in
America is deviant. But there's something important to notice here. I didn't say being
vegan in a society where most people eat meat is deviant, because deviance is not just a
matter of numbers.

Deviance is anything that deviates from what people generally accept as normal. For
instance, red hair is statistically uncommon, but it’s not considered deviant. Dying your
hair bright purple – that is deviant and might earn you some strange looks from some
people.

And strange looks from strangers are a form of social control, attempts by society to
regulate people's thoughts and behaviors in ways that limit, or punish, deviance.
Specifically, the strange looks are what are known as negative sanctions, negative social
reactions to deviance.

The opposite, naturally, are positive sanctions – affirmative reactions, usually in response
to conformity. Once you start looking, you begin to see forms of social control, both
positive and negative, everywhere: a friend making fun of your taste in food or a teacher
congratulating you on a good paper.

Or someone commenting loudly on your bright purple hair. Sanctions all. These are all
examples of informal norms, or what sociologists call folkways. You won’t be arrested
for violating a folkway, but breaking them usually results in negative sanctions.

But not all norm violations are informally sanctioned. Formal sanctioning of deviance
occurs when norms are codified into law, and violation almost always results in negative
sanctions from the criminal justice system – the police, the courts, and the prison system.
So given the power of formal sanctions, why does anyone do deviant things? This is a big
question. Before we get to the sociological perspective, we need to mention some of the
biological and psychological views of deviance that have been influential in the past.

Spoiler alert: Historically, these explanations have been insufficient in helping us
understand non-normative behavior. For example, the earliest attempts at scientific
explanations for deviance, and crime in particular, are biologically essentialist
explanations. They were based on the idea that something about a person's essential
biology made them deviant.

In 1876, Cesare Lombroso, an Italian physician, theorized that criminals were basically
subhuman, throwbacks to a more primitive version of humanity. He went so far as to
suggest that deviants could be singled out based on physical characteristics, like a low
forehead, stocky build, and prominent jaw and cheekbones, all of which he saw as
reminiscent of our primate cousins. Another scientist, U.S. psychologist William
Sheldon, also found a relationship between general body type and criminality.

In the 1940s and ’50s, he studied body types and behavior and concluded that men who
were more muscular and athletic were more likely to be criminally deviant. We know
today that the idea that physical features somehow correspond to criminality is just
no...it’s wrong.

But later work by Eleanor and Sheldon Glueck appeared to confirm William Sheldon’s
basic findings on male muscularity and criminal aggression. However, they refused to
ascribe their results to a biological explanation. They countered that a simple correlation
between body type and criminality could not be taken as causal evidence.

Instead, they argued this was an example of a self-fulfilling prophecy: People expect
physically strong boys to be bullies, and so they encourage aggressive behavior in such
boys. Large boys who have their bullying behavior positively sanctioned are encouraged
to continue being aggressive, and some eventually grow up and engage in aggressive
criminal behaviors.

Psychological approaches, by contrast, place almost all the explanatory power in a
person’s environment. While some elements of personality may be inherited,
psychologists generally see personality as a matter of socialization. So they see deviance
as a matter of improper or failed socialization. A classic example of this strain of
psychological explanation is found in the 1967 work of Walter Reckless and Simon
Dinitz.

They studied boys who lived in an urban neighborhood known for its high rate of
delinquency. Using the assessment of the boys’ teachers, they grouped the youths into
"good boys" and bad boys, and then interviewed them to construct psychological profiles.
They found that the so-called "good boys" had a strong conscience, were good at coping
with frustration, and identified with conventional cultural norms.

The "bad boys," on the other hand, were the opposite on all counts. Following the boys
over time, Reckless and Dinitz found that the "good boys" had fewer run-ins with the
police. And they attributed this to the boys’ ability to control deviant impulses.

This idea that deviance is essentially a matter of impulse control is called containment
theory, or having a personality that contains deviant actions. And containment theory has
received support in recent research, including a 2011 study on 500 male fraternal twins
that assessed their self-control, resilience, and ability to delay gratification.

Researchers found that the brother who scored lower on these measures in childhood was
more likely to be criminally deviant in adulthood. Now, while we've seen that there's
clearly value in both biological and psychological approaches, they’re each also
fundamentally limited. For example, both kinds of explanations link criminal deviance to
individual factors – either of body or of mind – while leaving out other important factors,
like peer influence or what opportunities for deviance different people might be exposed
to.

Plus, biological and psychological explanations only understand deviance as a matter of
abnormality. Both approaches begin by looking for physical or mental irregularities,
whereas more recent research suggests that most people who do deviant things are both
biologically and psychologically normal – or, to use a better word, let’s say: typical.

Finally, neither biology nor psychology can answer the question of why the things that
are deviant are considered deviant in the first place. Even if you could 100% prove that a
certain abnormality caused people to be violent, not all violence is considered a form of
deviance. Think boxing. And here's where we can turn to a sociological approach, which
sees deviance and criminality as the result of how society is structured. And here, the
approach is based on three major ideas.

First is the idea that deviance varies according to cultural norms. In other words, nothing
is inherently deviant: Cultural norms vary from culture to culture, and over time and
place, so what’s deviant now might have once been quite normal. Slavery is an obvious
example. Not only was race-based slavery normal in 19th century America, rejecting it
was considered deviant.

So deviant, in fact, that physician Samuel Cartwright wrote about a disorder he called
drapetomania to explain the supposed mental disorder that caused slaves to flee captivity.
The second major principle sociologists draw on is the idea that people are deviant
because they’re labeled as deviant.

What I mean here is that it’s society's response that defines us, or our actions, as deviant.
The same action can be deviant or not, depending on the context: Sleeping in a tent in a
public place can be illegal, or it can be a fun weekend activity, depending on where you
do it.

And, as the Gluecks argued, labeling people can become a self-fulfilling prophecy: When
society treats you as a deviant, it’s very easy to become one. Deviance doesn't even
necessarily require action. Simply being a member of a group can classify you as a
deviant in the eyes of society.

The rich may view the poor with disdain for imagined moral failures, or we can return
again to racism and slavery, which imagined African Americans as deviant by nature.
And the last major sociological principle for understanding deviance is the idea that
defining social norms involves social power. The law is many things, but Karl Marx
argued that one of its roles is as a means for the powerful elite to protect their own
interests.

This is obvious in the case of something like fugitive slave laws, which applied a formal
negative sanction to deviating from the norms of slavery. But we can also see it in things
like the difference between a campaign rally and a spontaneous protest. Both are public
political speech, and both may block traffic, but they draw resoundingly different
reactions from police. So these are three foundational ideas about the sociological
perspective on deviance.

But I want to stress that they only begin to define a perspective. Sociology clearly
understands deviance in a different way than biology and psychology do, but if you really
want to dive into more detailed sociological explanations, you'll need to wait until next
week, when we look at the major theoretical explanations for crime and deviance. Today
we learned about social deviance. We discussed biological and psychological approaches
to explaining deviance, what they can bring to the table, and their inherent limitations.
Then we finished by turning to the sociological perspective and talking about the social
foundations of deviance.
19. Theory & Deviance

As we noted last week, an armed robber and a pacifist have something in common:
They're both social deviants. But they're obviously also really different. It's hard to
imagine that some people resort to armed robbery for some of the same reasons that other
people reject violence. That’s why there are many different theories of deviance that can
give us some perspective on how and why both the armed robber and the pacifist become
deviant. Through sociology, we can explore how the deviance of these two very different
people relates to society at large.

To understand where deviance comes from, we have to go back to the three major
sociological paradigms. And, as you might expect, structural functionalism, symbolic
interactionism, and conflict theories each offer a different perspective on the matter. Way
back in episode 5, we touched on Emile Durkheim’s structural-functionalist approach to
deviance.

His basic insight was that, since deviance is found in every society, it must serve some
function. And Durkheim argued that deviance serves four functions in particular: First, he
said, deviance helps define cultural values and norms. Basically, we can only know
what’s good by also understanding what’s not good. He also argued that society's
response to deviance clarifies moral boundaries. This means that when society reacts to
deviance, it’s drawing a line, saying that when behaviors cross a certain moral threshold,
they can be sanctioned, either formally or informally.

So this can range from a bank robber being sent to jail, to someone being made fun of for
the way they dress. Durkheim also said that these reactions bring society together. By
reacting in similar ways to something that seems not-normative, we’re basically affirming
to each other that we’re an “us,” and the deviants are “them.” And this isn’t necessarily a
bad thing. In the more serious instances of deviance – like, school shootings, for example
– you see people uniting around that moral boundary that’s been breached, and supporting
each other.

The spontaneous outpourings of outrage, grief, and charity that you see in response to
school shootings are all examples of this pattern in action. And finally, Durkheim pointed
out that deviance can actually encourage social change. We talked in episode 5 about
Rosa Parks’ civil disobedience, which was by definition deviant, and it was a factor
setting off major changes in American society, in the form of the Civil Rights Movement.

Now, while deviance might be necessary, some societies can have more or less of it than
others. To help explain the difference, American sociologist Robert Merton proposed, in
the 1930s and ‘40s, what he called strain theory. Merton argued that the amount of
deviance in a society depends on whether that society has provided sufficient means to
achieve culturally defined goals. In the US, financial success is one of the strongest
culturally defined goals.

And the means of achieving it include things like getting an education. So what we call
“the American Dream” – the idea of working hard to achieve financial stability – is a
prime example of what Merton called conformity: achieving culturally set goals by way
of conventionally approved means. Go to school, get good grades, graduate, get a good
job. Work hard. Get rich. Success. Right? Well, of course, even if wealth is your goal,
this approach isn't an option for a lot of people. Many who are raised in poverty, for
instance, lack a realistic path to prosperity.

And if you don’t have access to the means – like money for an education or good-paying
job opportunities – then the goal will be elusive, too. So one response to the lack of
acceptable means is to use unacceptable means – that is, deviant ones. Merton called this
innovation, but here, innovation means something a little different from what you’re used
to. Merton used it to describe deviant solutions that people come up with to reach their
goals. In this case, it could include everything from petty thievery to organized crime.

The goal is still financial success, but the illegitimate means used to get there make it
deviant. Now, you might also respond in the opposite way, by giving up on the goal – in
this case, economic success – and instead committing totally to following the rules. You
might decide that you may never be rich, but at least you’re not going to be deviant.

Merton called this ritualism, a deep devotion to the rules because they are the rules. Of
course, your other option is to reject the whole system altogether – the means, the goals,
all of it. In this kind of response, which Merton labeled retreatism, a person basically
“drops out” of society, rejecting both the conventional means and goals.

Merton classed drug addicts and alcoholics in this group, because he saw these addictions
as a way of escaping the pressures of the goals and means. But rejection can also be
constructive: Rebellion is a rejection of goals and means, but in the context of a
counterculture – one that supports the pursuit of new goals according to new means.

The artist who doesn’t want financial success, but instead pursues recognition from their
peers is an example of this. So the structural functionalist perspective on deviance
provides some useful ways of thinking about how deviance works on a macro scale. But
it works on the assumption that everyone who does deviant things will be treated as
deviant. The other paradigms of sociology call this into question: They point out that
social status impacts how deviance is punished. Or whether it’s punished at all. For
example, a symbolic interactionist understands deviance through what’s known as
labeling theory – the idea that things like deviance and conformity are not so much a
matter of what you do, but how people label it.

Let’s go to the Thought Bubble to see how labels can make a deviant. Imagine a student
skipping school. This is an example of primary, or minor, deviance. On its own, the
transgression isn’t going to affect the student’s self-concept.

That is, it’s not going to cause her to think of herself, or label herself, as a deviant. And if
she’s an otherwise good student, then her teacher might just write it off as a one time
thing, and the fact that she cut classes would just remain a minor, primary deviance. But
if the teacher responds more strongly, and punishes her, then that same infraction of the
rules can escalate into secondary deviance.

In this case, a strong sanction could make the student start to think of herself as a truant.
And this can lead to what Erving Goffman called a stigma:

a powerfully negative sort of master status that affects a person’s self-concept, social
identity, and interactions with others. One of the most powerful effects of stigma is that it
leads to more labeling, especially of what a person has done, or might still do. For
example, a stigmatized student could be the subject of retrospective labeling, where her
past is reinterpreted, so that she’s suddenly understood as having always been
irresponsible.

Likewise, she could be subjected to prospective labeling, which looks forward in time,
predicting her future behavior based on her stigma. Thanks Thought Bubble. As you can
see, the whole process of labeling can be extremely consequential. And it affects not only
how we think of ourselves, but also who responds to deviance, as well as how they
respond, and how the deviant person is understood in society.

Drug abuse, for instance, has largely been understood as a moral failing. But it’s
increasingly being seen as an illness. And as that perception has changed, so too have the
people who respond to drug abuse. Instead of just being a job for law enforcement, today,
instances of drug abuse often involve both police and medical professionals.
And instead of getting jail time, in some places, violators are given medical and
psychological treatment. In other words, how people respond is beginning to change. And
finally, instead of being judged as personally culpable for some moral failing, addicts are
increasingly seen as suffering from a disease, freeing them, in part, from some degree of
personal responsibility for their behaviors.

So the very way in which they’re understood is also evolving. There are a couple other
symbolic interactionist approaches to deviance that don’t focus on the power of labels.
Differential association, for example, argues that who you associate with makes deviance
more or less likely. And control theory focuses on a person’s self-control as a way of
avoiding deviance, as well as their ability to anticipate and avoid the consequences of
their actions.

All of these symbolic interactionist approaches highlight the interpersonal responses to
deviance. But a conflict theory approach links deviance to social power. If we look at
society, we find that the socially deviant are not necessarily the most dangerous. Rather, a
conflict-theory perspective points out that they are often the most powerless. Conflict
theory can explain why this is so in a few different ways:

For one thing, conflict theory posits that norms and laws reflect the interests of the
powerful. So the powerful can defend their power by labeling as deviant anything that
threatens that power. For instance, in capitalist societies, deviant labels are often applied
to those who interfere with the way capitalism functions.

And since capitalism is based on the private control of wealth, stealing is clearly labeled
as deviant. But there are also different rules for when the rich target the poor: Petty
thieves are treated as deviant in a way that corporate criminals are not, even though they
both steal from other people.
An employee taking goods out of the backroom is hauled in by the police, while the boss
who withholds overtime pay often doesn’t even pay a fine. And this is the case, according
to conflict theory, because the powerful are able to defend themselves against labels of
deviance, so deviant actions are less likely to lead to a deviant label and thus reactions to
that deviance.

Finally, conflict theory points out that norms have an inherently political nature, but the
politics tend to be masked by the general belief that if something is normative, it must be
right and good. So while we may take issue with how a law is applied, we much more
rarely ask whether the laws themselves are just or not. Conflict theorists see these
explanations at work wherever the inequality of social power can be found – across
gender, among races, and between groups of different socioeconomic status.

Ultimately, structural functionalism, symbolic interactionism, and conflict theory all give
us useful tools for understanding deviance. Each of these paradigms is powerful, and
we'll be making use of all three next week, when we look specifically at crime.

Today we learned about how the three major paradigms in sociology approach deviance.
We talked about structural functionalism and how deviance can fulfill a function in
society. Then we turned to symbolic interactionism and looked at how deviance is
constructed. Finally, we discussed conflict theory and how deviance is connected to
power and inequality.

20. Crime

Over the last few weeks, you’ve heard me say many times that deviance isn't necessarily
criminal. But of course, sometimes it is. Understanding crime sociologically means we
need to answer some basic questions: Like, what is the nature of crime? Who commits
crimes and why? And how does society respond to it? You’ll see pretty quickly that these
questions are actually all tangled together. And you can’t untangle them.
It might not surprise you to learn that the literal definition of crime is the violation of
criminal laws. And the FBI's Uniform Crime Report, a major source of data on crime in
the U.S., tracks many different kinds of crime. There are crimes against the person, which
include murder, aggravated assault, rape, and robbery, and crimes against property, which
include burglary, larceny-theft, auto-theft, and arson. But there's also a third kind of
crime, not generally tracked in major crime indices, often called victimless crimes.

They include things like illegal drug use, prostitution, and gambling. But the name is
misleading, because many of these cases have serious negative consequences for the
people involved. Data from the FBI show that in the US in 2015, there were about 1.2
million violent crimes and about 8 million property crimes. Raw numbers aren't terribly
helpful, though, so we can turn these into crime rates – in the case of 2015, that would be
372.6 violent crimes per 100,000 people and 2,487 property crimes per 100,000 people.
Those numbers are about half what they were in 1991, when crime rates peaked after a
steady upward trend from about 1960.
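Converting those raw counts into rates is simple arithmetic: divide the count by the population and scale to 100,000. Here's a minimal sketch in Python, assuming a 2015 US population of roughly 321 million (a figure not given in the text; the tiny discrepancy from the cited rates comes from the rounded counts):

```python
def per_100k(count, population):
    """Convert a raw incident count into a rate per 100,000 people."""
    return count / population * 100_000

US_POP_2015 = 321_400_000  # assumed approximate 2015 US population

violent_rate = per_100k(1_200_000, US_POP_2015)   # ≈ 373, vs. the cited 372.6
property_rate = per_100k(8_000_000, US_POP_2015)  # ≈ 2,489, vs. the cited 2,487
```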

These numbers allow for some useful comparisons, but it's important to realize that they
can’t capture the whole picture. Because crime statistics are based on police reports,
they only include crimes that are reported to the police. And not all crimes are reported.
So researchers sometimes conduct victimization surveys, which ask representative
samples of the population if they have had any experiences with crime. And one such
survey from 2015 suggests that fewer than half, about 47%, of violent crimes were
reported to police, and just 35% of property crimes were. So what can we say about
who’s committing these crimes?

Well, based on government data, sociologists have put together a kind of demographic
picture, but it only shows us who's being arrested for crime, not necessarily who’s
committing it. To begin with, the average arrestee is young and male: people between the
ages of 15 and 24 make up about 14% of the population, but accounted for 31.8% of all
arrests in 2015. And while men are about half the population, they made up about 62% of
arrests for property crimes and 80% of arrests for violent crimes.
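One way to put a number on "disproportionate" here is an overrepresentation ratio: a group's share of arrests divided by its share of the population. A quick sketch with the figures above:

```python
def overrepresentation(arrest_share, population_share):
    """Ratio of a group's share of arrests to its share of the population.
    A value of 1.0 means arrests are proportional to population."""
    return arrest_share / population_share

# 2015 figures from the text
age_15_to_24 = overrepresentation(0.318, 0.14)  # ≈ 2.3x their population share
men_violent = overrepresentation(0.80, 0.50)    # 1.6x for violent-crime arrests
```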

And, while FBI data don’t assess social class, we know from other sources that those of
lower social class are more likely to be arrested. But again, that's not the whole picture,
because, as we talked about last time, wealthy Americans aren't likely to be seen as
criminally deviant in the same way that the poor are. This brings us to race and ethnicity,
where disparities in arrests are clear: despite making up only 13.3% of the population,
African Americans make up 26.6% of arrests. There are a number of reasons for this.

First, race and ethnicity are closely linked to wealth and social standing, and as we just
saw, people of lower social class are more likely to be arrested. Second, the data don’t
include many crimes that are more commonly committed by whites, like drunk driving,
embezzlement, and tax fraud. Finally, African Americans, and people of color generally,
are overcriminalized: They’re more easily assumed to be criminal and treated as such by
both the police and the public at large.

For example: A study of pedestrian stops in New York City found that African
Americans and Hispanics are disproportionately likely to be stopped, even when
controlling for race-specific arrest rates – that is, the rate at which those racial and ethnic
groups are arrested. And this rate itself isn't entirely fair: despite the fact that black people
and white people use drugs at similar rates, black people are far more likely to be arrested
for it.

A 2009 Human Rights Watch report found that in 2007, black people were 3.7 times
more likely to be arrested for drugs than white people. And studies have shown that the
racial composition of a neighborhood has an influence on perceptions of crime in that
neighborhood. Larger African American populations, for example, have been found to be
associated with increased perception of crime, even when controlling for the actual crime
rate.
And this brings us to our third question: how society responds to crime.
Overcriminalization, after all, isn't a matter of who commits crimes, but of how society
imagines who criminals are. Society’s main institutional response to crime comes from
the criminal justice system, which is composed of three parts in the US: the police, the
courts, and the system of punishment and corrections.

The police are the main point of contact between the criminal justice system and the rest
of society. There are about 750,000 police officers in the United States, and it’s their
personal judgement that makes for the actual application of the law. And, in exercising
this judgement, police officers size up a situation according to a number of factors.

The severity of the situation, the suspect's level of uncooperativeness, and whether the
suspect has previously been arrested all make an arrest more likely. Officers also take the
wishes of the victim into account. Likewise, the presence of observers makes an arrest
more likely, because making an arrest moves the encounter to the police station, where
the officer is in control.

Finally, the suspect's race plays a role, as officers are more likely to arrest non-white
suspects because of a long-standing association of non-whiteness with criminality –
which is the cultural basis for overcriminalization.

And the effects of this can be clearly seen not only in the data on overcriminalization that
I mentioned before, but in studies of race and perceptions of threat. And race shouldn't be
understood as an independent factor here; the other factors are also all seen through race.
So when a police officer assesses how threatening or uncooperative a suspect is,
non-white suspects are viewed as more threatening and more uncooperative, even given the
same behavior.
The point here is that policing has a lot of aspects to it that are surprisingly subjective.
Given this problem, we might expect the courts to help correct them by accurately
adjudicating guilt and innocence. And sometimes they do.

But in practice, how well they do their job is often a matter of who the defendant is and
the economic resources that they have access to. Let’s go to the Thought Bubble to see
how people with less money are affected differently by the criminal justice system:

The first problem is bail. Bail allows people to be released from jail after an arrest by
guaranteeing, usually with a deposited sum of money, that they’ll show up for their day in
court. But in practice, it just keeps defendants without money behind bars until their court
date. A date which may be a long time in coming.

The Sixth Amendment guarantees the right to a speedy trial, but many jurisdictions in the
US are heavily overburdened. There are just too many cases. So those who can’t afford
bail may wait months, even years, before their case is heard. And defendants who can’t
afford to hire lawyers are represented by public defenders, who are, to varying degrees,
underpaid and overworked.

They often simply can't give their clients adequate representation, frequently leading to
harsher sentences for the poor. Together these make the last issue, plea bargaining, much
worse. Plea bargaining is basically a negotiation in which the prosecution offers
concessions on the legal punishment in exchange for the defendant's guilty plea.

In theory, this is a useful tool for quickly resolving simple cases and easing the burden on
the courts. But while plea bargaining may be a negotiation, the parties aren't on even
footing. A poor defendant, stuck in jail because they can’t make bail, represented by a
public defender without the resources to adequately defend them, and facing
the threat of a long jail sentence, is strongly incentivized to take a plea bargain, regardless
of their actual guilt or innocence.
Those convicted of criminal deviance are then moved through the last part of the criminal
justice system, the system of punishment and corrections. And this brings us,
unavoidably, to mass incarceration. Mass incarceration refers to the growth of the
incarcerated population over the past several decades, and the social, political, and
economic conditions that caused it.

Here’s what that looks like in terms of the numbers: Today there are over 2.3 million
people imprisoned in the United States. For some context, while the US has about four
and a half percent of the world's population, it has nearly a quarter of the world's
incarcerated population. And the US has the highest incarceration rate in the world, with
693 people out of every 100,000 behind bars. This is more than 5 times higher than the
rate in most other countries. But it hasn't always been like this: the incarcerated
population has increased by 500% over the past 40 years. And this increase has only a
limited relationship to actual crime rates.

Like I mentioned, crime rates dropped dramatically in the 90s, but prison populations
continued to rise. Mass incarceration is a consequence of political choices, namely
"tough-on-crime" policies, like mandatory minimum sentences. And mass incarceration
falls hardest on the poor, and on people of color: Despite making up only 37% of the US
population, non-whites make up 67% of the prison population. In 2015, the incarceration
rate for white men was 457 per 100,000. The rate for Hispanic men was more than twice
as high – 1,043 per 100,000 – and the rate for black men was nearly six times higher
(2,613 per 100,000).
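The "more than twice" and "nearly six times" comparisons fall straight out of those rates:

```python
# 2015 male incarceration rates per 100,000, from the text
white, hispanic, black = 457, 1_043, 2_613

hispanic_ratio = hispanic / white  # ≈ 2.3: "more than twice as high"
black_ratio = black / white        # ≈ 5.7: "nearly six times higher"
```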

So, are these “tough-on-crime” policies effective? Well, there are a couple ways to think
about the purpose of punishment. One approach to punishment is retribution, which is
about making the offender suffer as the victim suffered, as a kind of moral vengeance. In
the U.S., a more favored approach is deterrence, which tries to reduce crime by making
the prospect of getting caught sufficiently awful.
Yet another approach is societal protection, designed to render an offender incapable of
further criminal offense, usually through long prison sentences or capital punishment.
And finally, rehabilitation views punishment as an opportunity to reform offenders and
return them to society as productive citizens.

In practice, rehabilitation is hard to accomplish, because the prison system has limited
resources and because severe limitations are placed on convicted felons that go beyond
the criminal justice system. Felons are often barred from social welfare programs, for
example, and face extensive legal discrimination in hiring and housing.

The fact that reintegration into society is so difficult leads to high rates of recidivism, or
re-offense that leads to incarceration. A study by the National Institute of Justice of
prisoners from 30 states estimated that within three years of release, two-thirds (67.8%)
of them were re-arrested. Five years after release, three-fourths (76.6%) had been
re-arrested. So these approaches to punishment don't appear to work as deterrence.

Now, long sentences succeed in removing offenders from society, but that removal itself
can have damaging effects, with communities of color being particularly impacted.
Incarceration puts stress on families, destabilizes neighborhoods as residents cycle in and
out of prison, and leads to increasing numbers of people with limited employment
prospects, partly because employers can legally refuse to hire those with criminal records.

So when we talk about crime, we can’t look at any of these questions in isolation:
Defining crime based on FBI data misses how these definitions are applied in the real
world. And only paying attention to the demographics of offenders overlooks the
conditions that create those statistics. Likewise, looking at society’s response
alone misses how that response answers the other two questions. It’s all tangled. Today
we learned about crime in the US. We looked at the legal definitions of crime and used
FBI data to get an idea of the amount and kinds of crime.
We put together a demographic picture of who gets arrested, and we talked about why
that’s not necessarily who commits crime. And we talked about society’s response to
crime in the criminal justice system, and how that response ends with mass incarceration.

21. Social Stratification

Imagine two people. Two extremely wealthy people. One of them inherited their money,
acquiring it through the luck that comes with being born to owners of immense amounts
of property and wealth. And the other person worked for what they have.

They started at the bottom, and through years of hard work and clever dealing, they built
a business empire. Now: which one would you say deserves their wealth? Sociologically,
the interesting thing here isn't your answer, not really. It's the fact that different societies
in different times and places have different answers to this question. Because the question
of what it means to deserve wealth, or success, or power, is a matter of social
stratification.

Social stratification is what we’re talking about, when we talk about inequality. It's a
system by which society categorizes people, and ranks them in a hierarchy. Everything
from social status and prestige, to the kind of job you can hold, to your chances of living
in poverty, are affected by social stratification. That’s because, one of the first principles
of social stratification is that it’s universal, but variable.

It shows up in every society on the planet, but what exactly it looks like – how it divides
and categorizes people, and the advantages or disadvantages that come with that division
– vary from society to society. Realizing that social stratification exists in every society
brings us to another principle: that stratification is a characteristic of society, and not a
matter of individual differences. People are obviously all different from each other, so we
might assume that stratification is just a kind of natural outcome of those differences, but
it's not.

We know this because we can see the effects of social stratification on people,
independent of their personal choices or traits: For example, children of wealthy families
are more likely to live longer and be healthier, to attend college, and to excel in school
than children born into poverty. And they’re also more likely to be wealthy themselves
when they grow up. And this highlights another key principle of social stratification: It
persists across generations.

So, stratification serves to categorize and rank members of society, resulting in different
life chances. But generally, society allows some degree of social mobility, or changes in
position within the social hierarchy. People sometimes move upward or downward in
social class, and this is what we usually think of when we talk about social mobility.

But more common in the United States is horizontal mobility, changing positions without
changing your standing in the social hierarchy. This generally happens when a person
moves between jobs that pay about the same and have about the same occupational
prestige.

Like stratification itself, social mobility isn't just a matter of individual achievement;
there are structural factors at play, too. In fact, we can talk specifically about structural
social mobility: when a large number of people move around the hierarchy because of
larger societal changes. When a recession hits, and thousands of people lose their jobs
and are suddenly downwardly mobile, that's structural mobility.

But stratification isn't just a matter of economic forces and job changes. Which brings us
to another aspect of social stratification: It isn't just about economic and social
inequalities; it’s also about beliefs. A society’s cultural beliefs tell us how to categorize
people, and they also define the inequalities of a stratification system as being normal,
even fair.

Put simply: if people didn't believe that the system was right, it wouldn’t last. Beliefs are
what make systems of social stratification work. And it’s these beliefs about social
stratification that inform what it means to deserve wealth, or success, or power. These
four principles give us a better understanding of what social stratification is, but they still
haven't told us much about what it looks like in the real world.

So, sociologists classify stratification systems as being either closed or open. Closed
systems tend to be extremely rigid and allow for little social mobility. In these systems,
social position is based on ascribed status, or the social position you inherit at birth.

On the other hand, open systems of stratification allow for much more social mobility,
both upward and downward. Social position tends to be achieved, not ascribed. Now,
these terms are pretty theoretical, so let’s look at some examples of more closed or open
systems, as well as societies that fall in the middle.

The archetypal closed system is a caste system. Of these, India's caste system is probably
one of the best known. And while it’s a social system of decreasing importance, it still
holds sway in parts of rural India, and it has a strong legacy across the country. Let’s go
to the Thought Bubble: The traditional caste system contains four large divisions, called
varnas: Brahman, Kshatriya, Vaishya, and Sudra.

Together these varnas encompass hundreds of smaller groups called jatis at the local
level. The caste system in its traditional form is a clear example of an extremely rigid,
closed, and unequal system. Caste position not only determined what jobs were
acceptable, but it also strongly controlled its members’ everyday lives and life outcomes.
The system required endogamy, or marriage within your own caste category. And in
everyday life, the caste system determined who you could interact with and how, with
systems of social control restricting contact between lower and higher castes. And this
whole system was based on a set of strong cultural and religious beliefs, establishing
caste as a right of birth and living within the strictures of your caste as a moral and
spiritual duty.

We see a variation of the caste system in feudal Europe with the division of society into
three orders or estates: the nobility, the clergy, and the commoners. Again, a person's
birth determined his social standing; commoners, for instance, paid the most taxes and
owed labor to their local lord. So they had little expectation that they’d rise above their
station.

The whole social order was justified by the belief that it was ordained by God, with the
nobility ruling by so-called divine right. Both caste systems use ancestry and lineage as a
main principle of social stratification, but race has also been used as the main distinction
in closed social systems. The South African system of apartheid, for instance,
maintained a legally enforced separation between black people and white people for
decades.

Apartheid denied black people citizenship, the ability to own land, and any say
whatsoever in the national government. The Jim Crow laws of the American South were
another example, as was slavery before that.

In contrast with caste systems, class systems are the archetypal open systems. They aren't
based solely on ascribed status at birth. Instead they combine ascribed status and personal
achievement in a way that allows for some social mobility.

Class is the system of stratification we have in American society. The main difference
between caste and class systems is that class systems are open, and social mobility is not
legally restricted to certain people. There aren't formally defined categories in the same
way there are in the traditional Indian caste system.

Being in the “under-class” in the U.S. is not equivalent to being an “untouchable” from
India. In class systems, the boundaries between class categories are often blurred, and
there’s greater opportunity for social mobility into and out of class positions. The
American system of stratification is founded on this very idea, in fact: that it’s possible,
through hard work and perseverance, to move up the social hierarchy, to achieve a higher
class standing.

And this points to another difference in systems of stratification: Instead of ancestry,
lineage, or race being the key to social division, the American system has elements of a
meritocracy, a system in which social mobility is based on personal merit and individual
talents. The American dream is that anyone, no matter how poor, can "pull themselves up
by their bootstraps" and become upwardly class mobile, through nothing but hard work
and gumption. The
American system is certainly more meritocratic than feudal Europe or traditional India;
but the idea of meritocracy is as much a justification
for inequality as it is an actual principle of stratification.

In an open, class-based system of stratification, it’s easy to believe that anyone who’s not
upwardly mobile deserves their poverty. Because a meritocratic class system is supposed
to be open, it’s easy to ignore the structural factors that influence class standing. But just
as the Indian caste system and feudal estate system placed their limits on certain groups,
the American class system limits just how far hard work can take some people.

The US class system tends to reproduce existing class inequalities, because the
advantages that you start with have an incredibly powerful impact on where you can end
up. This is part of the reason that the US is still stratified along race and gender lines.
That said, these inequalities are no longer explicitly enshrined in the law, which is an
example of the greater openness of class systems.

Because of this openness, class systems also have a greater likelihood of opportunity for
individuals to experience status inconsistency: a situation where a person’s social position
has both positive and negative influences on their social status. Stratification isn’t just a
matter of one thing, after all. When we talk about socioeconomic status, for instance,
we’re including three things: income, education, and occupational prestige.

An example of status inconsistency is an adjunct professor who’s very well educated, but
earns a low income. There’s an inconsistency among these different aspects of their
social status; low income tends to decrease social status while at the same time, a high
level of education and the societal respect for the occupation of college professor
improves social status.

All these comparisons between closed and open systems might make it sound like they’re
totally different: a system is either one or the other. But really they’re two poles on a
spectrum. Not every society is strictly a caste system or a class system. Modern Britain,
for instance, is a good illustration of a mixed system of stratification.

It still maintains a limited caste system of nobility as a legacy of the feudal system of
estates, which survives alongside, and helps reinforce, a class system similar to what we
have in the U.S. And some systems of stratification even claim that their citizens are
entirely equal, as the Soviet Union did. Following the Russian Revolution of 1917,
the USSR was established as a theoretically classless society.

But inequality is more than just economic. And Soviet society was stratified into four
groups, each of which held various amounts of political power and prestige: apparatchiks
or government officials, intelligentsia, industrial workers, and the rural peasantry. So, like
I mentioned before, stratification is universal, but variable. If you want to study a society,
one of the things that you need to look at is the way that it’s stratified, and whether, and
how, social mobility occurs. Today we learned about social stratification.

We talked about four basic principles of a sociological understanding of stratification. We
discussed open and closed systems of stratification, and finally we talked about examples
of different kinds of stratification systems, including caste systems and class systems.
Next time we'll talk more about the why and how of stratification by looking at different
sociological theories of stratification.

22. Why Is There Social Stratification?

If you asked a medieval peasant whether they liked working two days a week for their
lord, while they barely made enough food for themselves, they’d probably say no. And if
you asked a factory worker today whether they like making a tiny fraction of what their
company’s CEO makes, they’d probably have a similar answer. But, even though huge
numbers of people don’t want inequality, it still exists, and it has for a long time.

And the systems of stratification that we talked about last week don’t really help explain
this: They can tell us about how this inequality happens, but they can’t tell us why. If we
want to answer that question, we'll have to return, once again, to our old friends the three
sociological paradigms: structural functionalism, social conflict theory, and symbolic-
interactionism.

Let’s start with clarifying something pretty important about how sociologists understand
inequality: Even if the peasant and the factory worker both dislike the inequality in their
lives, they might still believe that it’s fair. The peasant might say that it’s simply their
place in the world to toil for their lord, and the factory worker might say that the CEO
surely deserves his wealth. And this happens because of their societies’ ideology.

For our purposes, an ideology is a set of cultural beliefs and values that justify a
particular way of organizing society. Ideology also includes strongly held beliefs about a
society’s patterns of inequality. Ideology can help explain why inequality never goes
away, but it doesn’t on its own explain why we have unequal societies in the first place.
For that, we have to turn to our three paradigms. From a structural functionalist
perspective, we have social stratification because...

well, you know the basic story of structural functionalism by now, so say it along with
me: We have stratification because it's functional for society. This is the basic argument
of what’s known as the Davis-Moore Thesis. Put forward in 1945 by Kingsley Davis and
Wilbert Moore, it argues that society assigns greater economic and social rewards to
those jobs that are more important to society.

This guarantees that difficult jobs will be filled, the thinking goes, and will draw people
away from easier and less important work. So, the more important a job is for the proper
functioning of society, the more a society rewards it, which promotes the effective
functioning of that society – and also a system of social stratification.

Davis and Moore basically argue that, without unequal rewards, few people would want
the jobs that require the years of training or personal sacrifice that typically come with
long work hours. Think medical doctor. Without the unequal rewards to motivate people,
we’d have a lot of lifeguards sunning themselves on the beach and not very many ER
docs. But there are some serious problems with this idea.

To begin with, Davis and Moore don't talk about how their thesis actually works in
society: They only talk about why inequality might be functionally useful. And this leads
to another problem: Not all jobs that are important are necessarily hard to learn, or come
with high pay.

Garbage collecting, for instance, is extremely important for the smooth functioning of
society. But it's not a particularly high-paid, socially valued job. And this mismatch
works the other way, too: Not all highly paid jobs are functionally important. For
instance, ask yourself who is more functional for society: a high school teacher or a
famous actor?

Now think about who gets paid more. Finally, Davis and Moore also ignore the fact that
not all paths are equally open to all people. If inequality is functional for society because
it motivates hard work, then society should reflect this by being meritocratic – a society
in which everyone can work hard and get ahead. But as we’ve already seen, this is not
the social reality.

The structural nature of inequality, or the ways in which a society is organized to
advantage some groups over others, can be a cause of individual success or failure, no
matter how hard a person works. Now, while Davis and Moore don’t really deal with the
impact of inequality, social-conflict theory very much does.

For Karl Marx, stratification is based on different relations to the means of production. At
the simplest level, one class controls the means of production, which allows them to
extract labor from the other class, which controls only their own labor. Marx believed
that, as capitalism progressed, the inequality between the bourgeoisie and the proletariat
would get worse until, eventually, the proletariat would unite and overthrow the
bourgeoisie.

And in doing that, he thought, they’d ultimately derail the whole capitalist system and all
the inequality that came with it. But one of the central criticisms of the social-conflict
understanding of stratification is that the proletariat revolution never happened in
Western Europe, or the United States. If inequality was so bad for workers, why did the
revolution not happen? Well, German sociologist Ralf Dahrendorf argued that Marx
wasn’t wrong about conflict per se, but he saw that the conflict that Marx observed had
changed in several ways that prevented revolution from happening.

First, Dahrendorf said, the capitalist class in Europe has been too fragmented to serve as a
single target for revolutionaries. Rather than having just a few capitalists against an ever
increasing proletariat, we actually have more capitalists, of different kinds: business
owners, and executives, and people who own stocks.

More and more people are invested in capitalism as an economic system. And a
fragmented capitalist class makes it difficult for workers to focus their revolutionary
energies on any one group. In addition, he argued, greater worker organization, in the
form of unions, has allowed workers to fight for better working conditions, higher pay,
and greater control over their labor, resulting in an increased standard of living.

Greater legal protections for workers, like workers' compensation, unemployment
insurance, and Social Security have also worked to prevent the revolution that Marx
predicted. All of these structural changes, in turn, helped lead to greater job stability,
which makes workers less likely to push for revolutionary change. But Dahrendorf saw
that the ideology of capitalism plays a role here as well. Just as more people are
financially invested in capitalism, people are also ideologically invested in it.

But this isn't just a matter of whether people like the system or not: ideology determines
what people see as available to struggle over. Fighting for higher wages seems
reasonable, but abolishing wage labor does not. There are more criticisms of Marx than
just an absent revolution, and one of the more fundamental ones was made by none other
than Max Weber. Specifically, Weber argued that Marx's focus on economic
stratification was too simplistic: Weber pointed out that there are other kinds of conflict
to consider.

Weber argued that stratification actually occurs along three dimensions: economic class,
social status, and social power, or what sociologists refer to as socioeconomic status. This
view adds more complexity and nuance to the matter of stratification, but as with the
structural functionalist approach, it’s focused only on the macro perspective.

Marx’s theory, for example, is all about the long historical arc of class conflict, but it
doesn’t really tell us what that looks like in everyday life. For a more micro- or
individual-level view of inequality, sociologists turn to symbolic-interactionism. When
we first defined social stratification, we said that it involved putting people into
categories.

Symbolic-interactionism lets us understand how this actually works because, sure, what
class you're in might come down to how much money you make. But how can other
people tell what class that is in everyday interaction? It’s not like people walk around
with signs. Except that they kind of do, in the form of conspicuous consumption.

This is when the products that you buy make statements about your social position.
Buying a really nice bottle of wine for a dinner party or wearing designer sunglasses isn’t
just about the thing itself, it’s also about sending a message to anyone who sees it, a
message that says I'm in the upper class.

The objects act as sign vehicles, carrying meaning just like a written word. To some
degree, all consumption is conspicuous consumption: Your tastes are shaped by your
social position, and you use them to define yourself just as others read your tastes to
judge you and your position. To see how this works, let’s go to the Thought Bubble.

Imagine you’re driving in your car with an acquaintance, and you want to put on some
music. The music you choose tells them something about you. Let’s say you put on some
really esoteric classical music. Obviously, this tells your friend that you like it and
hopefully that you think they’ll like it too.

But it also tells them that you are the kind of person who likes esoteric classical music.
Now, if it’s not obvious what this has to do with stratification, think about the
assumptions that your acquaintance is gonna make about you: that you come from a
particular background, one that’s allowed you to have access to certain kind of education
and upbringing, or that you’ve had years of music lessons.

They might readily assume that you’re the kind of person whose class standing allows
them to develop these musical tastes. To be clear, I’m not saying that these judgements
are true. Lots of people who like classical music are not, say, wealthy or well-educated.
I’m saying that assumptions like these tend to be widely held, and recognized.

So when you put on your music, your friend might recognize you as a person like them, if
they share your tastes. Or maybe they don’t recognize you as being like them, so they
judge you for being pretentious. And this isn’t because classical music is special
somehow: it’s true regardless of what kind of music you put on, and applies just as much
to the clothes you wear, the books you read, and all of your other tastes.

These are all ways in which people categorize you in the hierarchy of stratification.
They’re the signs you carry around that tell people where you fit in society and how to
interact with you. This kind of judgement and mutual recognition isn’t a minor thing. It’s
a powerful force for stratification. For instance, it can be extremely important
in getting a job. Hiring can often be an exercise in this kind of judgement, as managers
look for people who “fit the culture” and will get along well with the rest of the team.

And it’s not just about what you like, it’s also about how you like it. If you decide to start
telling people that you like opera because you want to seem upper class, but then you
show up to a performance in a T-Shirt and flip-flops, you’re probably not gonna get
anywhere.

There’s a ton of background knowledge and understanding behind tastes and preferences
that you can’t just conjure out of nowhere, and the difficulty of acquiring this knowledge
helps maintain stratification. So these three perspectives, structural-functionalism, social-
conflict, and symbolic-interactionism can help us better understand not just how
stratification works, but why we have it. Today we learned about different theories of
stratification.

We talked about ideology and how it helps stratification reproduce itself. We discussed
structural-functionalism: the Davis-Moore thesis and its problems. We talked about
Marx's understanding of classes and Weber's criticisms. And we saw how symbolic-
interactionism helps explain stratification in everyday life.

23. Social Stratification in the US

Last week, we introduced Max Weber’s three dimensions of social stratification:
economic class, social status, and power. But how do those three things actually interact,
in the real world? What does social stratification look like in the here and now? Or, I
guess I should say, what does stratification look like in the United States right now?

For all the non-Americans watching, don’t you worry – we will be getting to stratification
around the world later. But for today, we’re gonna focus on the United States and how
social inequality plays out here. Do we live up to the ideals that Jefferson laid out in the
Declaration of Independence? Are all men created equal? Spoiler alert: Sociology says,
not so much.

When you hear people talking about inequality in the US, chances are they’re talking
specifically about income inequality or wealth inequality. So let’s start there. First, we’re
gonna split the US into 5 even groups, 20% of the population in each. These are what
social scientists call quintiles. As of 2015, households in the US that make less than
$22,800 a year are the bottom quintile of the income distribution. The exact middle of the
distribution, or the median of household income, was about $56,000 a year.
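Those quintile cut-offs and the median are just percentiles of the income distribution. As a rough sketch of the idea (using a small set of invented household incomes, not real Census figures):

```python
import math
import statistics

# Hypothetical sample of household incomes (invented for illustration,
# NOT real Census data)
incomes = sorted([18000, 21000, 35000, 42000, 56000,
                  61000, 75000, 98000, 120000, 250000])

def percentile(data, p):
    """Nearest-rank percentile of already-sorted data (0 < p <= 100)."""
    k = math.ceil(p / 100 * len(data)) - 1
    return data[k]

bottom_cutoff = percentile(incomes, 20)  # top of the bottom quintile
median = statistics.median(incomes)      # exact middle of the distribution
top_cutoff = percentile(incomes, 80)     # floor of the top quintile

print(bottom_cutoff, median, top_cutoff)  # → 21000 58500.0 98000
```

The Census figures quoted above come from a much larger survey, but the computation is the same: sort the households by income and read off the 20th and 80th percentiles and the median.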

And the top quintile? That’s every household making over $117,000 a year. Now, the
simple fact that these quintiles have different incomes tells us that we don’t have a
perfectly equal society. Perfect equality would mean the same income for everyone. So,
there’s clearly some income stratification in the US.

But how much? The households in the top quintile earn about 50% of all the income in
the United States. And about 20% of all income is earned by the top 5% alone. And the
bottom quintile? These households earn only 3.4% of the total income in the US. Those
are pretty huge differences! And they get even bigger when we talk about wealth.

First things first: Wealth is NOT the same thing as income. Income is the money you earn
from work or investments, whereas wealth is the total value of the money and other
assets you hold, like real estate and stocks and bonds. For the bottom quintile, average
household wealth levels are negative. That’s because part of wealth is debt – so if you
have more debt than positive assets, your wealth is negative.
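In other words, net worth is simply total assets minus total debts, which is how it can come out negative. A tiny sketch, with invented figures:

```python
# Invented figures for illustration, not survey data
def net_worth(assets, debts):
    """Wealth (net worth) is total assets minus total debts; it can be negative."""
    return sum(assets) - sum(debts)

# A household with a used car and some savings, but a larger student loan:
print(net_worth(assets=[8000, 1500], debts=[15500]))  # → -6000
```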

So we’ve got an average net worth of about negative $6,000 for the bottom quintile and a
median household wealth level of about $68,000. Any guesses what the top might look
like? Well, the top quintile’s household income was about twice the median income. So,
maybe a good guess would be about what? $140,000 for wealth? Wrong. The average
net worth for those in the top quintile is 9 times that of the median – $630,000.
And that’s not even close to the wealth levels for the top 1%. According to the US
Census Bureau, the cut-off for entering the top 1% of the wealth distribution is $2.4
million. Wealth levels also tend to vary by demographic group. For example, women tend
to have lower individual wealth levels than men. Married couples accumulate more
wealth than people who are unmarried.

And race and wealth are closely linked – the median wealth level of a White household is
about 12 times that of a Black household. One of the main sources of wealth for
Americans is homeownership. White Americans are much more likely to own houses
than Black Americans, and the neighborhoods that White Americans live in tend to be
wealthier – even if you control for income.

Patrick Sharkey, an American sociologist who researches neighborhoods and wealth,
found that black families making $100,000 a year live in the types of neighborhoods that
a white family making only $30,000 would live in. So, where are these gaps coming
from? Well, for one thing, a lot of wealth is passed on through generations.

Let’s go to the Thought Bubble to talk about redlining, a practice from the not-so-distant
past that has made it harder for African-American families to accumulate and pass on
wealth. In the 1930s, the government founded the Federal Housing Agency, an office set
up to regulate mortgages and interest rates, to guard against the types of foreclosures that
happened during the Great Depression. The FHA began insuring mortgages, and since
these mortgages were backed by the government, they made getting a house less risky
and easier for many Americans.

Which is great! Except they didn’t make it easier for all Americans. The big banks that
issued the loans engaged in a practice called redlining, in which they’d literally draw a
red line on a map around neighborhoods they considered too risky to invest in. If
your house was within that red line, the banks wouldn’t give you a loan.
And the neighborhoods that got redlined were the ones where minorities, particularly
Black families, lived. Redlining was outlawed in 1968, but because homeownership is
the major source of wealth for most Americans, neighborhood segregation and racial
wealth inequalities are the legacies of policies like redlining.

OK, so on the money front, the US isn’t looking all that equal. And money – particularly
wealth – doesn’t just influence economic class. It’s also a form of power, including
political power. A 2014 study by American political scientists Martin Gilens and
Benjamin Page looked at the relationship between the political views of those at the top
of the income distribution, and the laws that are actually passed. They found that the
views of those at the top have a significant, positive relationship with laws getting passed,
while the views of people at the middle of the income distribution have pretty much zero
correlation with what laws actually get passed.

But remember: Correlation doesn’t mean causation. The study doesn’t tell us WHY
higher income is related to more political influence. The economic elite’s influence could
come through donations to political candidates. But higher income is also correlated with
higher education levels – and more highly educated people tend to be more civically
active, too. Now, there are many factors besides income and wealth that influence social
stratification in the US.

For example, there’s occupation. If you’re an adult, probably the first question that
people ask you is, “What do you do?” That’s because the jobs we do – and what other
people think of those jobs – are a major part of our socioeconomic status. You probably
have opinions about what jobs are cool, what jobs are impressive – and what jobs you’d
never work at.

Those kinds of opinions are what the National Opinion Research Center, a think tank at
the University of Chicago, was interested in when they created a ranking of ‘occupational
prestige.’ Top of the list? Surgeons, College Presidents, Lawyers, Nuclear Physicists,
Astronauts. Smack dab in the middle, getting an average rating of 5, are jobs like
Purchasing Managers, Office Supervisors, IT technicians, Private Detectives.

That’s right, all you office managers out there, you are just as cool as Veronica Mars.
And the bottom of the list are jobs like busboys, parking lot attendants, and telephone
solicitors. You might notice there’s some similarities in the types of jobs in each
category. For one thing, jobs with occupational prestige typically pay pretty well.

We have overlapping spheres of stratification here – you get social and financial benefits
from being a lawyer. Busboy? Not so much. The other thing you might notice is that all
five of the most prestigious jobs require an advanced degree. And not just a college
degree – people in those jobs typically have some sort of post-grad education as well.

The jobs at the bottom of the prestige scale, however, don’t require a college degree, and
many don’t even require a high school diploma. So the kind of education you have ends
up being pretty important for where you end up in terms of social status. Most American
adults have a high school diploma – about 88% as of 2015.

But only 33% of Americans over the age of 25 have a bachelor’s degree, and even fewer
– 12% – have an advanced degree. And the issue of who goes to college is strongly
related to your socioeconomic background. For kids born in the 80s, 54% of those born in
the top fourth of the income distribution have a four-year college degree.

For those born in the bottom fourth, only 9% do. As with wealth inequality, race and
ethnicity are also linked with educational attainment. 22% of African Americans and
16% of Hispanic Americans have a four-year degree, compared to 36% of Non-Hispanic
White Americans and 53% of Asian Americans. Now so far, all I’ve done is give you a
bunch of descriptive data about income, occupations, and education in the United
States that shows you that, yes, there is inequality in the United States.
But how should we think about inequality? Is it good or bad? You can argue that
inequality isn’t, in and of itself, a bad thing. Some jobs are harder than others. Doctors
take on many years of education so that they can save lives. And I’m perfectly fine with
heart surgeons getting paid more than I am. And it’s probably good to live in a society
where working hard and learning skills that benefit other people is encouraged.

This is the idea of a meritocracy – or social stratification based on personal merit. The US
is partially a meritocracy – social position does to some extent reflect individual talent
and effort. But a lot of socioeconomic status is also due to something outside of your
control – the status you’re born into. If you start out on the bottom of the ladder, the
likelihood that you’ll get to the top is low. The environment you’re in during the early
years of your life shapes your ability to succeed as an adult. There’s a large body of
research that suggests that kids born in low-income families start school behind their
peers, and those gaps only get wider as kids get older.

Inequalities in one generation are often reproduced in the next generation. And as we’ve
noted throughout this course, race and gender are closely linked to social status. There are
lots of factors that play into these inequalities across groups – differences in education
levels, differences in family composition, differences in occupation choice.

Understanding why we see these inequalities across demographic groups is one of the
foundational questions of sociology. And: that’s something we’ll be exploring more in
later episodes. Today, however, we talked about five ways that social stratification plays
out in the US: income inequality, wealth inequality, political power, occupational
prestige, and educational attainment. We discussed descriptive data about inequality in
the US and how it varies across race and gender. And finally we talked about the relative
role of merit in determining your socioeconomic status.
24. Social Class & Poverty in the US
Social Class in America can be hard to talk about. And not just because you may find it
awkward to discuss who’s poor and who’s rich, or who has more power and who has less.
As sociologists, the difficulty for us is in pinning down exactly what we mean by social
class. There isn’t just one definition of it, and the definition you use will depend on what
society you’re interested in. If we go by Marx’s definition, we have two classes: the
bourgeoisie, who own the means of production, and the proletariat, who do the labor. But
this might be too simplistic for our world.

If you own a small store, and you work there, which category do you belong in? Your
day-to-day life probably looks more like that of a retail employee than that of a CEO. But
Marx would put you in the bourgeoisie, because you own a business and hire workers. So
let’s try another definition, one that’s more in the tradition of our old friend Max Weber.
His theories were more about what kinds of opportunities a person’s class gives them.
The owner of a big company has different opportunities than the owner of a small shop.
But they’ll both have different resources available to them than someone who manages an
office, or somebody who works at a factory. So in this case, a social class can be defined
as a group that’s fairly similar in terms of income, education, power, and prestige in
society.

And we can use this definition to better understand the social classes that make up society
in the United States, and it can help us to answer some of the questions they raise. Like, is
there more than one kind of upper class? How can the middle class fit everyone who
thinks they belong in it? And what does poverty in America really look like?

Broadly speaking, American society can be split into five social classes: upper class,
upper middle class, average middle class, working class, and lower class. The upper class
consists essentially of the capitalists in Marx’s system. This is the top of the income and
wealth distribution – those who earn at least $250,000/year and control much of the
country’s wealth. And as we learned last week – money talks. This group tends to wield a
lot of political and social power. But within the upper class, there are sub classes that
distinguish, by and large, between old money and new money.

The upper-upper class includes those who derive their wealth from inheritance rather than
work. People in this class may have jobs, but usually they take on more honorary
positions such as board members or heading up philanthropic organizations. But there’s
also a large part of the upper class whose wealth came from work.

Most of those we think of as wealthy – the Bill Gateses, Oprah Winfreys, and Kanye
Wests of the world – fall into this group. After upper class comes the middle class.
Remember a while back when we talked about how almost every American thinks that
they’re middle class? That’s way too many people to fit into the middle, which is why
sociologists split the mid-range of the income distribution into three groups.

Upper middle class families typically have incomes between $115,000 and $250,000 per
year and make up about 15% of income earners. About 2/3 of the adults here have college
degrees – and many have post-graduate degrees.

It’s almost a given that their kids will attend college when they grow up. Adults in this
sector tend to have jobs that are considered prestigious – doctors, lawyers, engineers, and
the like. Their families typically own homes in good school districts, and are able to
afford luxuries, like travel and multiple vehicles.

And it may not surprise you to learn that they’re wealthy, at least compared to most
Americans. This group is likely to have wealth from their home, strong 401Ks, and
financial investments. Now, families in the so-called average middle class make between
$50,000 and $115,000 and make up about 35% of income earners. Keep in mind, the
median family income in the US is $70,700.
So families in this group still tend to own their own homes, but the mortgages might be
more cumbersome. And they have some wealth, usually tied up in their home or a modest
retirement savings account. About half of this group is college-educated, though they’re
more likely to have attended public universities than private schools.

And average middle class jobs are typically so-called white collar jobs – think office
workers, teachers, middle-managers. In contrast, most blue-collar workers, or those
whose work is primarily based in manual labor, fall into the lower-middle class. About 30
percent of Americans are in this category, with incomes ranging from about $25,000 to
$50,000 a year.

Lower middle class families are less likely to own their own homes and typically hold
little to no wealth. The most defining feature of this social class is the type of jobs that are
associated with it – namely, manual labor, which is why it’s often referred to as the
working class.

Factory work, construction, manufacturing, maintenance work – all of these jobs
generally fall under working class occupations. And while some working class jobs
require technical skills, they don’t usually require a college education. It’s important to
note that working class jobs are more sensitive to how the economy is doing, because
these jobs tend to be built around making stuff. When a recession hits, factories need
fewer workers to meet demands.

Or the plant’s owners might decide that it’s cheaper to use machines rather than workers
to produce their goods. And just as vulnerable to economic downturns, if not more so, is
the lower class. Lower class Americans are blue-collar workers at the bottom of the
income distribution. They make less than $25,000 a year and tend to work hourly jobs
that are part-time, with unpredictable schedules and no benefits, like health insurance or
pensions.
About 20% of Americans, or the bottom quintile, fall into this group. The majority of
these families don’t own their own homes and are more likely to live in neighborhoods
with higher rates of poverty, lower quality school districts, and higher crime rates.
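Taken together, the income cutoffs above can be sketched as a simple mapping. To be clear, this is just an illustration using the approximate figures from this episode, not official definitions:

```python
def social_class(income):
    """Map an annual family income (USD) to the five-class scheme
    described above. Cutoffs are the rough figures from this episode,
    not official definitions."""
    if income >= 250_000:
        return "upper class"
    elif income >= 115_000:
        return "upper middle class"
    elif income >= 50_000:
        return "average middle class"
    elif income >= 25_000:
        return "working class"
    else:
        return "lower class"

# The median US family income cited above falls squarely in the middle.
print(social_class(70_700))  # "average middle class"
```

Note that real class membership also depends on wealth, education, and occupation, which is why income alone is only a first approximation.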

In contrast to an upper-middle class family, whose children are likely to go to college,
only 9% of children born in the bottom income quartile complete a four-year college
degree. And the lower class also includes many Americans who are living in poverty. The
US government sets an income benchmark called the federal poverty level, a threshold
that’s used, in part, to determine who’s eligible for public assistance programs, like food
stamps or help with health care.

As of 2017, the federal poverty level for a family of four is $24,600. And 13.5% of
Americans live in households below that. The government arrives at this figure by
estimating the minimum annual pre-tax income that’s needed to pay for food, shelter,
transportation, and clothing costs for a given household size. Of course, what’s poor in
the United States won’t be the same as in another country – the US federal poverty line is
a measure of relative poverty, based on a standard of living in the US.

Relative poverty is used to describe a lack of resources compared to others who have
more. But absolute poverty is a lack of resources that threatens your ability to survive.
The federal poverty level gives us an indicator for which Americans have the fewest
resources and lets us examine trends in groups that are the most economically vulnerable.

For example, groups that can’t work, like children, the severely disabled, and the frail
elderly, are particularly vulnerable to poverty. But many working Americans are
vulnerable to poverty, too – 12% of working-age adults in poverty work full-time, and
another 29% work part time.

These are the working poor. You can see how it’s quite possible to work full time and
still live in poverty, when you do the math. The federal minimum wage in the United
States is $7.25 per hour. A 40 hour work week for 50 weeks a year would net an income
of $14,500, which is well below the poverty line for a family of four.
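The arithmetic here is easy to check; a quick sketch using the figures just cited:

```python
# Working-poor arithmetic, using the figures cited above.
minimum_wage = 7.25                 # federal minimum wage, dollars per hour
hours_per_week = 40
weeks_per_year = 50
poverty_line_family_of_4 = 24_600   # 2017 federal poverty level

annual_income = minimum_wage * hours_per_week * weeks_per_year
print(annual_income)                             # 14500.0
print(annual_income < poverty_line_family_of_4)  # True: full-time work, still below the line
```

Even working all 52 weeks with no vacation only brings the total to $15,080, still roughly $9,500 short of the family-of-four threshold.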

It’s hard enough to pull yourself out of poverty on a low-wage income, which is partly
why more than half of families in poverty are headed by single mothers. Higher rates of
poverty among women, known as the feminization of poverty, are related to the increasing
number of women who are raising children on their own, and who work low-wage jobs.
But in addition to gender, you can also look at poverty by race.

Contrary to popular belief, most poor Americans are not Black; in fact, two-thirds of the
poor in the US are white. Black Americans are, however, more likely to be poor than
white Americans: 24.1% of Black Americans, who make up about 13% of the total
American population, were living in poverty in 2015. Compare that to 11.6% of white
Americans, who make up about 77% of the total population. Now, the causes of poverty
are many. And it’s not easy to understand why some groups are more vulnerable than
others.

America likes to think of itself as a nation that values self-reliance, where anyone can
succeed. And this view is partly why some argue that poverty is the result of an
individual’s own failings, or of certain cultural attitudes. One of the most famous
proponents of this idea was Daniel Patrick Moynihan – former US senator, ambassador to
the United Nations, and, by trade, a sociologist. A report he wrote in 1965 while serving
as Assistant Secretary of Labor, known as the Moynihan Report, blamed high rates
of poverty among African Americans not on a lack of economic opportunity, but on
cultural factors in the Black community, like high rates of birth outside of marriage.

By contrast, American sociologist William Julius Wilson – who you might remember
from episode 7 – has provided a counter to this idea. Wilson has documented how Black
Americans are much more likely to face institutional barriers to achieving economic
success, and are more likely to live in areas where jobs are scarce.

He argues that in order to understand poverty, we have to look at wider economic and
social structures, as well as the history and culture of racism in the U.S. Next week, we’ll
talk more about how social class structures affect how Americans live their lives.

But for now, you learned about the five different social classes in the United States: the
upper class, the upper middle class, the average middle class, the working class, and the
lower class. And we discussed what poverty looks like in the United States.

25. The Impacts of Social Class


Class matters. You probably already know that. And not only because you’re a student of
sociology, but because you’re a person who lives in a society. But do you know how
much it really matters? Social class is a huge determinant of many of the most fundamental
aspects of modern life – from your education, to your beliefs, as well as your values, your
occupation, your income, and not only how you live, but also how you die. So let’s talk
about how class plays out in the lives of Americans today.

Class starts to matter at the very beginning of your life. When we discussed socialization
a few episodes ago, we talked about anticipatory socialization, or learning to fit into a
group you’ll someday be a part of, like a gender or a race. And one type of anticipatory
socialization is class socialization, where parents convey to their children the values that
go along with being upper class or middle class or working class.

Let’s take a simple example. Suppose you’re a parent and your kid absolutely refuses to
eat broccoli. How do you respond? Do you make them clear their plate and say that they
shouldn’t waste food? Or do you allow them to make decisions for themselves about
what they eat? Now, you may be thinking, “What? How does eating broccoli have
anything to do with class?”

But how parents from different walks of life approach parenting can differ a lot by class,
as American sociologist Annette Lareau found in her research on parenting styles. Let’s
go to the Thought Bubble to look at how social class can affect what kind of parent you
are, or what kind you have.

In the 1990s, Lareau’s research focused on observing families of elementary school
students from upper-middle class and working class backgrounds. In doing this, she
realized that parents had very different approaches to how they educated and disciplined
their kids.

She found that upper-middle class parents tend to be very involved in their kid’s social
and academic lives. Think scheduled play dates, after school activities, checking their
homework assignments every night. The stereotype of a suburban helicopter mom isn’t
too far from the mark for some of these families.

By contrast, working class parents – who were more likely to have less time and money
to devote to these activities – were more likely to be hands off in structuring their kid’s
free time. These kids might be more likely to be playing with whoever is around their
neighborhood than going on playdates. Working class parents also tend to put a greater
emphasis on obedience and discipline compared to their upper middle class counterparts,
Lareau found.

While a working class parent might tell their kids to eat their broccoli “Because I said
so,” an upper middle class parent is more likely to talk through decisions with their
children in an effort to encourage autonomy. So, yes, a toddler’s distaste for broccoli and
their parents’ reaction to it, can tell us something about class. And these trends in
parenting aren’t the only difference in values and beliefs that we see across classes.

Political views tend to vary across class groups, too, with upper class Americans being
more likely to be fiscally conservative and socially progressive, and lower class
Americans being more likely to be the opposite. Even religion varies by class. Upper
income Americans are more heavily represented in liberal Protestant groups like
Episcopalians and Presbyterians, as well as Judaism, Hinduism and Atheism, whereas
lower income Americans are more likely to identify as Evangelical Protestants or
Catholics.

But beliefs and values aren’t the only thing that vary by social class. A large component
of class differences plays out through educational attainment and its consequences for
success later in life. Education is sometimes called the “Great Equalizer.” The more
people who have access to quality education, the more equal a society gets. Or so the
thinking goes. But whether you get a quality education varies by the social class you’re
born into.

So we might be concerned that education will have the opposite effect, and will actually
help pass inequalities from one generation on to the next. There are a few ways that social
class comes into play when we talk about education in the US. First, where do you live?
Income segregation, or the tendency for families of similar income levels to live in the
same neighborhoods, is incredibly common in the United States. If you’ve ever gone
apartment hunting in a big city, this might not come as a surprise to you. An apartment in
a “good” neighborhood, or an area with low crime, good schools, and better quality
housing, costs way more than a home where crime and pollution are higher and education
and job access is inconsistent.

One reason that access to education varies by class is that public schools in the US are
funded mainly at the local level, so kids who grow up in affluent neighborhoods tend to
have access to better schools, because those communities provide more funding. So,
living in a better neighborhood tends to mean access to better educational facilities, as
well as to technology like computers, good teachers, and a wider variety of classes and
extra-curriculars.

And that’s assuming you go to a public school. Upper class children are more likely to
attend private schools – and this trend continues when we get past high school. We
mentioned this last week – children who grow up working class or low-income are much
less likely to attend college, and those who do are much more likely to attend
public state schools or two-year community colleges. Among elite colleges, most students
don’t come from low-income families; they come from the very top of the income
distribution. A recent study of social class and college attendance found that 38 elite
colleges including five in the Ivy League – Brown, Dartmouth, Penn, Princeton, and Yale
– had more students who came from the top 1% than the entire bottom 60% of the income
distribution.

Some of this inequality in college access is helped along by the policy of preferential
admittance for so-called “legacy” students, whose parents or other family members
attended the college. Policies like this entrench class inequalities across generations by
making it less likely that those from lower socioeconomic classes will move up the
ladder.

Plus, the social networks formed within prestigious colleges often are the stepping stones
toward jobs and financial success later in life, which again makes it more likely that
inequality will get passed on to a new generation. And of course, political and economic
power tend to be concentrated among those at the top of the social class ladder.

Dreaming of being president when you grow up? Of the ten presidents who have held
office in the last 50 years, 6 attended an Ivy League school for either their undergrad or
postgrad studies. Every single one had at least a bachelor’s degree.
So education can seem less like the great equalizer in this case than the great barrier.
Without a college degree, there are jobs that are pretty much impossible to get. The jobs
that you can get without a college degree tend to come with lower prestige, lower pay,
and a greater risk of occupational dangers.

Which brings us to the last class difference we’ll be talking about today: health. Social
class affects how you live – but it also affects how you die. Mortality and disease rates
vary by social class, with upper class Americans living longer and healthier lives. A man
in the 80th percentile, near the top of the income distribution, lives an average of 84 years,
while a man at the bottom, in the 20th percentile, lives an average of 78 years. Women
live longer than men typically. Yay for us!

But the income gap is still similar here, with women in the 80th percentile living about
4.5 years longer than those in the 20th percentile. Why the huge gap? Some reasons
might seem obvious – if you have more money, you can probably afford better health
care.

Or for that matter, afford any health care. Others are maybe less straightforward. For
example, low income Americans tend to eat less healthy food. Now, is that just a matter
of different choices made by different people, or is it a systematic pattern that links class
with eating habits? Well, oftentimes unhealthy foods are cheaper, both in terms of money
and time. Lower class Americans tend to have less leisure time and less money to spend
on cooking healthy meals.

After all, it takes a lot less time and money to pick up McDonald’s than to spend an hour
cooking a meal with expensive organic vegetables. Additionally, many low income
Americans live in what are known as food deserts, or neighborhoods without easy access
to fresh foods, like fruits and vegetables. Other systematic class differences come from
the occupations that different classes tend to hold. Upper and middle class Americans are
more likely to be in white collar, full time jobs, which generally have lower exposure to
dangerous materials and lower risks of accidents on the job. Not to mention more flexible
work schedules. Less danger and less stress = better health.

Plus, full-time jobs are more likely to provide benefit packages including health insurance
and paid sick days. It’s much harder to take care of your health if you can’t take the time
off work to go to the doctor or rest and recover. But that’s the reality for many working
class Americans. Class gaps in health outcomes are clearly about more than just having
the money to pay for better healthcare. It’s about occupation, neighborhood, income,
education, and all the different ways that advantages like these can overlap to determine
your life course.

That’s why social class matters; it gives us a way to identify the advantages and
disadvantages that different groups of people share, and understand the consequences of
those advantages and disadvantages. Today, we discussed three types of class differences
we see playing out in the United States.

First, the beliefs and values parents pass on to the next generation will vary by class.
Second, there are class gaps in educational attainment which help perpetuate inequality
across generations. And finally, Americans of lower socioeconomic status tend to have
worse health and shorter lifespans than those with higher class status. Next time we'll
focus on a different aspect of socioeconomic stratification: social mobility – or, how your
social position can change over your lifetime, or across generations.

26. Social Mobility


Everyone loves a good rags to riches story. Books and movies and music are full of this
idea. Whether it’s Gatsby turning himself from a nobody to a somebody, or Drake
starting from the bottom, there’s something appealing about the idea that anyone can
make it, if they try hard enough. And more than maybe anywhere else, that idea is
embraced in the United States, where the mythos of the land of opportunity is practically
part of our foundation.

But is the US a land of opportunity? Can anyone move up the rungs of the social ladder?
Or is the American Dream just that: a dream? To get a handle on the answer, we have to
understand changes in social position – or what sociologists call social mobility.

There are a few different types of social mobility, so let’s get some definitions straight
first. Intragenerational mobility is how a person moves up or down the social ladder
during their lifetime. Intergenerational mobility, however, is about movement in social
position across generations. Are you doing better or worse than your parents were when
they were your age? There’s also absolute versus relative mobility. Absolute mobility is
when you move up or down in absolute terms – are you better or worse than before?

Like, if you make $50,000 a year now and made $40,000 10 years ago, you experienced
upward mobility in an absolute sense. But what if all your peers who were making the
same amount ten years ago are now making $65,000 a year? Yes, you’re still better off
than you were 10 years ago, but you’re doing worse relative to your peers.
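That example can be made concrete in a few lines, using the hypothetical incomes above:

```python
# Absolute vs. relative mobility, with the hypothetical figures from the text.
income_10_years_ago = 40_000
income_now = 50_000
peers_income_now = 65_000   # what peers who started at the same level earn today

absolute_change = income_now - income_10_years_ago   # +10,000
relative_change = income_now - peers_income_now      # -15,000

print(absolute_change > 0)  # True: upward mobility in an absolute sense
print(relative_change < 0)  # True: downward mobility relative to your peers
```

The same income history can therefore count as upward mobility on one measure and downward mobility on the other, which is why studies have to specify which kind they mean.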

Relative mobility is how you move up or down in social position compared to the rest of
society. We can measure social mobility quantitatively, using measures of economic
mobility, like by comparing your income to your parent’s income at the same age. Or we
can look at mobility using qualitative measures. A common measure used by sociologists
is occupational status.

If your father worked in a blue collar job, what’s the likelihood that you will too? A
recent study of absolute intergenerational mobility found that about one-third of US men
will end up in the same type of job as their fathers, compared to about 37% who are
upwardly mobile, and 32% who are downwardly mobile. It’s pretty common to remain
within the same class group as your parents.

About 80 percent of children experience what’s called horizontal social mobility, where
they work in a different occupation than their parents, but remain in a similar social
position. So, how much social mobility is there in the US? Well, there’s good news and
bad news. The good news is that if we zoom out and look at absolute mobility across the
years, the long term trend in social mobility is upwards.

Partially because of industrialization, median annual family income rose steadily
throughout the 20th century, going from around $34,000 in 1955 to $70,000 in 2015.
Standards of living now are much better than they were 60 years ago. Unfortunately,
more recent trends in social mobility have been less rosy. Since the 1970s, much of the
economic growth in income has been at the top of the income distribution.

Meanwhile, family incomes have been pretty flat for the rest of the population. This
unequal growth in incomes has meant less absolute mobility for Americans. A recent
analysis of tax data by a group of economists and sociologists found that absolute
mobility has declined over the last half century. While 90% of children born in the 1940s
earned more than their parents as adults, only 50% of children born in the 1980s did.

The other bad news is that within a single generation social mobility is stagnant. While
people generally improve their income over time by gaining education and skills, most
people stay on the same rung of the social ladder that they started on. Of those born in the
bottom income quintile, 36% remain in the bottom quintile as adults. Only 10% of those
born at the bottom end up in the top quintile as adults. Started at the bottom, now we’re
probably still at the bottom, statistically speaking.

And socioeconomic status is sticky at the top, too. Researchers at the Brookings
Institution, including Crash Course Sociology writer Joanna Venator, found that 30% of
those born in the top quintile stay in the top quintile as adults. Plus, social mobility differs
by race/ethnicity, gender, and education. White Americans see more upward mobility
than Black Americans: half of Black Americans that are born at the bottom of the income
distribution are still in the bottom quintile at age 40.

Black Americans also face higher rates of downward mobility, being more likely to move
out of the middle class than White Americans. Let’s go to the Thought Bubble to take a
look at research on race and social mobility in action. In 1982, American sociologists
Karl Alexander and Doris Entwisle began following the lives of a random sample of 800
first grade students growing up in a variety of neighborhoods in the Baltimore area. What
began as a study meant to last only three years eventually ended up lasting 30 years, as
the researchers followed up with the kids throughout their lives, to see the paths that their
early circumstances put them on.

Alexander and Entwisle collected data on everything imaginable, interviewing the kids
yearly about who they lived with, where they lived, work history, education, drug use,
marriage, childbearing, you name it.

And what they found was that poverty cast a long shadow over the course of these kids’
lives. 45% of kids with higher socioeconomic status, or SES, had gotten a college degree
by age 28 – only 4% of low SES kids had.

Those born better off were also more likely to be middle class at age 28. And these
unequal outcomes were heightened for African American kids. Low SES white kids
ended up better off than low SES Black kids. 89% of white high school dropouts were
working at age 22 compared to only 40% of black high school dropouts.

And contrary to what The Wire might have made you think about inner-city Baltimore
lifestyles – these differences can’t be explained away by differences in criminal behavior
or drug use. Low SES White men were more likely to use hard drugs, smoke, and binge
drink than low SES Black men. And holding all else constant, a police record
was more of an impediment to getting a job for African American men than White men.

So, the impacts of where you’re born on the social ladder can have far reaching
consequences. And in addition to race, social mobility can also vary by gender. Over the
last half century, women as a whole have experienced absolute mobility – 85% of women
earn higher wages than their mothers did. And the income gap between men and women
has narrowed significantly. In 1980, the average income for a woman was 60% that of a
man’s; by 2015, that gap had shrunk to 8%. But despite the great strides over the last half
century, there are still gaps in opportunity for women.

Women born at the bottom of the social class ladder are more likely to remain there than
men – about half of women born in the bottom quintile are still there at age 40 compared
to only about one-third of men. Also, women born at the bottom experience more
downward mobility than men, with more women than men having family incomes lower
than that of their parents.

Some of these differences by gender may be because women are much more likely to
head up single parent homes than men are. Being married is a huge plus for social
mobility, because two incomes are better than one. People who marry tend to accumulate
wealth much faster than those who are single, making it easier to ascend the social ladder.

Modern-day Cinderella doesn’t just move up the social ladder by marrying the prince,
she’s also more likely to build a solid 401K and stock portfolio, key sources of wealth.
As we’ve seen, social class mobility depends on where you start and who you are. So
let’s go back to the question we asked at the beginning: is America the land of
opportunity?
If you’re a glass half full kind of person, you might think so based on some of what
we’ve talked about today. After all, most people are better off than past generations were.
Accounting for inflation, about three times as many Americans make incomes above
$100,000 now than did in 1967.

But not all groups have benefitted equally from this economic growth – your chance at
upward mobility can vary a lot by education or race or gender, or where you start on the
income distribution.

For those in the middle of the income distribution, earnings growth has stalled for many
workers, but the costs of necessities like healthcare or housing have climbed ever higher.
Manufacturing, an industry that historically provided stable jobs and decent pay to less-
educated workers, has been declining for a while now and was particularly hard hit by
the recession from 2007 to 2009.

In the wake of this decline, most of the jobs available for less-educated workers tend to
be low-paying service industry jobs, contributing to lower absolute mobility than we’ve
seen in the past. All of these patterns, plus the growing income inequality we talked about
a couple episodes ago, mean that the rungs of the social mobility ladder in the United
States seem to be getting harder to climb.

Today, we talked about intergenerational and intragenerational mobility and the
difference between absolute and relative mobility. We talked about the long run upward
social mobility trends in the United States as well as the recent declines in absolute social
mobility. Then, we touched on how opportunities for social mobility differ by your class,
race, and gender.

27. Global Stratification & Poverty


You’ve heard of “First World Problems,” right? Someone cracks the screen on their
iPhone or gets the wrong order at Starbucks, and then they go on Twitter and complain
about their hashtag First World Problems. So, you’ve heard the phrase, but have you
thought about the implications of talking about countries as First or Third?

Where do these names even come from? These terms are outdated, inaccurate, and
frankly insulting ways of talking about global stratification. So how should we talk about
global stratification?

First, let’s deconstruct the idea of the first, second, third world hierarchy; see where it
came from; and learn what its implications are. The terms date back to the Cold War,
when Western policymakers began talking about the world as three distinct political and
economic blocs. Western Capitalist countries were labeled the “First World”. The Soviet
Union and its allies were termed the “Second World”. And then everyone else got
grouped into the “Third World.”

After the Cold War ended, the category of Second World Countries became null and
void, but somehow the terms First World and Third World stuck around in the public
consciousness. Third World Countries, which started as just a vague catch-all for non-
aligned countries, came to be associated with impoverished states, while First World was
associated with rich, industrialized countries.

But in addition to being seriously outdated, these terms are also inaccurate. There are
more than 100 countries that fit the label of “Third World,” but they have vastly different
levels of economic stability. Some are relatively poor, but many aren’t. So, lumping
Botswana and Rwanda into the same category, for example, doesn’t make much sense,
because the average income per capita in Botswana is nine times larger than in Rwanda.
Nowadays, sociologists sort countries into groups based on their specific levels of
economic productivity.
To do this, they use the Gross Domestic Product or GDP, which measures the total output
of a country, and the Gross National Income or GNI, which measures the total income
earned by a country's residents. Dividing GNI by population gives the per capita figures
these groupings are based on. High income countries are those with GNI per capita above
$12,500 per year. There are 79
countries in this group, including the US, the UK, Germany, Chile, Saudi Arabia,
Singapore, and more. As the name suggests, standards of living are higher here than the
rest of the world.

High income countries are also highly urbanized, with 81% of people in high income
countries living in or near cities. Much of the world’s industry is centered in these
countries, too – and with industry, comes money and technology. Take cell phones, for
example. 60% of people in low income countries have a cell phone. But in high income
countries, not only does almost everyone have one; there are 124 cell phone plans for
every 100 people.

The next category is the upper middle income countries, defined as those with GNI
between $4,000 and $12,500 per year. There are 56 countries in this group, and they tend
to have advancing economies with both manufacturing and high tech markets, such as
China, Mexico, Russia, and Argentina.

They’re also heavily urban, have access to public infrastructure like education and health,
and have comfortable standards of living for most citizens – not too different from what
you’d expect in a high income country. Now, you might notice that I keep talking about
how “urban” these types of countries are.

Why does it matter how many people live in cities? Well, if you’re used to media
depictions of poverty in the US, you might think of it as an inner city problem. But
poverty worldwide is mostly rural. Agricultural societies produce less than industrialized
ones. Which brings us to our next grouping: lower middle income countries.
These have GNI per capita between $1,000 and $4,000 per year, and they include such countries as
Ukraine, India, Guatemala, and Zambia. Unlike the previous groups, only 40% of people
living in lower middle income countries live in urban areas, and the economy is based
around manufacturing and natural resource production.

Here, access to services, like quality health care and education, is limited to those who
are well-off. For example, the maternal mortality rate is 5 times higher in lower middle
income countries than in upper middle income countries, and one-third of children under
the age of five are malnourished. Our final grouping includes the 31 countries designated
as low-income, which have GNI per capita below $1,000 per year.

These countries are primarily rural. Most of the world’s farmers live in these countries,
and their economies are mainly based on agriculture. Not only do these countries face
income poverty, they also have greater rates of disease, worse healthcare and education
systems, and many of their citizens lack access to basic needs like food and clean water.
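
As a quick sketch, the four income groupings described above amount to a simple threshold lookup. The dollar cutoffs below are the approximate figures mentioned here, not official World Bank classifications:

```python
def income_group(gni_per_capita):
    """Classify a country by GNI per capita (US dollars per year),
    using the approximate thresholds described above."""
    if gni_per_capita < 1_000:
        return "low income"
    elif gni_per_capita < 4_000:
        return "lower middle income"
    elif gni_per_capita <= 12_500:
        return "upper middle income"
    else:
        return "high income"

# Illustrative figures, not real country data:
print(income_group(800))     # low income
print(income_group(55_000))  # high income
```

Real classifications are revised year to year and use a specific exchange-rate method, but the cutoffs capture the basic idea.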

Here, 8% of children die before the age of five. And among older children, more than
one-third never finish primary school. This type of poverty is very different from the
poverty we see in high income countries like the United States. That's why, when we
talk about social stratification on a global level, it's important to remember the
distinctions between relative and absolute poverty. Relative poverty exists in all societies,
regardless of the overall income level of the society.

But absolute poverty is when a lack of resources is literally life-threatening. Let's go to
the Thought Bubble to talk about two groups that are particularly vulnerable in
low-income countries: children and women. The results of child poverty range from
malnutrition to homelessness to children working in dangerous and illegal jobs.

UNICEF estimates that there are 18.5 million children worldwide who are orphans, and
an estimated 150 million are engaged in child labor. Child malnutrition is worst in South
Asia and Africa, where one-third of children are affected. And half of all child deaths
worldwide are attributed to hunger. Women also make up a disproportionate number of
the globally poor. 70% of those living at or below absolute poverty levels worldwide are
women.

Some of this is a result of women being kept from working, due to religious or cultural
beliefs. Some of it is because many women who do work don’t get to control the fruits of
their labor. Quite literally. Even though women in low income countries produce 70% of
the food, men own the land that the women’s labor is done on. 90% of the land in poor
countries is owned by men.

And the poverty of children and the poverty of women are connected, specifically by
reproductive health care. Poor access to reproductive health care is part of the reason that
birth rates are so much higher in low income countries.

And less money plus more mouths to feed equals more child poverty. Women and
children may be the most vulnerable to global poverty, but poor societies have many
problems beyond malnutrition and poor healthcare. Including slavery. You might think of
slavery as a problem from long ago – I mean, the US was slow to abolish
slavery compared to other Western countries. But slavery is very much alive around the
world.

The International Labor Organization estimates that there are at least 20 million men,
women, and children currently enslaved. Now, all of these symptoms of global poverty
might make you think: What causes it? One likely cause is simply the lack of access to
technology.

And I’m not talking about, like, self-driving cars. Being able to use simple things like
fertilizer and modern seeds, for example, can make huge differences for families in low-
income countries. Also, cell phones. The growing number of cell phones in Sub-Saharan
Africa has increased access to educational tools, banking services, and health care
resources.

Another major cause of global inequalities is population growth. Even with the higher
death rates, the high birth rates in lower income countries mean that the populations in
poor countries double every 25 years, further straining those countries' economic
resources.
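
That doubling figure can be sanity-checked with a bit of arithmetic: a population growing at a steady annual rate r doubles in ln(2)/ln(1+r) years, so doubling every 25 years implies net growth of roughly 2.8% per year. This is a back-of-the-envelope sketch, not a figure from the episode:

```python
import math

def doubling_time(annual_growth_pct):
    """Years for a population to double at a steady annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_pct / 100)

# A net growth rate of about 2.8% per year doubles a population
# in roughly 25 years.
print(round(doubling_time(2.8), 1))
```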

And this is directly related to a third reason for global poverty: gender inequalities. The
same cultural and social factors that prevent women from working also tend to limit their
access to birth control, which in turn, increases family sizes. And that contributes to
population growth and slows economic development, as resources become strained.

Social and economic stratification, both within countries and across countries, is also
part of the story. Unequal distribution of wealth within a country makes it hard for those
stuck in poverty to get out of poverty. And inequality across nations means that countries
with more economic power have historically been able to subjugate less powerful nations
through systems like colonialism.

Colonialism is the process by which some nations enrich themselves by taking political
and economic control of other nations. Western Europe colonized much of Latin
America, Africa, and Asia starting more than 500 years ago. And as a result, much of the
wealth and resources flowed out of those regions and into European coffers.

And colonialism isn’t some distant past. Most of Britain’s African colonies gained their
independence in the 1960s. In other words, the Baby Boomers that you know were alive when
the UK still had colonies. So, it’s no wonder that so many colonized countries remain low
or lower middle income, when they’ve only had a little over a half century to begin
building their own independent countries.
And as colonialism fell, new power relationships emerged that have made it harder for
poor countries to develop further. Neocolonialism doesn’t involve direct political control
of a nation; instead it involves economic exploitation by corporations, for example.
Corporate leaders often exert economic pressure on lower income countries to allow them
to operate under business conditions that are favorable for the companies, and often
unfavorable for the citizens that work for them.

This is all difficult stuff to talk about, but there is good news: global poverty is getting
better. Life expectancy is improving rapidly in low income countries. Between 1990 and
2012, life expectancy in low income countries increased by 9 years. And child
mortality rates halved worldwide in the same time period.

How do we keep up this progress? If we want to tackle global poverty, addressing the
social, cultural, and economic forces that keep countries mired in poverty will be the first
step. Today we discussed the terms First and Third World countries and the reasons why
these terms are no longer used. We also went over four types of countries: high income,
upper middle income, lower middle income, and low income countries, and the lifestyles
of people within those countries.

We talked about some of the consequences of global poverty, including malnutrition,
poor education, overpopulation partially due to poor reproductive healthcare, and slavery.

Finally, we discussed some explanations for global poverty, including technology, gender
inequality, social stratification, and global power relationships like colonialism. Next
week, we’ll discuss the main theories behind global stratification.

28. Theories of Global Stratification


For much of human history, all of the societies on Earth were poor. Poverty was the norm
for everyone. But obviously, that’s not the case anymore. Just as you find stratification
among socioeconomic classes within a society, like the United States, across the world
you also see a pattern of global stratification, with inequalities in wealth and power
between societies. So what made some parts of the world develop faster, economically
speaking, than others?

How you explain the differences in socioeconomic status among the world’s societies
depends, of course, on which paradigm you’re using to view the world. One of the two
main explanations for global stratification is modernization theory, and it comes from the
structural-functionalist approach. This theory frames global stratification as a function of
technological and cultural differences between nations.

And it specifically pinpoints two historical events that contributed to Western Europe
developing at a faster rate than much of the rest of the world. The first event is known as
the Columbian Exchange.

This refers to the spread of goods, technology, education, and diseases between the
Americas and Europe after Columbus’s so-called discovery of the Americas. And if you
wanna learn more about that, we did a whole World History episode about it. This
exchange worked out pretty well for the European countries.

They gained agricultural staples like potatoes and tomatoes, which contributed to
population growth, and provided new opportunities for trade, while also strengthening the
power of the merchant class. But the Columbian Exchange worked out much less well for
Native Americans, whose populations were ravaged by the diseases brought from Europe.

It’s estimated that in the 150 years following Columbus’ first trip, over 80% of the Native
American population died due to diseases such as smallpox and measles. The second
historical event is the Industrial Revolution in the 18th and 19th century. We’ve
mentioned this before, and there are a couple World History episodes that you can
check out for more detail, but this is when new technologies like steam power and
mechanization allowed countries to replace human labor with machines and increase
productivity.

The Industrial Revolution at first only benefited the wealthy in Western countries. But
industrial technology was so productive that it gradually began to improve standards of
living for everyone. Countries that industrialized in the 18th and 19th century saw
massive improvements in their standards of living. And countries that didn’t industrialize
lagged behind.

The thing to note here is that Modernization Theory rests on the idea that affluence could
have happened to anyone. But of course, it didn’t. So why didn’t the Industrial
Revolution take hold everywhere? Well, modernization theory argues that the tension
between tradition and technological change is the biggest barrier to growth. A society
that’s more steeped in family systems and traditions may be less willing to adopt new
technologies, and the new social systems that often accompany them.

Why did Europe modernize? The answer goes back to Max Weber’s ideas about the
Protestant work ethic. The Protestant Reformation primed Europe to take on a progress-
oriented way of life, in which financial success was a sign of personal virtue, and
individualism replaced communalism. This is the perfect breeding ground for
modernization.

And according to American economist Walt Rostow, modernization in the West took
place – as it always tends to – in four stages:

First, the Traditional Stage refers to societies that are structured around small, local
communities with production typically getting done in family settings. Because these
societies have limited resources and technology, most of their time is spent laboring to
produce food, which creates a strict social hierarchy.

Think Feudal Europe or early Chinese Dynasties. Tradition rules how a society functions:
What your parents do is what their parents did, and what you’ll do when you grow up too.
But as people begin to move beyond doing what’s always been done, a society moves
into Rostow’s second stage, the Take-off Stage.

Here, people begin to use their individual talents to produce things beyond the
necessities, and this innovation creates new markets for trade. In turn, greater
individualism takes hold, and social status is more closely linked with material wealth.

Next, nations begin what Rostow called the Drive to Technological Maturity, in which
technological growth of the earlier periods begins to bear fruit, in the form of population
growth, reductions in absolute poverty levels, and more diverse job opportunities.
Nations in this phase typically begin to push for social change along with economic
change, like implementing basic schooling for everyone, and developing more
democratic political systems.

The last stage is known as High Mass Consumption – when a country is productive enough
that production becomes more about wants than needs. Many of these countries put social
support systems in place to ensure that all of their citizens have access to basic
necessities. So, the TL;DR version of Modernization Theory is that if you invest capital
in better technologies, they’ll eventually raise production enough that there will be more
wealth to go around, and overall well-being will go up. And rich countries can help other
countries that are still growing by exporting their new technologies in things like
agriculture, machinery, and information technology, as well as providing foreign aid. But
critics of Modernization Theory argue that in many ways, it’s just a new name for the
idea that capitalism is the only way for a country to develop.
These critics point out that, even as technology has improved throughout the world, a lot
of countries have been left behind. And they argue that Modernization Theory sweeps a
lot of historical factors under the rug when it explains European and North American
progress. Countries like the US and the UK industrialized from a position of global
strength, during a period when there were no laws against slavery or concerns about
natural resource depletion.

And some critics also point out that Rostow’s markers are inherently Eurocentric, putting
an emphasis on economic progress. But that isn’t necessarily the only standard to aspire
to. After all, economic progress often includes downsides, like the environmental damage
done by industrialization and the exploitation of cheap or free labor.

Finally, critics of modernization theory also see it as blaming the victim. In this view, the
theory essentially blames poor countries for not being willing to accept change, putting
the fault on their cultural values and traditions, rather than acknowledging that outside
forces might be holding back those countries. This is where the second theory of global
stratification comes in.

Rather than focusing on what poor countries are doing wrong, Dependency Theory
focuses on how poor countries have been wronged by richer nations. This model stems
from the paradigm of conflict theory, and it argues that the prospects of both wealthy and
poor countries are inextricably linked.

This theory argues that in a world of finite resources, we can’t understand why rich
nations are rich without realizing that those riches came at the expense of another country
being poor. In this view, global stratification starts with colonialism – and it’s where
we’ll start today’s Thought Bubble.

Starting in the 1500s, European explorers spread throughout the Americas, Africa, and
Asia, claiming lands for Europe. At one point, Great Britain’s empire covered about one-
fourth of the world. The United States, which began as colonies themselves, soon
sprawled out through North America and took control of Haiti, Puerto Rico, Guam, the
Philippines, the Hawaiian Islands, and parts of Panama and Cuba.

With colonialism came exploitation of both natural and human resources. The
transatlantic slave trade followed a triangular route between Africa, the American and
Caribbean colonies, and Europe. Guns and factory-made goods were sent to Africa in
exchange for slaves, who were sent to the colonies to produce goods like cotton and
tobacco, which were then sent back to Europe. As the slave trade died down in the mid-
19th century, the point of colonialism came to be less about human resources and more
about natural resources.

But the colonial model kept going strong. In 1870, only 10% of Africa was colonized.
But by 1940, only Ethiopia and Liberia were not colonized. Under colonial regimes,
European countries took control of land and raw materials to funnel wealth back to the
West. Most colonies lasted until the 1960s, and the last major British colony, Hong Kong,
was returned to China in 1997.

This history of colonization is what inspired American sociologist Immanuel
Wallerstein’s model of what he called the Capitalist World Economy. Wallerstein
described high-income nations as the “core” of the world economy. This core is the
manufacturing base of the planet, where resources funnel in, to become the technology
and wealth enjoyed by the Western world today. Low-income countries, meanwhile, are
what Wallerstein called the “periphery”, whose natural resources and labor support the
wealthier countries, first as colonies and now by working for multinational corporations
under neocolonialism.

Middle-income countries, such as India or Brazil, are considered the semi-periphery, due
to their closer ties to the global economic core. In Wallerstein’s model, the periphery
remains economically dependent on the core in a number of ways, which tend to
reinforce each other.

First, poor nations tend to have few resources to export to rich countries. But corporations
can buy these raw materials cheaply and then process and sell them in richer nations. As a
result, the profits tend to bypass the poor countries. Poor countries are also more likely to
lack industrial capacity, so they have to import expensive manufactured goods from
richer nations. And all of these unequal trade patterns lead to poor nations owing lots of
money to richer nations, creating debt that makes it hard to invest in their own
development. So, under Dependency Theory, the problem is not that there isn’t enough
global wealth; it’s that we don’t distribute it well.

But just as Modernization Theory had its critics, so does Dependency Theory. Critics
argue that the world economy isn’t a zero-sum game – one country getting richer doesn’t
mean other countries get poorer. And innovation and technological growth can spill over
to other countries, improving all nations’ well-being, not just the rich.

Also, colonialism certainly left scars, but it isn’t enough, on its own, to explain today’s
economic disparities. Some of the poorest countries in Africa, like Ethiopia, were never
colonized and had very little contact with richer nations. Likewise, some former colonies,
like Singapore and Sri Lanka, now have flourishing economies. In direct contrast to what
Dependency Theory predicts, most evidence suggests that, nowadays, foreign investment
by richer nations helps, not hurts, poorer countries.

Dependency Theory is also very narrowly focused. It points the finger at the capitalist
market system as the sole cause of stratification, ignoring the role that things like culture
and political regimes play in impoverishing countries. There’s also no solution to global
poverty that comes out of dependency theory – most dependency theorists just urge
poor nations to cease all contact with rich nations or argue for a kind of global socialism.
But these ideas don’t acknowledge the reality of the modern world economy – making
them not very useful for combating the very real, very pressing problem of global
poverty. The growth of the world economy and expansion of world trade has coincided
with rising standards of living worldwide, with even the poorest nations nearly tripling
their average incomes in the last century.

But with increased trade between countries, trade agreements such as the North American
Free Trade Agreement have become a major point of debate, pitting the benefits of free
trade against the costs to jobs within a country’s borders. Questions about how to deal
with global stratification are certainly far from settled, but I can leave you with some
good news: it’s getting better. The share of people globally living on less than $1.25 per
day has more than halved since 1981, going from 52% to 22% as of 2008.

Today we learned about two theories of global stratification. First, we discussed
modernization theory and Walt Rostow’s Four Stages of Modernization. We then talked
about dependency theory, the legacy of colonialism, and Immanuel Wallerstein’s
Capitalist World Economy Model.

29. Economic Systems & the Labor Market


A social institution that has one of the biggest impacts on society is the economy. And
you might think of the economy in terms of numbers – unemployment numbers, GDPs,
or whatever the stock market is doing today. But while we often talk about it in numerical
terms, the economy is really made of people! It’s the social institution that organizes all
production, consumption, and trade of goods in a society.

And there are lots of different ways in which stuff can be made, exchanged, and used.
Think capitalism, or socialism. These economic systems – and the economic revolutions
that created them – shape the way that people live their lives.
Economies can vary a lot from one society to the next, but in any given economy, you
can typically see production split into three sectors. The primary sector extracts raw
materials from natural environments – so, workers like farmers or miners would fit well
here. The secondary sector takes raw materials and transforms them into manufactured
goods. So, someone in the primary sector may extract oil from the earth, but someone in
the secondary sector refines the petroleum into gasoline.

And then the tertiary sector is the part of the economy that involves services rather than
goods. You know, doing things, rather than making things. But, this system is actually
pretty complicated. Or at least, more sophisticated than the way things used to be for
much of human history. So, how did we get from a world where people worked to
produce just what they needed for their families, to one with all these sectors that have to
work together?

To understand that, we need to back up a little – about 12,000 years. The first big
economic change was the agrarian revolution. When people first learned how to
domesticate plants and animals, it ushered in a new agricultural economy that was much
more productive than hunter-gatherer societies were. Farming helped societies build
surpluses, which meant not everyone had to spend their time producing food.

This, in turn, led to major developments like permanent settlements, trade networks, and
population growth. Now, let’s go to the Thought Bubble to discuss the second major
economic revolution: the industrial revolution of the 1800s. With the rise of industry
came new economic tools, like steam engines, manufacturing, and mass production.

Factories popped up, changing how work functioned. Now, instead of working at home,
where people worked for their family by making things from start to finish, they began
working as wage laborers and becoming more specialized in their skills. Overall
productivity went up, standards of living rose, and people had access to a wider variety of
goods thanks to mass production – all good things. But every economic revolution comes
with economic casualties. The workers in the factories – who were mainly poor women
and children – worked in dangerous conditions for low wages.

There’s a reason that the industrialists of the 19th century were known as robber barons:
with more productivity came greater wealth, but also greater economic inequality. So, in
the late 19th century, labor unions began to form. These organizations of workers sought
to improve wages and working conditions through collective action, strikes, and
negotiations.

Inspired by Marxist principles, labor unions are partly to thank for us now having things
like minimum wage laws, reasonable working hours, and regulations to protect the safety
of workers. So, the industrial revolution was an incredibly big deal, when it came to the
changes that it brought to both economies and societies. And there’s a third revolution we
should talk about too – one that’s happening right now. But before we get to that, we
should pause and explore two competing economic models that sprung up around the
time of the industrial revolution, as economic capital became more and more important to
the production of goods.

Pretty sure you’ve heard of them! And possibly have strong opinions about them!
They’re capitalism and socialism. Capitalism is a system in which all natural resources
and means of production are privately owned. And it emphasizes profit-seeking and
competition as the main drivers of efficiency. If you own a business, you need to out-
perform your competitors if you're going to succeed.

So you’re incentivized to be more efficient – by improving the quality of your product,
and reducing your prices. This is what economist Adam Smith in the 1770s called the
“Invisible Hand” of the market. The idea is that if you just leave a capitalist economy
alone, consumers will regulate things themselves, by selecting goods and services that
provide the best value. But, in practice, an economy doesn’t work very well if it’s left
completely on auto-pilot.
There are lots of sectors where a hands-off approach can lead to what economists call
“market failures,” where an unregulated market ends up allocating goods and services
inefficiently. A monopoly, for example, is a kind of market failure.

When a company has no competition for customers, it can charge higher prices without
worrying about losing customers. That, as economic allocations go, is really inefficient,
at least on the consumer end. So, in situations like these, a government might step in and
force the company to break up into smaller companies to increase competition.

Market failures like this are why most countries, the United States included, are not
purely capitalist societies. For example, the US federal and state governments own and
operate a number of businesses, like schools, the Postal Service, and the military.
Governments also set minimum wages, create workplace safety laws, and provide social
support programs like unemployment benefits and food stamps. Government plays an
even larger role, however, in socialism.

In a socialist system, the means of production are under collective ownership. Socialism
rejects capitalism’s private-property and hands-off approaches. Instead, here, property is
owned by the government and allocated to all citizens, not just those with the money to
afford it. Socialism emphasizes collective goals, expecting everyone to work for the
common good, and placing a higher value on meeting everyone’s basic needs than on
individual profit.

When Karl Marx first wrote about socialism, he viewed it as a stepping stone toward
communism, a political and economic system in which all members of a society are
socially equal. But of course, in practice, this hasn’t played out in the countries that have
modelled their economies on socialism, like Cuba, North Korea, China, and the former
USSR.
Why? Well, Marx hoped that, as economic differences vanished in communist society,
the government would simply wither away and disappear. But, that never happened. If
anything, the opposite did. Rather than freeing the proletariat from inequality, the
massive power of the government in these states gave enormous wealth, power, and
privilege to political elites, retrenching inequalities along political – rather than strictly
economic – lines. At the same time, capitalist countries economically out-performed their
socialist counterparts, contributing to the unrest that eventually led to the downfall of the
USSR.

Before the fall of the Soviet Union, the average output in capitalist countries was about
$13,500 per person, which was almost three times that in the Soviet countries. But there
are downsides to capitalism, too – namely greater income inequality. A study of
European capitalist countries and socialist countries in the 1970s found that the income
ratio between the top 5% and the bottom 5% in capitalist countries was about 10:1,
whereas in socialist countries it was 5:1.

We could fill whole episodes about the merits of each economic model – and in fact, we
did in Crash Course World History. There are many more questions we could answer
about how societies build their economic systems. But in any case, those two models
aren’t the end of the story. Because: we’re living in the middle of the economic
revolution that followed the industrial revolution. Ours is the time of the information
revolution. Technology has reduced the role of human labor, and shifted it from a
manufacturing-based economy to one based on service work and the production of ideas
rather than goods.

And this has had a lot of residual effects on our economy. Computers and other
technologies are beginning to replace many jobs, by making it easier to either automate
them or send them offshore. And we’ve also seen a decline in union membership.
Nowadays, most union members work in public sector jobs, like teaching. So what do jobs
in a post-industrial society look like?

Well, agricultural jobs, which once were a massive part of the American labor force, have
fallen drastically over the last century. While 40% of the labor force was involved in the
agricultural sector in 1900, only about 2% of workers today work in farming. Similarly,
manufacturing jobs, which were the lifeblood of the US economy for much of the 20th
century, have also declined in the last 30 years. So, the US economy began with many
workers serving in either the primary or secondary economic sector, but now, much of
the US economy is centered on the tertiary sector, or the service industry.

The service industry makes up 85% of jobs in the US, including everything from
administrative assistants to nurses to teachers to lawyers to everyone who made this
Crash Course video for you. Now, that’s a really big and diverse group. That’s because
the tertiary sector – like all the economic sectors we’ve been discussing – is defined
mainly by what it produces, rather than what kinds of jobs it includes. So, sociologists
have a way of distinguishing between types of jobs, based more on the social status and
compensation that come with them:

There’s the primary labor market and the secondary labor market. The primary labor
market includes jobs that provide lots of benefits to workers, like high incomes, job
security, health insurance and retirement packages. These are white collar professions,
like doctors or accountants or engineers. Secondary labor market jobs provide fewer
benefits and include lower-skilled jobs and lower-level service sector jobs. They tend to
pay less, have more unpredictable schedules, and typically don’t offer benefits like health
insurance.

They also tend to have less job security. So what’s next for capitalism, or socialism?
Well, no one knows what the next economic revolution is going to look like. But I can
tell you that, nowadays, a key part of both our economic and political landscape is
corporations.

Corporations are defined as organizations that exist as legal entities and have liabilities
that are separate from those of their members. So, they’re their own things. And more and more
these days, corporations are operating across national boundaries, which means that the
future of the US economy – and most countries’ economies – will play out on a global
scale. Today we discussed how economies can be broken down into the primary,
secondary, and tertiary sectors.

We discussed the three stages of economic revolution that brought us to the modern post-
industrial era. And in the middle there, we talked about two types of economic models:
capitalism and socialism.

30. Politics
You're a good citizen, right? You voted in the last election, or you're looking forward to
voting in the future. You pay your taxes. You're happy to exercise the full range of your
civic responsibilities. The point is, you might already know all about how your
government works. If you don't, and you're American, well, there's a Crash Course for
that. But even if you're an informed citizen who knows every line of your constitution by
heart, that doesn't mean you know why your government works. For that, we need a
different kind of political knowledge. Civics can tell you how your system works, but
sociology can help you understand why.

So, what do we mean when we talk about politics? A civics class can define politics in
terms of the particular systems of government, but sociologists have a broader definition:
Politics is the major social institution by which society organizes decision-making and
distributes power and resources. By this definition, politics obviously includes things
like the government itself, but it also includes things outside of it, like political party
organizations and lobbying groups, and even social movements. Voting is a political
action, but so is going to a demonstration or calling your representative. Or boycotting a
company whose CEO has ideas that you find disagreeable.

Because, these are all ways of trying to influence societal decision-making and the
distribution of power. That being said, the government does have special importance
here, because it's the major formal organization that organizes and regulates politics, so
it’s responsible for making decisions for the whole of society.

And it can carry out these decisions, because it has a lot of power, which our old friend
Max Weber defined as the ability to achieve desired ends over the objections of others.
Now, Weber considered a government's power to be coercive power, or power that’s
backed by the threat of force. You might not think of your government as a threat,
but Weber actually defined a state as the organization that has a monopoly on the
legitimate use of violence.

Of course – and thankfully – not every action that a government takes requires an overt
use of force. Under normal circumstances, people respect the political systems at work in
their government, and they tend to view state power as an expression of authority, where
state leaders have the right to use legitimate power. And so, while violence for Weber is
always the ultimate last resort of the state, most of the time, it isn't necessary.

Weber also recognized that the power of a political system comes in a variety of forms.
Traditional authority is power that’s legitimized by respect for long-standing cultural
patterns and beliefs. It’s based on the same idea as the traditional mindset we talked about
in episodes 9 & 17, namely that the world has a basic order to it, and that order must be
respected. Another style of power is known as rational-legal authority, or power
legitimized by legally enacted rules and regulations.

This is the power behind the US Constitution, whose written rules determine the entire
American political and legal system. When the Constitution is changed or reinterpreted,
the rules change, as when the Supreme Court ruled in 2015 that same-sex marriage was
legal, for example. Finally, we have a kind of wildcard power: charismatic authority,
which is power legitimized by the extraordinary personal qualities of a leader. Jesus of
Nazareth leading a new religious movement, or Martin Luther King Jr. leading thousands
of people in the civil rights movement are examples of personalities that mobilized
precisely this kind of authority.

But authority that rests entirely on the qualities of one person can be unstable. So
sometimes that power becomes transferred to something outside – separate from – that
one charismatic person. This is called the routinization of charisma, and it’s where
charismatic authority is transformed into some combination of traditional and/or rational-
legal authority.

The founding of the Church after Jesus' death is a good example of this. Now, just as
there are different kinds of authority, so too are there different forms of government. For
instance, democracy – a political system that gives power to the people as a whole – tends
to be backed by rational-legal authority. This isn't terribly surprising, since, in Weber’s
model, democracy as a form of government and a rational-legal approach to authority
both emerged with rationalization and the rise of bureaucracy.

And we can see a certain affinity between democracy and rational-legal authority in the
fact that leadership in democracies is linked to office-holding. So, the power is attached
to a legally defined office, not to a particular person. By contrast, monarchy is a political
system in which power is legitimized by traditional authority and held by a single family.

This is maybe most obvious in the feudal European idea of the Divine Right of Kings, in
which the monarchs were held to be ordained by God from time immemorial. And just as
democracies are much more common in modern bureaucratic states, monarchies are more
common in traditional agrarian societies.
But a certain type of authority doesn’t always reside in a specific form of government.
Monarchy, for example, is just one type of authoritarianism, which is any system that
denies people participation in their own governance and leaves ruling to the elites.

And while monarchy relies on traditional authority, another variety of authoritarianism,
totalitarianism, does not. Totalitarianism is a centralized political system that extensively
regulates people’s lives. And it has some of the same affinities for rational-legal authority
that democracy does. Both are modern systems, for one thing, and it's also much easier to
closely control a people through a system of bureaucratic rules.

For example, a totalitarian government might enact a law that, say, every household has
to display a picture of the ruler. It’s a small bureaucratic rule with major political
implications. And democracy isn’t always associated with rational-legal authority, either.
Take the United States! The President has power because of the rules set out in the
Constitution – which is a form of rational-legal authority – but the President attains that
power by winning an election, which can often rely on charismatic authority.

We can even see traditional authority of a kind at work in the reverence with which the
Constitution and the Founding Fathers are invoked in political discourse. Now, the US as
an example can move us from what has so far been a pretty theoretical discussion of
authority and politics, to seeing how sociology can help us understand what they look like
in practice.

To understand power, authority, and politics, we need to understand the political attitudes
of a population. And to do this, we need to talk about the political spectrum, the broad
array of beliefs and ideas that make up the politics of a society. In the US, this ranges
from liberal on the left of the spectrum to conservative on the right. And again, this isn’t
just a theoretical difference of ideas; these beliefs shape the distribution of power and
resources in the US in some very fundamental ways.

On economic issues, for instance, left-leaning or liberal perspectives often favor
government intervention in the economy to help guarantee an equality of outcomes.
Equal pay for women, equitable distribution of wealth among races, and regulations that
promote workplace and product safety are all examples of economic issues that
the left is frequently concerned with. By contrast, conservative or right-leaning
perspectives may tend to take a more laissez-faire or “hands off” approach, in which
government regulation is seen as hampering the natural flow of economic activity.

So, that’s how the political spectrum can look when it comes to economic matters. On
social issues, one way of understanding the gap between left and right is in terms of the
different kinds of authority that each faction tends to support.

Here, the right tends to build its arguments on traditional authority, while the left tends to
look to rational-legal frameworks. We can see this in the issue of marriage equality, for
example: The right has often described its opposition as a defense of “traditional
marriage,” while the left has argued that marriage equality was an extension of legal, civil
rights.

Now, no matter where your political leanings fall on the spectrum, in the end they’d be
pretty meaningless without some way to give them form in the struggle for things like
power and wealth. That’s where political parties come in, as well as interest groups, like
political action committees, which organize around particular issues rather than around a
whole party platform.

And beyond the formal, institutional politics, there are also social movements that try to
mobilize masses of people to further particular political goals. Black Lives Matter and the
Tea Party are both good examples of this. But lobbying, special interest groups, and
social movements all raise difficult questions about how truly democratic the American
system is.

Why would you need to demonstrate in the streets if you’re supposed to be able to
express your political beliefs by voting? The answer lies in sociological theories of
power – that is, the different understandings of how power is distributed in a society.

One common view is known as the pluralist model, which sees power as being very
widely distributed. In this view, politics is a matter of negotiation, but everyone has at
least some voice in the process. This model was closely linked with structural
functionalist theory and dominated much of American sociology in the 1950s and early
60s.

In this line of thinking, demonstrations are seen as irrational outbursts, pointless gestures
in a political system that already distributes political power fairly. However, in the
power-elite model, political protests make perfect sense. This view sees political power
as being concentrated in the hands of small groups, especially among the very rich. If this
is the case, protests may be the only way for many people to advance their interests and
have their voices heard.

Finally, there’s the Marxist political economy model, which holds that both of the other
two models really miss the point: Here, power isn't evenly distributed, but it's also not
held by a strictly political elite. Instead, the cause of the imbalance of power is seen as
being systemic, and the powerful few are seen as the products of a particular economic
system.

So meaningful political change, in this understanding, is only possible through a change
in the underlying economic system.

So to understand politics – in the United States or
anywhere else – we need to look at all the aspects we’ve talked about – the types of
authority, the kinds of government, political beliefs, models of power, and how they all
relate to each other.

Today we learned about the sociological approach to politics. We defined politics and
power. We discussed the different types of authority and how they relate to different
political systems. And we looked at American politics in some detail, talking about
demographics and political organizations. Finally, we discussed different sociological
theories of power.

31. Sex & Sexuality


Let’s talk about sex. It’s totally OK if that makes you wanna cringe. After all, most
people will tell you that sex is private, not something that people generally talk about at
least, not in class. Besides, sex is usually thought of as a deep, primeval part of ourselves.
It's a matter of drives and instincts, of biology and psychology. And if sex and sexuality
are both primeval and private, can a social science tell us anything about them? Of course
it can.

Because no matter how natural and private you think they are, sex and sexuality are still a
part of every society. And like I’ve been saying since this course started: society gets in
everywhere. In order to talk about sex, we need to get a handle on some terms, starting
with sex. Not sex the act, but sex the category. Sex is a biological category, and it
distinguishes between females and males.

And biologically speaking, sex is determined by a pair of chromosomes: XX for
females and XY for males. These chromosomes result in two kinds of visible differences:
There are primary sex characteristics, which show up as the sex organs involved with the
reproductive processes and which develop in utero.

And then there are secondary sex characteristics, which develop at puberty and are not
directly involved in reproduction, things like pubic hair, enlarged breasts or facial hair.
Now, we tend to think of sex as a simple fixed binary: You’re either male or female. But
that's not the case. A significant portion of the population is intersex, that is “people
[who] are born with sex characteristics that do not fit typical binary notions of male or
female bodies."

This can mean a lot of different things. Like, it can mean having different combinations
of sex chromosomes – as in Klinefelter Syndrome, which results in XXY chromosomes,
or in Triple-X Syndrome, which results in XXX.

An intersex condition can also mean that the body responds differently to hormones, or
that the genitals aren’t fully developed. This wide variety of intersex conditions makes
population figures hard to pin down. If intersex is defined strictly in terms of having
atypical genitalia at birth, then 1 in every 1500-2000 births fits that description.

If defined more broadly, however – to include all of the conditions I just mentioned –
intersex conditions appear in as much as 2% of the population. And of course, different
societies respond to intersex people differently. In some societies, they’re accepted as just
a natural variation.

But Western society and medicine have long understood sex as an immutable binary, so
intersex people were not seen as an acceptable variation, but rather as a deviation in need
of correction. Some intersex conditions do require medical intervention for the sake of
the patient’s health, but many don’t.

And for years doctors performed unnecessary operations on intersex children, in order to
make them acceptable according to cultural ideas about sex. So, society plays a role in the
biological category of sex. But when it comes to gender, those distinctions are all about
society.

Gender is the set of social and psychological characteristics that a society considers
proper for its males and females. The sets of characteristics assigned to men are
masculinities, and those assigned to women are femininities. A lot of people have a hard
time understanding the difference between sex and gender, but hopefully this definition
makes it clear.

Gender is its own thing, separate from sex. Some people don't even want to accept that
gender is anything but biological, but sociology is here to tell you that it really isn't.
Instead, it's a matter of social construction.

To explore this idea some more, let's go to the Thought Bubble: Let’s start with how we
dress. A business suit is considered masculine. A skirt is feminine. And it should be
obvious and uncontroversial that this is a purely social convention: Because, for example,
you’d be pretty hard pressed to explain the objective difference between a skirt and a
kilt, except to say that wearing one is feminine, and wearing the other is masculine. And
this is also true of things that might seem to be more biologically determined.

For example, physical labor like construction has typically been understood as masculine.
And there might seem to be an underlying biological explanation for that, because on
average men do tend to be bigger and have more muscle mass than women. But even
with an average difference between the sexes, there’s a great deal of overlap too.

Plenty of women are bigger and stronger than plenty of men. And minor differences in
average size and strength can’t explain why some occupations have been stratified by
gender. The reality is that minor, average, biological differences are used as the
justification for widespread gender stratification, funneling males and females into
different jobs, hobbies, and identity constructions.

And society then points to this resulting stratification as “proof” of an underlying
difference in biological reality, even though that reality doesn’t actually exist.

So, one way of thinking about gender is that it’s a matter of self-presentation, a performance
that must be worked at constantly. What we wear, how we walk and talk, even our
personal characteristics – like aggression or empathy – are all ways of "doing" gender.
They’re ways of making claims to masculinity or femininity that people will see and,
hopefully, respect. And we can be sanctioned if we don't do gender right, or well enough.

This is precisely what's happening when a man is called a "sissy" or a woman is told she
“really ought to smile more.” This idea of gender as a performance is known as gender
expression. But gender is more than that; it's also a matter of identity. Gender identity
refers to a person's internal, deeply held sense of their gender. Nobody really, perfectly
fits the cultural ideal of masculinity or femininity. And lots of people construct their
gender differently from these conventional ideas. In particular, transgender people are
those whose gender identity doesn’t match the biological sex they were assigned at birth.
By contrast, cisgender people's gender identity matches their biological sex.

Still, both trans and cis people can express their identity in a variety of ways,
conventional or otherwise. And this should make it clear that gender, like sex, is not
binary. There are many ways of doing femininities and many ways in which a person can
be masculine. Now that we've got a basic understanding of sex and gender, we can finally
get to sexuality. Sexuality is basically a shorthand for everything related to sexual
behavior: sexual acts, desire, arousal – the entire experience that is deemed sexual. One
part of sexuality is sexual orientation, or who you're sexually attracted to, or not. Most
people identify as heterosexual, meaning they’re attracted to people of the other gender.

While this is the most common orientation, significant numbers of people are homosexual
– attracted to people of their own sex or gender. But these are really only poles on a
continuum, with plenty of people being attracted to both their own and other genders, as
in bisexual or pansexual. And some people are asexual, and don't experience
sexual attraction at all. Now, these definitions can vary from person to person, just as
they vary from society to society.
This, and the fact that social norms may make people wish to keep their orientation
private, makes estimates of the number of homosexual and bisexual people necessarily
imprecise. That said, based on the surveys we do have, around 4% of the American
population identifies as gay, lesbian, or bisexual. However, this increases to around 10%
if we ask instead whether a person has ever experienced same-sex attraction or engaged
in homosexual activity.

So, what can each of the three sociological paradigms tell us about sexuality? We’ll start
with symbolic-interactionism, because its insight is the most fundamental: And that is
that sexuality, this intensely private and supposedly primeval thing, is socially
constructed.

You might think that this is a claim too far, because sexuality is a matter of inbuilt urges.
Some things just are sexual. But if we actually start asking "what is sexual?" then the
constructed nature of sexuality gets pretty obvious pretty fast. We might think, for
instance, that oral sex is just sexual. But that’s not necessarily true in all societies.

For example, among the Sambia of the Eastern Highlands of Papua New Guinea, young
boys perform oral sex on, and ingest the semen of, older men, as part of a rite of passage
to adulthood. Oral sex is definitely happening, but it’s not clear that this should be
thought of as sexual in the way we understand it. And we might also be inclined to label
this ritual as homosexual behavior, but it’s still not quite the same thing as homosexuality
as we understand it in the US.

So physically identical acts can have radically different social and subjective meanings.
We can explain this, in part, with the concept of sexual scripts. These are cultural
prescriptions that dictate the when, where, how, and with-whom of sex, and what that sex
means when it happens. The idea that sex happens at home between two willing partners,
for example, is part of a generic sexual script in our society. Likewise, sex that happens
between two people who met at a bar might come with a different script – and therefore
different shared expectations – than sex between two people who’ve known each other for
a long time.

This brings us to the structural functionalist perspective. Since sexual reproduction is
necessary for the reproduction of society, this view says that sex has to be organized in
some way, in order for society to function.

And society organizes sexuality by using sexual scripts. Before contraception was
widespread, it was these norms that controlled how many people were born, by
determining when and how often people had sex. And by controlling who had sex with
whom, they also, generally, made sure that those kids were born into families that could
support them. This is one function of the universal incest taboo, the prohibition of sex
between close relatives.

Reproduction between family members would ultimately break down kinship relations. It
would be impossible to maintain a clear set of familial obligations if, for instance, your
brother could also be your father. But, as seen from the perspective of social conflict
theory, regulating sexuality is also a matter of creating, and reinforcing, inequalities. In
particular, our society is traditionally built around heteronormativity. This is the idea that
there are only two genders, that gender corresponds to biological sex, and that the only
natural and acceptable sexual attraction is between these two genders.

Heteronormativity makes heterosexuality seem like it’s directly linked to biological sex,
but heterosexuality is just as much a social construction as any other sexuality. It’s
defined by dominant sexual scripts, privileged by law, and normalized by social practices,
like religious teachings, so it comes to be understood as natural in a way that other
sexualities are not. Queer theory challenges this naturalness and especially shows how
gender and heterosexuality are tied together.

Heteronormativity is based on the idea of two opposite sexes that naturally fit together,
like poles of a magnet: So by this logic, men pursue, women are pursued, men are
dominant, women are submissive. But all this is socially constructed; the sexes aren't
opposites, they're just two points at either end of a spectrum, along with a whole
array of variations between them.

But the idea of opposite sexes helps make heterosexuality seem natural to us. And so you
can see how sex, gender, and sexuality are all linked, and all socially constructed. And
you can see how society gets in everywhere, even among these apparently private and
primeval things. And in turn, these things help structure society, creating and sustaining
inequalities and giving them the veneer of the natural. But sociology can help us pick
them apart.

Today we learned about what sociology can tell us about sex and sexuality. We talked
about the biological classification of sex, and how it's more complicated than we tend to
think. And we discussed the social construct of gender and a little bit about how it works.

Finally, we talked about sexuality and sexual orientations, and what the three paradigms
of sociology can tell us about them.

32. Gender Stratification


Why do some people think that drinking black coffee is manly, while ordering a pumpkin
spice latte is “girly?" Don’t let them fool you. Pumpkin spice has no gender. Pumpkin
spice is for everyone. The gendering of inanimate objects is a super-common practice,
and it’s a good example of how societies create markers of gender that have nothing to do
with anything biological. Gender, as you’ll recall, refers to the personal and social
characteristics – but not the biological traits – that we associate with different sexes.
That’s why sociologists say that gender is a social construct, something that we as a
society create and enforce.

Now, those social constructs may be totally made up, but their effects on how we interact
with each other are very real. Indeed, gender influences how we organize all of society,
and how we distribute power. Trust me: the identity-politics of your morning coffee are
only the beginning.

When I say that gender affects the organization of society and the distribution of power,
what I mean is that our society is largely stratified by gender. Gender stratification refers
to the unequal distribution of wealth, power, and privilege across genders.

Take, for example, the right to vote. Denying women the vote has been one way that
many societies have kept political power in the hands of men. It was less than a century
ago, in 1920, that women in the United States gained the right to vote. Saudi Arabia
didn’t allow women to vote until the 2015 election. This kind of disenfranchisement is an
example of patriarchy at work.

Patriarchy is a form of social organization in which men have more power and dominate
other genders. Matriarchal, or female-dominated, societies exist, too. But most societies
throughout human history have been patriarchies. And patriarchal societies are
maintained through a careful cultivation of attitudes, behaviors, and systems that favor
men and encourage society to believe that one gender is innately better than others.

Also known as sexism. For example, little girls may sometimes be encouraged to be
tomboys. But young boys are often shamed for liking toys that are considered
stereotypically feminine, or even, say, the color pink. Societies often define, and
celebrate, certain sets of characteristics as being masculine. Sociologist Raewyn Connell
describes this process as ‘hegemonic masculinity’.
Think of the type of guy who’s the lead of every action movie – tall, broad shouldered,
strong, able-bodied, heterosexual, usually wealthy… probably named Chris – that’s
hegemonic masculinity. But it goes beyond mere appearance. Hegemonic masculinities
are linked to power within society, too. Fitting into the archetype of masculinity pays off
in the form of societal approval.

But ultimately, in a patriarchal society, all men share in patriarchal dividends. This is a
fancy way of saying that there are benefits that accrue to men simply because they are
men. But before we get too deep into what those benefits are, let’s take a step back and
look at how different gender expectations are taught in our society.

As you might remember from our episode on socialization, the first people who teach us
about gender are our parents. If daughters are given dolls to play with and sons are given
toy hammers, kids learn that caring behaviors are feminine and building things is
masculine. This type of anticipatory socialization is reinforced by the societal assumption
that men are the breadwinners in families and women will take care of the home and
children.

Even as more women have become equal earners outside the home, they still tend to do
more work in the household as well. Sociologist Arlie Hochschild called this
phenomenon the ‘second shift’, in which women come home from work to more work –
cooking, laundry, childcare – whereas men are more likely to spend their time in leisure
after work. According to a survey on time use from the Bureau of Labor Statistics in
2015, full time working moms spend about 9 more hours per week on household chores
and caring for family members than full time working dads.

These gender dynamics are helped along by corporate and governmental policies that set
aside parental leave only for women. And by less formal influences, too, like
commercials or TV shows that depict fathers who can’t do the laundry or take care of
their own kids for a weekend. The media play a big part in teaching kids about gendered
ideals. Unfortunately, their depictions of what the typical woman or man looks like tend
to be a bit skewed.

Women in particular are exposed to messages that encourage them to value youth,
beauty, and thinness. These media messages – which encourage women to be desirable to
men – contribute to what Raewyn Connell has referred to as emphasized femininities.
This is the flip side of the hegemonic masculinities. Emphasized femininities are forms of
femininity that conform to what the ideal female is in men’s eyes. The social reality is
that femininities come in many different forms and may or may not be constructed in
ways that emphasize stereotypical notions of gender.

But media are only one source of gender socialization. The gender constructions that kids
see outside of the home also tend to reinforce the dynamic of women in caring roles and
men in leadership roles. Take school, for example. While three-quarters of K through 12
teachers are women, about half of school principals and only 14% of school
superintendents are women. Female principals are more likely to work in elementary
schools, a position that is less likely to lead to promotion to higher posts in the district.

And who you see at the front of the classroom isn’t the only way that schools influence
gender socialization. Let’s go to the Thought Bubble to talk about how sports ended up as
part of the landmark United States law about gender discrimination in schools: Title IX.
Passed in 1972, Title IX is a law that prohibits discrimination on the basis of sex in public
schools.

It was originally developed in response to discrimination in higher education, such as
enrollment quotas, or refusing to hire female academics with children. But the law
became most well-known for its effects on sports. Prior to 1970, most schools only had
official teams for boys – and if a girl wanted to join the team, she could be turned away
without question. As a result, only about 4% of girls played sports. By tying schools’
funding to equal opportunities for boys and girls, Title IX required that schools offer girls
just as many opportunities to play sports as boys.

This increased the number of high school girls playing sports from 295,000 in 1970 to
over 3 million nowadays. But more importantly, it also forced colleges to increase their
funding for female sports scholarships, which was one of the factors in the increase in
women pursuing higher education.

One person for whom it made a difference? Sally Ride. Thanks to Title IX, she was able to
get a tennis scholarship to college – which led to her studying physics and eventually
becoming America’s first female astronaut. Since the 1970s, the number of women
pursuing higher education has skyrocketed, with women now making up the majority of
all college graduates. But different majors attract different genders, with men being
heavily represented in fields like computer science, economics, and engineering, while
women are more likely to cluster in biology, psychology, or sociology.

Moving past education, the jobs that women work tend to be in service or care positions,
such as food service, education, health care, and administrative roles. Sometimes known
as “pink collar jobs”, these jobs with the highest concentrations of women tend to come
with both lower prestige and lower pay. You’ve probably also heard of the glass ceiling:
a term used by sociologists to describe the invisible barrier that stops women’s
advancement to the top levels of an organization.

Women are particularly underrepresented in leadership positions across all major
institutions. Of the Fortune 500 companies, only 32 CEOs are women. In politics, only
19% of the US House of Representatives and 21% of the US Senate are female. The US
has never had a female president or vice president and did not have its first female
supreme court justice until 1981.

Why does the glass ceiling persist? While the US and many other countries have laws in
place to prevent explicit discrimination on the basis of sex and gender, women are often
held back through less explicit kinds of sexism.

For example, men who are assertive in salary negotiations are more successful in getting
a higher salary, but women who do the same tend to be seen negatively. Which is a
Catch-22 for women – do you negotiate and get labelled as too aggressive, or do you settle
for lower pay?

One of the results of gender stratification is the gender wage gap. According to a survey done
in 2016 by the Pew Research Center, white women earn about 80 cents for every dollar
that white men make. This gap is wider for non-white women, with Black women earning
65 cents and Hispanic women earning 58 cents for every dollar that white men make.
Now, there’s a lot to unpack from the gender pay gap.

That 20 cent gap isn’t all due to gender discrimination. Much of it can be explained by
differences in education, choices of careers, differences in the hours worked, and
differences in experience. But those last two factors – hours worked and career
experience – are often related to the decision to leave the workforce to care for children,
which is way more normative for women than for men.

So, some people argue that, if we can explain the gender gap by looking at people’s
choices, then it must be the people alone who are responsible for the gap being there. But
the fact is, society has a tremendous influence on what choices people make, as well as
what type of person is considered the right “fit” for a given job.

Yes, the gender gap is smaller if you compare female CEOs with 30 years of work
experience to male CEOs with 30 years of work experience. But, there are fewer women
who are offered those positions. Gender socialization is also part of why women might
choose to opt out of the workforce, to care for children.

And society also informs the educational choices that women and men make that
contribute to the gap. For example, until the 1980s, the number of women who majored
in computer science was increasing at a pace similar to other fields, like medicine. But
around 1985, that rate began to drop, roughly around the time that personal computers
and video games came on the market and were marketed as gadgets for boys and men.

Gendered marketing strikes again! And patriarchal norms about masculinities can affect
men as well as women. For example, men have higher rates of suicide than women.
Studies of suicide among men have found that it’s often linked to financial troubles or
divorce, two crises of masculinity that may be related to men’s identity as a breadwinner.
Men are also more likely to be incarcerated.

They’re more likely to engage in criminal behavior, yes, but holding all else equal, men
are more likely to be tried for a crime and more likely to be found guilty. This stems from
the stereotype that women are more moral and innocent, an example of benevolent
sexism that makes women less likely to be seen as criminal types. But benevolent or not,
sexism and the patriarchy have real impacts that make it harder for all genders to be on
even footing in our society.

Today we learned about some of those impacts, starting with discussing patriarchy and
sexism and Raewyn Connell’s concept of hegemonic masculinities and emphasized
femininities. Then, we discussed gender socialization in the home, media, and schools.

Finally, we talked about how gender stratification results in different outcomes by gender
in education, occupations, earnings, and criminal activity.

33. Theories of Gender


Why is gender even a thing? We’ve talked about what gender is, and how it affects
people’s lives. But we’ve skipped over a fundamental part of the whole issue of gender:
Why does it matter to us so much? Gender isn’t the same in all cultures. Mainstream
Western ideas have focused on the idea of gender as a binary, with masculinity and
femininity serving as mutual opposites. But other cultures have three genders, or see
gender as fluid, or describe gender as a spectrum rather than a set of distinct types.

For example, many Native American and First Nation peoples recognize a third gender
that incorporates both the masculine and feminine, and it plays a specific, sacred role
within their culture. While different tribes all have different terms for this gender within
their own languages, nowadays many use the umbrella term Two Spirit.

But there are no known societies that have no concept of gender. So, why is that? Why do
we, no matter what society we live in, ascribe so much meaning to gender?

To talk about why we have gender in the first place, we need to go back to the three
theories that sociology is built on: structural functional theory, symbolic interaction
theory, and social conflict theory. All three of these theories have different perspectives
on why gender exists. Let’s start with structural functionalism. Remember, the structural
functional approach understands human behavior as part of systems that help keep
society organized and functioning.

From this perspective, gender is a means of organizing society into distinct roles that
complement each other. Some anthropologists have argued that hunter-gatherer societies
originated the idea that men are providers and women take care of the home.

Men were physically stronger and didn’t have the demands of childbearing, which made
it easier for them to take on more aggressive, autonomous roles, like hunting or warfare.
And these roles became institutionalized. Even once physical strength was no longer
important for many jobs, it was taken for granted that men would be the breadwinners
and women would care for the children.

But there are holes in this theory – namely that the early anthropologists who studied this
dynamic overemphasized the role of things like big game hunting. More recent
anthropological work suggests that gathering, fishing, and small game hunting – all of
which were also performed by women – played a much larger role in providing food
in these societies.

But the idea that we have two genders to play complementary roles has stuck around,
partially through the work of sociologist Talcott Parsons. He argued that boys and girls
are socialized to take on traits that are complementary to each other, to make it easier to
maintain stable, productive family units. Boys are taught what Parsons calls instrumental
qualities, such as confidence and competitiveness, that prepare them for the labor force.

Meanwhile, girls are taught expressive qualities, such as empathy and sensitivity, which
prepare them to care for their families. Parsons’ theory was that a successful family needs
people to have complementary skill sets, and gender gives us a way of pairing off these
skills. And society, in turn, encourages gender conformity by making people feel that
they have to fit these molds if they want to be romantically desirable, and by also
teaching people to reject those who go against these gender norms.

Though this theory was influential in the mid twentieth century, it’s fallen out of favor for
a few reasons. First, Parsons was basing his theory on a division of labor that was specific
to middle-class white America in the 1940s and 50s. It assumes a heteronormative and
Western perspective on what a family is. But not all families are nuclear units with one
man, one woman, and a gaggle of children. When you expand the definition of family to
include same-sex couples, single parents, multi-generational families, or childless adults,
it’s less obvious that you should assume that a man works outside the home and a woman
works inside the home.

Second, the idea of complementary genders rests on there being two distinct and opposite
genders. Again – a Western perspective. The idea of gender as a binary isn’t universal,
and it ignores all those whose identities don’t conform to a two-gender system. Third,
Parsons’ theory ignores the personal and social costs of maintaining rigid gender roles.
Critics argue that the idea that men need to be the ones working outside the home to
maintain family stability is arbitrary, and it reinforces gender dynamics that give men
power over women.

Now, another perspective on gender is the symbolic-interaction approach. While
structural functionalists are concerned with how gender helps all of society work well,
symbolic-interactionists are more focused on how gender is part of day to day life. From
this perspective, gender is something that a person does, rather than something that’s
either innate or imposed by institutions. Let’s go to the Thought Bubble to talk about
different ways that people ‘do gender.’ Clothes, hairstyles, and makeup all telegraph
gender to the people around you. Take these two people. You probably have a gut
reaction about the gender of these people, even though the only thing that’s
different about them is what they’re wearing. But what if the person in the suit had long
hair or was wearing makeup?

These might flip a switch in our brain to start seeing the person as a woman in a business
suit. But having short hair and no makeup while wearing a dress doesn’t necessarily flip
the same switch. This pattern is an example of gender roles, or how a society defines how
women and men should think and behave.

A man wearing a skirt is seen as more of a rejection of traditional gender roles than a
woman wearing pants is. Body language and how people interact with each other are also
part of how people do gender. Women are socialized to be deferential in conversation,
meaning that they’re more likely to make eye contact to show that they’re listening, or to
smile as a way of encouraging their speaking partner.

Crossing your legs is called ‘ladylike’ whereas if you sit on the subway with your legs
spread out, you might get glared at for ‘manspreading.’ These exercises in ‘doing
gender’ are good examples of how our society’s definitions of masculinity and femininity
are inextricably linked to each gender’s power in society. Masculine traits are associated
with power – taking up more space, directing the conversation – and are often valued
more than feminine traits. In other words, everyday social interaction reflects and helps
reinforce gender stratification.

But a limitation of the symbolic interaction approach is that it focuses on the micro,
rather than the macro. Because of its focus on situational experiences, it misses the
broader patterns of gender inequality. For that, we need social conflict theory. You might
remember gender conflict theory from our episode about Harriet Martineau.

But in case you’ve forgotten, gender conflict theory argues that gender is a structural
system that distributes power and privilege to some and disadvantage to others.
Specifically, that structural system is the patriarchy, a form of social organization in
which men have more power and dominate other genders.

We can see examples of this structure in institutional practices that disadvantage women,
like restricting higher education to men or refusing to allow women to vote. But we also
see this in less official ways. Think about the traits that our society values – rationality is
often praised as a desirable way of thinking, especially in leaders, while irrationality
means letting emotion affect decisions and is seen as a weakness.

Women are stereotyped as more emotional and men as more rational, which makes
people falsely see men as more natural fits for leadership positions. The way that
patriarchy privileges certain people over others also isn’t as simple as saying that all men
are at the top of the power distribution. This is why there’s more attention paid
in sociology to intersectionality, or the analysis of the interplay of race, class, gender,
sexual orientation and other identities, which often results in multiple dimensions of
disadvantage.

While all women are disadvantaged by gender, it’s also true that some women experience
more disadvantage than others. And the converse is true for men – all men benefit from
living in a society that privileges masculinity, but some men benefit more than others. To
see this in action, let’s go back to the stats on the gender wage gap we talked about in the
last episode. White women make 80 cents for every dollar a white man makes. Black
women make 65 cents for every dollar a white man makes.

If we divide those two numbers, we get the wage gap between white women and black
women: a black woman makes 81 cents for every dollar that a white woman makes. But
what about black men? Well, they make 73 cents for every dollar that white men make.
So black women do worse economically than black men, who do worse than white
women, who do worse than white men.
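As a quick sanity check on that division, here’s a minimal sketch using only the cents-per-dollar figures quoted above (the dictionary and variable names are just for illustration):

```python
# Earnings in cents per dollar earned by white men, from the Pew Research
# Center 2016 figures quoted in the last episode.
cents_vs_white_men = {
    "white women": 80,
    "black women": 65,
    "black men": 73,
}

# Dividing two ratios re-bases the comparison: black women's earnings
# relative to white women's, rather than relative to white men's.
ratio = cents_vs_white_men["black women"] / cents_vs_white_men["white women"]
print(ratio)  # 0.8125 -> about 81 cents for every dollar a white woman makes
```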

Just looking at gender or just looking at race misses the way that disadvantages can stack
on top of one another. Our understanding of social conflict theory also would not be
complete without discussing a movement closely entwined with gender conflict theory:
feminism. Feminism is the support of social equality for all genders, in opposition to
patriarchy and sexism. Broadly speaking, feminism advocates the elimination of gender
stratification, expanding the choices that women, men and other genders are allowed to
make, ending gender-based violence, and promoting sexual freedom.

There are many forms that feminism can take, but let’s highlight three major schools of
thought within feminist theory. The first is liberal feminism – and no, I don’t mean liberal
in the political sense. I mean liberal in the classical sense, rooted in the ideals of freedom
of choice and equal opportunity.

Liberal feminists seek to expand the rights and opportunities of women by removing
cultural and legal barriers to women’s equality, like implementing policies that prevent
discrimination in the workforce or improve reproductive freedom. This contrasts with
socialist feminism, which views capitalism as the foundation of the patriarchy and
advocates for full economic equality in the socialist tradition. Socialist feminists tend to
believe that the liberal feminist reforms don’t go far enough since they maintain most of
the existing institutions of power.

The third feminist school of thought is known as radical feminism, which believes that to
reach gender equality, society must actually eliminate gender as we know it. Radical
feminism has clashed heavily with other subsets of feminism, particularly on transgender
individuals’ rights. Many radical feminists refuse to acknowledge the gender identities of
trans women and have accused the transgender movement of perpetuating patriarchal
gender norms.

And these three ways of thinking about feminism are only a few of the many views on
how to best advocate for gender equality. Kinda like how there are many theories within
sociology about how we should think about gender.

Today, we learned about three of those schools of thought on gender theory. Structural
functionalism sees gender as a way of organizing society and emphasizes the ways that
men and women can act as complements to each other. Symbolic interactionism looks at
gender on the micro level, exploring how gender guides day to day life.

And gender conflict theory, intersectional theory, and the theories of feminism focus on
the ways that gender distributes power within society.

39. Religion

Religion might not seem like something a sociologist can study. After all, religion is
about personal beliefs, right? So, sociology won’t give you any answers about the
existence of God, or how many angels can dance on the head of a pin. But sociology can
help you think about religion as a social institution. In the same way that we might study
the family or the government, we can ask questions about religion’s role in society. Like,
how do different religions influence social norms in a society?

What’s the function of religion in a society? Does it improve social cohesiveness or
entrench inequalities? Before we try to answer those big questions, let’s start with a
simpler one: What is religion?

To understand how sociologists think about religion, we need to go back to the work of
our old friend, French sociologist Emile Durkheim. Durkheim defined religion not in
terms of gods or supernatural phenomena, but in terms of the sacred – things that are set
apart from society as extraordinary, inspiring awe, and deserving of reverence. He
claimed that in all societies, there’s a difference between the sacred and the profane, or
the mundane, everyday parts of life. Religion, then, is a social institution that involves a
unified system of beliefs and practices that recognizes the sacred. But this isn’t a set-up
between good and evil. Sacred doesn’t mean good and profane doesn’t mean bad.
Instead, recognizing something as sacred is about seeing a certain place, object, or
experience as special and creating markers that separate it from your day to day life.

It’s natural, then, to think about religion from the perspective of Symbolic-Interactionism,
which thinks about society in terms of the symbols that humans construct. And all
religions rely on the use of symbols to create the Sacred. Rituals, for example, are a form
of symbolic practice that highlight faith. Many religions use certain actions during prayer
that symbolize deference to God, such as Catholics making the sign of the cross before
prayer, or Muslims prostrating themselves while facing Mecca, the birthplace of the
prophet Mohammed.

Many religions also practice ritual ablution, or washing certain parts of the body during a
religious ceremony. For example, in the religious practice of baptism, water is a symbol
of people’s belief that faith cleanses the soul. Objects can also take on Sacred meaning.
Symbols like the Cross or the Star of David are considered totems, objects that we have
collectively defined as Sacred.

Types of dress or grooming practices, such as men’s beards in Islam or Orthodox
Judaism, also become sacred indicators of faith because they’re visible symbols of
religious belief. In this way, Totems confer in-group membership to those who wear or
use these symbols, because they provide a way for people to demonstrate
their faith and recognize that faith in others.

But the role of religion in society goes beyond influencing our symbolic practices. In
addition to defining the Sacred and the profane, Emile Durkheim also looked at religion
through the lens of structural functionalism. And he identified three major functions of
religion that contribute to the operation of society.

First, religion helps establish social cohesion, by uniting people around shared symbols,
norms, and values. Durkheim argued that religious thought promotes norms like morality,
fairness, charity, and justice. Churches act as gathering places, forming the backbone of
social life for many people. In fact, membership in a church is the most common
community association for Americans. Second, Durkheim said, societies use religion as a
form of social control. People behave well, not only out of fear of their friends and
families disapproving, but also out of a desire to remain in their god’s good graces.
Christianity and Judaism, for example, have the Ten Commandments, a set of rules for
behavior that they believe were sent directly from God. But these commandments aren’t
just rules about how to worship – many of them match up with societal norms, like
respecting your parents or not committing adultery, or with secular laws, which prohibit
murder and theft. Third, in a functionalist perspective, religion provides people with a
sense of purpose in life. Sometimes it can feel like our lives are such tiny blips in the
grand scheme of the universe, it can be hard to imagine why your actions matter.

Religion gives people a reason to see their lives as meaningful, by framing them within
the greater purpose of their god’s grand plan. But while Durkheim’s framing
demonstrates the many ways religions promote social unity, religion can, of course, also
be a force of division. Social Conflict Theory perspectives understand religion in terms of
how it entrenches existing inequalities. Karl Marx saw religion as an agent of social
stratification, which served those in power by legitimizing the status quo and framing
existing inequality as part of a divine plan.

Rulers in many societies were believed to hold their power by divine right.
Chinese emperors were believed to have a mandate from heaven, and were given the title
Son of Heaven to indicate their divine authority to rule. In Europe, heads of state were
often also the head of the Church – in fact, to this day, the British monarchs are the
formal heads of the Church of England. And some Christian religions, such as Calvinism,
espouse predestination, or the belief that God pre-ordains everything that comes to pass,
including whether you get into heaven.

So, by this logic, having wealth and power was seen as an indication of God’s favor. And
for these reasons, Marx saw religion as a huge barrier to revolutionary change, referring
to it as the ‘opiate of the masses.’ After all, it’s hard to convince people to rise up against
the elites if they believe that the elites have the power of God behind them! In addition to
entrenching political and economic inequalities, Conflict Theory perspectives also
explore how religion contributes to gender and racial inequalities. Let’s go to the
Thought Bubble to look at how feminist theory and race conflict theory help us
understand religion’s effects on these kinds of inequality. If you walk around any major
museum in the Western world, you’re pretty much guaranteed to find some art depicting
religious figures from Judaism or Christianity.

And in these paintings, God is pretty much exclusively an old white man with a beard.
And in fact, divine figures and their prophets in most religions are male. Virtually all of
the world’s major religions are patriarchal, with religious texts often explicitly describing
men in the image of God and women in subordinate roles to men. For example, in
Christianity, the first man, Adam, was created in God’s image whereas the first woman,
Eve, was created from Adam’s rib to serve and obey Adam. Many religions also position
women as immoral beings in need of male constraint. In the Bible, Eve committed the
original sin by tempting Adam to eat the forbidden fruit and got both of them booted
from paradise. Many religions also ban women from positions in the clergy, including
Catholicism, Orthodox Judaism, and Islam.

Religion has also been used as a way to control women’s behaviors, requiring them to
dress modestly or not allowing women to speak in church or be alone with men outside
their family. Religion has also been used to uphold another type of social inequality:
racial inequality. Slavery in the United States, for example, was framed as morally
justifiable based on various texts from the Bible, most prominently the story of Cain and
Abel, in which God ‘marked’ Cain for murdering his brother, which was interpreted to
mean marking him as sinful with darker skin.

But that’s not to say that religion is always on the side of oppression. Quakers, a sect of
Christianity, were leaders in the abolition movement and in the women’s suffrage
movement of the 19th century. The Civil Rights Movement of the 1960s was led by many
with ties to the Black religious community. Most notably the Southern Christian
Leadership Conference, a civil rights organization headed up by a Baptist minister that
you might have heard of named Martin Luther King, Jr. So, we’ve talked a lot so far
about religion in theory – but how does religion work in a practical sense?

Understanding how different religions are organized and how they integrate with the rest
of society helps us understand which of these theories make sense in different religious
contexts. We only have to look at the US to see why understanding the practical
importance of religion might be of interest to sociologists: In the United States, more than
70% of American adults claim that religion is important in their lives, which is more than
double the rate of adults in other high income countries like Norway or Japan.

National surveys show that about 50% of Americans identify as Protestants, 20% identify
as Catholics, 6% identify with a non-Christian faith, and 23% do not identify with a
religion at all. Within the Protestant faith, there are a large number of denominations, or
subgroups of religious practice, including both mainstream denominations, such as
Presbyterians and Lutherans, and Evangelical churches, such as Methodists and Baptists.

Evangelical denominations are characterized by more active attempts to proselytize, or
spread the faith to others outside the faith. But who identifies as what religion depends a
lot on who you are – in terms of where you live, in terms of class, and in terms of race
and ethnicity. More well-established religious faiths that are well-integrated into society
are what sociologists call Churches.

Most major religions are what we would call a Church – for example, Christianity, Islam,
Judaism, Hinduism, and Buddhism are all ‘Churches’. Religious sects, meanwhile, are
faiths with belief systems that are less formal and less integrated into society. And they
tend to attract followers who are more disadvantaged.

Some examples include Jehovah’s Witnesses, Pentecostals, or Unitarians. Not only does
class matter when it comes to your religion – where you live might, too. Catholicism is
more common in Northeastern and Southwestern states, whereas the South has high
concentrations of Evangelical Protestants, such as Baptists, and the Midwest has higher
concentrations of other Protestant faiths, such as Methodists and Lutherans. Many of
these regional differences stem from which racial ethnic groups settled in these regions.

The Midwest, for example, had high numbers of German and Scandinavian immigrants
settle there, and these ethnic groups are often Lutherans. Irish and Italian Americans, who
were more likely to be Catholic, settled in New England and the Mid-Atlantic. Black
Americans – who are heavily concentrated in Southern states – are somewhat more likely
to be religious than the US population as a whole, with 87% claiming an affiliation with
some faith.

And the vast majority of Black Americans identify with a Protestant faith, with
evangelical churches being the most common affiliation. There’s also a growing number
of Black Americans who identify as Muslim, with about 40% of all native-born Muslims
in the US identifying themselves as African American.

Overall, however, the importance of religion in the United States has been on the decline
in recent decades – a process known as secularization. Younger Americans are much
more likely now to report that they do not believe in any religion compared to past
generations. Nonetheless, the influence of religion on society isn’t going anywhere
anytime soon. As we learned today – no matter which school of sociological thought you
subscribe to – religion has ties to the very rules and norms that shape what our society
and culture look like.

Today, we looked at how symbolic interactionism helps us understand religion’s
dichotomy of the Sacred vs. the Profane. Then, we compared the perspectives of
Structural Functionalists and Social Conflict Theorists on whether religion improves
social cohesiveness or increases social stratification.

And we ended with a discussion of how religious practice in the US differs across race
and class lines.
