
An EBook Presentation

Student as Researcher
Brief Research Guide for Beginners

2009

Author: Robert S. Feldman.


Compiled from CD-ROM by Sujen M. Maharjan.

FOR PSYCHOLOGY NETWORK


Contents
Introduction

Step 1: Think Like a Scientist
Step 2: Choose a Research Question
Step 3: Generate a Research Hypothesis
Step 4: Form Operational Definitions
Step 5: Choose a Research Design
Step 6: Evaluate the Ethics
Step 7: Collect Data
Step 8: Analyze Data and Form Conclusions
Step 9: Report Research Results
Step 10: Take the Next Step
Glossary
References
Introduction

Do you ever "people-watch"? Do you ever wonder why people behave the way they do? Do you ever come up
with explanations for people's behavior?

If you answered "yes" to these questions, you have already taken the initial steps for conducting research in
psychology. That you are enrolled in an introductory psychology course suggests you are interested in
learning more about why people think and behave the way they do.
But simply people-watching and coming up with our own explanations for people's behavior isn't research; it's
not scientific. In this "Student as Researcher" component of your psychology course, you will learn the basic
steps for conducting psychological research. As you learn about research, you will also learn how to evaluate
psychological research.

Reports of psychological research are all around us: we learn about research findings in the news, in our
magazines, and on the Internet. Throughout this Student as Researcher guide, you'll find many websites for
more information. For example, for psychology news, try these websites:

http://www.apa.org/monitor

http://www.psychologicalscience.org/observer/

http://psych.hanover.edu/APS/exponnet.html

The public seems hungry to know more about psychology; after all, psychology is about us. Much of what we
learn from the media has direct relevance for our own lives. For example, we may learn about the effects of
divorce on children (http://mentalhelp.net), how to communicate with elderly relatives who have Alzheimer's
(http://www.alzheimers.org), how to forgive (http://www.forgiving.org), and so on. But how do we
separate the good research from the bad research? How do we know whether to accept what we read or hear
as true? What if scientists seem to disagree about research findings?

How we answer these questions depends on understanding the basic steps of the research process.
Psychological research, like all scientific research, uses the scientific method. Therefore, as we start these
Student as Researcher exercises, we will first answer the question, what is the scientific method?

Research Example: Adjusting to College

As we work through the steps of the research process, we will illustrate the primary steps and decisions using
a research example. We think you will find this example interesting; it's about how students adjust to college,
particularly the emotional experiences associated with the transition to college. As you read about this
research you will find tips for improving your grades and health.

We will report the research method and findings used by Dr. James Pennebaker and Martha Francis in their
study called, "Cognitive, Emotional, and Language Processes in Disclosure." The research report was
published in 1996 in a psychology journal called Cognition and Emotion. For more information about Dr.
Pennebaker, check out his website at
http://homepage.psy.utexas.edu/homepage/faculty/pennebaker/pennebaker.html
For information about adjusting to college, check out these websites:

http://www.studentadvantage.lycos.com

http://www.aboutcollege.com/front.htm

In addition, check your own college or university's website for resources on adjusting to college.

Evaluating Research

This component of Student as Researcher asks you to think about research findings. During each step of the
research process, we must evaluate the evidence and decide whether to accept the conclusions that are
presented. This is true both when we review previous research findings and when we generate our own. This
"Evaluating Research" guide will give you questions to ask about research findings.

Step 1: Think Like a Scientist


The first step of the research process is to begin to think like a scientist. Psychology is a scientific discipline.
Because psychologists use the scientific method, they are similar to scientists in disciplines such as
anthropology, biology, chemistry, and sociology. What differs among these scientific disciplines is the content
of researchers' investigations. Psychologists study the mind and behavior, whereas anthropologists study
evolution and culture, biologists study cells, chemists study molecules, and sociologists study societies.

What is the scientific method?

What's so special about the scientific method?

Why should I conduct psychological research?


Does my research have to meet all four goals?

What's the difference between basic and applied research?

What is the scientific method?

< A way to gain knowledge

The scientific method is a way of gaining knowledge. All people strive to gain knowledge, and there are many
different ways to gain knowledge. These ways of gaining knowledge are seen in the different academic
departments of universities, for example, philosophy, literature, arts, mathematics, and the sciences. As you
continue your course work and complete your degree, you will see that each of these disciplines has a unique
method for gaining knowledge about the world and the human condition.

What's so special about the scientific method?

< Empirical approach, control, attitude

When scientists talk about the scientific method, they don't refer to a particular technique or piece of
equipment. Instead, the scientific method is a way of thinking and making decisions. Of course, we all think
and make decisions every day. What makes the scientific method so special? To answer this question, we
can compare the scientific method to our 'everyday' ways of thinking and making decisions.

Each day, you make judgments and decisions using your intuition: what 'feels right' or what 'seems
reasonable.' Usually, this works fine for us but not for scientists. The scientific method uses an empirical
approach for making judgments and decisions. An empirical approach emphasizes direct observation and
experimentation.

Observation and experimentation in science is systematic and controlled. In fact, control is the essential
ingredient of science. By having control over their observations, scientists isolate and study various factors
one at a time. This is why most scientific research, including psychological research, is done in laboratories.
Research labs provide an opportunity for psychologists to control and isolate the factors they think have
important effects on behavior and mental processes. In everyday life, our behavior is influenced by many
different factors, all operating together. Scientists try to 'tease apart' all these factors and study them one at a
time. This is the essence of controlled observation and experimentation.

In order to think like a scientist, you have to have 'an attitude.' In our everyday life, we often accept
explanations and claims uncritically. In fact, we may become excited to read about new weight loss pills or
tape recordings that we play under our pillow at night to get better grades or earn more money. The scientific
attitude, however, is one of caution and skepticism.

Scientists recognize that there are no easy explanations or 'quick fixes.' Humans are complex — many factors
interact to influence behavior and mental processes. So, scientists are skeptical when they hear a claim. We
should withhold judgment until we can evaluate the evidence for the claim. And importantly, we should know
what kind of evidence is offered to support the claim. The best evidence comes from an empirical
approach: systematic and controlled observation and experimentation.

Several websites provide information about scientific skepticism. Try these:

http://psg.com/~ted/bcskeptics/ratenq?Re3.3-Attitude.html

http://www.psychology.org/links/Resources/Pseudoscience/

http://www.apa.org/pubinfo

Why should I conduct psychological research?

< Four research goals

We use the scientific method to meet four research goals:

Description, Prediction, Understanding, Create Change

1. Description: The first step of any scientific investigation is to describe fully the phenomenon we're interested
in. Therefore, the goal of descriptive research is to define, classify, or categorize events and their relationships
in order to describe mental processes and behavior.
Example: Psychologists who are interested in depression might describe symptoms of helplessness, such as
failure to initiate activities and pessimism regarding the future.

2. Prediction: Once events and behaviors have been described, we may seek to predict when these events
occur. Researchers identify correlations (relationships) among factors to predict mental processes and
behaviors.

Example: Research that meets the prediction goal may examine the relationship between symptoms of
depression and helplessness. One relationship might be: As level of depression increases, individuals exhibit
symptoms of helplessness. Another factor, or variable, that may be related to helplessness concerns
individuals' feelings that they are unable to do things successfully. The relationship might be as follows: The
more people believe they can't do things successfully, the more likely they are to feel helpless.

3. Understanding: We understand an event or behavior when we can identify its cause(s). Understanding
involves more than description and prediction. Just because we observe a relationship between two variables,
we can't say one causes the other. More formally, this is stated as "correlation does not imply causation."
Researchers conduct experiments to identify causes of a phenomenon. Experiments are characterized by the
high degree of control required of the scientific method. Thus, by conducting controlled experiments
psychologists infer what causes a phenomenon.

Example: We might test the relationship between unsuccessful performance and helplessness. We could ask
some participants in an experiment to do problems that are unsolvable and ask other participants to do
solvable problems. We might then ask participants to estimate their success on future problems, and whether
they want to do additional problems. Suppose that participants who worked on unsolvable problems were
more pessimistic about their future performance and less willing to do additional tasks than participants who
completed solvable problems. Based on this experiment, we could infer that unsuccessful performance on a
task causes individuals to become helpless (i.e., pessimistic and less willing to initiate new tasks).

4. Create change: Psychologists work in a world in which people suffer from mental disorders, in which people
are victims of violence and aggression, and in which stereotypes and prejudices affect how people live and
function (to name but a few problems we face in society). Therefore, an important goal of psychology research
is to apply knowledge and research methods to change people's lives for the better.

Example: Clinical psychologists who treat depressed people could encourage them to attempt tasks that can
be mastered or easily achieved; research evidence suggests this may decrease their helplessness and
pessimism.

Does my research have to meet all four goals?

< No; research is cumulative.


It's very difficult for a single research project to meet all four goals of psychological research. Instead,
researchers may focus on a single goal, or one or two goals. For example, when researchers investigate the
causes of behavior, they may also be able to describe a particular aspect of behavior and identify a predictive
relationship. Psychological research is cumulative. Our ability to describe, predict, understand, and create
change depends on the many research projects conducted by psychologists all over the world.

What's the difference between basic and applied research?

< The lab vs. the "real world," understanding vs. creating change

As you read about psychologists isolating and controlling behavior in research labs, you may have thought,
"But this isn't what real life is like." And that's true; our everyday life is complicated and "messy." In order for
psychologists to isolate specific factors that influence behavior, however, they do their research in a lab. This
is called basic research. The goal of basic research is to test theories about behavior and mental processes,
and "basic researchers" often focus on the goal of understanding.

Applied research, in contrast, is more directly concerned with the goal of creating change. Psychologists who
do research in "the real world" apply the findings from basic research to improve people's lives. And, in turn,
basic researchers use the findings from applied research to refine their theories about behavior.

Together, findings from basic research and applied research help psychologists to describe people's behavior
and mental processes, make predictions about behavior, understand the causes of behavior, and create
positive change in people's lives.

Step 1: Think Like a Scientist

What factors distinguish students who adjust well to college?

What factors predict whether students will do well?

Why do some students do better than others?

These are important questions. Our "everyday" thinking relies on intuition to answer them. Take a moment to
list a few factors you think may influence students' adjustment to college.

You may have thought of factors such as:

• Having supportive friends and parents

• The difficulty of the classes

• Distance from home

• Whether the college environment is fun and active

• Many other factors

Our intuition tells us that these factors influence college adjustment, but to be scientific, we need to do more.

When we think like a scientist, we set aside our more everyday approach to decision making and gaining
knowledge. A scientific approach to these questions involves isolating important factors, such as support from
friends, and observing the effects of this factor by itself.

Scientists are skeptical. They look at lists of factors that intuition suggests may be important, but they adopt a
wait-and-see attitude. What does the research evidence say about these factors? Is the evidence good? Is it
based on well-controlled observation and experimentation? As you begin to think like a scientist, you won't be
satisfied with people's opinions about psychological topics. You'll want to know what the research evidence
says.

Why should we do research on college students' adjustment?

This question addresses the goals of psychological research.

Description: A first goal in conducting research in this area is to describe the characteristics of students who
adjust well to college and the characteristics of students who don't adjust well.
Prediction: We can also conduct research to predict which students will adjust well and those who won't adjust
well.

Understanding: An important research goal concerns why some students adjust better to college than others.
We can try to identify what causes some students to adjust better than others.

Creating Change: When we can describe, predict, and understand college students' adjustment, we're in a
position to create change. We can intervene to improve students' adjustment to college.

Step 1: Think Like a Scientist

Here are 10 questions that will help you to be skeptical about research findings. Ask yourself these questions
when you hear or read about psychological research:

• What is the source? For example, are the findings in a scientific publication, in the popular media (e.g.,
television, magazines, Internet), or presented as part of cultural traditions or stereotypes? (Scientific
publications are the best source for research findings about mental processes and behavior.)

• Is the evidence based on testimonials (personal accounts) or research involving large samples?
(Testimonials are not scientific.)

• Are the results coincidental? Could they be due to unusual, chance events?

• Do the researchers encourage more controlled investigations? (Beware of those who dismiss the need for
more research.)

• Are the findings based on more than one study? Is the research evidence accumulating for the phenomenon?

• Are there conflicting findings, or are conflicting findings ignored?

• Is the research controlled or scientific? (Beware of those who say that the findings disappear when
controlled studies are conducted.)

• Have the findings been verified with independent observers?

• Do explanations for findings appeal to forces outside the realm of science?


• Are causal explanations for a phenomenon offered, even when controlled research hasn't been conducted?

Step 2: Choose a Research Question

You probably have lots of questions about people's behavior and mental processes. As you learn more about
psychology, you may learn answers to your questions based on research that already has been conducted.
And, you may begin to ask new questions. Researchers in psychology are no different; they have many
questions. Often, the hardest step in the research process is choosing which question to answer!
The purpose of this section in Student as Researcher is to consider important resources available to us as we
choose our research questions: personal experiences, psychological research literature, and online
resources.

How can my personal experiences in psychology help me with research questions?

How can past research help me to choose a research question?

How do I search the psychological literature for information on my topic?

What online resources are available for learning about psychological topics?

How can my personal experiences in psychology help me with research questions?

< Textbooks, participate in research projects, research teams

By now you've probably gained some exposure to the diverse topics covered by psychologists. A quick glance
through your Introductory Psychology textbook will reveal that researchers study topics in clinical, social,
neuropsychological, cognitive, health, developmental, and many other areas of psychology. To learn more
about the different areas of psychology, go to the websites of the American Psychological Association (APA)
and the American Psychological Society (APS):

APA: http://www.apa.org/about/division.html

APS: http://www.psychologicalscience.org/

Another way to learn about psychological research is to participate in research projects. You may be able to
participate in projects through your Psychology Department. Another fun option is to participate in research
online. There are a wide variety of opportunities; all you need to do is click on the projects that seem
interesting to you. Try these websites for participating in research:
http://psych.hanover.edu/APS/exponnet.html

http://www.socialpsychology.org/

http://www.psych.upenn.edu/~baron/qs.html

http://psychexps.olemiss.edu/
Perhaps the best way to learn about research in psychology is to become involved in conducting research.
Many professors conduct research and are eager to involve students on research teams. You may need only
to ask. You may also learn more about how psychologists conduct research in research methods and lab
courses offered by your Psychology Department.

How can past research help me to choose a research question?

< Psychological literature: inconsistencies, suggestions

No matter how or where you begin to develop a research question, you will need to read about psychological
research that has already been conducted. In fact, reading psychological literature (e.g., books, research
articles) can provide many ideas for research.

As you read reports of psychological studies on a topic, you may note inconsistencies, contradictions, and
limitations of the research. New questions arise from the findings of research studies. Often, researchers
suggest ideas for future research at the conclusion of their research report. Thus, reading the psychological
research literature is a very important step of the research process.

How do I search the psychological literature for information on my topic?

< PsycLit, PsycInfo

Many resources are available to you to help you search the psychological literature. The first step is to learn
what's available. The American Psychological Association publishes abstracts from more than 1,000 national
and international periodicals in Psychological Abstracts.

Fortunately, we can use a computer to access these abstracts. PsycLIT is the CD-ROM version of
Psychological Abstracts, and PsycINFO is the online version of Psychological Abstracts. Other online
resources for conducting searches of psychological literature are FirstSearch and InfoTrac 2000. Check your
library for these resources, and ask your librarian for help. Librarians specialize in searching for information
electronically, so be sure to ask them for help.
Electronic databases allow users to search for information using keywords and key phrases, subject words,
authors' names, titles of articles or books, and year. The most effective approach is to have intersecting
keywords; that is, both words must be present in the title or abstract before the computer will "flag" an article.
This will allow you to find the research articles that are most relevant to your research topic.

An example might illustrate how to search for research articles. In Step 1: Think Like a Scientist, we used the
example of the relationship between depression and helplessness to describe the goals of psychological
research. If we use depression in a keyword search, the computerized search will identify thousands of
research articles that have examined some aspect of depression. However, if we use the terms depression
and helplessness, a more reasonable number of articles will be located. By entering the keywords depression,
helplessness, and problem solving, your search will be even more restricted.
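
To see how intersecting keywords narrow a search, here is a minimal sketch in Python. The tiny "database" of article abstracts is invented for illustration; a real PsycINFO search applies the same AND logic to millions of indexed records.

# A minimal illustration of an AND (intersecting) keyword search.
# The "database" is a tiny invented set of article abstracts.
database = {
    "Article A": "depression and eye adjustment in college students",
    "Article B": "helplessness and depression after unsolvable problems",
    "Article C": "problem solving, helplessness, and depression in adults",
    "Article D": "emotion regulation in college student samples",
}

def search(keywords):
    """Return titles whose abstract contains every keyword."""
    return [title for title, abstract in database.items()
            if all(word in abstract for word in keywords)]

print(search(["depression"]))                                     # broadest: 3 hits
print(search(["depression", "helplessness"]))                     # narrower: 2 hits
print(search(["depression", "helplessness", "problem solving"]))  # narrowest: 1 hit

Each added keyword can only shrink, never grow, the set of matching articles, which is why combining terms is the quickest way to move from thousands of hits to a manageable list.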

It's a good idea to search with several different keywords to make sure you catch all the articles that are
related to your topic. PsycINFO will provide the abstract of the research article and all the information you
need to locate the research article in your library. Other resources available at your library will allow you to
print out the full text of research articles. Finally, as you read each article, you will find the author refers to
additional research. The References section at the end of the article will provide the information you need to
locate the articles cited by the author.

You can learn more about using PsycINFO by going to the following APA website:

http://www.apa.org/psycinfo/about

Be sure to click on demo to try a PsycINFO search, or go directly to the demo at:

http://www.psycinfo.com/demo

PsycINFO is available for a subscription fee, but the demonstration is free. Your library probably subscribes to
the PsycINFO service, so you should do your literature searches there.

What online resources are available for learning more about psychological topics?

< Psychology-related websites

There are many good resources online for learning about psychology. Presented below is a small list of
websites you can check out. Many of these sites have links to other sites.

General information about psychology (through APA site, with search engine):

http://www.apa.org/psychnet/
National Institute of Child Health and Human Development:

http://www.nichd.nih.gov/

American Academy of Child and Adolescent Psychiatry: http://www.aacap.org/info_families/index.htm

Mental health risk factors for adolescents:

http://education.indiana.edu/cas/adol/mental.html
Youth development:

http://www.cyfernet.mes.umn.edu/youthdev.html

Biological changes in adolescence:

http://www.personal.psu.edu/faculty/n/x/nxd10/adolesce.htm

Federation of Behavioral, Psychological, and Cognitive Science:

http://www.am.org/federation/

Neuropsychology, genes, science:

http://serendip.brynmawr.edu/serendip

Ask Noah About Mental Health:

http://www.noah-health.org/index.html

Information about psychological disorders:

http://www.mhsource.com/disorders

Suicide awareness:

http://www.save.org/

American Family Foundation website about cults:

http://www.csj.org/index.html
National Clearinghouse for Alcohol and Drug Information:

http://www.health.org/

Resources for Clinical Psychology:

http://www.rider.edu/users/suler/tcp.html

Information about mental health (Knowledge Exchange Network):


http://www.mentalhealth.org/

Comprehensive guide to mental health online:

http://www.mentalhealth.net/

Stress management:

http://www.psychwww.com/mtsite/

Using the Internet for therapy:

http://netpsych.com/index.htm

National Institute of Mental Health (NIMH):

http://www.nimh.nih.gov/

Websites for alcohol and drug addictions:

http://www.arg.org/

http://www.niaaa.nih.gov/

http://www.well.com/user/woa/

http://www.support-group.com/

Emotional intelligence:

http://www.cwrl.utexas.edu/~bump/Hu305/3/3/3/
The Personality Project:

http://personality-project.org/personality.html

Behavioral and cognitive development research:

http://www.mpipf-muenchen.mpg.de/BCD/bcd_e.htm

Explanations for criminal behavior:


http://www.uaa.alaska.edu/just/just110/crime2.html

Psychology and law:

http://www.unl.edu/ap-ls/

Sigmund Freud and the Freud Archives:

http://plaza.interport.net/nypsan/freudarc.html

Step 2: Choose a Research Question

Dr. Pennebaker was most interested in students' emotional adjustment to college. He wondered whether
students' emotional adjustment influences their grades and health.

Pennebaker has conducted many research studies on a diverse range of psychological topics, such as
religious conversion, lie detection, traumatic experiences, psychosomatic problems, and reasons why
therapy seems to work. As he put together his findings from these topics, a theme emerged. Pennebaker
noticed that people who experience traumas often have later health problems. He also noted that people often
felt better after they revealed secrets (such as emotional traumas) in confessions, lie detection, and therapy.

Based on his personal experiences conducting research and reading the psychological literature, Pennebaker
developed the following research question:

If students express their emotional feelings about adjusting to college, will they be healthier and do better in
school?

To learn more about emotional disclosure research, you may enjoy Dr. Pennebaker's highly readable and
popular book:
Pennebaker, J. W. (1997). Opening up: The healing power of expressing emotion. New York: Guilford.

Searching the Psychological Literature

We can search the psychological literature to learn more about adjusting to college and about disclosing
personal information and health.

PsycINFO is the online search tool for psychological research. We used the following search terms in a
keyword search. The computer searched for these words in all fields (e.g., title, abstract).

1. college and adjust: We used these terms to find articles related to adjusting to college. By selecting the
word adjust, the computer flagged words such as adjusting and adjustment. This search resulted in over 200
articles: too broad a search. As we scanned the titles, many of them were vision studies that examined eye
adjustments with college students as research participants. So, we tried again, but saved information for
articles related to our topic.

2. college and emotion: We thought we might see whether we could look just at college students' emotions,
but this was an even broader search (over 3000 articles), not narrower. Many of the articles described studies
of emotions with college student samples.

3. college and adjust and emotion: We used this search to focus on college students' emotional adjustment to
college. This resulted in only 3 articles: too narrow a search. One article was about a small sample of college
students' reactions to Rorschach inkblots at high altitudes. Not what we were looking for.

4. college and adjust and health: We thought this search might produce articles related to how college
adjustment related to students' health. This search produced fewer than 10 articles on a diverse set of topics
(e.g., bulimia, concussions, divorce). One of them was in Norwegian. Still, we did find some possibilities and
saved the abstract and reference information for these.

5. college adjustment and academic achievement: This was a different search because both of these phrases
are subject words, which are indexed. When an article is entered into a database, the main subjects
addressed by the article are identified. This produced over 20 articles, several of them relevant to our topic.
Many of them were in Dissertation Abstracts. This means that the research was conducted as a graduate
student's doctoral (Ph.D.) dissertation. Although the dissertation abstract is easily retrieved, the dissertation is
not.

In our next searches we focused on some of the words Pennebaker identified as central to his research:

6. trauma and health: This was a very broad search, resulting in over 1000 articles covering a wide range of
topics. We tried to narrow this search.
7. trauma and health and disclosure: Pennebaker focused on the effects of revealing secrets, referred to as
"disclosure" in psychological literature. This resulted in over 600 articles; many of them were unpublished
dissertations.

8. emotion and health and disclosure: This search allowed us to capture articles that addressed both the
emotional and health consequences of disclosure and resulted in 12 articles.

The "output" of computer searches is a list of the titles and authors of research articles and books. If the title
indicates the research might be related to your topic, you can click on the word "Abstract" to read the
summary of the research. Sometimes you'll find the article really isn't about your topic. If the article is related
to your topic, you'll find information about the year, journal, and pages of the article. Your library might have
the journal available online. If so, you might be able to print a copy of the article. If not online, you'll need to
check whether your library owns copies of the journal or request the article through an inter-library loan
service.

It's important to remember that although computerized searches are helpful, you'll never find all the research
articles related to your topic using computerized searches. When you photocopy articles to read (or print them
from the online service), remember that the References section of articles is one of your most important
resources. In each research article, authors cite additional research studies. This is a very important way for
you to identify past research on your topic. So remember to keep the References section!

Step 2: Choose a Research Question

An important part of conducting research is searching the psychological literature for more information about a
topic. Ask yourself these questions as you evaluate research evidence presented in psychological literature
(e.g., research articles).

• Does the researcher place his/her findings in the context of other research on the topic?

• Are the results of previous studies based on scientific, controlled research?

Step 3: Generate a Research Hypothesis

After choosing a research question, the next step is to formulate a research hypothesis (plural: hypotheses). A
research hypothesis is a tentative answer to the research question. That is, after reading reports of
psychological research, researchers predict in advance what they think the outcome of a research study will
be. This may seem silly at first: why try to answer the question beforehand? Why not simply conduct the study
to learn the answer to the research question?
Researchers form hypotheses (tentative answers to a research question) because the hypothesis will
influence how the research study is conducted. As you'll see in later sections of Student as Researcher, there
are many methods psychologists use to answer research questions. Which method a researcher chooses will
depend on the hypothesis.

Psychologists use theories as they develop their research hypotheses. Therefore, in this section, we first
address theories, and then focus on how to develop a hypothesis.

What are psychological theories?

What is a research hypothesis?

How can I come up with a research hypothesis?

What are psychological theories?

< Explanations for why people behave the way they do; coherent and logical frameworks that guide research

Theories are explanations about how nature works. Psychologists propose theories about the nature of
behavior and mental processes and reasons why people (and animals) behave the way they do.

Some psychological theories attempt to explain a wide range of human behavior. For example, Sigmund
Freud tried to explain all of human development, personality, and mental illness. More modern-day theorists
may not try to explain such a broad array of phenomena, but still tackle complex topics such as love
(Sternberg, 1986) and cognition (Anderson, 1990, 1993).

Other theories are more limited in their scope, attempting to explain more specific behaviors and phenomena,
such as déjà vu experiences (Findler, 1998) and how we stick with a plan to change our behavior (theory of
planned behavior; Ajzen & Madden, 1986). As you might imagine, the more behavior a theory tries to explain,
the more complex the theory will be, and the more difficult it will be to test the theory. Therefore, most theories
in psychology tend to be modest in scope, attempting to explain only a limited range of behavior or mental
processes.

A good theory has to accomplish several things. First, a theory needs to define and describe the events or
phenomena it seeks to explain. Second, it must predict when we can expect certain behaviors to occur. Finally,
a theory must explain the causes of the events it describes. These predictions and explanations are tested in
research studies.

The process of developing and testing theories follows these steps:


1. Theorists develop their ideas by reviewing all the research evidence for a particular phenomenon or
behavior.

2. They attempt to organize this evidence into a coherent and logical framework that explains the
phenomenon.

3. Using this theory, new ideas and hypotheses are developed to guide the next research projects in an area.
4. These new research studies help to refine the theory.

5. The end result is a greater understanding of human behavior and mental processes.

What is a research hypothesis?

< Simpler, more tentative explanation that can be tested

A research hypothesis is simpler and more tentative than a theory. That is, any particular hypothesis may
represent only a small part of the theory.

Several criteria determine whether a hypothesis is testable (i.e., can be investigated in a research study).
First, the concepts addressed by the hypothesis must be clearly defined and measurable. Many of Freud's
hypotheses are not testable because there are no clear ways to define and measure important concepts in his
theory, such as id, ego, and superego.

Hypotheses cannot be tested if they are circular. A circular hypothesis occurs when an event itself becomes
an explanation for the event. We can find circular hypotheses on many talk shows and in our everyday
conversations. For example, to say "your 8-year-old son is distractible in school... because he has an attention
deficit disorder" is circular (Kimble, 1989). Because attention deficit disorders are defined by the inability to
pay attention, this hypothesis doesn't explain anything. It offers no more than saying, "Your son doesn't pay
attention because he doesn't pay attention." A good hypothesis avoids this type of circularity.

Finally, research hypotheses must refer to concepts that can be studied scientifically. To say that someone's
behavior is caused by the devil isn't a testable hypothesis because this hypothesis refers to a concept (the
devil) that isn't in the province of science. Science deals with what can be observed; this is the basis for
empirical observation.

How can I come up with a research hypothesis?

< Read psychological research, consider personal experiences, think of exceptions and inconsistencies
There are many ways to generate a research hypothesis. After reading reports of psychological research
related to your research question, you may consider whether your personal experiences match what is
described by the theories and past research. You may also "brainstorm" to think of "exceptions to the rule."
That is, a theory or past research may describe only specific situations; you may think of conditions in which
the theory may not apply. As you continue to read research articles, you will find inconsistencies or
disagreements among researchers.

In all of these situations, you may think of explanations for the discrepancies among previous research
articles, and why the theories and research may differ from your own experience. These explanations become
fruitful research hypotheses.

Step 3: Generate a Research Hypothesis

Our next step in understanding Pennebaker and Francis' (1996) research is to look at the theories that
influenced their thinking and the hypothesis they generated. Also, we will ask you to think of a hypothesis for
this research.

What theories guided Pennebaker's work on students' emotional adjustment to college?

What hypothesis did Pennebaker and Francis test in their experiment?

What hypothesis would you develop?

What theories guided Pennebaker's work on students' emotional adjustment to college?

One theory comes from intuition. Popular wisdom tells us that we shouldn't keep our emotions and thoughts
about negative life events "bottled" within us, that it's good to "get things off our chest." In fact, the emotional
release of catharsis was one of Freud's therapeutic techniques and continues to be an important component
of many modern-day psychotherapies. Until recently, however, little research examined the psychological and
physical health consequences of directly confronting traumatic emotional experiences.

Pennebaker used "inhibition theory" to guide his work. He theorized that keeping thoughts and feelings about
painful experiences bottled up might take a physical toll; that is, it's hard on the body to keep these
experiences inside. According to inhibition theory, preventing (inhibiting) the expression of painful thoughts
and feelings increases autonomic nervous system (ANS) activity. Specifically, overactivity of the sympathetic
branch of the ANS may lead to stress-related problems such as hypertension. Thus, inhibition was theorized
to lead to prolonged activation of the ANS, which, in turn, was theorized to have long-term negative health
consequences.
Pennebaker and his colleagues conducted many research studies to test inhibition theory. In several of their
experiments, participants were assigned to one of two groups. One group of participants wrote about
emotional experiences they've had, and another group wrote about superficial topics. Results of these
experiments indicated that participants who wrote about emotional events had better health outcomes than
participants who wrote about superficial topics. Pennebaker concluded that disclosing emotional thoughts and
feelings, rather than inhibiting them, seemed to produce beneficial health outcomes.

This seems like the end of the story, doesn't it? But, it isn't. Some findings indicated that inhibition
theory couldn't fully explain the results. For example, students asked to dance expressively about an
emotional experience did not experience the same benefits as students who danced expressively and wrote
about their experiences. The theory had to be refined.

Pennebaker and Francis proposed that an essential component of disclosing emotional thoughts and feelings
is that individuals try to understand the meaning and significance of their negative experiences. That is, it's not
enough simply to write about emotional events. Pennebaker's new theory suggested that cognitive changes,
such as greater understanding and meaning, associated with disclosing emotional topics are critical for
beneficial outcomes. The theory needed to be tested again.

What hypothesis did Pennebaker and Francis test in their experiment?

Their hypothesis has two parts:

• Pennebaker and Francis predicted that college students who write about their emotional experiences
associated with starting college would have better health and academic outcomes than students who don't
write about adjusting to college.

• Cognitive changes that take place in students who write about their emotional experiences can account for
the beneficial outcomes.

What hypothesis would you develop?

A first step in developing a hypothesis is to see how ideas match your experience. Ask yourself these
questions:

• Do you feel better after disclosing your emotional experiences?

• Do you keep a journal?

• Do you write letters or e-mail to people, telling them about your emotional experiences?
• Does writing about events help you to feel better? Why?

An important step in developing hypotheses is to know the findings from previous research. This was critical in
Pennebaker's work. He was able to see that expressing emotional experiences didn't always work (e.g.,
through dance). There had to be something more to it. What's special about writing? One possible answer,
which Pennebaker and Francis decided to test, concerned cognitive changes.

As we started this section, we could have asked you different questions:


• Have you talked to anyone about adjusting to college?

• Does it matter who you talk to?

• Does talking to people about emotional experiences help?

These questions aren't about writing. Talking is a different way that people express their emotions, and it's a
way that people can experience changes in the way they think about an experience.

What hypothesis could you develop about the relationship between talking to others and college adjustment?

Step 3: Generate a Research Hypothesis

A central component of the research process is the hypothesis. Ask yourself these questions when you read
or hear about psychological research to evaluate the researcher's hypothesis:

• Does the researcher present a theory about the behavior or mental process that is investigated?

• Does the theory define and describe events, predict when specific phenomena or events should occur, and
explain the causes of events described in the theory?

• Is a research hypothesis presented?

• Is the hypothesis testable? That is, are the concepts clearly defined and measurable; does the hypothesis
avoid circularity; does the hypothesis refer to concepts that are scientific?

• Is the hypothesis very general or very specific? (Specific hypotheses provide better tests of theories.)
Step 4: Form Operational Definitions

Once researchers develop hypotheses, they are ready to begin identifying the specific methods for their study.
The next step involves forming operational definitions of the concepts to be investigated in the research.

What is an operational definition?

How do I decide what operational definition to use?


What is a variable?

How do I measure psychological concepts?

How do I know whether I have good measures of my concepts?

What is an operational definition?

< Specific definition of a concept in a research study

An operational definition defines a concept solely in terms of the operations (or methods) used to produce and
measure it. For example, we might operationally define "anxiety" using a paper-and-pencil questionnaire
designed to measure symptoms of anxiety such as worry, sweaty palms, and heart palpitations. To
operationally define "stressful situation," we might ask people to give a speech in front of a large audience.
With these operational definitions, we might test the hypothesis that anxiety increases during stressful
situations.
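
As a rough sketch of how an operational definition turns a concept into numbers, the Python fragment below scores a hypothetical three-item anxiety questionnaire. The items, the 1-to-5 response scale, and the example ratings are assumptions made up for illustration, not an actual published instrument.

# Hypothetical operational definition: "anxiety" = the total of three symptom
# ratings (worry, sweaty palms, heart palpitations), each rated 1 (not at all)
# to 5 (extremely). Total scores can range from 3 to 15.
ITEMS = ("worry", "sweaty palms", "heart palpitations")

def anxiety_score(ratings):
    """Sum the symptom ratings into a single anxiety score."""
    assert len(ratings) == len(ITEMS) and all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

# One participant's self-ratings just before giving a speech:
print(anxiety_score([4, 3, 5]))   # prints 12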

Not everyone may accept these definitions. For example, some may say that important symptoms of anxiety
are missing from our questionnaire. From a cross-cultural perspective, others may criticize our operational
definition of "stressful situation" because it may only apply to a small segment of the population. However,
once we decide on a particular operational definition for our study, no one can argue about the definition of the
concept for our study. Operational definitions help researchers to communicate about their concepts. An
important question you should ask as you read psychological research is, How did the researcher
operationally define his/her concepts?

How do I decide what operational definition to use?

< Previous research


As with forming research questions and hypotheses, we identify operational definitions by reading research
articles that examine the same concepts we intend to investigate. What measures have other researchers
used? What are the strengths and weaknesses of these measures? These are some of the questions
researchers ask as they decide the ways in which they will operationally define the concepts in their research.

In most cases, there's no need to "reinvent the wheel." Psychologists have studied a wide range of topics, and
many measures exist for many concepts. Most likely, you will be able to build on previous research by using
the same operational definitions employed by other researchers.

What is a variable?

< A dimension or factor that varies

As you read about psychological research, you will find that researchers talk about "variables." A variable is a
dimension or factor that varies. For example, people naturally vary in the amount of anxiety they experience,
and situations vary in how stressful they are.

Psychologists work with variables in two ways: They measure variables and they manipulate (or control)
variables. For example, by using an anxiety questionnaire as an operational definition for anxiety, the
researcher measures the extent to which people experience anxiety symptoms. By manipulating whether
participants make a speech or do not make a speech, the researcher controls whether people experience a
stressful situation. Participants in the two conditions of the research would vary-some would experience a
stressful situation and others would not.
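
The short sketch below, using invented participants and placeholder scores, shows the two roles side by side: the researcher manipulates the stressful-situation variable by randomly assigning people to a speech or no-speech condition, and measures the anxiety variable by recording everyone's questionnaire score.

import random

# Manipulated variable: the researcher controls, by random assignment,
# whether each participant gives a speech (stressful situation) or not.
participants = [f"P{i}" for i in range(1, 9)]      # eight hypothetical people
random.shuffle(participants)
speech_group, no_speech_group = participants[:4], participants[4:]

# Measured variable: anxiety scores are recorded, not controlled.
# These numbers are invented placeholders for illustration only.
scores = {p: random.randint(3, 15) for p in participants}

def group_mean(group):
    return sum(scores[p] for p in group) / len(group)

print("Mean anxiety, speech condition:   ", group_mean(speech_group))
print("Mean anxiety, no-speech condition:", group_mean(no_speech_group))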

How do I measure psychological concepts?

< Psychological measurement, observer agreement, self-report scales

Scientists use both physical measurement and psychological measurement. Physical measurement involves
dimensions for which people agree on the standard and instruments for measurement; for example, length,
weight, and time. Psychology researchers, however, typically rely on psychological measurement. We don't
have agreed-upon standards and instruments for measuring psychological concepts such as beauty,
aggression, personality, and intelligence. How do we measure these concepts?

One way to measure psychological concepts is to have two or more observers rate a behavior or action using
a rating scale. For example, two observers may agree that a child's behavior warrants a score of "7" on a 1 to
10 scale of aggressiveness. When observers agree we become more confident in our psychological measure
of aggression.
Often, however, we're interested in measuring psychological concepts, such as mental processes, that cannot
be readily observed. To measure thoughts and feelings, for example, psychologists typically use self-report
questionnaires. A typical rating scale may ask respondents to report the extent to which they disagree or
agree with several statements using a 1 (Strongly Disagree) to 5 (Strongly Agree) rating scale. Self-report
questionnaires are the most frequently used measurement instruments in psychology.
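
Returning to the observer-rating approach described above, a very simple way to check agreement is to count how often two raters give the same score. The ratings below and the exact-match rule are invented for illustration; researchers often use more formal agreement indices.

# Two hypothetical observers rate the same ten behaviors for aggressiveness
# on a 1 (not at all aggressive) to 10 (extremely aggressive) scale.
observer_1 = [7, 4, 9, 2, 5, 6, 8, 3, 7, 5]
observer_2 = [7, 5, 9, 2, 4, 6, 8, 3, 6, 5]

matches = sum(a == b for a, b in zip(observer_1, observer_2))
print(f"Exact agreement: {matches} of {len(observer_1)} ratings "
      f"({100 * matches / len(observer_1):.0f}%)")

The higher the agreement, the more confident we can be that the rating scale measures aggression consistently rather than reflecting one observer's personal impressions.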

How do I know whether I have good measures of my concepts?

< Validity, reliability

Validity refers to the "truthfulness" of a measure; that is, does it measure what it is intended to measure?
Although validity may seem straightforward, a few examples will illustrate that validity isn't easily achieved,
particularly for complex concepts.

One example concerns the measurement of intelligence. Many psychologists have debated whether the most
frequently used measures of intelligence, which emphasize verbal and spatial ability, adequately assess all
aspects of intelligence. Do these tests assess creativity, emotional intelligence, social intelligence, and good
old common sense? A more familiar example may be aptitude tests, such as the SAT. Does this test validly
measure students' readiness for college?

The reliability of a measure refers to its consistency. Researchers may refer to several different types of
reliability. For example, when observers agree about the aggressiveness of an action, we say they are reliable
(i.e., they are consistent in their observations). Measures are also reliable if they are consistent over time. If a
person's intelligence score doesn't change (relative to others'), we say the intelligence test is reliable.
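
One common way to put a number on consistency over time is to correlate the same people's scores from two testing occasions. The sketch below does this with invented intelligence-test scores; it uses statistics.correlation, which is available in the Python standard library from version 3.10 on.

from statistics import correlation

# Invented scores for six people who took the same intelligence test twice,
# one year apart.
time_1 = [95, 102, 110, 88, 120, 105]
time_2 = [97, 100, 112, 90, 118, 104]

# A test-retest correlation near +1 means people keep roughly the same
# relative standing, evidence that the measure is consistent over time.
print(f"Test-retest reliability: r = {correlation(time_1, time_2):.2f}")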

Psychologists want to use valid and reliable measures of their concepts. You may see that if an invalid and
unreliable measure is used, it's hard to interpret the findings of a research study. This is because we wouldn't
be able to know whether the concept was defined truthfully and consistently. Because it's important to have
good measures, many researchers conduct psychological studies to develop valid and reliable measures of
concepts. Many research reports in the psychological literature describe the reliability and validity of
psychological measures.

Step 4: Form Operational Definitions

Recall Pennebaker and Francis' hypotheses:

• Pennebaker and Francis predicted that college students who write about their emotional experiences
associated with starting college would have better health and academic outcomes than students who don't
write about adjusting to college.
• Cognitive changes that take place in students who write about their emotional experiences can account for
the beneficial outcomes.

The next step is to define the concepts in the hypotheses. What, specifically, do we mean by emotional
experiences and better health and academic outcomes? What do we mean by cognitive changes?

Operational definition for writing about emotional experiences


Operational definition for health outcomes

Operational definition for academic outcomes

Operational definition for cognitive change

Are these valid and reliable measures of health and academic outcomes and cognitive change?

Operational definition for writing about emotional experiences:

Pennebaker and Francis manipulated a variable to operationally define "writing about emotional experiences."
Some students were asked to "write about your very deepest thoughts and feelings about coming to college."
An important part of their research was to compare the outcomes for college students who wrote about
emotional experiences to outcomes for students who did not write about adjusting to college. Therefore, a
second group of students was asked to "describe in writing any particular object or event of your choosing...as
objectively or dispassionately as you can...without mentioning your emotions, opinions, or beliefs."
Participants wrote for 20 minutes on 3 consecutive days.

Operational definition for health outcomes

In order to see if emotional writing produces better health outcomes than superficial writing, Pennebaker and
Francis had to measure health outcomes. There are a number of ways researchers could assess health
outcomes. Pennebaker and Francis counted the number of times students went to the Student Health Center
for illness during the academic year (visits for allergy shots, routine checkups, and follow-up appointments
were not counted).

Operational definition for academic outcomes

Again, there are a number of ways to measure academic outcomes. Pennebaker and Francis chose to
measure students' grade point average (GPA) for the fall semester in which they wrote and the subsequent
spring semester. Typically, when calculating a GPA, an A is assigned 4 points, a B is assigned 3 points, and
so on. Each student's GPA represents the average of these points across all of the student's classes.
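
As a quick arithmetic sketch of this operational definition, the grade-to-point mapping below follows the A = 4, B = 3 scheme described above; the course grades are invented, and real GPA formulas often also weight each course by its credit hours.

# Illustrative GPA calculation: letter grades mapped to points, then averaged.
points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
fall_grades = ["A", "B", "B", "A", "C"]           # one hypothetical student

gpa = sum(points[g] for g in fall_grades) / len(fall_grades)
print(f"Fall semester GPA: {gpa:.2f}")            # (4 + 3 + 3 + 4 + 2) / 5 = 3.20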

Operational definition for cognitive change

Pennebaker and Francis measured cognitive change in several ways. One was to count the increase in the
number of insight and cause words in students' essays across the three days of writing.

Are these valid and reliable measures of health and academic outcomes, and cognitive change?

This question gets at whether the operational definitions truthfully and consistently measure the concepts. For
example, we could argue that not all students use the health center (e.g., some may use their doctor at
home). Thus, number of health center visits may not validly measure students' illnesses. With respect to
reliability, we can ask whether students' GPAs are consistent across semesters. Students' GPAs can vary for
lots of reasons, including the number of credits in the semester, the difficulty of the classes, and whether
students are also working or participating in other activities. All of these potential variables can influence the
consistency (reliability) of GPAs.

Generally, researchers choose several operational definitions of their concepts in a single study. For example,
Pennebaker and Francis measured cognitive change in several different ways (e.g., increase in the number of
insight and cause words in students' essays). To the extent that the results for the different operational
definitions agree, we are more confident in the validity of our measures.

Step 4: Form Operational Definitions

When you evaluate research evidence, you must identify the researcher's operational definitions. Ask yourself
these questions when evaluating research evidence:

• How did the researcher operationally define his/her concepts?

• Did the researcher provide a clear rationale for his/her operational definitions?

• Did the researcher use valid and reliable measures of his/her concepts?

• Did the researcher use several measures of concepts? (Several measures are better than one measure.)

Step 5: Choose a Research Design


A research design is a plan for answering a research question, a plan for testing the hypothesis. Psychology
researchers typically rely on four main types of research designs: observational and correlational,
experimental, quasi-experimental, and single-case designs. The design researchers choose depends on the
research question and hypothesis, and ultimately, their goal for the research. In this section, we will cover
each research design and provide examples. As you'll see, this section provides more details than other
sections. This is because choosing the research design is one of the most important steps in the research
process.

What research design should I choose if I want to describe or predict people's behavior?

How do I conduct an observational or correlational study?


Can a study be both observational and correlational?

What research design should I choose if I want to understand the causes of behavior?

How do I "control" a variable?


How do I know whether the variable has an effect?
How do I conduct an experimental research design?
Should people participate in all conditions of my experiment?
Because different people participate in each condition, could this explain any differences in the
outcome?
Isn't it possible that conditions of an experiment differ in other ways, besides the independent
variable?

What research design should I choose if I want to understand the causes of behavior or create change in the
"real world"?

How do quasi-experiments differ from "true" experiments?


How do I conduct a quasi-experiment?

What research design should I use if I want to understand and treat the behavior of one person?

How do I conduct a single-case research design?


Can I make a claim that a treatment causes a client to improve?

What research design should I choose if I want to describe or predict people's behavior?
< Observational research design, correlational research design

Two important goals of research in psychology are description and prediction. In observational research,
researchers attempt to describe fully all aspects of behavior in a situation. Correlational research goes one
step further by attempting to find predictive relationships (correlations) among the variables that are measured
in the study. A correlation exists when two variables are associated (co-vary), but the relationship may not be
causal (i.e., one variable does not cause the other).

These two types of studies are combined in this section because a key feature of both designs is that
researchers don't attempt to control or manipulate the participants' behavior. Instead, they simply measure
and record behavior and mental processes as they naturally occur. For this reason, these designs sometimes
are called passive observational studies (Kazdin, 1999).

How do I conduct an observational or correlational study?

Can a study be both observational and correlational?

How do I conduct an observational or correlational study?

< Observe, measure variables

When researchers choose observational and correlational designs, they typically first select a sample of
participants and then observe and measure the variables of interest (as defined by operational definitions).
Psychologists can make observations either directly by watching and recording people's behavior or indirectly
by checking records of people's past behavior. Another form of observation occurs when individuals are asked
to report their thoughts and feelings on paper-and-pencil surveys (questionnaires) or during interviews.

Can a study be both observational and correlational?

< Describe participants' responses and identify relationships among variables

Most research studies attempt to both describe and predict behavior and mental processes by observing
behavior directly and/or by asking participants to complete surveys. When researchers gather information for
several variables, they often look to see whether there are relationships among the variables. An example
may help to clarify this.

In one study, researchers were interested in finding out whether people notice significant changes in their
environment (Simons & Levin, 1998). They had a confederate "a person who helps the researcher create a
research situation for observation" ask individuals on a college campus for directions. Midway through the
conversation, the confederate was replaced by a different person. Simons and Levin observed and recorded
their variable: whether people in their study noticed the change (yes or no). Would you detect the change?

How did they accomplish this magic act? The confederate first approached a stranger on a campus sidewalk
and asked for directions to a campus building. As they talked, two additional confederates rudely walked
between them carrying a door. The unsuspecting research participant could see the door, but could not see
the two people carrying the door. As the door interrupted the conversation, the first confederate (who had
asked for directions) switched places with a person carrying the door. The new confederate then continued
the conversation and noted whether the research participant noticed the switch.

To see a video of the change, go to http://www.wjh.harvard.edu/~viscog/lab and check out the Demonstrations section.

Many people did not notice the change. So, we can describe people's behavior by saying that people often
miss important changes. Can we make a prediction too? To make a prediction, we need to observe a
relationship between two variables. Simons and Levin observed a second variable. Whether participants
noted the change depended on whether the confederate was similar to the participant or not similar (e.g.,
same age or different age). People detected the change when the confederate was similar, but were less
likely to detect the change when the confederate was dissimilar. Based on these findings, we can predict
when people will detect changes.
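
If you record both variables for each person (confederate similarity and whether the change was noticed), you can summarize the relationship in a simple cross-tabulation. The short Python sketch below is not part of the original study; the observations listed are made-up values used only to illustrate the idea.

import collections

# Hypothetical records: (confederate similarity, noticed the change?)
observations = [
    ("similar", "yes"), ("similar", "yes"), ("similar", "no"),
    ("dissimilar", "no"), ("dissimilar", "no"), ("dissimilar", "yes"),
]

# Count how often each combination of the two variables occurred
table = collections.Counter(observations)
for (similarity, noticed), count in sorted(table.items()):
    print(similarity, noticed, count)

A table like this lets you see whether "noticing the change" tends to go with "similar" confederates, which is exactly the kind of predictive relationship correlational research looks for.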

In sum, if your research question seeks to describe and/or predict an aspect of behavior or mental processes,
you should use an observational or correlational research design.

What research design should I choose if I want to understand the causes of behavior?

< Experimental research design

Researchers choose an experimental design when they seek to understand the causes of psychological
phenomena. This requires a high degree of control over the variable of interest. That is, researchers must
isolate the variable they're interested in and assess behavior in a controlled setting. Experimental designs
differ from passive observational/correlational designs because the researcher actively controls important
aspects of the research situation.

How do I "control" a variable?

How do I know whether the variable has an effect?

How do I conduct an experimental research design?


Should people participate in all conditions of my experiment?

Because different people participate in each condition, could this explain any differences in the outcome?

Isn't it possible that conditions of an experiment differ in other ways, besides the independent variable?

How do I "control" a variable?


< Compare at least two conditions

An important feature of experimental designs is that the researcher compares two (or more) conditions or
groups. In one condition, a "treatment" is present in the situation (called the "treatment" condition), and in
another condition, the treatment is absent (the "control" or "comparison" condition). The Pennebaker and
Francis (1996) experiment on college students' adjustment was an experiment with two conditions. The
"emotional writing" condition was the treatment condition, and the "superficial writing" condition was the
comparison condition. More information about their experiment is provided in the Research Example portion
of this Student as Researcher CD-ROM.

An additional example might help to illustrate what we mean by experimental control. This example comes
from a different area of research in psychology: How people respond when someone hurts or angers them. A
victim's natural response to an offense is to want revenge. What can be done to reduce retaliation and
aggression following an interpersonal injury? We might ask whether an offender's apology following a harmful
action decreases the likelihood that the victim will want revenge. Here's one possible sequence of events in
this situation:

Offender hurts a victim -> Victim wants revenge -> Offender apologizes -> Victim's desire for revenge decreases

After reviewing research literature on this topic, we may hypothesize that the presence of an apology,
compared to no apology, decreases the likelihood of victims' desire for revenge.

To test this hypothesis, we may create two hypothetical scenarios ("vignettes") to describe an offense. We
could control whether the offender apologizes or does not apologize in the scenario. The two scenarios would
be identical, except for the presence or absence of an apology. This would be the operational definition of
apology in this experiment.

How do I know whether the variable has an effect?

< Measure participants' responses in each condition


We could measure participants' desire for revenge in the hypothetical situation by asking them to respond to a
question, such as, "To what extent would you like something bad to happen to this person to make things
even?" Participants could rate their response on a 1 (not at all) to 10 (very much) rating scale; this rating
would be the operational definition of the variable, desire for revenge.

Click on either the apology-present condition or the no-apology condition to see the hypothetical situation,
question, and rating scale.

Apology-Present Condition

No-Apology Condition

Apology-Present Condition

Instructions: Imagine this event happened to you.

An acquaintance offers to drop off a paper to an instructor for one of your classes. You worked very hard on
the paper in order to pull up your grade for the course. A week later, when the instructor returns papers, he
does not return your paper to you. After the class, you ask the instructor about your paper. The instructor says
that he never received your paper and will not accept late papers. When you ask about it, your acquaintance
says he/she ran into a friend, went to get coffee, and forgot to take your paper to the instructor. In fact, this
person then finds the crumpled paper in his/her book bag and gives it back to you. This person apologizes
over and over to you.

Imagine how you would feel in this situation as you answer this question:

To what extent would you like something bad to happen to this person to make things even?

1-----2-----3-----4-----5-----6-----7-----8-----9-----10

not at all very much

No-Apology Condition

Instructions: Imagine this event happened to you.

An acquaintance offers to drop off a paper to an instructor for one of your classes. You worked very hard on
the paper in order to pull up your grade for the course. A week later, when the instructor returns papers, he
does not return your paper to you. After the class, you ask the instructor about your paper. The instructor says
that he never received your paper and will not accept late papers. When you ask about it, your acquaintance
says he/she ran into a friend, went to get coffee, and forgot to take your paper to the instructor. In fact, this
person then finds the crumpled paper in his/her book bag and gives it back to you.

Imagine how you would feel in this situation as you answer this question:

To what extent would you like something bad to happen to this person to make things even?
1-----2-----3-----4-----5-----6-----7-----8-----9-----10

not at all very much

How do I conduct an experimental research design?

< Manipulate an independent variable, measure a dependent variable

This hypothetical research study has two essential ingredients of an experiment: an independent variable and
a dependent variable. An independent variable is controlled, or manipulated, by the researcher. In this
hypothetical experiment, the variable we controlled is the presence or absence of an apology in the scenario.
Researchers measure dependent variables to determine the effect of the independent variable. In this
hypothetical experiment, the dependent variable is participants' rating on the revenge question. If the
presence of an apology affects people's desire for revenge, there should be a difference in participants'
ratings in the two conditions.

Should people participate in all conditions of my experiment?

< Independent groups design

Often, individuals participate in only one of the conditions. This is called an "independent groups design." In
our hypothetical experiment, one group of participants would read the apology-present scenario, and a
separate group of participants would read the no-apology scenario. We would calculate the mean (average)
revenge rating for participants in the apology group and the mean revenge rating for participants in the no-
apology group. Suppose the mean revenge rating for the no-apology group is 8.0 on the 10-point scale, and
the mean revenge rating for the apology group is 4.0. We would conclude that an apology, compared to no
apology, causes people to have lower desires for revenge.

Because different people participate in each condition, could this explain any differences in the outcome?
< Random assignment to conditions creates equivalent groups, on average.

In order to make the causal inference that an apology causes people to desire less revenge, one important
feature must be present in the experiment. Participants must be randomly assigned to the conditions (i.e., the
scenarios) of the experiment. Random assignment means that a random procedure, such as a flip of a coin,
determines which condition each participant experiences.
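
Although a coin flip works, researchers often let a computer do the random assignment. Here is a minimal Python sketch (not taken from any particular study; the participant IDs are hypothetical) that shuffles a list of participants and splits it into two equal conditions.

import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]  # hypothetical IDs

random.shuffle(participants)              # puts participants in a random order
half = len(participants) // 2
apology_group = participants[:half]       # first half -> apology-present condition
no_apology_group = participants[half:]    # second half -> no-apology condition

print("Apology-present:", apology_group)
print("No-apology:", no_apology_group)

Shuffling and splitting guarantees equal group sizes; flipping a literal coin for each participant is also random but can produce groups of different sizes.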

Because different groups of people participate in the different conditions of the experiment, an alternative
explanation for the outcome (i.e., mean revenge ratings of 4.0 and 8.0) is that the people in the two groups
differed in terms of whether they are naturally more vengeful or forgiving. That is, the mean revenge ratings
might differ because different people participated in the groups of the experiment, not because of the
presence or absence of an apology.

The solution to this potential problem, though, is random assignment. Random assignment creates equivalent
groups of participants, on average, before participants read the scenarios. Neither group is more vengeful or
forgiving; nor do the groups differ, on average, in terms of any other potentially important characteristics.
Therefore, we can rule out the alternative explanation that differences in revenge might be due to
characteristics of the people who participated in each group.

Isn't it possible that conditions of an experiment differ in other ways, besides the independent variable?

< Holding conditions constant

A second feature that must be present in the experiment in order to conclude that an apology causes people to
have lower desires for revenge is called holding conditions constant. Holding conditions constant means that
the only thing we allow to vary in the two conditions is the presence or absence of an apology. Everything else
for the two groups is the same. Remember that scientists seek to isolate the variables they think impact
behavior. By manipulating only whether an apology is present and holding all other potential variables
constant, the researcher can test whether apologies influence vengeful behavior. Thus, in our example, the
two scenarios are exactly the same except for one sentence about an apology.

The goal of experimental research is to understand the causes of people's behavior. When we manipulate an
independent variable, randomly assign participants to conditions, and hold conditions constant, we are in a
position to state that the independent variable causes any differences in the dependent variable. When we
can confidently make this causal inference, we say that an experiment has internal validity.

Experimental designs are the most powerful designs for identifying cause-and-effect relationships (causal
inferences) between variables. Thus, if your research question seeks to identify the causes of a relationship
between variables, you should use an experimental design.
What research design should I choose if I want to understand the causes of behavior or create change in the
"real world"?

< Quasi-experimental designs

We've seen that control is an essential aspect of experimental research designs. Sometimes, however,
researchers cannot control all aspects of a situation, for example, when they conduct research in the "real
world" rather than a lab. When researchers seek to control some aspects of an experimental situation, but
cannot control all important aspects, they may conduct a quasi-experiment. Quasi means "almost"; therefore,
quasi-experiments are "almost-experiments."

How do quasi-experiments differ from "true" experiments?

How do I conduct a quasi-experiment?

How do quasi-experiments differ from "true" experiments?

< No random assignment, unable to hold conditions constant

When researchers use a quasi-experimental design they seek to compare the effects of a treatment condition
to a control condition in which the treatment is not present-just like in a "true" experiment. However, in quasi-
experiments, researchers often are unable to assign participants randomly to the conditions. In addition, the
researcher may not be able to isolate the effects of the independent variable by holding conditions constant.
Thus, participants' behavior (as measured by the dependent variable) may be affected by factors other than
the independent variable.

Although quasi-experiments provide some information about variables, the cause-and-effect relationship
(causal inference) may not be clear. The benefit of quasi-experimental designs, however, is that they provide
information about variables in the real world. Often researchers conduct quasi-experiments with the goal of
creating change. Psychologists have a social responsibility to apply what they know to improve people's lives;
quasi-experiments help psychologists to meet this goal.

How do I conduct a quasi-experiment?

< Assign entire groups to treatment vs. control conditions

An essential feature of an experiment is that the researcher compares at least two conditions. One group
receives a "treatment," and the other does not. In quasi-experimental designs, rather than randomly assigning
individual participants to treatment and control conditions, we might assign an entire group to receive a
treatment and withhold the treatment from another group.
For example, we might test the hypothesis that students who are allowed to choose the type of assignments
they complete in a course perform better than students who are not given a choice. The independent variable
is whether students are allowed choice. The dependent variable could be their final grade for the course.

You may see that it wouldn't be fair to allow some students in a class to choose their assignments and give
other students in the class no choice. Therefore, we might manipulate the independent variable using two
different sections of the same course. That is, students in one section of the course would be allowed to make
choices and students in another section would not make choices. We would hold constant that students have
to do the same number of assignments.

Although this experiment includes an independent variable (choice) and a dependent variable (grade), we
have no control over many aspects of this experiment. Most importantly, students in the two sections are likely
to be different. Suppose one section meets at 8:00 a.m. and another section meets at 2:00 p.m. Students who
enroll in an 8:00 class are likely to be different from students who select a 2:00 class. In addition, class
discussions may differ during the academic term, and the instructor may cover slightly different material. All of
these potential variables may influence the outcome: students' final grade in the course.

Quasi-experiments provide some information about variables, but the cause-and-effect relationship between
choosing assignments and grades may not be clear at the end of the study. Suppose students who are
allowed to choose their assignments earn higher grades than students who are not allowed a choice. Can we
confidently say that our independent variable, assignment choice, caused this difference in grades?
Researchers who conduct quasi-experiments often face difficult decisions about whether other variables, such
as time of day or material covered in the class, could have caused the different grade outcomes.

Thus, if in your research question you seek to examine the causal effect of an independent variable on a
dependent variable, but you cannot control other important variables in the research, you should use a quasi-
experimental design.

What research design should I use if I want to understand and treat the behavior of one person?

< Single-case research design

In observational/correlational, experimental, and quasi-experimental designs, researchers focus on groups of
participants. Psychologists use these designs to identify "general laws" of behavior and describe how people
behave and think on average. As the name implies, the researcher who uses a single-case design focuses on
a particular individual. These designs are most frequently used in clinical psychology, in which the
psychologist wishes to describe, predict, understand, and treat the problems faced by a client.

How do I conduct a single-case research design?


Can I make a claim that a treatment causes a client to improve?

How do I conduct a single-case research design?

< Observe behavior during baseline and treatment

Similar to quasi-experimental designs, single-case researchers frequently cannot control all the important
variables in the research. For example, suppose a psychologist works with a family to help treat a 6-year-old
child's impulsive behavior. The psychologist's research question might be whether a specific treatment helps
the child to "stop and think" before acting. The psychologist and family may first observe what the child's
behavior is like without the treatment; this is called baseline observation. The psychologist then begins
treatment, with the hope of improving the child's behavior and concluding that the treatment caused the
improvement.

Can I make a claim that a treatment causes a client to improve?

< Other explanations for improvement exist

Although it seems easy to determine a treatment's effectiveness, many alternative explanations can frustrate
the psychologist's efforts to claim that the treatment changed the impulsive behavior. For example, the child's
teacher may work with the child using a different treatment (e.g., rewarding thoughtful behavior), or the child
may stop behaving impulsively because other children stop playing with him or her. Any of these other
"treatments," rather than the psychologist's treatment, may cause the improved behavior. Single-case
research designs require that the psychologist control as many aspects of the treatment situation as possible
in order to test the effectiveness of the treatment.

To summarize, if your research question seeks to describe, predict, understand, and/or treat the behavior and
mental processes of one individual, you should choose a single-case design.

Step 5: Choose a Research Design

Pennebaker and Francis (1996) used an experimental design to test their hypothesis that students who write
about their emotional experiences associated with adjusting to college would have better health and academic
outcomes than students who don't write about their experiences.

The independent variable was type of writing. Pennebaker and Francis used two conditions. The "treatment
condition" was emotional writing, and the "control condition" was superficial writing.- Click on the boxes for
"Emotional Writing Condition" and "Superficial Writing Condition" to see what students' experiences were like
in this study.
Emotional Writing Condition

Superficial Writing Condition

To assess the effect of the independent variable, Pennebaker and Francis measured several dependent
variables: health outcome, academic outcome, and cognitive change. Thus, they recorded how many times
each student in the experiment visited the health center for illness during the academic year, students' GPA
after the fall and spring semesters, and their language use over the 3 days of writing (i.e., number of insight
and causal words). Look at what you wrote in one of the conditions. How many insight words did you use
(e.g., realize, see, understand)? How many causal words did you use (e.g., because, why, reason, thus)?
Pennebaker and Francis measured how many of these words students used over the 3 days of writing.

Pennebaker and Francis hoped to infer that emotional writing causes students to be healthier and
academically more successful than superficial writing, and that these beneficial outcomes were related to
cognitive changes. But to do so, they had to rule out possible alternative explanations.

One alternative explanation concerns the fact that different students participated in each condition. The two
groups of students may have differed naturally in their health and academic ability and their tendency to
search for meaning in the events that happen to them (among other things). However, we can rule out these
alternative explanations for any differences in outcomes because Pennebaker and Francis randomly assigned
participants to the conditions of the experiment. This makes the two groups of students equivalent, on
average, before they did any writing.

Another alternative explanation concerns holding conditions constant. Is it possible that students' experiences
in the two conditions differed in ways other than what they wrote? Any potential differences become
alternative explanations for differences in outcome at the end of the study.

To hold conditions constant, Pennebaker and Francis had participants in both conditions write on the same
days of the semester, for the same amount of time, in the same classroom. The experimenters conducting the
study didn't know which condition students were in, so there was no way students could be treated differently.

Because Pennebaker and Francis conducted a controlled experiment, we can infer that emotional writing,
compared to superficial writing, caused the different outcomes in their experiment. Their experiment had
internal validity.

Emotional Writing Condition

For all three writing days of this experiment, your task is to write about your very deepest thoughts and
feelings about coming to college. In your writing, try to let yourself go and to write continuously about your
emotions and thoughts related to leaving home, coming to college, and preparing for the future. You can write
about leaving your friends, family, or high school, or about adjusting to a new social and academic world here.
You could also focus on classes, your future, your parents' or your own expectations. The primary task,
however, is for you to reflect on your most basic thoughts and emotions about coming to college.

Please type your thoughts here:


Superficial Writing Condition

For all three writing days of this experiment, your task is to describe in writing any particular object or event of
your choosing. In your writing, try to describe some object or event as objectively and as dispassionately as
you can without mentioning your emotions, opinions, or beliefs.

Please type your thoughts here:

Step 5: Choose a Research Design

The research design is the most important choice a researcher makes. The design which is used determines
the goals that can be achieved by the research. Answer these questions as you read or hear about a research
study:

• What type of research design was used (observational/correlational, experimental, quasi-experimental,
single-case)?

• Does the researcher's conclusion about the study match the goals accomplished by the research design?
For example, if an observational/correlational design was used, are the researcher's conclusions about description
and prediction, and not about understanding?
• Is the researcher cautious about making causal inferences?

• If an experimental design was used, did the researcher randomly assign participants to conditions and hold
conditions constant?

Step 6: Evaluate the Ethics

Before researchers can begin to collect data for a research project, they must first evaluate the study's risks
and benefits. In this section of Student as Researcher, we will discuss researchers' responsibilities, examine
the different ethical issues involved in psychological research, and consider how the ethics of research
projects are evaluated.

What are my ethical responsibilities?

Do people have to consent to be in my research?

Is it ethical to deceive people about research?

Is it ethical to use animals in research?

Is my research project ethical?

What are my ethical responsibilities?

< Protect participants from risk

When conducting psychological research, psychologists must protect the welfare of their research
participants. Sometimes this isn't easy; for example, a research project may involve administering severely
depressed individuals a drug that has unpleasant side effects. In any decision about research involving
human and animal subjects, researchers must decide whether the benefits of a study or procedure are greater
than the risks. So, for example, the potential benefit of a drug that reduces depression may outweigh the risk
of side effects.

Determining whether research participants are "at risk" illustrates the difficulties associated with ethical
decision making. Life itself is risky. Simply showing up for a psychology experiment has a degree of risk. A
research project is described as having "minimal risk" when the harm or discomfort participants may
experience is not greater than what they may experience in their daily lives. When the possibility of risk or
injury is greater than minimal, researchers have a serious obligation to protect participants' welfare.

Psychologists often ask people to report their inner thoughts and feelings, sometimes about sensitive topics.
Participants may feel embarrassed if their responses were made public. Researchers are obligated to protect
participants from social risk by making sure participants' responses are confidential or anonymous.

For more information about ethics in research, try this site:

http://www.apa.org/ethics/code.html
Do people have to consent to be in my research?

< Informed consent

In most situations, researchers are required to gain individuals' informed consent to participate in the
research. The key word here is informed. The researcher is obligated to explain the nature of the research,
participants' tasks in the research, and the risks and benefits of the research and explain to participants that
they can withdraw their consent at any time without negative consequences. It would be unethical for
researchers to withhold any information, such as potential risks, that could influence individuals' decision to
participate in the research.

Sample Informed Consent Form

I, [insert name of participant] , state that I am over 18 years of age and that I voluntarily agree to
participate in a research project conducted by [insert name of principal investigator, title, institutional
affiliation]. The research is being conducted in order to [insert brief description of the goals of the research].
The specific task I will perform requires [insert details of the research task, including information about the
duration of participant's involvement. Any possible discomfort to participant must also be described.]

I acknowledge that [insert name of principal investigator or research assistant] has explained the task to me
fully, has informed me that I may withdraw my participation at any time without prejudice or penalty, has
offered to answer any questions that I might have concerning the research procedure, and has assured me
that any information that I give will be used for research purposes only and will be kept confidential. [ Explain
procedures for protecting confidentiality of responses.]

I also acknowledge that the benefits derived from, or rewards given for, my participation have been fully
explained to me, as well as alternative methods, if available, for earning these rewards, and that I have been
promised, on completion of the research task, a brief description of the role my specific performance plays in
the project. [Specify here the exact nature of any commitments made by the researcher, such as the amount
of money to be paid to individuals for participation.]

_______________________________ _______________________________
[Signature of researcher] [Signature of participant]

_______________________________ _______________________________

[Date] [Date]

Is it ethical to deceive people about research?


< Deception, debriefing

One of the most controversial ethical issues in psychological research concerns deception. Deception occurs
when information is withheld from participants or when participants are intentionally misinformed about an
aspect of the research. Some people believe that research participants should never be deceived because
ethical practice requires that the relationship between researcher and participant be open and honest (e.g.,
Baumrind, 1985). In addition, deception contradicts the ethical principle of informed consent. Despite these
objections to deception, it is still a widely used practice in psychological research. How can this be?

One goal of psychological research is to observe and describe people's normal behavior. Sometimes it's
necessary to conceal the true nature of a research study so that participants behave as they normally would
or act according to the instructions provided by the researcher. A problem occurs, however, when deception is
used too often. Participants can become suspicious of psychologists' activities, and as a result, they may
enter the research situation with suspicion-and not act as they normally would! Thus, frequent use of
deception can have an effect that is opposite to what researchers hope to achieve.

When deception is used, the researcher must fully inform participants after the experiment the reasons for
deception, discuss any misconceptions about the research, and remove any harmful effects of the deception.
This information is provided in the debriefing. Debriefing is an oral and/or written explanation of the full
purpose of the research, the hypotheses, and the participant's role in the research. The goals of debriefing are
to educate participants about the research and, hopefully, to leave them with a positive feeling about their
participation.

Is it ethical to use animals in research?

< Protect humans from risk, protect animals' welfare

One other controversial ethical issue warrants our attention: research with animals.

Every year, millions of animals are tested in laboratory investigations aimed at answering a wide range of
research questions. Research with animals is often justified by the need to gain knowledge without putting
humans at risk. Most cures, drugs, vaccines, and therapies have been developed through research involving
animals. In its guidelines for the ethical conduct of research, the American Psychological Association advises
that researchers who work with animals have an ethical obligation to protect their welfare and treat them
humanely. Federal and state regulations also help to ensure that the welfare of research animals is protected.

Is my research project ethical?

< Ethical standards, Institutional Review Board (IRBs), Institutional Animal Care and Use Committee (IACUC),
risk-benefit ratio
The American Psychological Association (APA) developed its ethics code for individuals who conduct
research, teach, conduct therapy, or serve as administrators. The Ethics Code presents standards to guide
ethical behavior. The standards are general, and specific situational factors help determine how the standards
should apply. Often, more than one ethical standard can be applied to a research situation, and sometimes
the ethical standards can seem to contradict each other. Deciding what is ethical in a particular situation may
not always be easy.

To help make ethical decisions, research proposals are reviewed by a committee of persons not involved in
the research before the research can begin. At institutions such as universities and hospitals, research
proposals are reviewed by Institutional Review Boards (IRBs). The members of these committees are charged
with the task of evaluating research projects to protect the rights and welfare of human research participants.
A similar committee exists for research involving animals: the Institutional Animal Care and Use Committee
(IACUC).

Ethical decisions are made by reviewing both the risks and benefits of a research project. If the benefits of a
study are greater than the risks in this subjective risk/benefit ratio, the research is generally approved. Both
IRBs and IACUCs have the authority to approve, disapprove, and require modifications in a research study
(e.g., to decrease risks). Once IRB or IACUC approval is obtained, the proposed research can begin.

Step 6: Evaluate the Ethics

Before they began their research, an Institutional Review Board (IRB) evaluated the ethics of Pennebaker and
Francis' (1996) emotional writing experiment. Here are some of the questions the IRB considered.

What were the risks and benefits of Pennebaker and Francis' experiment?

Did Pennebaker and Francis deceive participants?

How would you evaluate the ethics of this experiment?

What were the risks and benefits of Pennebaker and Francis' experiment?
What are the risks of emotional writing? What are the benefits? Are there risks and benefits associated with
superficial writing? These are some of the questions addressed by the Institutional Review Board (IRB) that
evaluated Pennebaker and Francis' research proposal.

Obviously, one risk associated with emotional writing is that students would become upset as they wrote
about a traumatic experience. To protect students from risk, Pennebaker and Francis informed participants of
this potential risk before they consented to participate (as part of the informed consent procedure). If students
did feel upset, they were encouraged to talk to the researcher or to counselors at the Student Counseling
Service.

Students in the superficial writing condition faced boredom as they wrote about the same trivial subject each
day. One way to look at this is that boredom may not be more than the minimal risk associated with students'
everyday classroom experiences.

Participants also faced the possibility of social risk if their writing was made public in any way. Imagine what it
would be like to write about your difficulties adjusting to college, and then have your feelings and problems
available to others to read! To protect participants from this risk, their writing was kept confidential and
anonymous. Participants did not put their names on their materials; instead, they were assigned a number.

One benefit of students' participation, at least in the emotional writing condition, is that findings from previous
research studies suggested that they would have better health and academic outcomes following emotional
writing. Thus, participants would benefit directly. Another direct benefit is that students in both conditions
learned more about psychological research. Participants in each condition also could benefit indirectly by
contributing to psychology's understanding of disclosure, emotional experiences, and adjustment to college.

Did Pennebaker and Francis deceive participants?

Pennebaker and Francis did not deceive participants about the research. They described the goal of the
research very generally-researchers don't inform participants of the specific hypothesis. They said the project
was about "writing and the college experience" (which is true). They described the specific tasks participants
would be asked to do; that is, they explained that students would be asked to write for 20 minutes after three
consecutive class periods. They explained some would write about emotional experiences associated with
coming to college, and others would be assigned to write about trivial topics. Pennebaker and Francis also
received students' permission to gain access to their health service and academic records.

Finally, Pennebaker and Francis debriefed participants at the end of the year about the full purpose of the
study and the hypotheses. They had to wait because they didn't want to influence students' visits to the health
center. They were able to provide students with preliminary results and encouraged students to discuss their
perceptions and feelings about the experiment.
How would you evaluate the ethics of this experiment?

If you were a member of an IRB reviewing this proposal, how would you evaluate the risk/benefit ratio? Would
you approve the project, require modifications, or disapprove the research project?

Step 6: Evaluate the Ethics

Before research projects can begin, the ethics of the procedures must be evaluated. Thus, if you're reading
the results of a scientific study, it's safe to assume the study's ethics were evaluated. However, researchers
sometimes describe specific ethical issues in their research. Answer these questions about ethics as you read
research reports:

• Does the researcher address ethical issues associated with the research in his/her report?

• Was there any risk to participants in the research? How did the researcher reduce risk?

• What were the benefits to participants in the research? How does society benefit from the research?

• Was deception used in the study? Was deception justified? Could the research have been conducted without
deception? Were participants debriefed after their participation was over?

Step 7: Collect Data

As we've worked through the research process, you've seen that there are a lot of steps before researchers
even ask the first participant to complete a questionnaire! Much like the backstage of a theater, a lot of
preliminary work takes place before the show can go on. But once researchers identify their research
question, hypotheses, variables, operational definitions, and research design and have obtained IRB or
IACUC approval, the show can go on. This section of Student as Researcher will address the steps involved
in collecting data from participants: choosing a sample of participants, seeking permission, recording data,
and preparing data for statistical analysis.

How should I choose my research sample?

Do I just ask people if they want to be in my research study?

How do I record information about my research participants?

How should I choose my research sample?


< Random samples, convenience samples

We've discussed how researchers seek to establish general laws of behavior and describe how people
respond on average. That is, rather than describing the behavior of one individual (as in single-case research
designs), most psychology researchers seek to apply their findings to a larger population. For example, a
researcher interested in the effects of spinal cord injury typically wants his or her research findings to apply to
the entire population of people who have a spinal cord injury. However, involving all people with spinal cord
injuries in a research project would be costly and time consuming. Thus, researchers rely on samples of
participants to represent the larger population.

"Sampling" refers to the procedures used to select a sample. One approach to sampling is random selection;
the outcome of this procedure is called a random sample. In a random sample, every member of the
population has an equal chance of being selected to be in the sample. In general, random selection results in
samples that represent the characteristics of the population.
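
For example, if you somehow had a complete list of everyone in the population, random selection could be carried out with a few lines of Python. This is only a sketch; the roster of 5,000 students below is hypothetical.

import random

# Hypothetical roster of an entire population of 5,000 students
population = ["student_%04d" % i for i in range(1, 5001)]

# Randomly select 100 people; every member of the population has an equal chance of being chosen
sample = random.sample(population, 100)
print(len(sample), "participants selected")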

A second approach to sampling is convenience sampling. A convenience sample is made up of people who
are available and willing to participate in the research. The research projects conducted on the Internet
involve convenience samples because people have to be available (i.e., own a computer and have access to
the Internet) and willing to complete research online. As you might guess, convenience samples generally are
less representative of the population than random samples.

A common mistake students make is to claim that a sample of research participants was selected randomly
when it was not; most research is conducted with convenience samples. For example, a great deal of psychological
research is conducted with college student samples (you may be asked to participate in research projects as
part of your introductory psychology course). For researchers in Psychology Departments, college students
are an available and (usually) willing group of people for research studies.

Do I just ask people if they want to be in my research study?

< Obtaining permission from authorities

We've seen that researchers must gain IRB or IACUC ethics approval before beginning their research with
human participants or animal subjects, respectively. In addition, researchers must seek permission from
people in authority to gain access to potential research participants.

For example, researchers may be interested in effects of breakfast programs on school performance, morale
in corporations following layoffs, depression in patients hospitalized for cancer surgery, or psychology
students' opinions about ethnic diversity. In each case, administrators at the school, corporation, hospital, and
Psychology Department are responsible for the welfare of those entrusted to their care.
In order to gain permission, researchers can expect to explain to authorities at the setting the study's rationale
and procedures, as well as ways in which participants will be protected from any risks.

How do I record information about my research participants?

< Observation vs. self-report

"Recording" refers to the method for keeping track of participants'thoughts, feelings, and/or behaviors. 47
Researchers don't rely on their memory, but instead, maintain a record of participants' responses. When you
collect data you will need to decide whether you will record data about participants (e.g., using checklists or
rating scales) or whether you will allow participants to report for themselves.

The self-report method is used when participants provide information about themselves, particularly
information about their thoughts and feelings. You've probably already gained experience with surveys and
questionnaires-they are psychologists-most popular way of collecting data from participants. This method
most often involves distributing paper-and-pencil questionnaires to participants.

Remember that to protect participants from social risk, all information you collect about people should be
confidential (no identifying information) or anonymous.

How do I prepare the data for analysis?

< Statistical software, spreadsheets

Once you have your data from participants, you will need to organize the information for data analysis. The
most common way to analyze data involves computer software, such as SPSS, and entering data into a
spreadsheet. A spreadsheet is a chart (or table) with columns and rows.

Each row in a spreadsheet represents a participant in the study. If you have 20 participants in your study, you
will have 20 rows in your spreadsheet. Each column in a spreadsheet represents a different variable in your
study. Each "cell"in the table contains the value for a particular variable for a particular participant.

subject gender condition score


01 0 1 25
02 0 2 30
03 1 1 20
04 1 2 25
The first column in this sample spreadsheet, "subject," represents the number assigned to a participant (we
never identify participants by name). The second column identifies the participant's gender, female (0) or male
(1). Statistical packages analyze numbers rather than words, so we enter "codes" rather than the words
female and male. The third column, condition, identifies which of two conditions in an experiment the
participant was in. Condition 1 might be the "treatment" condition, and condition 2 might be the "control"
condition. Finally, the "score" column might represent participant's score on the measure of the dependent
variable. Thus, the first participant in this sample is a female, was in the treatment condition, and had a score
of 25 on the dependent variable.
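
If you use Python instead of (or alongside) a statistical package such as SPSS, the same spreadsheet can be built with the pandas library. The sketch below simply reproduces the four-participant example above; it assumes pandas is installed and is not part of the original guide.

import pandas as pd

# Rows = participants, columns = variables, cells = values (same layout as the table above)
data = pd.DataFrame({
    "subject":   ["01", "02", "03", "04"],
    "gender":    [0, 0, 1, 1],        # codes: 0 = female, 1 = male
    "condition": [1, 2, 1, 2],        # codes: 1 = treatment, 2 = control
    "score":     [25, 30, 20, 25],    # dependent variable
})
print(data)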
Step 7: Collect Data

In this section, we'll consider the characteristics of Pennebaker and Francis' (1996) sample and their
procedures for collecting data.

What was Pennebaker and Francis' sample?

What procedures did they have to follow to gain access to the participants?

How did they collect and record the data?

What was Pennebaker and Francis' sample?

The research participants were students in Dr. Pennebaker's Introductory Psychology course at Southern
Methodist University (64 freshmen, 8 new transfer students). This is a convenience sample: these students
were available to Pennebaker and Francis at their university and willing to participate in the research. You
may see this isn't a random sample. Not all college students had an opportunity to be in the sample.

What procedures did they have to follow to gain access to the participants?

This situation is a little different than most because the participants were students in the researcher's class.
This raises a special ethical issue. In this situation, students must be reassured that their grade would not be
negatively affected if they chose not to participate in the research project.

Typically, when psychology students participate in research, the Psychology Department has specific
guidelines and procedures to follow. If you do a research project with introductory psychology students-for
example, with a "subject pool"-you need to learn and follow the procedures required by your Psychology
Department.

How did they collect and record the data?


Pennebaker and Francis used several different methods for collecting data. Participants' writing-either
emotional or superficial-represents the self-report method for collecting data. Their writing samples were then
analyzed using a computer program. To do this, they had to type all of the participants' essays. Information
about participants' language use (e.g., insight and causal words) was "collected" by the computer and
recorded in the computer output (i.e., the results of the analysis).
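
Pennebaker and Francis used a specialized word-count program. The Python sketch below shows the general idea with tiny, made-up word lists; the real dictionaries of insight and causal words are much longer, so treat this only as an illustration of the counting step.

# Hypothetical, abbreviated word lists (the published dictionaries are far larger)
INSIGHT_WORDS = {"realize", "see", "understand"}
CAUSAL_WORDS = {"because", "why", "reason", "thus"}

def count_words(essay, category):
    # Count how many words in the essay belong to the given category
    words = essay.lower().split()
    return sum(1 for w in words if w.strip(".,;:!?") in category)

essay = "I realize I worked hard because I wanted to understand the material."
print("insight:", count_words(essay, INSIGHT_WORDS))   # 2
print("causal:", count_words(essay, CAUSAL_WORDS))     # 1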

They also collected information from the university health center and registrar about students' health center
visits and grades, respectively. This is not self-report because students didn't provide this information
themselves. This method is called archival. That is, Pennebaker and Francis gained access to university
archives and records.

A sample spreadsheet for their data follows. Note, however, that Pennebaker and Francis collected data for
many more variables than we've discussed.

Sample Spreadsheet

subject writing visits GPA insight1 causal1 insight2 causal2 insight3 causal3
01 1 2 3.8 1 1 1 2 4 3
02 1 0 3.6 0 0 1 2 3 4
03 1 1 3.3 1 0 1 2 3 3
04 2 2 3.5 0 0 0 0 0 0
05 2 2 3.4 0 0 0 1 1 0
06 2 4 3.0 1 1 0 1 2 1
and so on

Each participant's data is represented in a row in the spreadsheet. The variables are identified in the column
titles, but they need some interpreting because their names are short descriptors.

• subject: the number assigned to a participant

• writing: which condition the participant was assigned to, 1 = emotional, 2 = superficial

• visits: how many visits to the health center for illness during the academic year

• GPA: students' GPA by the end of the spring semester


• insight1: the number of insight words in the first day's essay

• causal1: the number of causal words in the first day's essay

• insight2: the number of insight words in the second day's essay

• causal2: the number of causal words in the second day's essay


• insight3: the number of insight words in the third day's essay

• causal3: the number of causal words in the third day's essay

Was Pennebaker and Francis' hypothesis supported by these hypothetical data?

Step 7: Collect Data

Research reports should describe the characteristics of the sample (i.e., who participated, the setting). These
questions will help you to evaluate the researcher's data collection:

• Does the researcher describe the characteristics of the sample?

• What type of sample was used, random sample or convenience sample?

• What population does the researcher wish to describe? Do the sample characteristics match the
population?

• How were data recorded (e.g., observers, self-report)? Are there any potential biases (e.g.,
participants trying to "look good" in their answers)?

Step 8: Analyze Data and Form Conclusions

Imagine that we've asked 200 people to complete a 50-item survey. What are we going to do with these
10,000 responses (called data)? The next step in a research project involves data analysis, in which we
summarize people's responses and determine whether the data support the hypothesis. In this section, we will
review the three stages of data analysis: check the data, summarize the data, and confirm what the data
reveal.

How do I check the data?

How do I summarize the data?

How do I know what the data reveal?


How do I check the data?

< Errors, distribution of scores, outliers

In the first analysis stage, researchers become familiar with the data. At a basic level, this involves looking to
see if the numbers in the data make sense. Errors can occur if responses are not recorded correctly and if
data are entered incorrectly into computer statistical software for analysis.
We also look at the distribution of scores. This can be done by generating a frequency distribution (e.g., a
stem-and-leaf display) for the dependent variable. When examining the distribution of scores, we may
discover "outliers." Outliers are data values that are very different from the rest of the scores. Outliers
sometimes occur if a participant did not follow instructions or if equipment in the experiment did not function
properly. When outliers are identified, we may decide to exclude the data from the analyses.
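
As a concrete (and hypothetical) illustration, the Python sketch below builds a frequency count for a set of 1-to-10 ratings and flags a value that falls outside the valid range, the kind of data-entry error or outlier you would want to catch before analysis.

from collections import Counter

# Hypothetical ratings on a 1-10 scale; 55 is a data-entry error
ratings = [8, 5, 7, 9, 6, 7, 6, 9, 6, 7, 4, 5, 55]

# Frequency distribution: how many times each value occurs
print(sorted(Counter(ratings).items()))

# Flag values that cannot be valid responses on a 1-10 scale
outliers = [r for r in ratings if not 1 <= r <= 10]
print("Possible outliers or errors:", outliers)   # [55]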

How do I summarize the data?

< Descriptive statistics; means, standard deviations, effect sizes

The second step of data analysis is to summarize participants' responses. Researchers rarely report the
responses for an individual participant; instead, they report how participants responded on average.
Descriptive statistics begin to answer the question, what happened in the research project?

Often, researchers measure their dependent variables using rating scales. Two common descriptive statistics
for these data are the mean and standard deviation. The mean represents the average score on a dependent
variable across all the participants in a group. The standard deviation tells us about the variability of
participants' scores: approximately how far, on average, scores vary from a group mean.

Another descriptive statistic is the effect size. Measures of effect size tell us the strength of the relationship
between two variables. For example, a correlation coefficient represents the strength of the predictive
relationship between two measured variables. Another indicator of effect size is Cohen's d. This statistic tells
us the strength of the relationship between a manipulated independent variable and a measured dependent
variable. Based on the effect size for their variables, researchers decide whether the effect size in their study
is small, medium, or large (Cohen, 1988).
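
Here is a minimal Python sketch (using made-up ratings for two groups of five participants) of the descriptive statistics just described: the mean, the standard deviation, and Cohen's d computed from the two group means and a pooled standard deviation. It illustrates the formulas; it is not output from any real study.

import statistics

treatment = [6, 7, 8, 9, 10]   # hypothetical ratings, treatment group
control   = [4, 5, 6, 7, 8]    # hypothetical ratings, control group

m1, m2 = statistics.mean(treatment), statistics.mean(control)     # means: 8 and 6
s1, s2 = statistics.stdev(treatment), statistics.stdev(control)   # standard deviations
print("Means:", m1, m2)
print("Standard deviations:", round(s1, 2), round(s2, 2))

# Cohen's d: difference between the means divided by the pooled standard deviation
n1, n2 = len(treatment), len(control)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
print("Cohen's d:", round((m1 - m2) / pooled_sd, 2))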

How do I know what the data reveal?

< Inferential statistics; confidence intervals, null hypothesis testing

In the third stage of data analysis, researchers decide what the data tell us about behavior and mental
processes and decide whether the research hypothesis is supported or not supported. At this stage,
researchers use inferential statistics to try to rule out whether the obtained results are simply "due to chance."
We generally use two types of inferential statistics, confidence intervals and null hypothesis testing.

Recall that we use samples of participants to represent a larger population. Statistically speaking, the mean
for our sample is an estimate of the mean score for a variable for the entire population. It's unlikely, however,
that the estimate from the sample will correspond exactly to the population value. A confidence interval gives
us information about the probable range of values in which we can expect the population value, given our
sample results.
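
As a sketch of how a confidence interval can be computed, the Python code below uses the scipy library to find a 95% confidence interval around a sample mean. The ratings are hypothetical and scipy must be installed; this is one common way to do it, not the only one.

from scipy import stats

ratings = [7, 5, 8, 6, 9, 7, 6, 8]     # hypothetical sample of eight ratings
mean = sum(ratings) / len(ratings)

# 95% confidence interval for the population mean, based on the t distribution
low, high = stats.t.interval(0.95, len(ratings) - 1, loc=mean, scale=stats.sem(ratings))
print("Sample mean:", mean)
print("95% CI: %.2f to %.2f" % (low, high))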
52
Another approach to making decisions about results for a sample is called null hypothesis testing. In this
approach, we begin by assuming an independent variable has no effect on participants' behavior (the "null
hypothesis"). Under the null hypothesis, any difference between means for groups in an experiment is
attributed to chance factors. However, sometimes the difference between the means in an experiment seems
too large to attribute to chance. Null hypothesis testing is a procedure by which we examine the probability of
obtaining the difference between means in the experiment if the null hypothesis is true. Typically, computers
are used to calculate the statistics and probabilities. An outcome is said to be statistically significant when the
difference between the means in the experiment is larger than would be expected by chance if the null
hypothesis were true. When an outcome is statistically significant, we conclude that the independent variable
caused a difference in participants' scores on the dependent variable.
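
A common null hypothesis test for comparing two group means is the independent-samples t test. The sketch below uses scipy to run one on the same made-up groups as in the descriptive-statistics sketch above; it assumes scipy is installed and is meant only to show where the probability (p value) comes from.

from scipy import stats

treatment = [6, 7, 8, 9, 10]   # hypothetical ratings, treatment group
control   = [4, 5, 6, 7, 8]    # hypothetical ratings, control group

# Probability of observing a difference this large between the means if the null hypothesis is true
t_value, p_value = stats.ttest_ind(treatment, control)
print("t =", round(t_value, 2), " p =", round(p_value, 4))

# By convention, p < .05 is treated as statistically significant
if p_value < 0.05:
    print("Reject the null hypothesis: the means differ more than chance alone would predict.")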

Step 8: Analyze Data and Form Conclusions

Before going through the steps for analyzing data, we will review the findings of Pennebaker and Francis'
experiment.

Did the results support their hypothesis?

What did they conclude based on their findings?

Sample Data Analysis

Hypothetical Research Study

Three Stages of Data Analysis

Did the results support their hypothesis?

Pennebaker and Francis found that the average number of health center visits for illness in the two months
after writing was lower for the emotional-writing group than for the superficial-writing group. For GPA, results
indicated that the average second-semester GPA for the emotional-writing group (Mean = 3.08) was greater
than the average GPA for the superficial-writing group (Mean = 2.86). In both cases, the effect of writing was
statistically significant and represented a medium effect size. These findings supported their hypothesis that
emotional writing would have a beneficial outcome.

What did Pennebaker and Francis find for their measures of cognitive changes? Across the 3 days of writing,
students' essays in the emotional-writing condition contained, on average, 3.39% insight words and 1.09%
causation words. In contrast, students' essays in the superficial-writing condition had 1.21% insight words and
0.64% causation words. More importantly, insight and causal words increased over the 3 days for students in
the emotional writing condition. These findings supported their hypothesis.

What did they conclude based on their findings?

Pennebaker and Francis concluded that as students attempt to understand and find causal meaning when
writing about their college experiences, they are more likely to experience beneficial physical and
psychological health consequences and an improved grade point average.

We started this research example by stating you would find tips for adjusting to college. Based on Pennebaker
and Francis' findings, we can say that writing about your experience (for example, in a journal) may improve
your grades and health. You don't need to show anyone your writing; just write for yourself. Remember,
though, that an important ingredient is that you try to understand the causes of events and gain insight about
your experiences.

Some people experience traumas that may require more than simply writing about them. If you find that you
experience more distress as you write about events, you should consider talking with someone at the
counseling center at your university or college.

Sample Data Analysis

In what follows, we will "walk through" the steps of data analysis using hypothetical data. This section is long,
and provides many details that you might need only when you analyze your own data. You can learn more
about statistics at the following website:

http://www.mhhe.com//socscience/psychology/zech/student/olc/stats_primer.mhtml

Hypothetical Research Study

This hypothetical study is basically the same as Pennebaker and Francis's (1996) experiment. Suppose you
hypothesize that emotional writing is an effective intervention for dealing with stressful or traumatic events,
compared to superficial writing. You randomly assign participants to 3 consecutive days of writing in either the
emotional writing (treatment) group or superficial writing (control) group.
We'll propose a new dependent variable: participants' self-reported rating of "well-being" (1-10 scale)
assessed 1 month after the intervention.

Suppose we observe the following ratings for 10 participants in each group:

Emotional Writing: 8, 5, 7, 9, 6, 7, 6, 9, 6, 7

Superficial Writing: 7, 4, 5, 4, 5, 5, 6, 3, 7, 4

Entered into a spreadsheet-style data window (such as the one in SPSS), the data would look like this:

subject writing rating


01 1 8
02 1 5
03 1 7
04 1 9
05 1 6
06 1 7
07 1 6
08 1 9
09 1 6
10 1 7
11 2 7
12 2 4
13 2 5
14 2 4
15 2 5
16 2 5
17 2 6
18 2 3
19 2 7
20 2 4

Brief instructions for using SPSS to analyze data will be presented here. More detailed instructions can be
found in this Online Guide for SPSS:

http://www.mhhe.com//socscience/psychology/zech/student/olc/spss.mhtml

For tips on how to enter data into SPSS, click on SPSS: Tip 1.

Instructions for SPSS:

Open SPSS. You will see the "data window" spreadsheet. "Double-click" on the "var" (variable) in the top left
corner. Type "subject." Double click on "var" in the second column and type "writing." Double click on "var" in
the third column and type "rating."

Enter the subject numbers by typing the numbers 1-20 in the first column (you will see this matches the "case"
label). Enter "1" for the emotional writing condition for the first 10 participants, and enter "2" for the superficial
writing condition for the last 10 participants. Finally, enter the values for the well-being ratings in the third
column.
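
If you do not have SPSS, the same data layout can be created with free tools. The following sketch uses Python and the pandas library (the column names simply mirror the SPSS example above):

import pandas as pd

# writing: 1 = emotional writing (treatment), 2 = superficial writing (control)
data = pd.DataFrame({
    "subject": range(1, 21),
    "writing": [1] * 10 + [2] * 10,
    "rating": [8, 5, 7, 9, 6, 7, 6, 9, 6, 7,    # emotional writing ratings
               7, 4, 5, 4, 5, 5, 6, 3, 7, 4],   # superficial writing ratings
})
print(data)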

Three Stages of Data Analysis

1) Check the data. Do the numbers make sense? Are there any values that are out of range? Are there any
outliers?

In our example, all data are within the appropriate range (1-10). We can also examine the distribution using
stem-and-leaf displays:

Possible Score    Emotional Writing    Superficial Writing
      1
      2
      3                                3
      4                                444
      5           5                    555
      6           666                  6
      7           777                  77
      8           8
      9           99
     10
                  n = 10               n = 10

We can read this stem-and-leaf display as follows. In the emotional-writing condition, one participant's rating
was a 5, three had a rating of 6, three had a rating of 7, one participant's rating was an 8, and two participants'
ratings were a 9. There were 10 participants in the condition (n = 10). How would you describe the distribution
of scores in the superficial-writing condition?

There are three things we can see. First, the distributions of scores for the two samples overlap; however, the
scores for the emotional writing group tend to be higher than the scores for the superficial writing group
(suggesting our hypothesis that this treatment is effective may be supported). Second, there doesn't seem to
be a problem with outliers in either group: no score is dramatically different from other scores. Third, the
scores seem to center around a middle value within each group, with not too much variability.

Click on SPSS: Tip 2 for instructions that will allow you to check the data.

Instructions for SPSS:

Go to the "toolbar" at the top of the screen. Click on "Statistics," scroll down to "Descriptives," and click on
"Explore." The dependent variable is "rating" (select it by clicking on it and click on the arrow key to move it to
the dependent variable box). The factor is "writing." Click on it and then the arrow key to move it to the factor
box. Click on "plots" and make sure "stem-and-leaf" is selected. Click on "statistics" to make sure that
"descriptives" is selected. Then click on OK to run the analysis.

A new window will appear. This is called the "output" window-it has the results of your analyses. The
descriptive statistics will appear, as well as a stem-and-leaf plot. The stem-and-leaf will differ slightly in
appearance because it has a separate column for frequency. You should be able to see, however, that the
same information is provided.
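
Outside SPSS, the same check can be done in a few lines of code. This sketch continues the hypothetical pandas "data" frame created in the earlier sketch; it counts out-of-range ratings and prints a summary for each writing group:

# Stage 1: check the data (continuing the pandas sketch shown earlier)
out_of_range = data[(data["rating"] < 1) | (data["rating"] > 10)]
print("Number of out-of-range ratings:", len(out_of_range))   # expect 0

# Count, mean, standard deviation, minimum, and maximum for each group
print(data.groupby("writing")["rating"].describe())
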
Three Stages of Data Analysis

2) Summarize the data. We can summarize the data numerically using measures of central tendency (e.g.,
mean or average), measures of variability (e.g., standard deviation), and measures of effect size (e.g.,
Cohen's d).

In order to use SPSS to summarize data, follow the instructions presented in SPSS: Tip 3.

Central Tendency

Variability (dispersion)

Effect size

Central Tendency

The mean (M) is the average score, the median (Md) is the value that cuts the distribution of scores in half (5
scores below and 5 scores above the value), and the mode is the most frequent score.

The mean, median, and mode for the hypothetical data we've presented are as follows:

                          Emotional Writing    Superficial Writing
Mean (M)                  7.0                  5.0
Median (Md)               7.33                 5.33
Mode                      6, 7                 4, 5

Variability (dispersion)

The range identifies the lowest and highest scores. The variance and standard deviation are measures of how far
scores are from the mean (average) score. Variance is the sum of the squared deviations from the
sample mean, divided by n-1 ("n" is the number of participants in the group). Standard deviation
is the square root of the variance.

                          Emotional Writing    Superficial Writing
Range                     5-9                  3-7
Variance (s²)             1.77                 1.77
Standard deviation (SD)   1.33                 1.33
Note that the variability is the same for each sample-can you see why this is so in the stem-and-leaf display?
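
If you are working in code rather than SPSS, the same summary statistics can be reproduced with a short sketch in Python (the arrays below are the hypothetical ratings from this example):

import numpy as np

emotional = np.array([8, 5, 7, 9, 6, 7, 6, 9, 6, 7])
superficial = np.array([7, 4, 5, 4, 5, 5, 6, 3, 7, 4])

for name, scores in [("Emotional", emotional), ("Superficial", superficial)]:
    mean = scores.mean()
    variance = scores.var(ddof=1)   # sample variance (divide by n - 1)
    sd = scores.std(ddof=1)         # sample standard deviation
    print(f"{name}: M = {mean:.2f}, variance = {variance:.2f}, SD = {sd:.2f}")
# Compare the printed values with the means, variances, and standard
# deviations reported in the tables above.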

Effect size

Measures of effect size indicate the strength of the relationship between the independent and dependent
variables. Cohen's d is defined as the mean difference between two groups in standard deviation units (which
makes effect sizes comparable across studies). It's calculated by obtaining the difference between two means
and dividing by the population standard deviation. The population standard deviation (σ) can be defined using
the sample standard deviations (s) and sample sizes (n) for each group (1 and 2) to form a measure of
"pooled" variability:

σ = sqrt[ ((n1 - 1)s1² + (n2 - 1)s2²) / N ]

where N = 20 (total number of participants). In our study, σ = 1.26.

Cohen's d, therefore, equals (7 - 5) ÷ 1.26, or 1.59. Cohen offered guidelines for interpreting effect sizes: d = .20
is small, d = .50 is medium, d = .80 is large. Because our value for d exceeds that suggested for a large effect,
we can state that emotional writing, relative to superficial writing, has a large effect on well-being in these
hypothetical data.

Instructions for SPSS:

No additional steps are needed here. Your output from your "Explore" analysis provides the mean and
standard deviation and other descriptive statistics. You can use the statistics from the output to calculate the
values for the population standard deviation using the formula presented above. The numerator in the formula
for Cohen's d is the difference between the two group means (7 - 5 = 2). The denominator is the population
standard deviation.
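
The same calculation can be scripted. Here is a sketch of the pooled standard deviation and Cohen's d in Python, using the formula given above and the hypothetical ratings:

import numpy as np

emotional = np.array([8, 5, 7, 9, 6, 7, 6, 9, 6, 7])
superficial = np.array([7, 4, 5, 4, 5, 5, 6, 3, 7, 4])

n1, n2 = len(emotional), len(superficial)
N = n1 + n2   # total number of participants (20)

# Pooled estimate of the population standard deviation (formula above)
pooled_sd = np.sqrt(((n1 - 1) * emotional.var(ddof=1) +
                     (n2 - 1) * superficial.var(ddof=1)) / N)

d = (emotional.mean() - superficial.mean()) / pooled_sd
print(f"pooled SD = {pooled_sd:.2f}, Cohen's d = {d:.2f}")
# Prints pooled SD = 1.26 and d = 1.58; the text's value of 1.59 comes from
# rounding the pooled standard deviation to 1.26 before dividing.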

Three Stages of Data Analysis


3) Confirm what the data reveal. Descriptive statistics are rarely sufficient to allow us to make causal
inferences about what happened in the experiment. We need more information. The problem is that we
typically describe data from a sample, not an entire population. A population represents all the data of interest;
a sample is just part of those data. Most of the time, psychologists sample behavior and seek to make a
conclusion about the effect of an independent variable for the population based on the sample. The problem is
that samples can differ from the population simply by chance. When the results for a sample differ from what
we'd observe if the entire population were tested because of chance factors, we say the findings for the
sample are unreliable.
To compound this problem, one sample can vary from another sample simply by chance. So, if we have an
experiment with two groups (e.g., our emotional writing group and our superficial writing group) and we
observe differences between the two groups on our dependent variable, how do we know that these two
samples didn't differ simply by chance? Asked another way, how do we know that the difference between our
sample means is reliable? These questions bring us to the third stage of data analysis, confirming what the
data reveal.
At this point researchers typically use inferential statistics to draw conclusions based on their sample data and
to determine whether their hypotheses are supported. Inferential statistics provide a way to test whether the
differences in a dependent variable associated with various conditions of an experiment can be attributed to
an effect of the independent variable (and not to chance factors).
In what follows, we first introduce you to "confidence intervals," an approach for making inferences about the
effects of independent variables that can be used instead of, or in conjunction with, null hypothesis testing.
Then, we will discuss the more common approach to making inferences based on null hypothesis testing.

Confidence Intervals
Null Hypothesis Testing

Confidence intervals

Confidence intervals are based on the idea that data for a sample are used to describe the population from
which the data are drawn. A confidence interval tells us the range of values in which we can expect a
population value to be with a specified level of confidence (usually 95%). We cannot estimate the population
value exactly because of sampling error; the best we can do is estimate a range of probable values. The
smaller the range of values expressed in our confidence interval, the better is our estimate of the population
value.

We have two sample means in our hypothetical experiment, one for the emotional-writing condition and one
for the superficial-writing condition. With two sample means, we can estimate the range of expected values for
the difference between the two population means based on the results of the experiment.

Confidence intervals tell us the likely range of possible effects for the independent variable. The .95
confidence interval for our hypothetical study is .75 to 3.25. That is, we can say with 95% confidence that this
interval contains the true difference between the population means represented by the emotional-writing
condition and the superficial-writing condition. The difference between population means could be as small as
the lower boundary of the interval (i.e., .75) or as large as the upper boundary of the interval (i.e., 3.25).
Although we don't know the "real" effect of the independent variable for the population (because we didn't test
the entire population), the evidence we have, based on the confidence interval, suggests strongly that there
was some effect of the independent variable. That is, the difference between emotional writing and superficial
writing at the population level is likely to fall within .75 and 3.25 points on the well-being rating scale.
Suppose, however, that the confidence interval includes zero (e.g., a range of values from 0 to 4). A "zero
difference" indicates there is no difference in well-being ratings for emotional writing and superficial writing at
the population level. When the confidence interval includes zero, the results of the independent variable are
inconclusive. We can't conclude that the independent variable, type of writing, did not have an effect because
the confidence interval goes all the way to 4. However, we also have to keep in mind that the independent
variable may produce a zero difference; we simply don't know.

To learn how to compute a confidence interval using SPSS, click on SPSS: Tip 4.
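
For readers who prefer code, here is a sketch of the same 95% confidence interval for the difference between two independent means, written in Python with the SciPy library (it should reproduce the .75 to 3.25 interval reported above):

import numpy as np
from scipy import stats

emotional = np.array([8, 5, 7, 9, 6, 7, 6, 9, 6, 7])
superficial = np.array([7, 4, 5, 4, 5, 5, 6, 3, 7, 4])

n1, n2 = len(emotional), len(superficial)
diff = emotional.mean() - superficial.mean()

# Standard error of the difference, based on the pooled sample variance
pooled_var = ((n1 - 1) * emotional.var(ddof=1) +
              (n2 - 1) * superficial.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))

# 95% confidence interval from the t distribution (df = n1 + n2 - 2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
lower, upper = diff - t_crit * se, diff + t_crit * se
print(f"95% CI for the mean difference: {lower:.2f} to {upper:.2f}")
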
Null hypothesis testing

As we've seen, descriptive statistics alone are not sufficient to determine if experimental and comparison
groups differ reliably on the dependent variable in a study. Based on descriptive statistics alone, we have no
way of knowing whether our group means are reliably different (i.e., not due to chance). Confidence intervals
are one way to draw conclusions about the effects of independent variables; a second, more common method
is called null hypothesis testing.

When researchers use null hypothesis testing, they begin by assuming the independent variable has no
effect; this is called the null hypothesis. For example, the null hypothesis for our writing experiment states that
the population means for emotional writing and superficial writing are not different. Under the null hypothesis,
any observed difference between sample means can be attributed to chance.

However, sometimes the difference between sample means is too large to be simply due to chance if we
assume the population means don't differ. Null hypothesis testing asks the question: how likely is the
difference between sample means observed in our experiment (e.g., 2.0), assuming there is no difference
between the population means? If the probability of obtaining the mean difference in our experiment is small,
then we reject the null hypothesis and conclude that the independent variable did have an effect on the
dependent variable.

How do we know the probability of obtaining the mean difference observed in our experiment? Most often we
use inferential statistics such as the t test and Analysis of Variance (ANOVA), which provides the F test. The t
test typically is used to compare whether two means are different (as in our example). Each value of t and F
has a probability value associated with it when the null hypothesis is assumed to be true. Once we calculate
the value of the statistic, we can obtain the probability of observing the mean difference in our experiment.

In our example, because we have two means we can calculate a t test. The difference between the two means
is 2.0 (7.0 - 5.0). The t statistic for the comparison between the two group means is 3.35, and the probability
value associated with this value is .004 (these values were obtained from output from the SPSS statistics
program). Does this value tell us that the mean difference of 2.0 is statistically significant?
We have two possible conclusions when we do null hypothesis testing: We either reject the null hypothesis or
we fail to reject the null hypothesis. Outcomes (i.e., observed differences between means) that lead us to
reject the null hypothesis are said to be statistically significant. A statistically significant outcome indicates that
the difference between means we observed in our experiment is larger than would be expected if by chance
the null hypothesis were true. We conclude that the independent variable caused the difference between
means (presuming, of course, that the experiment is internally valid).

A statistically significant outcome is one that has only a small likelihood of occurring if the null hypothesis is
true. That is, when we look at the results of our statistical test, the probability value associated with the
statistic is low. But just how small does this likelihood have to be? Although there is no definitive answer to
this important question, the consensus among members of the scientific community is that outcomes
associated with probabilities of less than 5 times out of 100 (or .05) are judged to be statistically significant.
The probability we choose to indicate an outcome is statistically significant is called the level of significance.
The level of significance is indicated by the Greek letter alpha (α). Thus, we speak of the .05 level of
significance, which we report as α = .05.

When we conduct an experiment and observe that the effect of the independent variable is not statistically
significant, we do not reject the null hypothesis. However, we do not accept the null hypothesis of no
difference either. The results are inconclusive (this is similar to a confidence interval that includes "zero").
There may have been some factor in our experiment that prevented us from observing an effect of the
independent variable (e.g., too few subjects, poor operationalization of the independent variable).

To determine whether an outcome is statistically significant we compare the obtained probability value with
our level of significance, α = .05. In our example, because our probability value (p = .004) is less than .05, we
reject the null hypothesis. This allows us to state that the observed mean difference of 2.0 is probably not due
to chance. This outcome indicates the two means are reliably different; that is, the independent variable had a
reliable effect on the dependent variable. If our obtained probability value had been greater than .05, we
would fail to reject the null hypothesis. This would indicate that the observed difference between means could
be due to chance, and we would withhold judgment about the effect of the independent variable (i.e., the
results would be inconclusive).

To use SPSS for null hypothesis testing, refer to SPSS: Tip 5.

Instructions for SPSS:

There's no need to do anything more because in the previous step you computed a t test. The output of the t
test includes the value for t (3.35), and the probability associated with the statistic ("sig"), which is .004.

We conclude that the independent variable (type of writing) had a statistically significant effect on the
dependent variable (well-being rating) when the probability value in our output is less than .05. Probability
values such as .04, .03, .02, .01, .005, .000, and so on are regarded as statistically significant. Probability
values such as .06, .10, .20, .30, and so on are described as not statistically significant.
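
The same t test can be run in code. This sketch uses the independent-samples t test from SciPy and should reproduce the values reported above (t = 3.35, p = .004) for the hypothetical ratings:

import numpy as np
from scipy import stats

emotional = np.array([8, 5, 7, 9, 6, 7, 6, 9, 6, 7])
superficial = np.array([7, 4, 5, 4, 5, 5, 6, 3, 7, 4])

# Independent-samples t test; the null hypothesis is that the population
# means for the two writing conditions do not differ.
t_value, p_value = stats.ttest_ind(emotional, superficial)
print(f"t(18) = {t_value:.2f}, p = {p_value:.3f}")

# Decision rule: reject the null hypothesis when p is less than .05
if p_value < .05:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not statistically significant: the results are inconclusive.")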

Step 8: Analyze Data and Form Conclusions

Researchers seldom report their procedures for checking the data, but often do report their summaries and
inferential statistics. These questions will help you to evaluate their statistical procedures:

• Does the researcher describe checking the data-for example, is the distribution of scores described or
outliers identified?

• Are appropriate summary statistics provided for all variables-for example, are the means, standard
deviations, and effect sizes reported?

• Does the researcher present inferential statistics, such as confidence intervals or results of null hypothesis
significance testing?

Step 9: Report Research Results

In order to make a convincing argument or claim about behavior, we need to do more than simply analyze the
data. A good argument requires a good story. A trial attorney, in order to win a case, not only points out the
facts of the case to the jury, but also weaves those facts into a coherent and logical story. If the evidence
points to the butler, then we want to know "why" the butler (and not the cook) might have done it. Thus, after
the data are analyzed, the next step is to construct a coherent story that explains the research findings and
justifies the conclusions. This research story is then reported at psychology conferences and in psychology
journals.

What are psychology conferences?

How do research reports get published?

How do I write about my research project?

What are psychology conferences?

< Meetings of psychology professional organizations

Many psychologists and psychology students belong to professional organizations such as the American
Psychological Association (APA), American Psychological Society (APS), regional associations (e.g.,
Midwestern Psychological Association), and associations organized around special research interests (e.g.,
Society for Research in Child Development). These organizations sponsor annual conferences for their
members and students. These meetings typically last a few days and provide an opportunity for researchers
to present their latest findings and to talk to others about research ideas.

Students often present their work in "poster sessions," in which the main ideas of their research project are
presented, in writing, in a large session with many posters. People stroll through the posters and read the
ones that catch their eye (e.g., posters related to their own research). Poster sessions are a great way for
students to present their research findings and to talk with others about their research ideas. In order to
submit a poster for presentation at a conference, students need a faculty member sponsor.

Conferences are one of the best ways to learn what's "hot" in psychology, and they represent psychologists'
way of networking. Attending conferences is an important way for students to become familiar with the
researchers and topics at the forefront of psychology. Across the country there are many research
conferences designed specifically for undergraduates, sponsored by various colleges and universities. Watch
for notices on your department bulletin board!

How do research reports get published?

< Psychology journals, peer review

In addition to conference presentations, psychologists communicate their research findings by publishing
reports of their studies in scientific periodicals (journals). Publishing the results of a scientific investigation can
be very challenging, especially if a researcher wishes to publish in one of the more prestigious scientific
journals. Journals sponsored by the American Psychological Association (APA), such as the Journal of
Abnormal Psychology, Psychological Review, and the Journal of Personality and Social Psychology, can have
rejection rates of 80% or higher.

In order to submit their research for publication, researchers must prepare written reports that follow specific
guidelines (known as "APA format"), and their research must be free of methodological problems and
appropriate for the particular journal to which the manuscript is submitted. Journal editors decide whether the
research study makes a significant contribution to the science of psychology. They usually make their decision
based on comments from experts in the field who evaluate the manuscript. This process is called peer review.

Although the process of manuscript submission, review, manuscript revision, and eventual publication of a
research report usually takes many months (often years), the rewards are great. The most satisfying
conclusion of a research project is to communicate the findings in a peer-reviewed journal. In this way,
researchers make a contribution to the ongoing science of psychology.

How do I write about my research project?


< APA format

Psychologists use a consistent format and style for reporting their research results, called APA format (or APA
style). The guidelines for APA format are found in the Publication Manual of the American Psychological
Association. The fifth edition of the APA Publication Manual was published in 2001.

Research reports in psychology consist of the following sections:


Title page
Abstract
Introduction
Method
Results
Discussion
References
Appendixes
Author Note
Footnotes
Tables
Figures

Title page. The first page, the title page, identifies the title of the research study, the names of the authors of
the research manuscript, and the authors' affiliations (e.g., a university). A good title identifies the primary
variables and summarizes the main idea of the research.

Abstract. The Abstract is a one-paragraph summary (150 words) of the content and purpose of the research
report. The abstract identifies the research question or problem, the method, the findings, and the conclusions
and implications of the findings.

Introduction. The three objectives of the Introduction section are to describe the research question, to
summarize the relevant background literature, and to state the purpose and rationale of the research study
with a logical development of the hypotheses that were tested. All three of these objectives share a common
purpose: to give the reader a firm sense of what was studied in the research project and why.

Method. The Method section provides the details of the research project. The three most common
subsections of the Method section are "Participants," "Materials," and "Procedure." In the Participants section,
the authors describe the sample of people who participated in the research (e.g., demographic statistics). In
the Materials section, the authors describe any questionnaires, instruments, or measures that played a central
role in the research. Finally, the Procedure section describes participants' experience in the study from the
beginning to the end of the session.

Results. The Results section is used to answer the questions raised in the Introduction. The guiding principle
in the Results section is to "stick to the facts." In this section, researchers report the results of their statistical
analyses (descriptive statistics and inferential statistics).

Discussion. In the Discussion section, researchers report their conclusions about the statistical analyses.
They state whether the research hypotheses were supported or not supported and connect their findings to
the goals and objectives they stated in the Introduction. Authors often describe limitations to their research
study (all research studies have limitations) and provide ideas for future research on the topic.

References. The References section identifies all the sources that were cited in the text. Thus, when
describing previous research in the Introduction, authors must identify the source of these ideas. In the
References section, the complete citation information is provided so that readers may locate particular articles
or books of interest.

"End matter." In an Appendix (optional), authors may include copies of verbatim instructions if doing so would
help the reader understand the study. The Author Note provides the address of the authors and a place for
them to acknowledge people who helped with the research. Any Footnotes used in the text are included on a
separate page after the Author Note (i.e., footnotes are not printed at the bottom of the page on which they are
cited). Finally, any Tables and/or Figures are attached at the end of the manuscript. Tables and figures
(graphs) are an effective and efficient way to present data from statistical analyses.

Step 9: Report Research Results

Pennebaker and Francis (1996) described their experiment in a psychological journal called Cognition and
Emotion. You can learn more about this journal by going to the publisher's website at:
http://www.psypress.co.uk/journals.html
The full reference for Pennebaker and Francis' article is:
Pennebaker, J. W., & Francis, M. E. (1996). Cognitive, emotional, and language
processes in disclosure. Cognition and Emotion, 10, 601-626.
This reference is written using APA format. The authors' names are listed first (last names and initials only),
separated by an ampersand. The year in which the article was published is next (in parentheses). The title of
the article is next-note that only the first letter of the first word is capitalized. Next is the name of the journal
and the volume number (italicized and separated by commas). The first letters of the major words in the
journal title are capitalized. The page numbers are last, followed by a period. Additional examples of APA-
format references, including books and chapters, appear in the References section for this Student as
Researcher guide.
As you may see, APA format is quite specific. It would be impossible to go over all the guidelines in this
section. However, there's help available online. The first resource is part of Psych Web, and the second is part
of APA's own website:
http://www.psychwww.com/resource/apacrib.htm
http://www.apa.org/journals/acorner.html#pubmanual
Through the APA website you can also purchase a guide for writing APA reports called APA Style-Helper.
More information about this is available at
http://www.apastyle.org/stylehelper/
Another resource might be available in your word-processing software. For example, WordPerfect (versions 8
and up) has available as part of its "Project" option, a template (guide) for writing APA-format research
reports. You begin by going to file, then to new project, and finally, to APA report. Once you have the template
open, the program will prompt you to type the information needed for each section.

Writing a Research Report


Sample APA-Format Paper

Sample Research Report

Rather than write a sample research report for our hypothetical example, we present a sample APA-format
report for a quasi-experiment. This manuscript describes the results of a study that examines whether learning
about research methods in psychology improves students' critical thinking skills (the answer is yes). Because
this is a report of an actual study, you will get a better idea of how to write about psychological research.

Things to note in the sample manuscript:

• Use one-inch margins, left-justify the text, and double-space throughout.

• Do not use bold or italic font; use one font size throughout.

• Use one space between sentences.

• Always cite the authors when you present information from a source. Identify all authors (last names only)
when you cite a source, plus the year the study was published. If there are three or more authors, you can use
'et al.' after the first author's name the second and subsequent times you cite the source.

• Use words to express numbers less than 10, unless associated with a unit of measurement or in a series of
numbers.

• Follow examples precisely when reporting statistics. The format is: statistic(degrees of freedom) = calculated
value for statistic, p = significance value. An example from our hypothetical study is t(18) = 3.35, p = .004.

• List all cited references in alphabetical order by the first author's last name.

Click on "Sample APA-Format Paper" to see what your APA-Format paper should look like.

Sample APA-Format Paper


Taking a Research Methods Course Improves Real-Life Reasoning

Scott W. VanderStoep and John J. Shaughnessy

Hope College

Abstract

We examined the extent to which students who take a research methods course improve their reasoning
about real-life events. If social science majors improve their methodological and statistical reasoning over 4
years of college, a logical source of this improvement in psychology would be the research methods course.
We tested students in research methods and in developmental psychology courses on methodological and
statistical reasoning at the beginning and the end of the term. As expected, reasoning scores of research
methods students improved more than did scores of developmental psychology students. These results have
implications for teaching because they support our intuitive notions that what we are teaching has real-life
value.

Taking a Research Methods Course Improves Real-Life Reasoning

Teachers get excited when students recognize the relevance of what they are taught to something outside the
classroom. We are pleased when students tell us that what they learned in our class helped them with some
other aspect of their lives or that our class taught them to think like a psychologist. Likewise, we are
disappointed when students simply memorize factual information without reflecting on its relevance or when
they fail to see even the most obvious examples of the applicability of course material to new situations.

What students take away from psychology courses will depend on the course. In a developmental psychology
course, for example, students may reflect on their own childhood and how it has made them who they are,
they may see how the course material can make them better parents, or they may learn how to deal more
effectively with aging parents and grandparents. Each content course in psychology has such real-life
applications.

What do students take away from a research methods course? We hope they learn how to conduct
psychological research, including the mechanics of experimental design, survey sampling, and data analysis.
Beyond learning how to conduct research, however, learning about research methods has the potential for
teaching students real-life thinking and reasoning skills that may be useful in a variety of settings.

The ability to reason methodologically and statistically is a domain-general cognitive ability that students can
transfer to a variety of contexts (Nisbett, Fong, Lehman, & Cheng, 1987). Furthermore, instruction has been
shown to improve students' methodological and statistical reasoning. Specifically, undergraduates who
majored in social science disciplines showed greater improvements in methodological and statistical
reasoning than either natural science majors or humanities majors (Lehman & Nisbett, 1990). We expected, at
least among psychology courses, that taking a research methods course would explain a large part of the
change in methodological and statistical reasoning. Thus, we tested whether taking a research methods
course would improve reasoning more than another undergraduate course, such as developmental
psychology.

Method

Participants

Participants were students enrolled in two research methods sections and two developmental psychology
sections at a 4-year liberal arts college. The research methods sections were taught by different instructors;
the developmental psychology sections were taught by a third instructor. The second author was the instructor
for one of the research methods courses. Thirty-one students from the research methods classes and 32
students from the developmental psychology classes took both the pretest and the posttest. Most were
traditional-age college students. A majority of students were female (78%), although no gender differences
were found in reasoning scores (see Results). The mean ACT composite score of incoming students at this
institution is 24, and the mean high school GPA is 3.4.

Materials

Each form of the instrument for measuring methodological and statistical reasoning contained seven items.
Three items involved statistical reasoning and four items involved methodological reasoning. Two forms were
used and were counterbalanced across pretest and posttest. Some of the items were modified versions of
those used previously by Lehman and Nisbett (1990); others were created for this study. The statistical
reasoning questions tested whether students could recognize examples of regression to the mean and the law
of large numbers when applied to everyday events. The methodological reasoning items tested whether
students recognized concepts such as a spurious causal relationship and selection bias. All of the items were
phrased in everyday language with no reference to methodological or statistical concepts. The scenarios were
followed by four or five alternatives that might explain the event. Although all responses were plausible
explanations, we agreed that one response best illustrated methodological and statistical reasoning.
Participants' scores could range from 0 to 7 based on how many correct answers they selected. An example
question illustrating a spurious causal relationship is:

Suppose it were discovered that those who majored in math, engineering, or computer science scored higher
on tests measuring "problem-solving" ability at the end of four years of college than did those students who
did not major in these fields. How would you interpret this information?

a. Physical science training has positive effects that improve complex reasoning ability.

b. Math, engineering, and computer science majors have more class assignments that require students to use
complex reasoning.

c. It may be that the physical science majors differ on many other things besides problem-solving ability and
they would have scored higher at the beginning of their freshman year as well.

d. It is likely that physical science students will score lower on tests of verbal ability.

Answer "C" demonstrates that the relationship between selection of major and future problem-solving skill
may not be causal, based only on evidence provided (i.e., no pretest scores).

Procedure

We administered the instrument to students in their classrooms on the second day of the semester and again
near the end of the semester. Students were told that the stories were similar to events they might read about
in a newspaper or encounter in everyday conversation.

Results

Pretest and posttest means were calculated for the number of correct responses on the seven methodological
and statistical reasoning items for the two courses. There were no gender differences, t(61) = 1.01, p = .275,
and no between-instructor differences for the research methods professors, t(29) = 1.10, p = .28, in
methodological and statistical reasoning.

The means and standard deviations for the number of correct responses on the seven methodological and
statistical reasoning items for the two courses are presented in Table 1. The increase from pretest to posttest
was greater for the research methods students than for the developmental psychology students, F(1, 61) =
13.10, p = .002. The effect size (eta) for this interaction was .45.

We also found that the number of psychology courses students had taken was a significant predictor of
posttest methodological and statistical reasoning scores. Further analyses were done to assess the relative
contribution of the research methods course while accounting for the contribution of the number of courses
taken. The results of these analyses showed that there is an effect of taking a research methods course on
students' reasoning beyond that accounted for by the number of psychology courses the students have taken.

Discussion

Our study extends work by Lehman and Nisbett (1990) on the effects of undergraduate education on student
reasoning. Whereas Lehman and Nisbett found long-term effects of certain courses, we found more
specifically that a course in methodology can be important in cultivating students' ability to think critically about
real-life events.

General reasoning skills are important, especially when information is modified and updated very rapidly. For
example, a student taking a social psychology course in 1996 will be learning very different material than a
student who took the course in 1970. We do not know what tomorrow's domain-specific knowledge will be, or
whether what we are teaching today will still be relevant in the future. However, if we can teach students to
develop general thinking skills, then the importance and relevance of our courses will be greater (see also
Nisbett et al., 1987). If psychology majors can be taught general skills that they can apply to novel domains,
we can better ensure the relevance of what we teach. Students taking research methods classes may not
remember the precise definition of a confounding variable, or how exactly to design a randomized blocks
experiment. However, our results suggest that they may leave with some general skills that they can use while
watching the evening news, shopping for automobiles, voting, or deciding whether or not to adopt a new
weight-loss technique that they have seen advertised.

As psychology instructors, we have intuitive notions about the usefulness of the skills we teach our students.
We talk confidently about the benefits of an undergraduate major in psychology and how "thinking like a
psychologist" helps students in many areas of life. Our results suggest that there is value in learning to think
like a psychologist. There is more to real-life thinking than is represented by our small set of items, but we are
pleased that our intuitions held up to empirical scrutiny.

References

Lehman, D. R., & Nisbett, R. E. (1990). A longitudinal study of the effects of undergraduate training on
reasoning. Developmental Psychology, 26, 952-960.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238, 625-631.

Author Note

Scott VanderStoep, Department of Psychology; John Shaughnessy, Department of Psychology.


We thank Jim Motiff and Jane Dickie for the use of their classroom time. This sample paper is a modified
version of a published article of the same title by VanderStoep, S. W., & Shaughnessy, J. J. (1997). Teaching
of Psychology, 24, 122-124.

Send correspondence concerning this article to Scott VanderStoep, Department of Psychology, Hope College,
P.O. Box 9000, Holland, MI 49422-9000 (email: vanderstoep@hope.edu).

Table 1

Mean Pretest and Posttest Reasoning Scores for Developmental Psychology and Research Methods Students

                       Course
Time of Test    Developmental    Research Methods
Pretest         2.38 (1.29)      3.00 (1.69)
Posttest        2.84 (1.59)      4.97 (1.49)

Note. Standard deviations are in parentheses.

Step 9: Report Research Results

Consider these questions as you read a research report:

• Who is the intended audience (e.g., other scientists, general public)?

• Is the report clearly written? Does the researcher present a logical story about the importance of his/her
research findings?

• Has the research undergone peer review? (Sometimes this may be hard to know; generally, research
findings are more acceptable if presented in a reputable, scientific journal rather than at a conference or
simply in the media.)
Step 10: Take the Next Step

Researchers are seldom "done" when they finish a research project. Recall that the first step in the research
process involved selecting a research question. This, by itself, is difficult because often we have so many
questions about behavior and mental processes! In addition, a finished research study may answer a
research question, but more often than not, new questions arise. Thus, the next step in the research process
is to conduct another study.

Are the findings specific to my study?

Is psychology research in my future?

Are the findings specific to my study?

< External validity, replication

An important question that arises at the conclusion of any research project is whether the findings apply only
to the specific people and conditions in the study. This question refers to external validity. Validity refers to
"truthfulness," and questions of external validity ask whether the findings for a particular study can be
generalized (or applied) to different settings, conditions, and people.

Researchers often use replication to establish the external validity of their findings. Replication involves
repeating the procedures used in a particular research study to determine whether the same results are
obtained a second time with different participants (and often, different settings and conditions). When
research findings are "replicated," we gain confidence in our ability to describe, predict, and understand
behavior and mental processes. Thus, one "next step" to take after completing a research project is to try to
replicate, or repeat, the findings.

Is psychology research in my future?

< Psychological research will always be popular; new technology for studying the mind

As you become more familiar with topics in psychology, you will find that we have many unanswered
questions about why people think and behave the way they do. As our culture changes, as "baby boomers"
age and the next generation-perhaps your generation-begins to shape the world in which we live, new
questions and areas for research will emerge. In addition, advancing technologies will provide psychology
researchers new tools for investigating how the mind works.

No matter what the future holds, we can safely make one prediction. You will continue to learn about
psychological research from reports on the Internet, in newspapers, magazines, and on radio and television.
The public has an almost insatiable appetite for information related to psychology. And why not? Psychology
is about us-who we are and what makes us tick-and many of us think we are about as interesting as things
get. We hope this guide to research in psychology will help you to evaluate these findings and separate the
good research from the bad research. And, who knows, you might be part of psychology's future. The key to
your future is to get involved. Talk to psychologists and instructors. Find out about opportunities for
participating on research teams. One day, we might learn about your research findings.

Step 10: Take the Next Step


Our last step in reviewing Pennebaker and Francis' (1996) experiment is to consider the external validity of
their findings and the next steps in emotional writing research.

Do Pennebaker and Francis' (1996) findings apply only to their students at Southern Methodist University?

What's the next step in research on emotional writing?

Do Pennebaker and Francis' (1996) findings apply only to their students at Southern Methodist University?

This is a question of external validity. Researchers seldom do studies just to describe a single sample.
Instead, they want their findings to generalize, or apply to, a larger population.

By itself, their experiment has little external validity. It was conducted on a single college campus, with one
sample of college freshmen and transfer students, at one time in students' lives. Based on this one study, it's
hard to generalize their findings beyond this one sample and setting. We can ask, does writing about
emotional events improve health and performance outcomes for other populations, in other settings, and for
people who experience different types of stressful or traumatic experiences?

We can answer this question by examining the findings for similar studies on emotional writing. Pennebaker
and Francis (1996) replicated the procedures used in several other experiments on the effects of emotional
writing. These other experiments involved different samples (e.g., other college students, adults in the
community) and different types of trauma (e.g., layoff from a job). The findings from these different studies
were all similar. In each case, emotional writing, compared with superficial writing, caused better outcomes.
Therefore, we can say that the findings for emotional writing are reliable.

What's the next step in research on emotional writing?

The first step in answering this question requires another literature review. A few years have passed since
Pennebaker and Francis' (1996) experiment. Since then, other emotional writing studies have been
conducted. The next step in pursuing this research topic would be to conduct a literature search to see what's
new in emotional writing research. As you read more about recent findings, perhaps you'll spark new
questions and hypotheses that you can test.

Search terms for PsycINFO might include:

• emotional writing and health

• emotional writing and Pennebaker


• emotional writing and Pennebaker and health

Step 10: Take the Next Step

No single study will provide a definitive answer about behavior or mental processes. Research studies always
have limitations and present possibilities for future research. These questions will help you to evaluate the
next step for a research project:

• Does the researcher discuss the external validity of the findings-that is, the extent to which the findings may
apply only to particular people, settings, or conditions?

• Has the researcher replicated the findings or reported that others have replicated the findings?

• Does the researcher describe the limitations of the research?

• Does the researcher suggest possibilities for future research?


Glossary

applied research Applied research seeks knowledge that will modify or improve the present situation.

archival Source of evidence based on records or documents relating to the activities of individuals, institutions,
governments, and other groups; used as an alternative to or in conjunction with other research methods.

baseline The first stage of a single-case experiment, in which a record is made of an individual's behavior
prior to any intervention.

basic research Basic research mainly seeks knowledge about nature simply for the sake of understanding it
better and to test theories.

causal inference Identification of the cause or causes of a phenomenon.

confederate Someone in the service of a researcher who is instructed to behave in a certain way in order to
help produce an experimental treatment.

confidence intervals Intervals that indicate the range of values in which we can expect a population value to
fall with a specified degree of confidence (e.g., .95).

control Key component of the scientific method whereby the effect of various factors possibly responsible for
a phenomenon are isolated.

convenience sample A sample of research participants that is selected because individuals are available and
willing to participate in the research project.

correlational research Research in which the goal is to identify predictive relationships among naturally
occurring variables.

debriefing The process following a research session through which participants are informed about the
rationale for the research in which they participated, about the need for any deception, and about their specific
contribution to the research. Important goals of debriefing are to clear up any misconceptions and to leave
participants with a positive feeling toward psychological research.

deception Intentionally withholding information about significant aspects of a research project from a
participant or presenting misinformation about the research to participants.

dependent variable A measure of behavior used by a researcher to assess the effect (if any) of the
independent variables.

descriptive statistics Numerical measures of sample characteristics, such as the mean (average score) and
standard deviation (degree of dispersal around the mean).

effect size An index of the strength of the relationship between the independent variable and dependent
variable.

empirical approach Approach to acquiring knowledge that emphasizes direct observation and
experimentation as a way of answering questions.

experimental research design A research study in which a treatment (intervention) is implemented with a high
degree of control, permitting an appropriate comparison (e.g., between treatment and control groups) such
that an unambiguous decision can be made concerning the effect of the treatment.

external validity The extent to which the results of a research study can be generalized to different
populations, settings, and conditions.

holding conditions constant A method for conducting a controlled experiment in which only the independent
variable is allowed to vary; all other potential factors are the same for participants in different conditions of the
experiment.

hypothesis A tentative explanation for a phenomenon.

independent variable A factor the researcher manipulates with at least two levels in order to determine the
effect on behavior.

inferential statistics Statistical procedure for testing whether the differences in a dependent variable that are
associated with various conditions of an experiment are reliable-that is, larger than would be expected on the
basis of chance alone.

informed consent The explicitly expressed willingness to participate in a research project, based on clear
understanding of the nature of the research, of the consequences of not participating, and of all factors that
might be expected to influence willingness to participate.

Institutional Animal Care and Use Committee (IACUC) A committee that evaluates the risks and benefits of
research proposals involving animal subjects.

Institutional Review Board (IRB) A committee that evaluates the risks and benefits of proposals involving
research with human participants.

internal validity The degree to which differences in performance can be attributed unambiguously to an effect
of an independent variable, as opposed to an effect of some other (uncontrolled) variable.

mean The average score in a distribution of scores; calculated by adding all of the scores and dividing by the
number of scores.

minimal risk A research participant is said to experience minimal risk when probability and magnitude of harm
or discomfort anticipated in the research are not greater than that ordinarily encountered in daily life or during
the performance of routine tests.

null hypothesis testing A statistical procedure in which, as the first step in statistical inference, the
independent variable is assumed to have had no effect.

observational research Observation of naturally occurring behavior, with the goal of describing behavior.

operational definition A procedure whereby a concept is defined solely in terms of the operations used to
produce and measure it.

quasi-experiments Procedures that resemble the characteristics of true experiments, for example, an
intervention or a treatment is used and a comparison is provided, but procedures lack the degree of control
found in true experiments.

random assignment The most common technique for forming groups as part of an independent groups
design; the goal is to establish equivalent groups by balancing individual differences in the participants across
the conditions of the experiment.

random sample A sample in which every member of the population has an equal chance of being selected for
the research project.

reliability A measurement is reliable when it is consistent.

replication Repeating the exact procedures used in an experiment to determine whether the same results are
obtained.

risk/benefit ratio The subjective evaluation of the risk of the proposed research relative to the benefit, both to
the individual and to society.

scientific method Approach to knowledge that emphasizes empirical rather than intuitive processes, testable
hypotheses, systematic and controlled observation of operationally defined phenomena, data collection using
accurate and precise instrumentation, valid and reliable measures, and objective reporting of results;
scientists tend to be critical and, most importantly, skeptical.

standard deviation A measure of variability or dispersion that indicates how far, on average, a score is from
the mean.

statistically significant When the probability of an obtained difference in an experiment is smaller than would
be expected if chance alone were assumed to be responsible for the difference, the difference is statistically
significant.

theory A logically organized set of propositions that defines events, describes relationships among events,
and explains the occurrence of these events; scientific theories guide research and organize empirical
knowledge.

validity The "truthfulness" of a measure; a valid measure is one that measures what it claims to measure.

variable A condition (factor) that can vary, either quantitatively or qualitatively, along an observable
dimension. Researchers both measure and control variables.

References

Ajzen, I., & Madden, T. (1986). Prediction of goal-directed behavior: Attitudes, intentions, and perceived
behavioral control. Journal of Experimental Social Psychology, 22, 453-474.

Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.

Anderson, J. R. (1993). Rules of the mind. Hillsdale, NJ: Erlbaum.


Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist,
40, 165-174.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Findler, N. V. (1998). A model-based theory for deja vu and related psychological phenomena. Computers in
Human Behavior, 14, 287-301.

Kazdin, A. E. (1999). Overview of research design issues in clinical psychology. In P. C. Kendall, J. N.
Butcher, & G. N. Holmbeck (Eds.), Handbook of research methods in clinical psychology (2nd ed., pp. 3-30).
New York: Wiley.

Kimble, G. A. (1989). Psychology from the standpoint of a generalist. American Psychologist, 44, 491-499.

Pennebaker, J. W., & Francis, M. E. (1996). Cognitive, emotional, and language processes in disclosure.
Cognition and Emotion, 10, 601-626.

Simons, D. J., & Levin, D. T. (1998). Failure to detect changes to people during a real-world interaction.
Psychonomic Bulletin and Review, 5, 644-649.

Sternberg, R. J. (1986). A triangular theory of love. Psychological Review, 93, 119-135.
