
Quality & Quantity (2005) 39: 267–296 © Springer 2005

DOI 10.1007/s11135-004-1670-0

Taking the ‘‘Q’’ Out of Research: Teaching Research Methodology Courses Without the Divide Between Quantitative and Qualitative Paradigms*
ANTHONY J. ONWUEGBUZIE1,† and NANCY L. LEECH2
1Department of Educational Measurement and Research, College of Education, University of South Florida, Tampa, FL 33620, USA; 2University of Colorado at Denver

Abstract. The purpose of this paper is to provide evidence that the debate between quantitative and qualitative research is divisive and, hence, counterproductive for advancing the social and
behavioral science field. We advocate that all graduate students learn to utilize and to
appreciate both quantitative and qualitative research methodologies. As such, students will
develop into pragmatist researchers who are able to utilize both quantitative and qualitative
techniques when conducting research. We contend that the best way to accomplish this is by
eliminating quantitative research methodology and qualitative research methodology courses
from curricula and replacing these with research methodology courses at different levels that
simultaneously teach both quantitative and qualitative techniques within a mixed methodo-
logical framework.

Key words: research methods courses; teaching research; qualitative research; quantitative
research; mixed methods; pragmatist researcher

1. Introduction
The last several decades have witnessed intense and sustained debates about
quantitative and qualitative research paradigms. Unfortunately, this has led
to a great divide between quantitative and qualitative researchers in general
and between instructors of quantitative and qualitative research methodol-
ogy courses in particular, who often view themselves as being in competition
with each other. This polarization has promoted what Onwuegbuzie (2000a)
has termed ‘‘uni-researchers,’’ namely, researchers who restrict themselves
* An earlier version of this article received the 2003 Southwest Educational Research Association (SERA) Outstanding Paper Award.
† Author for correspondence: Anthony J. Onwuegbuzie, Department of Educational Measurement and Research, College of Education, University of South Florida, 4202 East Fowler Avenue, EDU 162, Tampa, FL 33620-7750. E-Mail: tonyonwuegbuzie@aol.com
exclusively either to quantitative or to qualitative research methods.
Tashakkori and Teddlie (2003: 64) imply that these individuals are unable to
conduct ‘‘bilingual research’’. Yet, relying on only one research paradigm can
be extremely limiting. As such, uni-research is a threat to the advancement of
the social and behavioral sciences. Indeed, as long as we stay polarized in
research, how can we expect stakeholders who rely on our research findings
to take our work seriously?
With this in mind, the purpose of this paper is to provide evidence that the
debate between quantitative and qualitative research is divisive and, hence, counterproductive for advancing the social and behavioral science field. We advocate
that all graduate students learn to utilize and to appreciate both quantitative
and qualitative research methodologies. As such, students will develop into
bi-researchers, or what Onwuegbuzie and Leech (in press-a) call pragmatist
researchers. We contend that the best way to accomplish this is by eliminating
quantitative research methodology and qualitative research methodology
courses from curricula and replacing these with research methodology
courses at different levels that simultaneously teach both quantitative and
qualitative techniques within a mixed methodological framework.
The current paper is divided into several sections. First, we provide the
historical context for the quantitative–qualitative debate. Second, we sum-
marize what has been identified as the major differences between quanti-
tative and qualitative research paradigms. The distinctions that we discuss
include epistemological, ontological, axiological, and rhetorical differences.
Third, we describe the major misconceptions held by both quantitative and
qualitative researchers. To understand these differences better, we outline
the similarities between quantitative and qualitative research paradigms in
general and quantitative and qualitative research methods in particular. In
so doing, we assert that a false dichotomy exists between the two para-
digms.
Fourth, although many research procedures typically are linked to cer-
tain paradigms, we contend that this linkage between research paradigm
and research methodology is not sacrosanct. Moreover, we argue that the
most effective way of attaining epistemological universality involves using
mixed-methodological approaches. Thus, in this section, we describe what
mixed-methods research is and present the major advantages of using this
paradigm. Finally, we argue that we should no longer distinguish quanti-
tative and qualitative research but strive towards methodological pluralism.
An important way of accomplishing this is to re-frame the concept of
research in the social and behavioral sciences by de-emphasizing the terms
quantitative and qualitative research and, instead, sub-dividing research
into exploratory and confirmatory methods (Onwuegbuzie and Teddlie,
2003).

1.1. HISTORICAL CONTEXT

For slightly more than 100 years, research methodology in the social and
behavioral sciences has undergone four major phases. The first phase, ending
just prior to the late 19th century, was characterized by the popularization of
the quantitative research paradigm. During this era, positivism prevailed, in
which mathematical and statistical procedures were utilized to explore, to
describe, to explain, to predict, and to control social and behavioral phe-
nomena (Smith, 1983; Smith and Heshusius, 1986; Onwuegbuzie, 2000a; cf.
Johnson and Onwuegbuzie, 2004). Social science positivists promoted re-
search studies that were value-free, using rhetorical neutrality that resulted in discoveries of social laws, from which time- and context-free generalizations ensued.
The beginning of the 20th century marked the second research method-
ology phase in the social and behavioral sciences. This phase involved the
emergence of the qualitative research paradigm. Proponents of this school of
thought rejected the positivistic use of the traditional scientific method to
study social observations (Hodges, 1944, 1952; Ermarth, 1978; Crotty, 1998).
Instead, they advocated the use of interpretive/hermeneutic approaches in the
social and behavioral science field. Moreover, they contended that social
reality was constructed and thus was subjective. As such, this phase of history
is distinguished by the polarization of the quantitative and qualitative re-
search paradigms (Hughes, 1958; Outhwaite, 1975, 1983; Smith, 1983; Smith
and Heshusius, 1986). As the status and visibility of the qualitative research
paradigm rose between the turn of the 20th century and World War II,
positivism in its purist form (i.e., logical positivism) in the social and
behavioral sciences became increasingly discredited (Tashakkori and Teddlie,
1998).
The third research methodology phase in the social and behavioral sci-
ences came to the forefront during the late 1950s and 1960s. This phase,
which grew out of an attempt to reject some of the tenets of logical posi-
tivism, saw the emergence of post-positivism (e.g., Hanson, 1958; Popper,
1959; Campbell and Stanley, 1966). Post-positivism represented a compro-
mise between the quantitative and qualitative research paradigms. While
believing that reality is constructed and that research is value-laden, post-
positivists also believed that some relatively stable relationships exist.
However, they tended to emphasize the importance of the scientific method
in general and methodological appropriateness in particular (Cook and
Campbell, 1979; Smith, 1994).
During this third phase, more radical philosophies also arose. These
philosophies, which included post-structuralism and post-modernism, con-
tended even more strongly that no objective social reality existed; rather, they
maintained that multiple realities existed such that interpretation was
dependent on the interpreter. Moreover, post-structuralists, post-modernists,
and the like, subscribed to the Incompatibility Thesis (Howe, 1988). This thesis posited that the quantitative and qualitative paradigms could not coexist.
Arguing for the exclusive superiority of their qualitative orientation, they
contended that these two paradigms could not and should not be mixed in
any way (Smith, 1983; Lincoln and Guba, 1985; Guba, 1987).
The fourth research methodology phase in the social and behavioral sci-
ences began in the 1960s. This phase saw the emergence of the pragmatist
paradigm (Howe, 1988). Pragmatists challenged quantitative and qualitative
purists’ support of the Incompatibility Thesis, asserting that quantitative and
qualitative paradigms were neither mutually exclusive nor interchangeable.
As such, their philosophy gave rise to the Compatibility Thesis, which holds that the quantitative and qualitative paradigms represent not isolated opposites but points on a continuum of scientific research (Howe, 1988;
Reichardt and Rallis, 1994; Tashakkori and Teddlie, 1998). Pragmatists also
believed that the role of theory was central for both quantitative and qual-
itative paradigms. They believed in the existence of both subjective and
objective orientations, utilizing both deductive and inductive logic
(Onwuegbuzie, 2002a).
In the 1980s, mixed methods as a research approach gained momentum.
In the 1990s came the emergence of mixed model studies, that is, studies combining
quantitative and qualitative approaches within different stages of the research
process (Creswell, 1995; Tashakkori and Teddlie, 1998). Thus, presently,
three research paradigms prevail in the social and behavioral sciences:
quantitative, qualitative, and pragmatist.

1.2. THE CASE FOR PARADIGM WARS

Despite the gains made by pragmatists in advancing mixed methodologies,
quantitative and qualitative purists still abound. These purists continue to
emphasize the differences between the two major paradigms. According to
theorists and researchers from these camps, quantitative and qualitative
researchers have uncompromisingly different world-views. More specifically,
purists contend that the Incompatibility Thesis stems from the fact that
quantitative and qualitative research paradigms operate under different
ontological, epistemological, and axiological assumptions about the goal and
nature of research (Bryman, 1984; Tashakkori and Teddlie, 1998).
Ontological differences between the two major paradigms revolve around
the perceived nature of reality. Whereas positivists believe in a single reality
that can be measured reliably and validly using scientific principles, inter-
pretivists believe in multiple constructed realities that generate different
meanings for different individuals, and whose interpretations depend on the
researcher’s lens. Epistemological differences between the two major
paradigms are based on the relationship between the researcher and the object of study (e.g., the participant). Whereas positivists assert that researchers should
separate themselves from the object of study, interpretivists contend that
these two entities are dependent on one another and that qualitative
researchers should take advantage of this relationship better to understand
phenomena. Finally, axiological differences focus on the role of values in
research. Positivists maintain that research should be value-free, whereas
interpretivists posit that research is influenced to a great extent by the values
of the researcher.
These major differences in understanding the world have caused great rifts
between positivists and interpretivists (Creswell, 1998). On both sides, many
staunch advocates believe that due to these major differences in beliefs, there
is no middle ground at which to meet. Instead, advocates of positivist and
interpretivist paradigms believe their way is the only way, and thus, cannot
see beyond their own back yards.

1.3. THE CASE AGAINST PARADIGM WARS

The most disturbing feature of the paradigm wars is the unending focus on
the two orientations. These orientations have evolved into subcultures in the
research world: the subculture of positivistic quantitative thinking and the
subculture of interpretivist qualitative thinking (Sieber, 1973). As noted by
Newman and Benz (1998), rather than representing bi-polar opposites,
quantitative and qualitative research represent an interactive continuum.
Even though there is a substantial rift between the two paradigms, there are
many more similarities than there are differences between these two orien-
tations.
One of the most basic similarities between the two paradigms is that they
both include the use of research questions. Furthermore, the research ques-
tions in both paradigms are addressed through some type of observation.
Sechrest and Sidani (1995: 78) describe how both paradigms ‘‘describe their
data, construct explanatory arguments from their data, and speculate about
why the outcomes they observed happened as they did’’.
Another similarity is how both paradigms interpret data. Quantitative
researchers use an array of statistical procedures and generalizations to
determine what their data mean, whereas qualitative researchers use phe-
nomenological techniques and their worldviews to extract meaning. That is,
researchers from both paradigms use analytical techniques to find meaning
(Dzurec and Abraham, 1993). In addition, techniques to verify data are
utilized by researchers in both paradigms.
Both sets of researchers attempt to reduce the dimensionality of their data.
For example, quantitative researchers use data-reduction methods such as
factor analysis and cluster analysis, whereas interpretivists conduct thematic
analyses (Onwuegbuzie, 2003). As such, factors that are derived from mul-
tivariate analyses are analogous to emergent themes that are extracted from
thematic analyses.
As outlined above, similarities between quantitative and qualitative re-
search abound. Moreover, regardless of epistemology, all research in the
behavioral and social sciences represents an attempt to understand human
behavior (Onwuegbuzie and Leech, 2004a). Thus, it is clear that if differences
prevail between quantitative and qualitative researchers, these discrepancies
do not arise from different goals. Instead they occur because the two groups
of investigators have operationalized their strategies differently for reaching
these goals (Dzurec and Abraham, 1993). This suggests that methodological
pluralism should be enhanced. The best way to accomplish this is for as many
investigators as possible to become pragmatist researchers.

1.4. CURRENT SCENE

It is not unusual for quantitative and qualitative research methodology
instructors to debate how many quantitative courses (e.g., statistics, mea-
surement) and qualitative courses that graduate students should be required
to take. Further, when teaching introductory-level research courses,
instructors with quantitative leanings and those with qualitative orientations
often are at odds as to how much emphasis should be placed on quantitative
and qualitative approaches. Specifically, the ratio of quantitative to quali-
tative content in these courses is largely dependent on whether a quantitative
or qualitative methodologist is teaching the course. Bearing in mind that an
introductory-level course might be the only formal exposure to research
methodology for most Master’s students, such curricula likely will render a
student one-dimensional with regard to knowledge of the research process.
As such, these students will not graduate adequately competent in both areas of research. That is, they will not graduate as pragmatist
researchers. As stated eloquently by Newman and Benz (1998):

while to some the [quantitative–qualitative] debate has ended, to others,
especially those we encounter in researcher-practitioner programs, the
debate has either not yet materialized to the full extent of its fury or
continues unabated. Our strong sense is twofold. First, we continue to
prepare students for an ‘either-or’ world, a dichotomous world, that no
longer exists. We still prepare students who leave our colleges and uni-
versities with a monolithic perspective. Either they become well trained
statisticians, or they become cultural anthropologists, methodologically
weak in asking research questions in justifying one or the other set of
strategies. Second, researchers in education and in the social sciences
have not yet constructed a way to ensure their success in utilizing both
paradigms (pp. 7–8).

1.5. CURRENT METHODS BOOKS

An examination of research methodology textbooks in the social and
behavioral science field published over the last few decades provides evidence
of a shift in the balance of power between the quantitative and qualitative
paradigms (Culbertson, 1988; Newman and Benz, 1998; Tashakkori and
Teddlie, 2003). Prior to the 1990s, the vast majority of research methodology
textbooks, including the most popular and most frequently used ones, did not
contain a chapter or even a section on qualitative research (e.g., Ary et al.,
1972; Bordens and Abbott, 1988; Durson and Mellgren, 1989). Although
some contemporary introductory-level research methodology textbooks still
make scant reference to the qualitative paradigm (e.g., Leary, 1995; Liebert
and Liebert, 1995; Rosnow and Rosenthal, 1996; Salkind, 2000), an
increasing number of leading books in this area contain one or more chapters
on qualitative methods (e.g., Fraenkel and Wallen, 2000; Ary et al., 2002;
Creswell, 2002; Onwuegbuzie and Leech, 2005b; Johnson and Christensen,
2004).
The increasing coverage of qualitative methods in introductory methods books is exemplified by Lorraine Gay’s text. This text has
been a leader in the educational research field since 1976. The four early
editions of her book (Gay, 1976, 1981, 1987, 1992) did not contain a chapter
on qualitative research. In fact, it was not until the fifth edition (i.e., Gay, 1996), merely 7 years ago, that she included a chapter on qualitative re-
search methodology. This edition contained one chapter on qualitative re-
search (Chapter 7). The sixth edition (i.e., Gay and Airasian, 2000) contained
two chapters on qualitative research (Chapter 6: Data Collection; Chapter 7:
Data Analysis). The seventh edition, the latest (i.e., Gay and Airasian, 2003)
contains three chapters on qualitative research (Chapter 6: Characteristics of
Qualitative Research; Chapter 7: Data Collection; Chapter 8: Data Analysis).
This trend of increased qualitative research chapters in Gay’s textbook
over the last quarter of a century, which is typical of most research texts, is
encouraging inasmuch as this text offers the potential for novice researchers
to be trained in both quantitative and qualitative methodologies within the
same course. However, in many research methods books, the number of
chapters devoted to quantitative methodologies far outweighs that devoted
to qualitative research methodologies. For instance, Gay and Airasian (2003) present their text as having a 4-chapter section on qualitative research
(i.e., Chapters 6–9) and a 7-chapter section on quantitative methodologies
(i.e., Chapters 10–16). However, a closer examination of the book reveals
that this is misleading. Specifically, Chapter 5, entitled ‘‘Selecting Measuring
Instruments,’’ not included in the quantitative research section, exclusively
contains concepts and terminologies (e.g., scales of measurement, measuring
instruments, reliability, content-related validity, criterion-related validity,
construct-related validity) that are traditionally associated with the quanti-
tative paradigm. Because the content of this chapter is quantitative, this
chapter either should have been placed in the Quantitative Research section
of Gay and Airasian’s book, or should include qualitative measures as well as
quantitative measures.
In a similar fashion, the majority of Chapter 4, entitled, ‘‘Selecting a
Sample,’’ which also is not included in the quantitative research section of
Gay and Airasian’s book, discusses quantitative research concepts
(i.e., probability and non-probability sampling, determining sample size for
various quantitative research designs). In fact, only one page (i.e., six para-
graphs) of this chapter is devoted to qualitative sampling. The qualitative
portion appears in the last section of the chapter, perhaps giving students the
impression that it was included as an afterthought. Disturbingly, although all
the types of quantitative sampling designs were discussed in this chapter, only
5 of the 16 qualitative sampling schemes identified by Miles and Huberman
(1994) were presented very briefly in a table. Further, these authors did not
cite Miles and Huberman (1994) or any other qualitative textbook for readers
interested in learning more about qualitative sampling designs. Therefore,
Chapter 4 is heavily weighted towards the quantitative research paradigm.
Finally, Gay and Airasian (2003) incorrectly present action research as a
qualitative research paradigm, when this form of research also can utilize
quantitative techniques (see for example, Creswell, 2002; Daniel and On-
wuegbuzie, 2002a; Daniel et al., 2002; Daniel et al., 2003). Indeed, in their
action research chapter (i.e., Chapter 9), the authors discussed collecting test
scores and attitudinal scale data, which are more often associated with the
quantitative paradigm. Therefore, Gay and Airasian (2003) actually present
three purely qualitative chapters and eight purely quantitative chapters
within their text, with an additional chapter on sampling that is predomi-
nantly quantitative (i.e., Chapter 4: Selecting a Sample). Therefore, the ratio
of purely quantitative methodology chapters to purely qualitative method-
ology chapters is 8:3. In other words, 72.7% of the chapters exclusively
represent the quantitative paradigm. However, it should be noted that Gay
and Airasian (2003) present more chapters on qualitative methodologies than
do many other research textbooks, which, we feel, is a positive and helpful
addition.
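As a check, the 72.7% figure follows directly from the 8:3 ratio of purely quantitative to purely qualitative chapters reported above:

```latex
% Share of purely quantitative chapters in Gay and Airasian (2003)
\[
\frac{8}{8+3} \;=\; \frac{8}{11} \;\approx\; 0.727 \;=\; 72.7\%
\]
```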
Other texts exhibit the same imbalance of quantitative over qualitative content. In McMillan and Schumacher’s (2001) book, the ratio of
purely quantitative to purely qualitative research chapters is 6:3. In Wallen
and Fraenkel’s (2001) book, the ratio is even more disproportionate, at 12:2.
Thus, despite the increase in the number of qualitative methods chapters
occurring in many research textbooks, most of these books are still domi-
nated by the quantitative research paradigm. Encouragingly, however, some
research methodology textbooks now contain a section (e.g., McMillan and
Schumacher, 2001; Gay and Airasian, 2003) or even a whole chapter
(e.g., Punch, 1999; Creswell, 2002; Johnson and Christensen, 2004) devoted
to mixed methodology research. Nevertheless, for the remaining chapters in
most textbooks, the quantitative and qualitative approaches are presented
separately. Unfortunately, when these two sets of methods are presented in separate sections and chapters, research methodology instructors who possess little background in one paradigm may not be able to resist the
temptation to omit one of the sections of the research text. Students enrolled
in such one-dimensional classes then would not be educated to become
pragmatist researchers.
It could be argued that separating quantitative and qualitative techniques
makes the book read as if it contains two books in one. Further, by sepa-
rating the two paradigms in research textbooks, students may form the
impression that research represents a dichotomy of choices rather than an
integrative, interactive, and systematic process for the purpose of generating
new knowledge or validating or refuting existing knowledge. Further, such
separation does little to advance epistemological ecumenism. Yet, not only is
such a quantitative–qualitative dichotomy false, but as Miles and Huberman
(1984: 21) stated, ‘‘epistemological purity doesn’t get research done’’.
Moreover, as advanced by methodologists (e.g., Newman and Benz, 1998;
Onwuegbuzie, 2003), rather than representing a dichotomy, quantitative and
qualitative research traditions lie on an epistemological continuum. In fact,
all the various dichotomies that are used to distinguish quantitative and
qualitative paradigms should be re-conceptualized as lying on continua.
These continua include realism vs. idealism, objective vs. subjective, imper-
sonal vs. personal, deductive reasoning vs. inductive reasoning, logistic vs.
dialectic, rationalism vs. naturalism, reductionistic vs. holistic, generalization
vs. uniqueness, causal vs. acausal, macro vs. micro, quantifiers vs. describers,
and numbers vs. words. Understanding and using such a re-conceptualiza-
tion would allow all research methodology instructors to focus more on
teaching research strategies rather than on paradigmatic issues.
The prevailing practice of presenting quantitative and qualitative methods in separate sections of textbooks likely has shaped the way many research meth-
odology instructors teach their classes. As stated by Tashakkori and Teddlie
(2003):

Although no research data are available, one may assume that pedagogy
has undergone the same transitions as well; that is, that current teachers
of research methods cover both types of methods in their general
research courses, and do so in a separate manner. (p. 63)

Tashakkori and Teddlie (2003) subsequently recommended that

research methods should be taught in an integrated, complementary
manner (qualitative + quantitative), not in an artificially separated
manner (qualitative, then quantitative, or vice versa). There needs to be a
shift in the way research methods are taught, and also in the way re-
search method texts are structured in order to provide the best training
and education to our students in this area. This is especially the case with
introductory research methods textbooks. (p. 63)

We agree with Tashakkori and Teddlie that research methods textbooks and
courses should be presented in a more integrated and interactive manner.
Further, we agree that ‘‘those who teach social/behavioral research meth-
odology have to stop identifying themselves as qualitative or quantitative
researchers’’ (p. 22). Also, we applaud these authors for providing a frame-
work for accomplishing this (cf. Tashakkori and Teddlie, 1998, 2003).
However, our position is even stronger. Specifically, we believe that the best
way to develop pragmatist researchers is by eliminating quantitative research
methodology courses (including statistics courses) and qualitative re-
search methodology courses from curricula and replacing these with research
methodology courses at different levels that simultaneously teach both
quantitative and qualitative techniques within a mixed methodological
framework. Further, we advocate that the terms ‘‘quantitative’’ and ‘‘quali-
tative’’ be eliminated from research textbooks as much as possible. Next, we
provide a framework for reaching our goals.

2. Towards a Framework for Deconstructing Quantitative and Qualitative Research
A content analysis conducted by the present authors revealed seven steps of
the research process that are presented in the majority of research method-
ology textbooks. These components also appear to be taught in many
research methodology courses. These steps are:
1. formulating a research problem and research objective;
2. developing the research purpose, research question(s), and hypotheses;
3. selecting a research design/method;
4. collecting data;
5. analyzing data;
6. interpreting/validating data; and
7. communicating findings.
We will illustrate how quantitative and qualitative research can be blended or
integrated in such a way that reference to the terms ‘‘quantitative’’ and
‘‘qualitative’’ is no longer needed.

2.1. FORMULATING A RESEARCH OBJECTIVE

As noted by Tashakkori and Teddlie (2003), virtually all research textbooks
give the impression that quantitative research methods are associated with
deductive reasoning and qualitative methods are associated with inductive
reasoning. For example, Gay and Airasian (2003: 4) state: ‘‘An inductive
research approach is typically qualitative in nature, while a deductive re-
search approach is typically quantitative in nature’’. However, this represents
an over-simplification of the nature of the quantitative and qualitative
research paradigms.
The research objective in quantitative studies can be classified as falling on
a continuum from exploratory to confirmatory. A quantitative research
objective is exploratory if the goal of the study is to examine patterns in the data collected by the investigator. From exploratory
studies, quantitative researchers are able to make statements such as ‘‘the
pattern of student responses suggested that the reading test contains two
dimensions, vocabulary and comprehension.’’ Conversely, a quantitative
research objective is confirmatory if the goal of the investigation is to use the
underlying data collected to test hypotheses of interest. From confirmatory
studies, quantitative researchers are able to make statements such as the
following: ‘‘As hypothesized, ninth-grade girls reported higher levels of
mathematics anxiety than did ninth-grade boys.’’ Consequently, the use of
both exploratory and confirmatory research objectives in quantitative
research has a long history.
With respect to qualitative research, use of exploratory research objectives
is extremely commonplace. Exploratory research objectives allow qualitative
researchers to make statements such as ‘‘An analysis of the focus group re-
sponses revealed four themes.’’ Thus, use of exploratory research objectives in
qualitative research is well established. On the other hand, confirmatory re-
search objectives are not as commonly associated with qualitative studies. In
fact, many qualitative researchers (e.g., Lincoln and Guba, 1985) contend that
confirmatory studies cannot be undertaken in qualitative research because
realities are not replicable. However, we contend, as do others (e.g., Patton,
1990; Tashakkori and Teddlie, 1998; Dooley, 2001; Sandelowski, 2001; On-
wuegbuzie and Teddlie, 2003), that qualitative research objectives can be
confirmatory. Simply put, hypotheses can be tested in qualitative research. For
example, Witcher et al. (2001) conducted a qualitative study to determine
preservice teachers’ perceptions of characteristics of effective teachers. A
qualitative analysis revealed that the characteristics of effective teaching fell
into the following six themes: student-centeredness, enthusiasm for teaching,
ethicalness, classroom and behavior management, teaching methodology, and
knowledge of subject. Minor et al. (2002) replicated the Witcher et al. (2001)
study. In this follow-up study, the same six themes were confirmed, with an
additional theme emerging (personableness). The confirmation of these six
themes demonstrated that qualitative findings could be replicated.
As such, consistent with the compatibility thesis, both qualitative and
quantitative studies can have either exploratory or confirmatory research
objectives, or both. Such a conceptualization has been proposed by Ta-
shakkori and Teddlie (1998) and Onwuegbuzie and Teddlie (2003). Explor-
atory studies typically attempt to develop theories about how and why a
phenomenon operates as it does. In contrast, confirmatory studies involve
testing hypotheses that arise from new or existing theories. Simply put,
exploratory research objectives center on theory initiation and theory build-
ing, whereas confirmatory research objectives focus on theory testing and
theory modification. Thus, as contended by Newman and Benz (1998),
exploration and confirmation are linked by theories. Because theories play a
central role in the behavioral and social science field in general and in quan-
titative and qualitative research in particular, designing studies around an
exploratory-confirmatory research objective continuum plays an important
role in deconstructing the quantitative and qualitative research paradigms.
Therefore, instead of emphasizing deductive/quantitative and inductive/
qualitative combinations, as often occurs in the opening chapter of intro-
ductory-level research texts, we recommend that authors describe the
exploratory-confirmatory research objective continuum outlined above. Such
a re-conceptualization would instill in students that the research objective
plays a central role in all research studies and, thus, immediately reduce the
barriers between quantitative and qualitative research from the outset. That
is, research objectives drive studies, not the paradigm or method. Further, re-
organizing the opening chapter in introductory methods books would unify
the language used by quantitative and qualitative researchers around a
common theme (i.e., research objective).
2.2. DEVELOP RESEARCH PURPOSE, RESEARCH QUESTION(S), AND HYPOTHESES
Once the research objective has been determined, the next step is to identify
the purpose of the research. The research purpose indicates what will be/was
studied. More than one research purpose can ensue from a research objective.
Clearly, there is no need to distinguish the quantitative and qualitative
research paradigms here.
Alongside the research purpose is the research question, which also stems
from the research objective. Just as all studies contain one or more research
purposes, they include one or more research questions, although these
questions may not be explicitly stated in the final report. Regardless of
paradigm, research questions are open-ended general questions being asked
by the investigator(s). Research questions can be exploratory-based or
confirmatory-based (i.e., based on theory and extant literature). Therefore,
again, no explicit distinction needs to be made between quantitative and
qualitative research.
Finally, the discussion of hypotheses also should center on the research
objective, not on the paradigm. Whereas hypotheses are needed for confir-
matory research objectives, no hypotheses are needed for exploratory re-
search objectives. Hypotheses also could be linked to the reasoning process.
In particular, the role of inductive and deductive reasoning in forming
hypotheses could be delineated in the research study. Moreover, inductive
hypotheses (i.e., generalizations based on specific observations) and deduc-
tive hypotheses (i.e., derived from theory) could be described, particularly
with how they relate to research objectives. Consequently, hypotheses will
not be linked only to the quantitative research paradigm, as has traditionally
been the case in research texts.
2.3. SELECT A RESEARCH DESIGN/METHOD
As noted above, a common misconception among novice and experienced
researchers alike is that there is a one-to-one correspondence between re-
search paradigm and research techniques used. That is, there is a general
tendency among researchers to treat epistemology (i.e., paradigm) and
method as being synonymous (Bryman, 1984). This practice is
flawed because the logic of justification does not dictate what specific re-
search methods (e.g., data collection, data analysis) should be used by
researchers. Indeed, differences in paradigm (i.e., logic of justification) do not
prevent quantitative researchers from utilizing procedures more typically
associated with qualitative research, nor do they prevent qualitative
researchers from employing techniques more routinely linked with quanti-
tative research (Onwuegbuzie, 2003; Onwuegbuzie and Teddlie, 2003). For
example, quantitative researchers often collect observational or interview
data; on the other hand, qualitative researchers can collect empirical infor-
mation for their studies (e.g., number of participants, average age of par-
ticipants). This lack of dependence between the logic of justification and
procedure justifies the combining of quantitative and qualitative research
designs/methods.
There is little doubt that this fallacy is exacerbated by the fact that authors
of introductory-level research methodology textbooks do not make a dis-
tinction between research method as a technique (i.e., research design) and
research method as a logic of justification (i.e., research paradigm). In par-
ticular, research designs are presented in textbooks as if only specific types of
data can be collected and analyzed for each research design. For example, in
outlining experimental designs, virtually all research texts give the impression
that only quantitative data can be collected when these designs are used. Yet,
why cannot qualitative data (e.g., interview responses) be collected as part of
the experimental design? Why cannot experimental and control groups be
compared with respect to qualitative information? In fact, as noted by On-
wuegbuzie (2000a), pharmaceutical companies, which routinely utilize true
experimental designs, collect both quantitative data (e.g., cure rates from
clinical trials) and qualitative data (e.g., side effect data) as a matter of
course. Similarly, why cannot quantitative data (i.e., numbers) be collected as
part of a grounded theory design or any other qualitative design?
In order for the barriers between quantitative and qualitative research to
be broken down, we recommend that, instead of subdividing designs in
textbooks using the quantitative/qualitative dichotomy, authors classify
research designs according to research objective: exploratory and confirmatory.
mixed method designs identified in the literature (Greene et al., 1989; Patton,
1990; cf. Caracelli and Greene, 1993; Creswell, 1995, 1998, 2002; Tashakkori
and Teddlie, 1998; Creswell, et al., 2002; Maxwell and Loomis, 2002; On-
wuegbuzie and Johnson, 2004; Onwuegbuzie and Leech, 2005a) also could be
included in these two sections, depending on the research objective. We
suggest that the sections containing these two sets of research designs be
called exploratory-related and confirmatory-related research designs, in order
to reflect the fact that research objectives lie on a continuum.
2.4. COLLECT DATA
The two central issues routinely presented in research texts that pertain to
data collection issues are sampling and instrumentation. Chapters on the two
topics appear in virtually all research methodology textbooks. We will now
discuss how we believe these sections/chapters should be re-organized.
2.4.1. Sampling
In the quantitative research section of textbooks, authors correctly differ-
entiate probability (i.e., random) sampling designs from non-probability
(non-random) sampling designs. Typically, four probability sampling de-
signs are presented: simple random sampling, stratified random sampling,
cluster random sampling, and systematic random sampling. In addition,
multistage random sampling designs are discussed, comprising some com-
bination of these four designs (e.g., Ary et al., 2002; Gay and Airasian,
2003). In addition, some combination of the following four non-probability
sampling designs commonly is presented: convenience/accidental/haphazard
sampling designs, purposive/judgmental sampling designs, quota sampling
designs, and network/snowball sampling designs. However, because these
eight designs are presented as being used for the purpose of generalizing
findings from the sample to the population, these sampling schemes are
viewed as belonging to the quantitative domain. Yet, is it never appro-
priate for qualitative studies to select random samples? Clearly this is not
the case, especially when qualitative research findings lead to
policy making.
Furthermore, because it is argued that qualitative research never seeks to
generalize to populations (e.g., Connolly, 1998), some authors do not even
discuss the issue of sampling in qualitative research. Yet, as noted by On-
wuegbuzie and Daniel (2003), some qualitative researchers find it difficult to
refrain from generalizing their findings (e.g., thematic representations) to the
population from which the sample was selected, in which case, not only is
sampling very much an issue, but random sampling should be seriously
considered. Disturbingly, omitting discussion of qualitative sampling tech-
niques can leave the novice researcher believing that (a) sampling is not an
issue in qualitative research, and (b) sampling designs belong only to the
quantitative domain (Onwuegbuzie and Leech, in press-b). However, in re-
cent years, an increasing number of research textbooks have included dis-
cussions about qualitative sampling designs. These authors appear to agree
that numerous qualitative sampling designs exist, and they consistently label
them as representing purposive sampling. In fact, Miles and Huberman
(1994) have identified the following 16
purposeful sampling designs: maximum variation sampling, homogeneous
sampling, critical case sampling, theory-based sampling, confirming and
disconfirming cases sampling, snowball/chain sampling, extreme case sam-
pling, typical case sampling, intensity sampling, politically important cases
sampling, random purposeful sampling, stratified purposeful sampling, cri-
terion sampling, opportunistic sampling, mixed purposive sampling, and
convenience sampling. Onwuegbuzie and Leech (in press-b) identified an
additional two qualitative sampling designs: multi-stage purposeful random
sampling and multi-stage purposive sampling, yielding 18 purposive sam-
pling schemes. Interestingly, convenience sampling and network/snowball
sampling, which appear as quantitative sampling schemes in many research
methods texts, according to Miles and Huberman, also represent qualitative
sampling designs. This suggests that redundancies prevail.
Even more disturbing is the fact that if students only were to read about
sampling conceptualized as belonging to the quantitative paradigm, either
because the course textbook does not have a chapter on qualitative sam-
pling design or because their instructor for whatever reason decides not to
cover this topic, they would be left with the impression that there was only
one way to sample in qualitative studies, purposively – yet, there are at
least 18 ways! Similarly, recognizing that probability samples typically are
preferable, why cannot quantitative researchers use any of these 18
sampling schemes? Another cause for concern is that the 18 qualitative
sampling schemes listed above do not include any probabilistic sampling
designs, giving the message that population representation is never an issue
in qualitative research – which is not the case, especially in the field of
program evaluation. Also, why cannot qualitative investigators use quota
sampling techniques?
Consequently, we recommend strongly that all sampling schemes,
regardless of orientation, be presented together in the same chapter/section
and under the same umbrella. This would eliminate redundancies (i.e., con-
venience sampling, purposeful sampling, network sampling). Such a chapter/
section would involve the presentation of 24 sampling designs, comprising 5
probability sampling schemes (4 random + multi-stage sampling), and 19
non-probability sampling schemes (18 schemes presented above + quota
sampling). Thus, instead of distinguishing quantitative sampling schemes
from qualitative sampling schemes, these 24 sampling methods will only be
classified according to whether they represent probability or non-probability
sampling. As such, both quantitative and qualitative researchers will have a
larger pool of sampling methods from which to choose.
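To illustrate that probability schemes need not belong to the quantitative domain alone, the following minimal Python sketch (the data and names are hypothetical) draws a stratified random sample, a design equally serviceable for selecting interviewees in a qualitative study:

```python
import random

def stratified_random_sample(population, stratum_of, n_per_stratum, seed=None):
    # Group the cases into strata, then draw a simple random sample
    # of n_per_stratum cases (without replacement) from each stratum.
    rng = random.Random(seed)
    strata = {}
    for case in population:
        strata.setdefault(stratum_of(case), []).append(case)
    return {stratum: rng.sample(cases, min(n_per_stratum, len(cases)))
            for stratum, cases in strata.items()}

# Ten hypothetical students, stratified by grade level:
students = [{"id": i, "grade": 9 + (i % 2)} for i in range(10)]
sample = stratified_random_sample(students, lambda s: s["grade"], 2, seed=1)
```

A qualitative researcher could use the same procedure to select, say, two interviewees at random from each grade level, thereby strengthening any claim that the resulting themes represent the population.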
Another disturbing trend in the area of sampling refers to choice of sample
size. In regard to quantitative research, many authors provide arbitrary
minimum sample sizes for the various research designs. Often, minimum sample
sizes of 30 are recommended for correlational studies, 30 for causal-com-
parative/quasi-experimental designs, and 15 for experimental designs. Yet,
these cut-points, which have existed for many years, are extremely misleading
because they can lead to low statistical power. In fact, for correlational
designs, Onwuegbuzie et al. (2004) stated:
Authors of research methodology textbooks (e.g., Creswell, 2002; Gay
and Airasian, 2000, 2003) generally consider 30 participants to be the
minimum acceptable sample size for correlational research. However, a
minimum sample size of 82 is desirable in order to attain sufficient sta-
tistical power (i.e., 0.80) to detect a moderate relationship (i.e., r = 0.30)
between two variables (i.e., statistical significance) at the 5% level of
significance (Erdfelder et al., 1996). (p. 93)
With regard to causal-comparative studies, Onwuegbuzie et al. (2004)
declared:
Because of the lack of manipulation, when using quasi-experimental
research designs, many textbook authors (e.g., Gay and Airasian, 2000,
2003; Creswell, 2002) recommend using at least 30 participants per
group. However, a sample size of 30 often will be too small to detect a
statistically significant difference of moderate size. That is, a sample size
of 30 typically will yield statistical power that is too low to detect a
moderate difference between two groups.
Therefore, we recommend that at least 64 participants per group be used
if the researcher is testing a two-tailed hypothesis (i.e., hypothesizes that
a difference between the two groups exists but does not specify the exact
nature of that difference), and at least 51 participants per group be used if
the researcher is testing a one-tailed hypothesis (i.e., hypothesizes that
the scores of one group, on average, are statistically significantly higher
than are scores of the other group). These sample sizes will provide a
probability of at least 80% (i.e., 0.80) that the researcher identifies a
moderate difference as being statistically significant if such a difference
really exists. In other words, sample sizes of 64 and 51 for two-tailed and
one-tailed hypotheses involving two groups, respectively, using a 5%
level of significance, yields a statistical power of 0.80 or greater for
detecting a moderate difference (Erdfelder et al., 1996). Group sizes of 30
only yield a statistical power of 0.61 for detecting a moderate difference,
which is inadequate because statistical power values lower than 0.80 are
deemed inadequate (McNemar, 1960; Cohen, 1965; Onwuegbuzie and
Leech, in press) (p. 101).
With regard to experimental designs, Onwuegbuzie et al. offered the following advice:
Because experimental studies result in the maximum amount of control,
some textbook authors (e.g., Gay and Airasian, 2000, 2003; Creswell,
2002) say that as few as 15 participants per group is considered sufficient
to detect true differences between the experimental group(s) and the
control group(s). However, we believe that this recommendation is
flawed because it typically leads to inadequate statistical power
(i.e., < 0.80). Assuming that the treatment/intervention truly is effective,
when randomization takes place, it is reasonable to expect a large dif-
ference to be found between the experimental and control groups.
However, at least 21 participants per group are needed in order to yield a
statistical power of 0.80 or greater for detecting a large (one-tailed) dif-
ference at the 5% level of significance (Erdfelder et al., 1996). Group sizes
of 15 only yield a statistical power of 0.69 for detecting large differences
pertaining to a one-tailed hypothesis (i.e., declaring that the experimental
group is superior to the control group). Thus, we recommend that group
sizes of 21 or greater always be used in experimental studies (p. 104).

Thus, clearly, the cut-points for minimum sample sizes appearing in intro-
ductory-level textbooks need to be revised.
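The power figures quoted above can be checked with a short computation. The sketch below is hypothetical illustrative code that uses the normal approximation to the noncentral t distribution, so its values near the cut-points differ slightly from exact programs such as G*Power (Erdfelder et al., 1996):

```python
import math
from statistics import NormalDist

def approx_power_two_groups(n_per_group, d=0.5, alpha=0.05, tails=2):
    # Approximate power of an independent-samples mean comparison:
    # power = Phi(ncp - z_critical), where the noncentrality
    # parameter is ncp = d * sqrt(n / 2).
    z_crit = NormalDist().inv_cdf(1 - alpha / tails)
    ncp = d * math.sqrt(n_per_group / 2)
    return NormalDist().cdf(ncp - z_crit)

# Moderate effect (d = 0.5) at the 5% level of significance:
# n = 30 per group, one-tailed -> power near 0.61 (inadequate)
# n = 51 per group, one-tailed -> power near 0.80
# n = 64 per group, two-tailed -> power near 0.80
```

Under this approximation, the group sizes of 51 (one-tailed) and 64 (two-tailed) recommended above reach the conventional 0.80 threshold, whereas groups of 30 fall well short of it.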
Disturbingly, typically no discussion takes place in introductory-level re-
search methodology textbooks about sample sizes needed for qualitative
research designs. Yet, as contended by Onwuegbuzie and Leech (in press-b),
contrary to popular opinion, sample size is just as important for qualitative
researchers as it is for quantitative researchers. In fact, qualitative researchers
advocate prolonged engagement, persistent observations, and triangulation
(e.g., Lincoln and Guba, 1985). By increasing the representativeness and the
size of the qualitative data collected, these three strategies constitute sampling
procedures. As stated by Onwuegbuzie (2003):
A common goal of qualitative researchers, especially when interviewing
and focus group techniques are used, is to capture the voice of the person(s)
being studied. Regardless of the number of interviews conducted
(i.e., single vs. multiple), the length of each interview, type of interviews
(e.g., unstructured, partially structured, semi-structured, structured, to-
tally structured), and format of interviews (e.g., formal vs. informal),
words collected represent a mere sample of the interviewees’ voice
(i.e., truth space). Thus, when conducting thematic analyses, inferences
are made from the sample of words to the interviewee’s truth space. Just
as quantitative researchers hope that their sample is representative of the
population, qualitative researchers hope that the sample of words is
representative of the truth space. However, if the sample of words col-
lected is not representative of the interviewee’s total truth space, then the
voice sampling error will be large. Consequently, any subsequent anal-
yses of the sample of words will likely lead to untrustworthy findings
(p. 400).
Indeed, concerned about the spate of qualitative dissertations in which
interview responses from a few participants are generalized to the population,
Onwuegbuzie and Leech (in press-b) have called for qualitative researchers to
conduct qualitative power analysis. They define qualitative power analyses as
a method of deciding upon the minimum unit needed to capture the voice or
to understand the underlying phenomenon. According to these theorists, the
term ‘‘unit’’ could refer to participants, interview/focus group length, inter-
view/focus group frequency, and so forth. Onwuegbuzie and Leech also call
for meta-analyses to be conducted on sample sizes across qualitative studies
to determine ranges of typical sample sizes for various designs.
We believe that sample sizes should be discussed in qualitative research.
Indeed, some researchers have provided guidelines for sample sizes in qual-
itative studies. For example, Creswell (1998: 113) recommends that 20–30
persons should be interviewed in grounded theory designs in order to
‘‘achieve data in the theory’’. (For a review of qualitative sample sizes, see
Onwuegbuzie and Leech, in press-b). In any case, by not discussing quanti-
tative and qualitative sampling schemes and sample sizes in separate sections/
chapters, but instead presenting sampling concepts in a holistic, inter-
changeable manner, we believe that students will have a more comprehensive
understanding of the role of sampling in research. Moreover, students will
realize that sampling decisions should be driven by the research objective and
research purposes and not by the research paradigm.
2.4.2. Instrumentation
Selecting measuring instruments plays a central role in all research textbooks.
However, discussion of instrumentation in quantitative research always is
kept separate from any discussion of data collection tools in qualitative re-
search. Yet, an increasing number of instruments contain both closed-ended
items (e.g., yielding attitudinal scores) and open-ended items (e.g., yielding
words). Moreover, data
collection concepts and issues pertaining to the quantitative paradigm are
treated as being diametrically opposite to those relating to the qualitative
paradigm. This practice is disturbing because it leads to the creation of
assumptions that are grossly in error. For example, research textbooks ad-
vance the myth that quantitative data methods are objective and qualitative
data procedures are subjective. Yet, the former could not be farther from the
truth. As noted by Onwuegbuzie (2000a):
As noted by Onwuegbuzie (in press-b), positivists claim that the essence
of science is objective verification, and that their methods are objective.
However, positivists disregard the fact that many research decisions are
made throughout the research process that precede objective verification.
For example, in developing instruments that yield
empirical data, psychometricians select items in an attempt to represent
the content domain adequately (Onwuegbuzie and Daniel, 2000b, 2001).
Yet, choosing these items represents a subjective decision at every stage
of the instrument-development process. Thus, although the final version
of the instrument can lead to objective scoring, because of the subjec-
tivity built into its development, any interpretations of the scores yielded
cannot be 100% objective. Simply put, ‘‘SUBJECTIVITY +
OBJECTIVITY = SUBJECTIVITY’’ (pp. 11–12)
Not only do authors portray quantitative data collection methods as being
objective, but properties of reliability and validity are assigned to (quanti-
tative) measuring instruments. For example, on page 185, Creswell (2002)
includes a section that he entitles, ‘‘Choosing a Reliable, Valid Instrument.’’
Similarly, on pages 135 and 141, Gay and Airasian (2003) include sections
they call ‘‘Validity of Measuring Instruments’’ and ‘‘Reliability of Measuring
Instruments,’’ respectively. Also, on page 249, Ary et al. (2002) refer to
‘‘reliability of a measuring instruments.’’ Such statements are extremely
misleading because reliability is a function of scores and not of instruments
(Wilkinson and the Task Force on Statistical Inference, 1999; Thompson and
Vacha-Haase, 2000; Onwuegbuzie and Daniel, 2002a, 2002b, 2003, 2004). It
is no wonder then that the majority of quantitative researchers do not report
reliability coefficients for data from their samples (Willson, 1980; Meier and
Davis, 1990; Thompson and Snyder, 1998; Vacha-Haase et al., 1999; On-
wuegbuzie, 2002b; Onwuegbuzie and Daniel, 2002a, 2002b, 2003, 2004), even
though this has been recommended by authoritative and influential sources
(e.g., American Educational Research Association, American Psychological
Association, and National Council on Measurement in Education, 1999;
Wilkinson and the Task Force on Statistical Inference, 1999).
In addition to giving the impression that instruments can be reliable and
valid, textbook authors make the mistake of not discussing the issue of
reliability in qualitative research. Yet, reliability is relevant for qualitative
data (Madill et al., 2000; Daniel and Onwuegbuzie, 2002b). In qualitative
research, information extracted (e.g., observations, interview responses) must
be ‘‘trustworthy’’ (Lincoln and Guba, 1985; Eisenhart and Howe, 1992);
otherwise any themes that emerge from these data will not be credible. An
important component of trustworthiness is ‘‘dependability’’ (Lincoln and
Guba, 1985), which, in turn, is analogous to reliability (Eisenhart and Howe,
1992). In fact, it is commonplace for qualitative researchers to report either
intrarater (e.g., consistency of a given rater’s scores or observations) or in-
terrater (e.g., consistency of two or more independent raters’ scores or
observations) reliability estimates. Consistent with this assertion, a leading
theory-building qualitative software program called NUD.IST (Non-
numerical Unstructured Data Indexing Searching and Theorizing), currently
named N6, allows data analysts to determine inter-coder reliability (QSR
International Pty Ltd., 2002).
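Inter-coder agreement of the kind such programs report is straightforward to compute. The sketch below uses hypothetical coded data and Cohen's kappa, one common chance-corrected agreement index (software packages may implement other indices):

```python
def cohens_kappa(coder_a, coder_b):
    # Proportion of units on which the two coders assign the same code.
    n = len(coder_a)
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal proportions.
    categories = set(coder_a) | set(coder_b)
    p_chance = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Two coders each assign one of two themes to six interview excerpts:
coder_1 = ["enthusiasm", "enthusiasm", "ethicalness",
           "ethicalness", "enthusiasm", "ethicalness"]
coder_2 = ["enthusiasm", "ethicalness", "ethicalness",
           "ethicalness", "enthusiasm", "enthusiasm"]
```

Here the coders agree on four of six excerpts, but because half of that agreement is expected by chance, kappa is only 1/3, signaling that the coding scheme needs refinement before the themes can be considered dependable.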
Another myth perpetuated by textbook authors with respect to instru-
mentation is that (a) only one type of data (i.e., quantitative data or quali-
tative data) can be collected in one study; (b) observations belong exclusively
to the qualitative domain, and (c) observations represent a research method
instead of a data collection technique (Tashakkori and Teddlie, 2003). As is
the case for sampling schemes, we recommend that authors blend what have
been traditionally regarded as quantitative and qualitative instruments. In
fact, we believe that the method of data collection should stem from the
research objective and research purpose. Such a strategy would break down
further the barriers between quantitative and qualitative research and, at the
same time, provide students with more options for collecting data.
2.5. ANALYZE DATA
The data analysis techniques outlined in most research methodology text-
books represent the greatest divide between the quantitative and qualitative
research paradigms. Although having separate chapters for quantitative
analyses and for qualitative analyses may appear logical, this can be mis-
leading because it gives the impression that statistical analyses should be used
exclusively for quantitative studies and qualitative data analyses should be
used solely for qualitative inquiries. In particular, it does not take advantage
of the fact that both statistical analyses and qualitative data analyses can be
used to explore and to confirm the collected data.
Consequently, we believe that the type of analysis selected should be
driven by the research objective and not by the research paradigm. If the
research objective is exploratory, then quantitative exploratory data analyses
(e.g., descriptive statistics, exploratory factor analysis, cluster analysis) and
exploratory qualitative data analyses (e.g., traditional thematic analyses)
could be used. On the other hand, if the research objective is confirmatory,
then quantitative data-analytical techniques can be selected from the array of
inferential statistics. Also, qualitative data-analytic methods can be used that
involve confirmatory thematic analyses, in which replication qualitative
studies are conducted to assess the replicability of previous emergent themes
(i.e., research driven) or to test an extant theory (i.e., theory driven), when
appropriate. Thus, the research objective unites quantitative and qualitative
data analytical techniques under the same framework (Onwuegbuzie and
Teddlie, 2003).
In addition, as recommended by several methodologists (e.g., Patton,
1990; Miles and Huberman, 1994; Tashakkori and Teddlie, 1998;
Onwuegbuzie and Teddlie, 2003), analysts can convert one type of data to
another. Specifically, Tashakkori and Teddlie (1998) use the term quanti-
tizing to describe the conversion of qualitative data (e.g., interview responses)
to quantitative data (e.g., frequencies). They use the term qualitizing to de-
note the conversion of quantitative data (e.g., test scores) to qualitative data
(e.g., profiles). Further, Onwuegbuzie (2003) advocates the use of effect sizes
for qualitative data, which, in its most basic form, involves obtaining counts
of the frequency of an observed phenomenon. We agree with Tashakkori and
Teddlie (2003) that using these conversions breaks down the quantitative–
qualitative dichotomy.
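As a simple illustration of quantitizing (the data and names below are hypothetical), theme codes attached to interview responses can be converted to frequencies and proportions, the basic ''effect sizes'' for qualitative data mentioned above:

```python
from collections import Counter

# Hypothetical coded interview data: each participant's responses were
# tagged with the themes that emerged during exploratory analysis.
coded_responses = {
    "P01": ["enthusiasm", "ethicalness"],
    "P02": ["enthusiasm", "classroom management"],
    "P03": ["ethicalness"],
    "P04": ["enthusiasm"],
}

def quantitize(coded):
    # For each theme, count the participants who expressed it and
    # report that count alongside its proportion of the sample.
    counts = Counter(theme for themes in coded.values()
                     for theme in set(themes))
    n = len(coded)
    return {theme: (k, k / n) for theme, k in counts.items()}
```

Qualitizing runs in the other direction, for example by clustering participants' test-score profiles into narrative types.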
If a study’s research objective is both exploratory and confirmatory, then
mixed-methodological data-analysis techniques could be used. On-
wuegbuzie and Teddlie (2003) identified seven stages of the mixed-meth-
odological data analysis process. These are (a) data reduction, (b) data
display, (c) data transformation, (d) data correlation, (e) data consolida-
tion, (f) data comparison, and (g) data integration. Data reduction, the
initial stage, involves reducing the dimensionality of the quantitative data
(e.g., via descriptive statistics, exploratory factor analysis) and qualitative
data (e.g., via exploratory thematic analysis, memoing). Data display, the
second stage, involves describing pictorially the quantitative data (e.g., ta-
bles, graphs) and qualitative data (e.g., matrices, charts, graphs, networks,
lists, rubrics, and Venn diagrams). This stage is followed by data trans-
formation, the third stage, in which data are quantitized and/or qualitized.
The fourth stage is represented by data correlation, in which the quanti-
tative data are correlated with the quantitized data. This is followed by data
consolidation, the fifth stage, whereby both quantitative and qualitative
data are combined to create new or consolidated variables or data sets. The
sixth stage, data comparison, involves comparing data from the quantita-
tive and qualitative data sources. Data integration marks the seventh stage
of Onwuegbuzie and Teddlie’s model, whereby both quantitative and
qualitative data are integrated into a coherent whole, or two separate sets
(i.e., quantitative and qualitative) of coherent wholes. Onwuegbuzie and
Teddlie (2003) note, however, that although these seven stages are some-
what sequential, they are not linear. Including such mixed-methods data-
analytic frameworks in this chapter will help to break down further the
barriers between quantitative and qualitative approaches that currently
prevail in research textbooks.
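To make the fourth stage concrete, the sketch below (the data and names are hypothetical) correlates a quantitative measure with a quantitized qualitative variable, here a 0/1 indicator of whether a theme emerged in a participant's interview:

```python
import math
from statistics import fmean

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length lists.
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Mathematics anxiety scores and a quantitized theme indicator
# (1 = the "test pressure" theme emerged in the interview, 0 = not):
anxiety = [72, 55, 81, 49, 66, 90]
theme = [1, 0, 1, 0, 1, 1]
```

A positive coefficient here would suggest that participants whose interviews yielded the theme also tended to report higher anxiety, linking the exploratory qualitative strand to the quantitative strand of the study.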
2.6. INTERPRET/VALIDATE DATA
Interpretations of findings are only as good as the data are valid or legiti-
mate. Thus, validity represents the most important stage of the research
process. In recognition of this fact, research textbook authors tend to provide
detailed discussion of validity in quantitative research. This discussion usu-
ally includes a detailed explanation of internal and external validity. How-
ever, many introductory-level research textbooks pay scant attention to
legitimation or verification in qualitative research. When validity is discussed,
it is presented in direct contrast to the discussion of validity in quantitative
research, thereby representing yet another dichotomy. Yet, comparing
components of validity/legitimation between quantitative and qualitative
designs reveals several similarities. In particular, legitimation for both para-
digms involves eliminative induction, which is a systematic attempt to elimi-
nate rival hypotheses until only one hypothesis remains as a viable
explanation (e.g., Manicas and Kruger, 1976). In addition, it could be argued
that some of the validity components are interchangeable. For example,
reactive arrangements (feelings and attitudes of the participants involved),
which traditionally have been associated with the quantitative paradigm, also
are applicable to the qualitative paradigm.
Similarly, theoretical validity (i.e., the degree to which a theoretical
explanation developed from research findings fits the data) and generaliz-
ability (i.e., the extent to which a researcher can generalize the account of a
particular situation or population to other individuals, times, settings, or
context), both legitimation concerns in qualitative studies (Maxwell, 1992),
also are applicable in quantitative investigations.
Also, many of the procedures for assessing validity/legitimation in quantitative and qualitative research are interchangeable. For instance, triangulation, which has been defined as the process of corroborating evidence from different individuals, types of data, or data collection methods (Creswell, 2002), is a useful way of validating data for both types of research.
Consequently, textbook authors should refrain from presenting validity in
quantitative and qualitative research as representing a dichotomy. Indeed,
we recommend that authors present issues of legitimation within the
same chapter. One way of accomplishing this is by re-conceptualizing validity
from both quantitative and qualitative research as representing either inter-
nal credibility or external credibility (cf. Onwuegbuzie, 2000b; Onwuegbuzie
and Leech, in press-c). Internal credibility can be defined as the truth value,
applicability, consistency, neutrality, dependability, and/or credibility of
interpretations and conclusions within the underlying setting or group.
Conversely, external credibility refers to the degree that the findings of a
study can be generalized across different populations of persons, settings,
contexts, and times. That is, external credibility pertains to the confirmability
and transferability of findings and conclusions (Onwuegbuzie, 2000b). Such a
re-framing is much more inclusive.

2.7. COMMUNICATE FINDINGS

Once all conclusions have been formulated and assessed for validity, the
researcher is ready to communicate the findings. The writing styles of qualitative and quantitative research differ in several major respects. In
particular, quantitative researchers tend to advocate rhetorical neutrality,
consisting of an exclusively formal writing style utilizing the impersonal voice
and specific terminology. In contrast, the writing style of qualitative
researchers typically is informal, characterized by use of the personal voice
and limited definitions (Onwuegbuzie and Johnson, 2004; Onwuegbuzie and
Leech, 2005a).
Nevertheless, quantitative and qualitative research reports contain many
similarities. Regardless of paradigm, a well-written report should be highly
descriptive of the first seven phases of the research process outlined earlier
(i.e., from formulating a research problem/objective to interpreting/validat-
ing data). Most importantly, researchers, regardless of orientation, always
should contextualize their reports. That is, they should carefully communi-
cate the context in which the study took place. Although qualitative
researchers routinely accomplish this, many quantitative reports are devoid
of adequate description of context. As noted by Tashakkori and Teddlie
(2003: 72): ‘‘We are struck by how much some of the ‘quantitative’ papers are
void of any reference to the cultural context of the behaviors/phenomenon
under study’’. Contextualization not only helps researchers to examine how
the present findings relate to previous results, but it also helps the readers to
know the extent to which they can generalize the findings. Also, where
possible, both quantitative and qualitative research reports should be
holistic.
Consequently, there is no reason why discussion about quantitative and
qualitative research reports should take place in separate chapters in intro-
ductory research methods texts. In fact, we recommend that textbook au-
thors emphasize the importance of all components of research reports being
aligned to the research objective. Again, keeping the research objective in
mind throughout the research report can deter students from being distracted
by the quantitative–qualitative dichotomies.

3. Summary and Conclusions


The present paper has provided evidence that the debate between quantitative and qualitative paradigms is divisive and, hence, counterproductive for advancing the social and behavioral science field. We advocate that all graduate students
learn to utilize and to appreciate both quantitative and qualitative research
methodologies. Thus, we appeal to textbook authors to take steps to break
down the barriers between quantitative and qualitative research so that
graduate students do not feel compelled to choose a methodological path
(i.e., qualitative vs. quantitative) early in their programs of study. In this
article, we have outlined how these barriers can be reduced.
However, a change in the format of textbooks, by itself, is insufficient to
develop students into pragmatist researchers. Research methodology
instructors, serving as delivery systems, also have an important role to play in
deconstructing quantitative and qualitative research. Because few research
methodology instructors currently are adept at both quantitative and qual-
itative research, we advocate that qualitative and quantitative research
methods faculty team-teach these mixed-methods courses. We believe that
such courses would send a strong message to students that applied quanti-
tative and qualitative research, for the most part, have the same goal, namely
to understand phenomena systematically and coherently. As such, students
enrolled in these courses will come to regard research as a collaborative
undertaking. Additionally, these courses would allow students to focus on
the similarities of quantitative and qualitative research outlined in the paper,
rather than on the differences. However, most importantly, such courses will
help to develop pragmatist researchers who are able to conduct bilingual
research.
Instructors must strive to teach students to be flexible in their investigative
techniques, as they attempt to address a range of research objectives and
questions that arise. Consequently, students who complete such courses are
more likely to promote collaboration among researchers. Based on Newman
and Benz's (1998) conceptualization of the role of theory in quantitative and qualitative inquiries, pragmatist researchers who emerge are more likely to
view research as a holistic endeavor that requires prolonged engagement,
persistent observation, and triangulation (Lincoln and Guba, 1985).
By having a positive attitude towards both techniques, pragmatist
researchers are in a better position to use qualitative research to inform the
quantitative portion of research studies, and vice versa. In particular, prag-
matist researchers also are more able to combine empirical precision with
descriptive precision (Onwuegbuzie and Leech, in press-c). Armed with a bi-
focal lens, rather than with a single lens, pragmatist researchers will be able to
zoom in to microscopic detail or to zoom out to indefinite scope (Willems and
Raush, 1969). Further, pragmatist researchers have the opportunity to
combine the macro and micro levels of a research issue. Most importantly,
they are more likely to be cognizant of all available research techniques and
to select methods with respect to their values for addressing the underlying
research questions, rather than with regard to some preconceived biases
about which paradigm is a hegemony in social science research. As such,
teaching research methodology courses without the divide between quanti-
tative and qualitative paradigms offers great potential.

References
American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (1999). Standards for Educational and Psychological Testing, revised edn. Washington, DC: American Educational Research Association.
Ary, D., Jacobs, L. C. & Razavieh, A. (1972). Introduction to Research in Education. New
York, NY: Holt, Rinehart and Winston.
Ary, D., Jacobs, L. C. & Razavieh, A. (2002). Introduction to Research in Education (6th edn).
Belmont, CA: Wadsworth/Thomson Learning.
Bordens, K. S. & Abbott, B. B. (1988). Research Design and Methods: A Process Approach.
Mountain View, CA: Mayfield.
Bryman, A. (1984). The debate about quantitative and qualitative research: a question of
method or epistemology? British Journal of Sociology 35: 78–92.
Campbell, D. T. & Stanley, J. (1966). Experimental and Quasi-Experimental Design for
Research. Chicago, IL: Rand McNally.
Caracelli, V. W. & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation
designs. Educational Evaluation and Policy Analysis 15: 195–207.
Cohen, J. (1965). Some statistical issues in psychological research. In: B. B. Wolman (ed.),
Handbook of Clinical Psychology. New York: McGraw-Hill, pp. 95–121.
Collins, R. (1984). Statistics versus words. In: R. Collins (ed.), Sociological Theory. San
Francisco, CA: Jossey-Bass, pp. 329–362.
Connolly, P. (1998). 'Dancing to the wrong tune': Ethnography, generalization and research on
racism in schools. In: P. Connolly and B. Troyna (eds.), Researching Racism in Education:
Politics, Theory, and Practice. Buckingham, UK: Open University Press.
Cook, T. D. & Campbell, D. T. (1979). Quasi-Experimentation: Design and Analysis Issues for
Field Settings. Boston, MA: Houghton Mifflin.
Creswell, J. W. (1995). Research Design: Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage.
Creswell, J. W. (1998). Qualitative Inquiry and Research Design: Choosing Among Five Tra-
ditions. Thousand Oaks, CA: Sage.
Creswell, J. W. (2002). Educational Research: Planning, Conducting, and Evaluating Quanti-
tative and Qualitative Research. Upper Saddle River, NJ: Pearson Education.
Creswell, J. W., Plano Clark, V. L., Gutmann, M. L. & Hanson, W. E. (2002). Advanced
mixed methods research designs. In: A. Tashakkori and C. Teddlie (eds.), Handbook of
Mixed Methods in Social and Behavioral Research. Thousand Oaks, CA: Sage, pp. 209–
240.
Crotty, M. (1998). The Foundations of Social Research: Meaning and Perspective in the Re-
search Process. Thousand Oaks, CA: Sage.
Culbertson, J. A. (1988). A century's quest for a knowledge base. In: N. J. Boyan (ed.),
Handbook of Research on Educational Administration. New York, NY: Longman, pp. 3–26.
Daniel, L. G., Dellinger, A. & Onwuegbuzie, A. J. (2003, April). An Expanded Model for
Categorizing Action Research Strategies: Assessing Research Complexity and Determining
Concomitant Research Training Needs. Paper to be presented at the annual meeting of the
American Educational Research Association, Chicago, IL.
Daniel, L. G. & Onwuegbuzie, A. J. (2002a, April). Training Teachers as Inquirers: Defining a
Continuum of Strategies for Teacher Self-reflection and Improvement. Paper presented at
the annual meeting of the American Educational Research Association, New Orleans, LA.
Daniel, L. G. & Onwuegbuzie, A. J. (2002b, November). Reliability and Qualitative Data: Are
Psychometric Concepts Relevant within an Interpretivist Research Paradigm? Paper pre-
sented at the annual meeting of the Mid-South Educational Research Association,
Chattanooga, TN.
Daniel, L. G., Onwuegbuzie, A. J. & Dellinger, A. (2002, November). A Model for Identifying
the Level of Complexity of Action Research. Paper presented at the annual meeting of the
Mid-South Educational Research Association, Chattanooga, TN.
Dooley, D. (2001). Social Research Methods. Upper Saddle River, NJ: Prentice Hall.
Durso, F. T. & Mellgren, R. L. (1989). Thinking About Research. St. Paul, MN: West
Publishing Company.
Dzurec, L. C. & Abraham, J. L. (1993). The nature of inquiry: Linking quantitative and
qualitative research. Advances in Nursing Science 16: 73–79.
Eisenhart, M. A. & Howe K. R. (1992). Validity in educational research. In: M. D. LeCompte,
W. L. Millroy and J. Preissle (eds.), The Handbook of Qualitative Research in Education.
San Diego, CA: Academic Press, pp. 643–680.
Erdfelder, E., Faul, F. & Buchner, A. (1996). GPOWER: A general power analysis program.
Behavior Research Methods, Instruments, and Computers 28: 1–11.
Ermarth, M. (1978). Wilhelm Dilthey: The Critique of Historical Reason. Chicago, IL: Uni-
versity of Chicago Press.
Fraenkel, J. R. & Wallen, N. E. (2000). How to Design and Evaluate Research in Education.
Boston, MA: McGraw-Hill.
Gay, L. R. (1976). Educational Research: Competencies for Analysis and Application.
Columbus, OH: Merrill.
Gay, L. R. (1981). Educational Research: Competencies for Analysis and Application, 2nd edn.
Columbus, OH: Merrill.
Gay, L. R. (1987). Educational Research: Competencies for Analysis and Application, 3rd edn.
Columbus, OH: Merrill.
Gay, L. R. (1992). Educational Research: Competencies for Analysis and Application, 4th edn.
New York: Merrill.
Gay, L. R. (1996). Educational Research: Competencies for Analysis and Application, 5th edn.
Englewood Cliffs, NJ: Merrill.
Gay, L. R. & Airasian, P. (2000). Educational Research: Competencies for Analysis and
Application, 6th edn. Upper Saddle River, NJ: Prentice-Hall.
Gay, L. R. & Airasian, P. (2003). Educational Research: Competencies for Analysis and
Application, 7th edn. Upper Saddle River, NJ: Pearson Education.
Greene, J. C., Caracelli, V. J. & Graham, W. F. (1989). Toward a conceptual framework for
mixed-method evaluation designs. Educational Evaluation and Policy Analysis 11: 255–274.
Guba, E. G. (1987). What have we learned about naturalistic evaluation? Evaluation Practice,
8: 23–43.
Hanson, N. R. (1958). Patterns of Discovery: An Inquiry into the Conceptual Foundations of
Science. Newbury Park, CA: Sage.
Hodges, H. (1944). Wilhelm Dilthey: An Introduction. London: Routledge and Kegan Paul.
Hodges, H. (1952). The Philosophy of Wilhelm Dilthey. London: Routledge and Kegan Paul.
Howe, K. R. (1988). Against the quantitative–qualitative incompatibility thesis or dogmas die
hard. Educational Researcher 17: 10–16.
Hughes, H. (1958). Consciousness and Society. New York: Knopf.
Johnson, R. B. & Christensen, L. B. (2004). Educational Research: Quantitative, Qualitative,
and Mixed Approaches. Boston, MA: Allyn and Bacon.
Johnson, R. B. & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm
whose time has come. Educational Researcher 33(7): 14–26.
Leary, M. R. (1995). Introduction to Behavioral Research Methods. Pacific Grove, CA: Brooks/
Cole.
Liebert, R. M. & Liebert, L. L. (1995). Science and Behavior: An Introduction to Methods of
Psychological Research. Englewood Cliffs, NJ: Prentice-Hall.
Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
Madill, A., Jordan, A. & Shirley, C. (2000). Objectivity and reliability in qualitative analysis:
Realist, contextualist and radical constructionist epistemologies. The British Journal of
Psychology 91: 1–20.
Manicas, P. Z. & Kruger, A. N. (1976). Logic: The Essentials. New York: McGraw-Hill.
Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educa-
tional Review 62: 279–299.
Maxwell, J. A. & Loomis, D. M. (2002). Mixed methods design: An alternative approach. In:
A. Tashakkori and C. Teddlie (eds.), Handbook of Mixed Methods in Social and Behavioral
Research. Thousand Oaks, CA: Sage, pp. 241–271.
McMillan, J. H. & Schumacher, S. (2001). Research in Education: A Conceptual Introduction,
5th edn. New York, NY: Longman.
McNemar, Q. (1960). At random: Sense and nonsense. American Psychologist 15: 295–300.
Meier, S. T. & Davis, S. R. (1990). Trends in reporting psychometric properties of scales used
in counseling psychology research. Journal of Counseling Psychology 37: 113–115.
Miles, M. & Huberman, A. M. (1984). Qualitative Data Analysis: An Expanded Sourcebook.
Thousand Oaks, CA: Sage.
Miles, M. & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook,
2nd edn. Thousand Oaks, CA: Sage.
Minor, L. C., Onwuegbuzie, A. J., Witcher, A. E. & James, T. L. (2002). Preservice teachers’
educational beliefs and their perceptions of characteristics of effective teachers. Journal of
Educational Research 96: 116–127.
Newman, I. & Benz, C. R. (1998). Qualitative–Quantitative Research Methodology: Exploring
the Interactive Continuum. Carbondale, IL: Southern Illinois University Press.
Onwuegbuzie, A. J. (2000a, November). On Becoming a Bi-researcher: The Importance of Combining Quantitative and Qualitative Research Methodologies. Paper presented at the annual meeting of the Association for the Advancement of Educational Research (AAER), Ponte Vedra, Florida.
Onwuegbuzie, A. J. (2000b, November). Validity and Qualitative Research: An Oxymoron?
Paper presented at the annual meeting of the Association for the Advancement of
Educational Research (AAER), Ponte Vedra, Florida.
Onwuegbuzie, A. J. (2002a). Positivists, post-positivists, post-structuralists, and post-mod-
ernists: Why can’t we all get along? Towards a framework for unifying research paradigms.
Education 122(3): 518–530.
Onwuegbuzie, A. J. (2002b). Common analytical and interpretational errors in educational
research: an analysis of the 1998 volume of the British Journal of Educational Psychology.
Educational Research Quarterly 26(1): 11–22.
Onwuegbuzie, A. J. (2003). Effect sizes in qualitative research: a prolegomenon. Quality &
Quantity: International Journal of Methodology 37(4): 393–409.
Onwuegbuzie, A. J. & Daniel, L. G. (2002a). Uses and misuses of the correlation coefficient.
Research in the Schools 9: 73–90.
Onwuegbuzie, A. J. & Daniel, L. G. (2002b). A framework for reporting and interpreting
internal consistency reliability estimates. Measurement and Evaluation in Counseling and
Development 35: 89–103.
Onwuegbuzie, A.J. & Daniel, L.G. (2003, February 12). Typology of analytical and inter-
pretational errors in quantitative and qualitative educational research. Current Issues in
Education [On-line], 6(2). Available: http://cic.ad.asu.edu/volumeb/number2/
Onwuegbuzie, A. J. & Daniel, L. G. (2004). Reliability generalization: The importance of
considering sample specificity, confidence intervals, and subgroup differences. Research in
the Schools 11(1): 61–72.
Onwuegbuzie, A. J., Jiao, Q. G. & Bostick, S. L. (2004). Library Anxiety: Theory, Research,
and Applications. Lanham, MD: Scarecrow Press.
Onwuegbuzie, A. J. & Johnson, R. B. (2004). Mixed method and mixed model research. In:
R. B. Johnson and L. B. Christensen, (eds.), Educational research: Quantitative, Qualita-
tive, and Mixed Approaches. Needham Heights, MA: Allyn and Bacon, pp. 408–431.
Onwuegbuzie, A. J. & Leech, N. L. (2004a). Enhancing the interpretation of ‘‘significant’’
findings: The role of mixed methods research. The Qualitative Report 9(4): 770–792. Retrieved March 8, 2005, from http://www.nova.edu/ssss/QR/QR9-4/onwuegbuzie.pdf
Onwuegbuzie, A. J. & Leech, N. L. (2004b, February). A Call for Qualitative Power Analyses.
Paper presented at the annual meeting of the Southwest Educational Research Associa-
tion, Dallas, TX. (This paper was recipient of the 2004 Southwest Educational Research
Association Outstanding Paper Award.)
Onwuegbuzie, A. J. & Leech, N. L. (2004b). Post-hoc power: A concept whose time has come.
Understanding Statistics 3(4): 201–230.
Onwuegbuzie, A. J. & Leech, N. L. (2005b, March 10). A typology of errors and myths
perpetuated in educational research textbooks. Current Issues in Education [On-line], 8(7).
Available: http://cie.ed.asu/volume8/number7/
Onwuegbuzie, A. J., & Leech, N. L. (in press-a). On becoming a pragmatist researcher: The
importance of combining quantitative and qualitative research methodologies. Interna-
tional Journal of Social Research Methodology: Theory & Practice.
Onwuegbuzie, A. J., & Leech, N. L. (in press-b). A call for qualitative power analyses:
Considerations in qualitative research. Quality & Quantity: International Journal of
Methodology.
Onwuegbuzie, A. J., & Leech, N. L. (in press-c). Validity and qualitative research: An oxy-
moron? Quality & Quantity: International Journal of Methodology.
Onwuegbuzie, A. J. & Teddlie, C. (2003). A framework for analyzing data in mixed methods
research. In: A. Tashakkori and C. Teddlie (eds.), Handbook of Mixed Methods in Social
and Behavioral Research. Thousand Oaks, CA: Sage, pp. 351–383.
Outhwaite, W. (1975). Understanding Social Life: The Method Called Verstehen. London:
Allen and Unwin.
Outhwaite, W. (1983). Concept Formation in Social Science. London: Routledge and Kegan Paul.
Patton, M. Q. (1990). Qualitative Research and Evaluation Methods, 2nd edn. Newbury Park,
CA: Sage.
Popper, K. R. (1959). The Logic of Scientific Discovery. New York: Basic Books.
Punch, K. F. (1999). Introduction to Social Research: Quantitative and Qualitative Approaches.
Thousand Oaks, CA: Sage.
QSR International Pty Ltd (2002). N6 (Non-numerical Unstructured Data Indexing Searching
& Theorizing) Qualitative Data Analysis Program. (Version 6.0) [Computer software].
Melbourne, Australia: QSR International Pty Ltd.
Reichardt, C. S. & Rallis, S. F. (1994). Qualitative and quantitative inquiries are not
incompatible: A call for a new partnership. In: C. S. Reichardt & S. F. Rallis (eds.), The
Qualitative–Quantitative Debate: New Perspectives. San Francisco, CA: Jossey-Bass,
pp. 85–92.
Rosnow, R. L. & Rosenthal, R. R. (1996). Beginning Behavioral Research: A Conceptual
Primer. Englewood Cliffs, NJ: Prentice Hall.
Salkind, N. J. (2000). Exploring Research. Upper Saddle River, NJ: Prentice Hall.
Sandelowski, M. (2001). Real qualitative researchers do not count: The use of numbers in
qualitative research. Research in Nursing & Health 24: 230–240.
Sechrest, L. & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative?
Evaluation and Program Planning 18: 77–87.
Sieber, S. D. (1973). The integration of fieldwork and survey methods. American Journal of
Sociology 73: 1335–1359.
Smith, J. K. (1983). Quantitative versus qualitative research: An attempt to clarify the issue.
Educational Researcher 12: 6–13.
Smith, J. K. & Heshusius, L. (1986). Closing down the conversation: The end of the quan-
titative–qualitative debate among educational inquirers. Educational Researcher 15: 4–13.
Smith, M. L. (1994). Qualitative plus/versus quantitative: The last word. In: C. S. Reichardt &
S. F. Rallis (eds.), The Qualitative–Quantitative Debate: New Perspectives. San Francisco:
Jossey-Bass, pp. 37–44.
Tashakkori, A. & Teddlie, C. (1998). Mixed Methodology: Combining Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage.
Tashakkori, A. & Teddlie, C. (2003). Issues and dilemmas in teaching research methods
courses in social and behavioral sciences: a US perspective. International Journal of Social
Research Methodology 6(1): 61–77.
Thompson, B. & Snyder, P. A. (1998). Statistical significance and reliability analyses in recent
JCD research articles. Journal of Counseling and Development 76: 436–441.
Thompson, B. & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reli-
able. Educational and Psychological Measurement 60: 174–195.
Vacha-Haase, T., Ness, C., Nilsson, J. & Reetz, D. (1999). Practices regarding reporting of
reliability coefficients: A review of three journals. The Journal of Experimental Education
67: 335–341.
Wallen, N. E. & Fraenkel, J. R. (2001). Educational Research: A Guide to the Process.
Mahwah, NJ: Lawrence Erlbaum.
Wilkinson, L. & the Task Force on Statistical Inference. (1999). Statistical methods in psy-
chology journals: Guidelines and explanations. American Psychologist 54: 594–604.
Willems, E. P. & Raush, H. L. (1969). Naturalistic Viewpoints in Psychological Research. New
York: Holt, Rinehart, & Winston.
Willson, V. L. (1980). Research techniques in AERJ articles: 1969 to 1978. Educational
Researcher 9(6): 5–10.
Witcher, A. E., Onwuegbuzie, A. J. & Minor, L. C. (2001). Characteristics of effective
teachers: Perceptions of preservice teachers. Research in the Schools 8: 45–57.
