
Nepal Open University
Manbhawan, Lalitpur
EDRES702: Research Methodology I in Education (Quantitative)
Lecture 10: Reliability and Validity
Assoc. Professor Jiban Khadka, PhD
Faculty of Social Sciences and Education
2019
CRITERIA OF GOOD MEASUREMENT/ GOODNESS OF
MEASURE
(RELIABILITY AND VALIDITY)
Reliability & validity as criteria of good measurement
Validity and reliability are applied in both types of research: QUAN and QUAL
Validity and reliability are addressed using different instruments for data collection
The basic purpose of testing validity and reliability is to estimate measurement error
Reliability is a necessary but not sufficient condition for validity; validity, in turn, is a sufficient but not necessary condition for reliability
Four possible relations between reliability and validity:
i. High reliability but low validity
ii. High validity but low reliability
iii. Low validity and low reliability
iv. High validity and high reliability
1. Validity
Validity refers to ascertaining the accuracy of the instruments of data collection
Refers to the truthfulness of findings
The extent to which an instrument measures what it purports to measure
In quantitative data validity might be improved through
careful sampling, appropriate instrumentation and
appropriate statistical treatments of the data.
In qualitative data validity might be addressed through the
honesty, depth, richness and scope of the data achieved,
the participants approached, the extent of triangulation
and the disinterestedness or objectivity of the researcher
Three basic types of validity: content validity, construct validity and criterion-related validity
Others include face validity, internal and external validity, etc.
1.1. Content Validity
Concerns the coverage of content that meets the objectives of a study
The quality of covering the domain under investigation
Careful sampling of items is required while ensuring content validity
Aims to "establish an instrument's credibility, accuracy, relevance, and breadth of knowledge regarding the domain"
Experts' opinions or judgments are incorporated to establish content validity
The subjective opinion of such experts establishes (or fails to establish) the content validity of the instrument
The Content Validity Index (CVI) formula by Amin (2005) can also be used to assess content validity (a computational sketch follows).
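The sketch below shows one way such an index can be computed in Python; the 1-4 rating scale, the relevance cut-off, and the expert ratings are illustrative assumptions, not part of Amin's (2005) original presentation.

```python
# Minimal sketch of a Content Validity Index (CVI): the proportion of
# items that expert judges rate as relevant.  The rating scale and the
# "relevant" cut-off are illustrative assumptions.

def content_validity_index(ratings, relevant_threshold=3):
    """ratings: one inner list per expert, each holding a 1-4
    relevance rating for every item."""
    n_items = len(ratings[0])
    relevant_items = 0
    for item in range(n_items):
        # An item counts as relevant if every expert rates it at or
        # above the threshold (a strict, illustrative rule).
        if all(expert[item] >= relevant_threshold for expert in ratings):
            relevant_items += 1
    return relevant_items / n_items

# Three experts rating five questionnaire items on a 1-4 scale.
expert_ratings = [
    [4, 3, 2, 4, 4],
    [4, 4, 3, 3, 4],
    [3, 4, 2, 4, 3],
]
print(content_validity_index(expert_ratings))  # 0.8 -> 4 of 5 items judged relevant
```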
1.2. Construct Validity
A construct is an abstraction; this separates construct validity from the previous types of validity, which dealt in actualities (defined content)
Seeks agreement between a theoretical concept and a specific measure or procedure
Concerns the constructs that a researcher wants to measure with the measuring tools
To establish construct validity the researcher would need to be
assured that his or her construction of a particular issue agreed with
other constructions of the same underlying issue
construct validity is addressed by convergent and discriminant
techniques
Convergent techniques imply that different methods for researching
the same construct should give a relatively high inter-correlation
Discriminant techniques suggest that using similar methods for researching different constructs should yield relatively low inter-correlations (see the correlation sketch after this list)
The accumulated evidence collected from face, content, predictive, concurrent, convergent and divergent validity also verifies construct validity
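A minimal Python sketch of the convergent/discriminant logic, assuming invented scores for two methods measuring the same construct and one measure of a different construct:

```python
# Correlations between different methods measuring the SAME construct
# should be high (convergent); correlations between measures of
# DIFFERENT constructs should be low (discriminant).
# All scores below are invented illustration data.
import numpy as np

# Two methods (questionnaire, observation) scoring the same construct
# for ten respondents, plus a measure of an unrelated construct.
questionnaire_anxiety = np.array([12, 15, 9, 20, 17, 11, 14, 18, 10, 16])
observation_anxiety = np.array([13, 14, 10, 19, 18, 12, 13, 17, 11, 15])
questionnaire_motivation = np.array([25, 31, 22, 27, 24, 30, 21, 26, 29, 23])

convergent = np.corrcoef(questionnaire_anxiety, observation_anxiety)[0, 1]
discriminant = np.corrcoef(questionnaire_anxiety, questionnaire_motivation)[0, 1]

print(f"convergent r   = {convergent:.2f}")    # expected to be high
print(f"discriminant r = {discriminant:.2f}")  # expected to be near zero here
```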
1.3. Criterion-related Validity
Demonstrates the accuracy of a measure by comparing it with another measure (e.g. questionnaire, observation, interview or documentation)
Differentiates individuals on a criterion it is expected to predict
Two methods: Concurrent and predictive validity
Concurrent validity compares the results yielded by two different instruments measuring the same construct at the same time, i.e. a test meets the criterion at the same time (correlational or regression method)
Predictive validity describes the relation between the results of a first-round test and a later criterion; it is achieved if the data acquired at the first round of research correlate highly with data acquired at a future date
Concurrent validity is very similar to its partner – predictive
validity – in its core concept (i.e. agreement with a second
measure); what differentiates concurrent and predictive validity
is the absence of a time element in the former; concurrence can
be demonstrated simultaneously with another instrument.
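A minimal Python sketch of criterion-related validity as a correlation between an instrument and a criterion measure; the scores are invented for illustration:

```python
# Criterion-related validity as a simple correlation: in concurrent
# validity both score sets are collected at the same time, in
# predictive validity the criterion is collected at a later date.
# The scores below are invented illustration data.
import numpy as np

new_test_scores = np.array([55, 62, 47, 70, 65, 58, 73, 49, 61, 68])   # new instrument
criterion_scores = np.array([52, 60, 50, 72, 63, 55, 75, 47, 64, 66])  # established measure or later outcome

r = np.corrcoef(new_test_scores, criterion_scores)[0, 1]
print(f"criterion-related validity coefficient r = {r:.2f}")
# A high positive r suggests the new instrument agrees with
# (concurrent) or forecasts (predictive) the criterion.
```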
2. Reliability
Refers to the consistency of the measures yielded by an instrument over a period of time within the same group, or across different groups at the same or different times
Concerned with precision and accuracy
A synonym for dependability, consistency and replicability
Degree of consistency or dependability on the
measurement of a construct
Termed credibility, neutrality, confirmability, dependability, consistency, applicability, trustworthiness and transferability in qualitative research
Three principal types of reliability: stability, equivalence
and internal consistency
A correlation coefficient of 0.7 or above is taken as acceptable for reliability (Nunnally & Bernstein, 1994).
2.1. Reliability as stability
a measure of consistency over time and over similar
samples
If a test and then a retest are undertaken within an appropriate time span, similar results should be obtained
The researcher has to decide what an appropriate length of
time is
Reliability as stability can also be stability over a similar
sample
For example, administering a test or questionnaire simultaneously to two groups of students who are very closely matched on significant characteristics
The correlation coefficient in the test/retest method can be calculated for the whole test using the Pearson statistic, the Spearman statistic or a t-test (a computational sketch follows)
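A minimal Python sketch of the test/retest computation, assuming invented marks for ten students and using the Pearson correlation:

```python
# Test-retest (stability) reliability: the same group sits the same
# test twice and the two sets of marks are correlated.
# The marks below are invented illustration data.
from scipy.stats import pearsonr

test_marks   = [34, 42, 28, 50, 45, 39, 31, 47, 36, 44]
retest_marks = [36, 40, 30, 49, 44, 41, 29, 48, 35, 45]

r, p_value = pearsonr(test_marks, retest_marks)
print(f"test-retest reliability r = {r:.2f} (p = {p_value:.3f})")
# An r of about 0.7 or above is conventionally taken to indicate
# acceptable stability (Nunnally & Bernstein, 1994).
```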
2.2. Reliability as Equivalence
achieved first through using equivalent forms (also
known as alternative forms) of a test or data-gathering
instrument.
When an equivalent form of the test or instrument is
devised and yields similar results, then the instrument
can be said to demonstrate this form of reliability.
Reliability can be measured through a t-test or through the demonstration of a high correlation coefficient between the two forms
Reliability as equivalence may be achieved through
inter-rater reliability
Agreement between all researchers (raters) must be achieved, for example by ensuring that each researcher enters data in the same way
Inter-rater agreement as a percentage = (number of actual agreements / number of possible agreements) × 100%
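A minimal Python sketch of the inter-rater agreement percentage, assuming invented codings from two raters:

```python
# Inter-rater agreement as a percentage:
# (number of actual agreements / number of possible agreements) x 100%.
# The two raters' codings below are invented illustration data.

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
rater_b = ["yes", "no", "no",  "yes", "no", "yes", "no", "yes", "no",  "no"]

actual_agreements = sum(a == b for a, b in zip(rater_a, rater_b))
possible_agreements = len(rater_a)

agreement_pct = actual_agreements / possible_agreements * 100
print(f"inter-rater agreement = {agreement_pct:.0f}%")  # 80% for these codings
```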
2.3. Reliability as Internal Consistency
Internal consistency concerns the degree of homogeneity of items in the measuring instrument
Internal consistency demands that the instrument or test be run only once; with the split-half method, the marks obtained on each half should correlate highly with those on the other
Spearman-Brown formula: Reliability = 2r / (1 + r), where r is the correlation between the two halves (illustrated in the sketch below)
An alternative measure of reliability as internal consistency is Cronbach's alpha
Cronbach's alpha provides a coefficient of inter-item correlation, that is, the correlation of each item with the sum of all the other relevant items
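A minimal Python sketch of both internal-consistency measures, split-half with the Spearman-Brown correction and Cronbach's alpha, assuming an invented matrix of item scores:

```python
# Internal-consistency reliability from a single test administration:
# split-half with the Spearman-Brown correction, Reliability = 2r/(1+r),
# and Cronbach's alpha.  The item scores below are invented
# illustration data (10 examinees, 6 items).
import numpy as np

items = np.array([
    [3, 4, 3, 4, 3, 4],
    [2, 2, 3, 2, 2, 3],
    [4, 4, 4, 3, 4, 4],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [2, 3, 2, 2, 3, 2],
    [4, 3, 4, 4, 4, 3],
    [1, 1, 2, 1, 1, 1],
    [3, 4, 3, 3, 4, 3],
    [2, 2, 2, 3, 2, 2],
])

# Split-half: correlate totals on odd-numbered items with totals on
# even-numbered items, then step up with the Spearman-Brown formula.
half_1 = items[:, 0::2].sum(axis=1)
half_2 = items[:, 1::2].sum(axis=1)
r_halves = np.corrcoef(half_1, half_2)[0, 1]
split_half_reliability = 2 * r_halves / (1 + r_halves)

# Cronbach's alpha: compares the sum of item variances with the
# variance of the total score.
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"split-half (Spearman-Brown) = {split_half_reliability:.2f}")
print(f"Cronbach's alpha            = {alpha:.2f}")
# Values of about 0.7 or above are conventionally treated as acceptable.
```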
Observe the validity and reliability
[Figure: three target diagrams illustrating "Reliable but not Valid", "Neither Reliable nor Valid" and "Reliable and Valid"]
