1. Pre-experimental designs:
There are three designs in this category: the one-shot case study, where a single
observation is taken after the treatment is applied; the one-group pretest-posttest
design, where one observation is taken before the treatment is applied and another
after it; and the static group comparison, which uses two groups, an experimental
group and a control group. The experimental group is subjected to the treatment and
a post-test measurement is taken; in the control group, a measurement is taken at
the same time as in the experimental group. These designs do not use any
randomization procedure to control extraneous variables, so their internal validity
is questionable.
2. Quasi-experimental designs: In these designs, the researcher can control when
measurements are taken and on whom they are taken. However, the researcher lacks
complete control over the scheduling of treatments and cannot randomize the test
units' exposure to treatments. Because experimental control is lacking, the
possibility of obtaining confounded results is very high. Researchers should
therefore be aware of which variables are not controlled, and the effects of such
variables should be incorporated into the findings.
3. True experimental designs: In these designs, researchers can randomly assign
test units and treatments to the experimental and control groups. This allows the
researcher to eliminate the effect of extraneous variables from both groups. The
randomization procedure also permits the use of statistical techniques for
analysing the experimental results.
4. Statistical designs: These designs allow for the statistical control and
analysis of external variables. Their main advantages are the following:
•The effect of more than one level of an independent variable on the dependent
variable can be measured.
•The effects of more than one independent variable can be examined.
•The effect of a specific extraneous variable can be controlled.
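As a minimal sketch of the first two advantages, consider a hypothetical 2×2 factorial layout (all data invented for illustration) that crosses two independent variables, price level and advertisement type, and computes the main effect of each on sales:

```python
# Hypothetical 2x2 factorial (statistical) design: two independent
# variables -- price level and ad type -- each at two levels, with
# three sales observations per cell (all numbers invented).
sales = {
    ("low price",  "print ad"): [20, 22, 21],
    ("low price",  "tv ad"):    [28, 30, 29],
    ("high price", "print ad"): [15, 14, 16],
    ("high price", "tv ad"):    [24, 23, 25],
}

def mean(xs):
    return sum(xs) / len(xs)

def main_effect(factor_index, level_a, level_b):
    # Main effect of one factor: difference in mean sales between its
    # two levels, averaging over the levels of the other factor.
    a = [x for key, xs in sales.items() if key[factor_index] == level_a for x in xs]
    b = [x for key, xs in sales.items() if key[factor_index] == level_b for x in xs]
    return mean(a) - mean(b)

price_effect = main_effect(0, "low price", "high price")
ad_effect = main_effect(1, "tv ad", "print ad")
print(price_effect, ad_effect)  # 5.5 8.5
```

Because both factors are varied in the same design, the data also allow an interaction check, which a one-factor-at-a-time design could not provide.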
Ans. Reliability
Reliability is concerned with the consistency, accuracy and predictability of a
scale. It refers to the extent to which a measurement process is free from random
errors. The reliability of a scale can be measured using the following methods:
Test–retest reliability: In this method, repeated measurements of the same person
or group are taken with the same scale under similar conditions. A very high
correlation between the two sets of scores indicates that the scale is reliable.
The researcher has to be careful in deciding the time gap between the two
observations. If the gap is very small, respondents are likely to remember and
repeat their earlier answers, which inflates the correlation. If the gap is too
large, the attitude itself may have changed during that period, resulting in a weak
correlation and hence apparently poor reliability. Generally, a time difference of
about 5-6 months is considered ideal.
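The test–retest check above reduces to computing a correlation between the two sets of scores. A minimal sketch, with invented attitude scores for five respondents measured twice:

```python
import math

# Hypothetical attitude scores for the same five respondents, measured
# twice with the same scale some months apart (data invented).
test1 = [4, 7, 6, 5, 8]
test2 = [5, 7, 6, 4, 8]

def pearson(x, y):
    # Pearson product-moment correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(test1, test2)
print(round(r, 3))  # 0.9 -- close to 1, so the scale looks stable
```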
Split-half reliability method: This method is used for multiple-item scales. The
items are randomly divided into two halves, each half is scored, and the
correlation coefficient between the two half-scores is computed. A high correlation
indicates strong internal consistency and hence greater reliability.
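A sketch of the split-half procedure on an invented 6-item scale follows. The Spearman-Brown correction at the end, which estimates the reliability of the full-length scale from the half-test correlation, is a standard supplement not mentioned in the text:

```python
import math
import random

# Hypothetical responses to a 6-item scale (rows = respondents).
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 4, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 1, 1],
]

random.seed(42)
items = list(range(6))
random.shuffle(items)                 # random division of the items
half_a, half_b = items[:3], items[3:]

score_a = [sum(row[i] for i in half_a) for row in responses]
score_b = [sum(row[i] for i in half_b) for row in responses]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r_half = pearson(score_a, score_b)
# Spearman-Brown correction: reliability of the full-length scale.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 3), round(r_full, 3))
```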
Validity
The validity of a scale refers to whether we are measuring what we want to
measure. It is the extent to which the measurement process is free from both
systematic and random errors. Validity is a more serious issue than reliability.
There are different ways to assess validity.
Content validity: This is also called face validity. It involves a subjective
judgement by experts about the appropriateness of the construct. For example,
suppose a multiple-item scale is developed to measure customers' perception of
Kingfisher Airlines, and a set of 15 items is proposed; combined into an index,
these items measure perception of the airline. To judge the content validity of
the 15 items, a panel of experts may be asked to examine their representativeness.
The items would lack content validity if, for instance, crew behaviour, food
quality and food quantity had been omitted from the list. Conducting exploratory
research to build an exhaustive list of items measuring perception of the airline
is of immense help in such a case.
Predictive validity:
This refers to the ability of a phenomenon measured at one point of time to
predict another phenomenon at a future point of time. If the correlation
coefficient between the two is high, the initial measure is said to have high
predictive validity. As an example, consider the use of the Common Admission Test
(CAT) to shortlist candidates for admission to the MBA programme of a business
school. The CAT score is supposed to predict the candidate's aptitude for business
education.
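Predictive validity can be checked with the same correlation coefficient, this time between the earlier measure and the later outcome. A sketch with invented CAT percentiles and first-year MBA grades:

```python
import math

# Hypothetical data: CAT percentile at admission and first-year CGPA
# for six MBA students (all numbers invented for illustration).
cat = [99.2, 95.0, 90.5, 88.0, 97.5, 92.0]
cgpa = [3.8, 3.4, 3.0, 2.9, 3.6, 3.1]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(cat, cgpa)
# A high r here would mean the CAT score predicts MBA performance well.
print(round(r, 3))
```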
Sensitivity
Sensitivity refers to an instrument's ability to accurately measure variability in
a concept. A dichotomous response category such as agree/disagree does not allow
the recording of subtle attitude changes. A more sensitive measure, with more
categories on the scale, may be required. For example, adding 'strongly agree',
'agree', 'neither agree nor disagree', 'disagree' and 'strongly disagree'
categories will increase the sensitivity of the scale. The sensitivity of a scale
based on a single question or item can be increased by adding questions or items.
In other words, because composite measures allow for a greater range of possible
scores, they are more sensitive than single-item scales.
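The arithmetic behind that last claim can be made concrete. A single dichotomous item distinguishes only 2 response patterns, while a composite of five 5-point items yields scores from 5 to 25, i.e. 21 distinct values (the item counts below are illustrative, not from the text):

```python
# A dichotomous single item can take only 2 values; a composite of
# five 5-point Likert items takes total scores from 5 to 25.
dichotomous_values = 2                       # agree / disagree
likert_points, n_items = 5, 5
composite_min = n_items * 1                  # all items scored 1
composite_max = n_items * likert_points      # all items scored 5
composite_values = composite_max - composite_min + 1
print(dichotomous_values, composite_values)  # 2 21
```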
Q.3 What are the advantages and disadvantages of the questionnaire method?
Illustrate with suitable examples.
Ans. The questionnaire has many advantages over the other data collection
methods discussed earlier.
•Probably the greatest benefit of the method is its adaptability. There is
practically no domain or branch of study for which a questionnaire cannot be
designed. It can be framed in a manner that is easily understood by the population
under study, and the language, content and manner of questioning can be modified
suitably. The instrument is particularly suitable for studies that try to
establish the reasons for certain occurrences or behaviour.
•The second advantage is that it assures anonymity if it is self-administered by
the respondent, as there is no pressure or embarrassment in revealing sensitive
data. Many questionnaires do not even require the person to fill in his or her
name. Administering a questionnaire is also faster and less expensive than other
primary methods, and even some secondary sources. There is considerable ease of
quantitative coding and analysis of the obtained information, as most response
categories are closed-ended and based on a defined measurement level. The chance
of researcher bias is also very small.
•Lastly, there is no pressure of immediate response, thus the subject can fill in the
questionnaire whenever he or she wants.
•The questionnaire is the most economical method, as it can be administered
simultaneously to a number of respondents; thus a large amount of data can be
collected within a short time. However, the method is not without disadvantages.
•The major disadvantage is that the inexpensive standardized instrument has
limited applicability, that is, it can be used only with those who can read and write.
•The questionnaire is an impersonal method, and for a sensitive issue it may not
reveal the actual reasons or answers to the questions asked. The return ratio,
i.e., the proportion of people who return duly filled-in questionnaires, is
sometimes not even 50 per cent of the number of forms distributed.
•A skewed sample response could be another problem. This can occur in two cases:
first, if the investigator distributes the forms to his friends and acquaintances,
and second, because of self-selection of the subjects. This means that those who
fill in and return the questionnaire might not be representative of the population
at large. Moreover, if the respondent is not clear about a question, clarification
from the researcher may not be possible.
Ans. Data editing is the process of detecting and correcting errors (logical
inconsistencies) in the data. After collection, the data is subjected to
processing, which requires the researcher to go over all the raw data forms and
check them for errors. Validation becomes especially important in the following
cases:
•When the form has been translated into another language, an expert analysis is
done to check whether the questions carry the same meaning in both versions.
•When the questionnaire survey has to be conducted at multiple locations and has
been outsourced to an outside research agency.
•When the respondent seems to have used the same response category for all the
questions; for example, there is a tendency on a five-point scale to give 3 as the
answer to every question.
•When the form received back is incomplete: either the respondent has not answered
all the questions or, in the case of a multiple-page questionnaire, one or more
pages are missing.
•When the forms received are not in the proportion of the sampling plan. For
example, instead of an equal representation of government and private sector
employees, 65 per cent of the forms are from the government sector. In such a case
the researcher would need either to discard the extra forms or to get an equal
number filled in by private sector employees.
Once the validation process has been completed, the next step is the editing of
the raw data. While carrying out the editing, the researcher needs to ensure that:
•The data obtained is complete in all respects.
•It is accurate in terms of information recorded and responses sought.
•Questionnaires are legible and are correctly deciphered, especially the open-ended
questions.
•The response format is in the form that was instructed.
•The data is structured in a manner that entering the information will not be a
problem.
The editing process is carried out at two levels: the first is field editing and
the second is central editing.
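Two of the routine checks above, spotting incomplete forms and "straight-lined" forms that use one response category throughout, are easy to automate. A minimal sketch with invented survey forms:

```python
# Hypothetical raw forms from a five-question survey on a 1-5 scale;
# None marks a skipped question (data invented for illustration).
forms = [
    [4, 2, 5, 3, 1],
    [3, 3, 3, 3, 3],      # same category throughout -> suspect
    [5, None, 4, 2, 2],   # unanswered question -> incomplete
]

def is_incomplete(form):
    # Flag forms where at least one question was left unanswered.
    return any(answer is None for answer in form)

def is_straight_lined(form):
    # Flag forms where every answered question uses the same category.
    answered = [a for a in form if a is not None]
    return len(set(answered)) == 1

flags = [(is_incomplete(f), is_straight_lined(f)) for f in forms]
print(flags)  # [(False, False), (False, True), (True, False)]
```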
Ans. At the data analysis stage, the first step is to describe the sample; this is
followed by inferential analysis. Descriptive analysis summarizes the sample
itself, whereas inferential analysis generalizes the results obtained from the
sample to the population.
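The distinction can be illustrated with a small invented sample: the mean and standard deviation describe the sample, while a confidence interval for the population mean is an inferential statement (the 1.96 multiplier is the normal-approximation critical value; for a sample this small a t value would be more exact):

```python
import math
import statistics

# Hypothetical sample of 10 monthly expenditures (data invented).
sample = [1200, 1350, 1100, 1500, 1250, 1400, 1300, 1150, 1450, 1300]

mean = statistics.mean(sample)        # descriptive: sample mean
sd = statistics.stdev(sample)         # descriptive: sample std. dev.

# Inferential: approximate 95% confidence interval for the
# population mean, using the normal critical value 1.96.
se = sd / math.sqrt(len(sample))
low, high = mean - 1.96 * se, mean + 1.96 * se
print(mean, (round(low, 1), round(high, 1)))
```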
Ans. Whatever the type of report, reporting requires a structured format, and by
and large the process is standardized. As stated above, the major difference among
the types of reports is that all the elements that make up a research report are
present only in the detailed technical report. Theoretical and technical jargon is
used more heavily in the technical report, while visual presentation of data is
greater in the management report.