158942-19
Submission date for MA IHRM students who progressed to the dissertation stage in September 2009
Coursework is receipted on the understanding that it is the student's own work and that it has not,
in whole or part, been presented elsewhere for assessment. Where material has been used from
other sources it has been properly acknowledged in accordance with the University's Regulations
regarding Cheating and Plagiarism.
000529696
Tutor's comments
Grade Final
For Office Use Only__________
Awarded___________ Grade_________
A study on measurement methods of training program
evaluation in Indian BPO Industry: A Case study of
IBM-Daksh
Title
‘EVALUATION OF TRAINING’
MA IHRM
29 JANUARY 2010
ACKNOWLEDGEMENT
TABLE OF CONTENTS
COURSE WORK HEADERSHEET
TITLE
ACKNOWLEDGEMENT
ABSTRACT
INTRODUCTION
LITERATURE REVIEW
METHODOLOGY
FINDINGS
ABSTRACT
The aim of this research was firstly to examine the existing theories of
training programme evaluation on the whole, and secondly to explore a
case study (IBM-Daksh) on the relevance of an extensively established
academic model (the Kirkpatrick Model) for evaluating training programs
in the Indian BPO industry. The research achieved the following research
objectives: to assess the need for training program evaluation; to
identify and evaluate various measurement methods of training program
evaluation; and to assess the effectiveness of the Kirkpatrick Model for
methodical evaluation of training. The research attempted to answer the
following research questions: why is training program evaluation needed;
what are the various measurement methods of training program evaluation;
and how effective is the Kirkpatrick Model for methodical evaluation of
training?
area needs to be centrally focused by IBM-Daksh. The Kirkpatrick Model
of training evaluation is highly effective for IBM-Daksh. Learning and
results, as dimensions of Kirkpatrick model training evaluation, require
the most focus from IBM-Daksh in order to obtain the desired results in
relation to training evaluation. Certainly, Kirkpatrick model training
evaluation is highly effective in evaluating the e-learning of IBM-Daksh,
and it is cost effective and efficient in controlling staff turnover at
IBM-Daksh.
Chapter 1
INTRODUCTION
employees and evaluating the performance of interns who are undergoing
the training process for the company. Many companies do not apply a
pattern of training at work, particularly where the trainers and the
personnel department do not have sufficient time or resources to do so.
The training technique must be improved and provided for by evaluating
the available resources, so as to compete with market rivals. Lack of
assessment decreases efficiency in the working of an organization; it
also hampers the quality of production. Evaluation of training depends
on a number of issues; however, there need to be realistic targets.
Appraisal requires more elaborate information, and hence substantial
investment. Management training in general should be made clear to all,
so that everyone's expectations are fulfilled. Furthermore, the training
process helps in planning to review employees' potential at a certain
level of work. Extensive management training should be made friendlier,
to ensure the requirements of employees and trainers are met during
actual work within the organization, while the company keeps a regular
check on the performance of interns at work.
The evaluation of performance in almost every organization is a regular
phenomenon. Chen and Rossi (2005) explain that the evaluation literature
depends more on practice. For example, most training evaluation is drawn
from the Kirkpatrick model, but at present it is driven by market demand.
The information for evaluation at this level is usually collected through
questionnaires at the end of the training program.
The function of this research is to identify areas of improvement within
organizations in the Indian BPO industry. It refers to the case study of
a leading BPO organization in India, IBM-Daksh. Evaluation helps in
calculating the degree to which the methods and agendas of the company
are achieved. Evaluation is said to be a course of action through which
individuals at work learn and gain experience (Torres, Preskill and
Piontek, 1996). However, in this form of study the evaluation is done to
find out the cause and its consequences as applied to carrying out work.
One key model used in this study is Kirkpatrick's. This model was
elaborated in the 1950s, and remains unique to date (Stufflebeam, 2001).
The model is built on four bases for judging the training process:
reaction, learning, behavior, and results (Kirkpatrick, 1996).
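To illustrate, the four Kirkpatrick levels can be represented as an ordered evaluation record. This is a minimal sketch: the level names come from the model itself, but the data structure, function, and scores are hypothetical and not drawn from the case study.

```python
# Hypothetical sketch: the four Kirkpatrick levels as an ordered record.
# Level names follow Kirkpatrick (1996); the scores are invented examples.
KIRKPATRICK_LEVELS = ("reaction", "learning", "behavior", "results")

def summarize_evaluation(scores):
    """Return (level, score) pairs in Kirkpatrick's order, skipping missing levels."""
    return [(level, scores[level]) for level in KIRKPATRICK_LEVELS if level in scores]

# Example: end-of-course questionnaire averages for one training program.
scores = {"reaction": 4.2, "learning": 3.8, "behavior": 3.1, "results": 2.9}
print(summarize_evaluation(scores))
```

The ordering matters because each level is conventionally assessed after the previous one, from immediate reactions up to organizational results.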
To assess the need of training program evaluation
Chapter 2
LITERATURE REVIEW
2.1 INTRODUCTION
important investments of time and money for the success of any
organization. This logic is also accepted by Lingham et al. (2006), who
believe that training helps in simplifying relations at work. Training
validation is different from training evaluation: validation may be
defined as a process of checking that people at work can complete
specific tasks, whereas evaluation, in a much broader sense, was defined
in 1994 in the Industrial Society Report.
(1996). Evaluation is not only the process of measuring reactions to the
training; it is also the method which helps participants deliver better
results with their new skills or knowledge. The purpose of training
evaluation is thus to calculate the rate of change brought about by the
use of knowledge and learning at the workplace. Numerous models for
training evaluation are given by researchers in the literature but, as
Abernathy (1999) states, the most famous model of training evaluation is
that of the American professor Donald L. Kirkpatrick (1975), which was
initially created in 1959. This is certified by Nickols (2005), who said
that the “current method of evaluating training is derived by the
Kirkpatrick Model”. It is further affirmed by Canning (1996, p. 5), who
said this model of training is “rather like a reptile, it brings
incremental changes in the process of training”.
Although the Kirkpatrick model first evolved in 1959, a number of
practices have since been developed in the form of functional training
evaluation by various academics and practitioners. Many of the training
evaluation models in use since Kirkpatrick have evolved new ways of
thinking, bringing variations to the original model. The Kirkpatrick
model is regarded as the basis of training evaluation in the literature.
This is supported by Wang and Wang (2005, p. 22), who said: “No matter
how controversial it may appear, the four-level evaluation proposed by
Kirkpatrick began a chapter of measurement and evaluation in the field of
human resource development.” The main elements of an effective training
model are highlighted by Tamkin (2005), who explains that a number of
things should be taken into account while scrutinizing training
evaluation models. The first thing must be kept in mind so that the other
factors can be identified and properly utilized in the training process.
Then we must determine whether the model adds value to the production
process, so that workers understand the contribution made by various
aspects of Human Resource practice. This also helps the organization to
understand the value of employees and to sort out the differences arising
in the process of work. It could also be argued that the literature must
examine all the relevant information gathered to understand the
organizational set-up in terms of production at lower cost; the
literature on effective training must work to bring out all the hidden
aspects in order to solve challenges occurring at the workplace.
Management must be persuaded to invest money and staffing, with regard to
operational time, to increase business value. If there is nobody within
the organization who can conduct evaluations, then it is very difficult
for the organization to set targets for operation and remain in business.
Lack of senior management involvement is one more drawback, especially if
there is no training supervisor in the decision-making process. Absence
of strategic planning and effective direction for increasing performance
might hamper the working at every level.
stressed the view that senior management must be authorized to take an
active part in the organization for improving results, so as to create a
congenial atmosphere in the organization for proper functioning. If the
organization does not have a good culture to encourage its employees,
then the evaluation part will create a problem for the organization in
finding out information regarding drawbacks in the system. Culture may be
termed an obstacle to increasing the performance of employees (Holton,
1996; Holton et al., 2000). The State government of Louisiana found that
culture has an adverse impact on performance-based training. Reinhardt
(2001) focuses on this in her research on identifying barriers to
measuring the performance of employees at work in learning. Reinhardt
also highlights loopholes in the organizational set-up as an important
aspect in measuring the impact on performance. This issue of culture is
termed an imperative barrier.
effectiveness and evaluation (IMTEE). Sometimes training effectiveness
and training evaluation are used interchangeably, but these are two
different terms. A real illustration may help to explain the difference.
In recent times, a government employment agency was instructed by a court
to revamp and oversee selection assessments for about 30 jobs. The
project was time bound, so staff needed to work for several months,
including many hours of overtime, in a row. Because of that, a year into
the project many employees had left the agency and many were falling sick
time after time. To stop further employee turnover and to support staff
in completing the remaining project on time, the agency initiated a
training program for employees on dealing with exhaustion. All the
employees, including supervisors and their subordinates, attended the
training. The multi-purpose training program was specially prepared, and
included humor, lectures, and real-time practice of many stress-reducing
techniques. The program also encouraged trainees to improve
worker-supervisor relations even after the training. In the end, the
supervisors were encouraged to share their self-developed
stress-reduction techniques and methods with their juniors and
subordinates.
Evaluation of a training program is the assessment of its success or
failure in terms of its design and content, changes in the organization,
and learners' productivity. The training evaluation methods used for
review depend on the evaluation model, as there are four models for
evaluation. The first of them, Kirkpatrick's reactions, learning,
behavior, and results typology, is the easiest and most frequently used
technique for reviewing and understanding training evaluation. In this
four-dimensional method, the learning outcome is measured at the time of
training, which means behavioral, attitudinal, and cognitive learning;
behavioral learning measures on-the-job performance after the training.
The second model, developed by Tannenbaum et al. (1993), expands
Kirkpatrick's four-dimensional typology with two more aspects:
post-training attitudes, and a division of behavior into two training
outcomes for evaluation, transfer performance and training performance.
In this extended model, training reactions and post-training attitudes
are not associated with any other evaluation. Learning, however, is
associated with training performance, training performance is associated
with transfer performance, and transfer performance is associated with
training outcomes.
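The chain of associations just described (learning leads to training performance, which leads to transfer performance, which leads to training outcomes) can be sketched as a simple lookup. This is an illustration of the reported associations only, not an implementation from Tannenbaum et al.; the construct names are taken from the text above.

```python
# Hypothetical sketch of the associations reported for the extended model.
# Each key maps to the construct it is associated with; 'reactions' and
# 'post-training attitudes' are deliberately absent from the chain.
ASSOCIATIONS = {
    "learning": "training performance",
    "training performance": "transfer performance",
    "transfer performance": "training outcomes",
}

def downstream(construct):
    """Follow the association chain from a construct to its final outcome."""
    path = [construct]
    while path[-1] in ASSOCIATIONS:
        path.append(ASSOCIATIONS[path[-1]])
    return path

print(downstream("learning"))
# ['learning', 'training performance', 'transfer performance', 'training outcomes']
```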
multidimensional areas for evaluation: changes in trainees (i.e.,
cognitive, behavioral, and affective), training design and system (i.e.,
design, validity, and delivery of training), and organizational outcomes
(i.e., results, job performance, and transfer climate). Feedback from the
trainees is considered an assessment technique for measuring how
effective a training program's design and system were for the learner.
Kraiger stated that feedback measures are not associated with changes in
trainees or organizational outcomes, but that those changes, or learning,
in employees are associated with organizational outcomes.
secondary variables that affect the training outcomes. Holton's model
puts forward that all these characteristics are related to transfer and
learning performance. However, indirect relationships also exist because
of interactions between these characteristics. For example, Holton
suggested that motivation interacts with organizational and training
characteristics, thereby influencing the training outcomes. Although
Holton has given valuable inputs for assessing training effectiveness,
only a few studies (Holton, 2003; Holton, Bates, and Ruona, 2000) have
measured the various outcomes recommended by the author. These authors
developed a Learning Transfer System that summarizes effectiveness
variables in a model, and found it supportive of the model's
construction.
et al., 1993). In this way it becomes difficult to measure how training
outcomes are affected by changes in motivation. In addition, this
approach cannot help training experts to study the different aspects of
training content, design, and organizational culture that may affect
motivation for transfer or learning. As a result, a method is required
for assessing changes in motivation.
Results are the last and final dimension of training evaluation,
referring to trainees' quantifiable behavioral changes (Kraiger, 2002).
For example, organizational outcomes from the training's transfer
performance may include enhanced safety measures, morale, efficiency, and
quality and quantity of output.
Kirkpatrick's first level is the least difficult for measuring
performance. No studies prove that any one method is suitable for all
applications in the evaluation of knowledge.
Warr et al. (1999) studied the link between the modified Kirkpatrick
framework's measurement levels to study behavior and results on the job.
The three levels studied were reactions, learning, and job behavior.
Trainees were given all the appropriate knowledge about the work. A
questionnaire was mailed after one month to review performance on the
basis of the information collected at each level. Later, all
questionnaire data were transformed into another set of measurement
levels. Reaction-level data were gathered after the training to learn
about trainees' perceptions of the usefulness of the training and
measures to eradicate training problems. The learning level was measured
by all three questionnaires, since the main objective of the training was
to improve and motivate employees towards the goal of the organization
with regard to the use of the latest technology. Because experience comes
only after spending time on a particular task, the researchers measured
the capability gained during the course of learning, comparing changes in
scores before and after training. Warr et al. accept that there is a
correlation between the six individual trainee and motivational factors.
Correlation at the workstation helps to predict change in training, job
behavior, and the desired objective at each measurement level during and
after training. Multiple regression helps in analyzing the scores gained
at different levels in the training process, so that a strong
relationship can be built between trainer and trainees.
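The multiple-regression step mentioned above can be illustrated with a small least-squares fit. The predictor names and all numbers below are invented for illustration and do not come from Warr et al.'s data; the sketch only shows the technique of predicting a post-training score from earlier measurements.

```python
# Illustrative multiple regression: predicting a post-training score from
# a pre-test score and a motivation rating (invented data, not Warr et al.'s).
import numpy as np

pretest    = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
motivation = np.array([1.0, 2.0, 2.0, 3.0, 4.0])
posttest   = np.array([3.1, 4.2, 4.9, 6.1, 7.2])

# Design matrix with an intercept column, then an ordinary least-squares fit.
X = np.column_stack([np.ones_like(pretest), pretest, motivation])
coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
predicted = X @ coef
print("coefficients:", np.round(coef, 2))
print("predictions:", np.round(predicted, 2))
```

In the study's setting, such coefficients would indicate how strongly each level's score predicts change at the next level.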
Warr et al. (1999) explained the relationship built between the six
individual trainee features and organizational predictors for evaluating
performance at every level. At the first level, participants are given
prior training before going to actual work, so that their capacity can be
measured after training. At the second level, other factors such as
motivation, confidence, and strategy work to bring change within the
system of learning; the learning level reflects the changes that were
strongly predicted after the training process at all levels. The research
suggests a possible link between reactions and learning, which may be
identified through greater differences at the opinion level. At the third
point, training builds confidence in trainees and helps in transfer
support for predicted job behavior. Transfer support was measured as part
of the organizational tendency to coordinate for the satisfaction of the
organization; it is the amount of support given by trainers to their
trainees and colleagues for improving the quality of work in the
organization. Warr et al. suggested that an analysis of pretest scores
might explain the reasons for the behavior and help in the improvement of
organizational behavior.
evaluation of learners' behavior at all four levels – in this article, a
'mid-range theory approach' is used to focus on only one part of
Kirkpatrick's four-stage framework.
Holton (1996) held that reactions should not be considered a primary
outcome of the evaluation procedure; rather, reactions are a benchmark
for the suitability of a training program. Holton (1996) says the
applicable evaluation objectives are learning, which should lead to
transfer, which in turn should lead to results. The evaluation of a
training program is always associated with its effectiveness (Alvarez et
al., 2004): evaluation checks whether it works, and effectiveness checks
why it works (Ford, 1997). Kraiger (2002) gives three multidimensional
objectives for training evaluation. The evaluation procedures include
evaluation of the training design and system, changes in the
organization, and trainees' outcomes. Changes in trainees may be
cognitive, behavioural, or emotional. Organizational outcomes include the
transfer culture, results, and job performance.
changes in the trainees as they participate in the training programme.
Alvarez et al.'s (2004) model divides these changes into three parts:
training performance, post-training self-value, and cognitive learning.
Alvarez et al. define post-training efficiency as a post-training
mindset. Self-value means an individual's beliefs about himself, his
ability to perform a particular task, and the self-value and confidence
he shows while performing it.
2.5 SUMMARY
two different techniques to assess a training programme's outcome. While
training evaluation is associated with the training content and training
plan, training effectiveness is associated with the whole learning
system. In brief, training evaluation provides a micro-view of the
training outcomes, while training effectiveness provides a macro-view.
Donald Kirkpatrick suggested a model for training evaluation, presenting
a four-step training measurement process: reaction, learning, behavior,
and results. It is a widely used tool, especially for measuring technical
training outcomes. This model has been reviewed by many researchers, some
of whom found it most suitable.
Chapter 3
METHODOLOGY
reason is that philosophy might help the researcher to be creative and
pioneering in either the choice or the adaptation of methods that were
formerly outside their experience. Putting these three reasons at the
center of the research strategy, the researcher first decided on the
research philosophy to be applied, so that nothing inappropriate was
taken into consideration.
So, when deciding on the methodology of this research, the first step was
to decide on the research philosophy, because the researcher had to
choose a suitable path and approach for the research. The choice was
between positivism and interpretivism, but it depended on the nature of
the research problem: whether the research was scientific or social, and
whether it was experimental or exploratory. Certainly this research was
exploratory and not experimental. Notably, the aims of the research were
to examine the existing theories of training programme evaluation on the
whole, and to explore a case study (IBM-Daksh) on the relevance of an
extensively established academic model (the Kirkpatrick Model) for
evaluating training programs in the Indian BPO industry. Therefore, the
philosophy of this research was decided to be interpretivism, whereby the
facts regarding the above issues were explored and interpreted in
accordance with the research objectives and research questions.
research is deductive (Patton, 2002). One of the most differentiating
points between qualitative and quantitative research is that in
qualitative research a hypothesis is not required to begin the research,
whereas every quantitative research requires a hypothesis before the
research can begin.
Having decided on the choice of philosophy and discussed the available
research methods, the researcher was very clear about choosing the
appropriate research method. Initially the researcher had the option of
choosing only the quantitative method, only the qualitative method, or
both, but in the end there was only one choice, and that was the
qualitative method. As stated above, when research is to be carried out
under an interpretivist philosophy, the method of data collection and
data analysis ought to be qualitative. So this research was conducted by
applying the qualitative method. The qualitative data collection and data
analysis were carried out to answer the developed research questions: why
is training program evaluation needed; what are the various measurement
methods of training program evaluation; and how effective is the
Kirkpatrick Model for methodical evaluation of training? In the following
sections, the data collection and data analysis tools are discussed in
detail.
Nevertheless, as a general rule, a methodical search of the secondary
data ought to be carried out before collecting primary data. The
secondary data offers a functional setting and identifies the foremost
questions and issues that need to be addressed by the primary data.
Secondary data is classified in relation to its source as either internal
or external: internal secondary data is obtained within the group where
the research is being carried out, while external secondary data is
obtained from outside sources (Robson, 2000).
after it, there are many factors which might easily go unnoticed at this
vital first phase (Sekaran, 2003).
There is a range of primary data sources, of which the prominent ones are
the focus group, interview, questionnaire, and observation. The
questionnaire is the cheapest, most efficient, and most frequently used
primary data collection method. A questionnaire is a set of written
questions relating to the problem or issues under research, for which the
researcher requires answers from respondents (Sekaran, 2003). The
formulation of a questionnaire plays a vital role in achieving the aims
of primary data collection. Questionnaire design is a lengthy process
that requires persistence and reasoned analysis. It is an influential and
well-organized assessment method and must not be taken lightly; it must
be performed in a phased approach (Robson, 2000).
3.3.3 Sampling
answering the research questions. This is called sampling. Sampling
techniques are classified as probability and non-probability. In
probability sampling, the first step is to choose the population of
interest, that is, the population about which the researcher seeks
results. The sample may be chosen in numerous stages. By knowing the
probability of obtaining each sample chosen, the researcher can also work
out a sampling error for the results (Punch, 2003). On the other hand,
non-probability sampling is a technique in which the samples are chosen
by a process that does not offer every individual in the population an
equal probability of being chosen (Punch, 2003).
Triangulation was found to be the most fitting data analysis method for
this qualitative research. Triangulation is a method applied in
qualitative research to confirm and ascertain the validity of the
research. Triangulation data analysis can be carried out through five
types of methods, namely data triangulation, investigator triangulation,
theory triangulation, methodological triangulation, and environmental
triangulation (Marshall and Rossman, 2002). However, the most common mode
of triangulation applied in academic research is data triangulation.
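The idea of probability sampling and its associated sampling error can be sketched briefly. The population size and sample size below are hypothetical, and the standard error of a proportion is used as one common sampling-error measure; neither comes from this study's data.

```python
# Sketch of simple random sampling from a hypothetical population, plus
# the standard error of a sample proportion as a sampling-error measure.
import math
import random

random.seed(42)
population = list(range(1, 501))        # hypothetical population of 500 members
sample = random.sample(population, 25)  # each member equally likely to be chosen

# Sampling error for a proportion p observed in a sample of size n:
# SE = sqrt(p * (1 - p) / n)
p, n = 0.68, len(sample)
standard_error = math.sqrt(p * (1 - p) / n)
print(len(sample), round(standard_error, 3))
```

Because every member has an equal, known chance of selection, such an error estimate is available; with non-probability sampling it is not.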
Chapter 4
FINDINGS
4.1 INTRODUCTION
The aim of this research was firstly to examine the existing theories of
training programme evaluation on the whole, and secondly to explore a
case study (IBM-Daksh) on the relevance of an extensively established
academic model (the Kirkpatrick Model) for evaluating training programs
in the Indian BPO industry. The research attempted to answer the
following research questions: why is training program evaluation needed;
what are the various measurement methods of training program evaluation;
and how effective is the Kirkpatrick Model for methodical evaluation of
training? The data analysis, or analysis of the findings, in this
research addresses these research questions.
The organisation must have certain standards against which the quality of
work may be improved by using data gathered from the evaluation. (3) The
purpose of training evaluation is to help participants recognize the true
value of the tasks they are assigned. Easterby-Smith and Mackness
stressed the purposes and importance of training cycles for different
stakeholders. In the light of these propositions, this research examined
for what principal purpose training evaluation is needed at IBM-Daksh.
The data collected in this context reveal that training evaluation is
needed at IBM-Daksh principally for the purposes of training results and
implementation control (see table and figure 4.1); the majority of the
research participants find that their firm needs training evaluation for
these principal purposes.
Table 4.1:
As per the data shown in the above table, the majority of respondents
(68% of the total 25) find that their firm needs training evaluation for
the principal purposes of ‘training results’ (32%) and ‘implementation
control’ (36%), whereas the remaining respondents (32%) find that their
firm needs training evaluation for the principal purposes of ‘skill
development’ (16%) and ‘business goals’ (16%). By and large, these data
conclude that training evaluation is needed at IBM-Daksh principally for
the purposes of training results and implementation control.
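The percentages reported for Table 4.1 can be reproduced from the response counts they imply. The counts below are inferred from the stated percentages of 25 respondents (32% = 8, 36% = 9, 16% = 4, 16% = 4); the raw table itself is not reproduced here.

```python
# Response counts implied by the Table 4.1 percentages (n = 25).
counts = {
    "training results": 8,
    "implementation control": 9,
    "skill development": 4,
    "business goals": 4,
}
n = sum(counts.values())
percentages = {purpose: 100 * c / n for purpose, c in counts.items()}
print(n, percentages)
```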
Table 4.2:
As per the data shown in the above table, a greater majority of
respondents (72% of the total) either ‘strongly agree’ (44%) or ‘agree’
(28%) with the fact that training evaluation of their firm should be
centrally focused towards measuring changes in knowledge and appropriate
knowledge transfer, whereas the remaining respondents (28%), including
those who ‘disagree’ (20%), do not. By and large, these data conclude
that training evaluation of IBM-Daksh should certainly be centrally
focused towards measuring changes in knowledge and appropriate knowledge
transfer.
propositions, this research examined through which performance criteria
the training evaluation of IBM-Daksh should be measured. The data
collected in this context reveal that the training evaluation of
IBM-Daksh should be measured through the criteria of qualitative
performance and not through quantitative performance (see table and
figure 4.3); the majority of the research participants find that the
training evaluation of their firm should be measured through the criteria
of qualitative performance.
Table 4.3:
evaluation. Rae (1999) emphasized that effective evaluation needs a
“training quintet” to make senior management aware of the new techniques
used for the progress of a company. Rae also stressed the view that
senior management must be authorized to take an active part in the
organization for improving results, so as to create a congenial
atmosphere in the organization for proper functioning. If the
organization does not have a good culture to encourage its employees,
then the evaluation part will create a problem for the organization in
finding out information regarding drawbacks in the system. Culture may be
termed an obstacle to increasing the performance of employees (Holton,
1996; Holton et al., 2000). The State government of Louisiana found that
culture has an adverse impact on performance-based training. Reinhardt
(2001) focuses on this in her research on identifying barriers to
measuring the performance of employees at work in learning, and also
highlights loopholes in the organizational set-up as an important aspect
in measuring the impact on performance. This issue of culture is termed
an imperative barrier. In the light of these propositions, this research
examined what the major challenge of training evaluation is for
IBM-Daksh. The data collected in this context reveal that lack of
accountability is the major challenge of training evaluation for
IBM-Daksh (see table and figure 4.4); for the majority of the research
participants, ‘lack of accountability’ is the major challenge of training
evaluation for IBM-Daksh.
Table 4.4:
According to the data shown in the above table, for the majority of
respondents (60% of the total 25), ‘lack of accountability’ is the major
challenge of training evaluation for their firm, whilst for the remaining
respondents (40%), ‘cultural resistance’ is the major challenge. Overall,
these data conclude that lack of accountability is the major challenge of
training evaluation for IBM-Daksh.
measuring learning outcomes. The data collected in this context reveal
that training evaluation at IBM-Daksh should certainly follow a
methodological approach to measuring learning outcomes (see table and
figure 4.5); research participants in a greater majority either strongly
agree or agree with this.
Table 4.5:
As per the data shown in the above table, a greater majority of
respondents (76% of the total 25) either ‘strongly agree’ (40%) or
‘agree’ (36%) with the fact that the training evaluation of their firm
should follow a methodological approach to measuring learning outcomes,
whereas the remaining respondents (24%) ‘disagree’. By and large, these
data conclude that training evaluation at IBM-Daksh should certainly
follow a methodological approach to measuring learning outcomes.
organizational, and learners' productivity. The training evaluation
methods used for review depend on the evaluation model, as there are four
models for evaluation. The first of them, Kirkpatrick's reactions,
learning, behavior, and results typology, is the easiest and most
frequently used technique for reviewing and understanding training
evaluation. In this four-dimensional method, the learning outcome is
measured at the time of training, which means behavioral, attitudinal,
and cognitive learning; behavioral learning measures on-the-job
performance after the training. The second model, developed by Tannenbaum
et al. (1993), expands Kirkpatrick's four-dimensional typology with two
more aspects: post-training attitudes, and a division of behavior into
two training outcomes for evaluation, transfer performance and training
performance. In this extended model, training reactions and post-training
attitudes are not associated with any other evaluation. Learning,
however, is associated with training performance, training performance is
associated with transfer performance, and transfer performance is
associated with training outcomes. In the light of these propositions,
this research examined which dimension of the training evaluation target
area needs to be centrally focused on by IBM-Daksh. The data collected in
this context reveal that changes in learners and organizational payoff
are the dimensions of the training evaluation target area that need to be
centrally focused on by IBM-Daksh (see table and figure 4.6); research
participants in a greater majority find this to be the case.
Table 4.6: Dimension of the training evaluation target area requiring central focus (n = 25)
Changes in learners: 11 respondents (44%)
Organisational payoff: 9 respondents (36%)
Learning content and design: 5 respondents (20%)
As per the data shown in the table above, a large majority of
respondents (80% of the total 25) find that 'changes in learners'
(44%) and 'organisational payoff' (36%) are the dimensions of the
training evaluation target area that need to be the central focus for
their firm, whereas the remaining respondents (20%) find that
'learning content and design' is the dimension requiring central
focus. By and large, these data indicate that changes in learners
and organizational payoff are the dimensions of the training
evaluation target area that need to be the central focus for
IBM-Daksh.
developed by Kraiger (2002). This model stresses three
multidimensional areas for evaluation: changes in trainees (i.e.,
cognitive, behavioral, and affective), training design and system
(i.e., design, validity, and delivery of training), and organizational
outcomes (i.e., results, job performance, and transfer climate).
Feedback from the trainees is considered an assessment technique
for measuring how effective a training program's design and system
were for the learner. Kraiger stated that feedback measures are not
associated with the changes in trainees/employees or with
organizational outcomes, but that those changes or learnings in
employees are associated with organizational outcomes. In the light
of these propositions, this research examined how effective the
Kirkpatrick Model of training evaluation is for IBM-Daksh. The data
collected in this context reveals that the Kirkpatrick Model of
training evaluation is highly effective for IBM-Daksh (see table and
figure 4.7): a large majority of the research participants hold this
view.
Table 4.7: Effectiveness of the Kirkpatrick Model of training evaluation for the firm (n = 25)
Highly effective: 18 respondents (72%)
Reasonably effective: 7 respondents (28%)
firm; whilst the remaining respondents (28%) find that the
Kirkpatrick Model of training evaluation is 'reasonably effective' for
their firm. Overall, these data indicate that the Kirkpatrick Model of
training evaluation is highly effective for IBM-Daksh.
results in relation to training evaluation (see table and figure
4.8). The majority of the research participants find that learning
and results are the dimensions of Kirkpatrick Model training
evaluation that require the most focus from their firm in order to
achieve the desired results in relation to training evaluation.
Table 4.8:
at all five levels. The five levels are reaction, learning, behavior,
participation, and results. Although the Kirkpatrick Model has been
applied for years to solve problems arising in technical training,
recently the model has also been applied to nontraditional
electronic learning systems. Horton (2001) published Evaluating
E-Learning, in which he explains how to use the Kirkpatrick Model
to evaluate e-learning. Kirkpatrick (1998) suggests that all four
levels offer possibilities for the evaluation of training. In order to
make full use of organizational resources, there is a need for
effective training and capable manpower to execute the work in
accordance with the desired objectives of the organization. In the
light of these propositions, this research examined how effective
Kirkpatrick Model training evaluation is in evaluating e-learning at
IBM-Daksh. The data collected in this context reveals that
Kirkpatrick Model training evaluation is certainly highly effective in
evaluating e-learning at IBM-Daksh (see table and figure 4.9): the
majority of the research participants hold this view.
Table 4.9:
conclude that Kirkpatrick Model training evaluation is certainly
highly effective in evaluating e-learning at IBM-Daksh.
Table 4.10: Kirkpatrick Model training evaluation is cost effective and efficient in controlling staff turnover (n = 25)
Yes: 15 respondents (60%)
No: 10 respondents (40%)
As per the data shown in the table above, the majority of
respondents (60% of the total 25) find that Kirkpatrick Model
training evaluation is cost effective and efficient in controlling staff
turnover for their firm, whereas the remaining respondents (40%)
do not. By and large, these data indicate that Kirkpatrick Model
training evaluation is indeed cost effective and efficient in
controlling staff turnover at IBM-Daksh.
4.3 SUMMARY
area need to be the central focus for IBM-Daksh. The Kirkpatrick
Model of training evaluation is highly effective for IBM-Daksh.
Learning and results are the dimensions of Kirkpatrick Model
training evaluation that require the most focus from IBM-Daksh in
order to achieve the desired results in relation to training
evaluation. Kirkpatrick Model training evaluation is certainly highly
effective in evaluating e-learning at IBM-Daksh, and it is cost
effective and efficient in controlling staff turnover at IBM-Daksh.
Chapter 5
CONCLUSION
5.1 INTRODUCTION
The aim of this research was firstly to examine the existing theories
of evaluation of training programmes on the whole, and secondly to
explore a case study (IBM-Daksh) on the relevance of an
extensively established academic model (the Kirkpatrick Model) for
evaluating training programs in the Indian BPO industry. The
research has achieved the following research objectives: to assess
the need for training program evaluation; to identify and evaluate
various measurement methods of training program evaluation; and
to assess the effectiveness of the Kirkpatrick Model for methodical
evaluation of training.
is the major challenge of training evaluation for IBM-Daksh: the
majority of the research participants identified 'lack of
accountability' as the major challenge of training evaluation for
IBM-Daksh.
majority, they find that learning and results are the dimensions of
Kirkpatrick Model training evaluation that require the most focus
from their firm in order to achieve the desired results in relation to
training evaluation.
understanding with regard to the objectives formed by the
organization (Kraiger, 2002).
BIBLIOGRAPHY
Easterby-Smith, M. and Mackness, J. (1992), Personnel
Management, 42-45.
Eseryel, D. (2002), “Approaches to evaluation of training: theory
and practice”, Educational Technology and Society, Vol 5, No 2,
pp 93–98.
Fayolle, A., Gailly, B., and Lassas-Clerc, N. (2006), “Assessing the
impact of entrepreneurship education programmes: a new
methodology”, Journal of European Industrial Training, Vol 30,
No 9, pp 701–720.
Ford, J.K. (1997), “Advances in training research and practice: an
historical perspective”, in Ford, J.K., Kozlowski, S., Kraiger, K.,
Salas, E., and Teachout, M., eds, Improving Training
Effectiveness in Work Organizations, Lawrence Erlbaum
Associates, Mahwah, NJ.
Frayne, C. A., & Geringer, J. M. (2000), “Self-management training
for improving job performance: A field experiment involving
salespeople”, Journal of Applied Psychology, 85, 361-372.
Holton, E. F., III, & Baldwin, T. T. (2000), Making transfer happen: An
action perspective on learning transfer systems. In E. F.
Holton, S. S. Naquin, & T. T. Baldwin (Eds.), Managing and
changing learning transfer systems: Advances in developing
human resources #8 (pp. 1-6). San Francisco, CA: Berrett-
Koehler.
Holton, E. F., III, Bates, R. A., & Ruona, W. E. A. (2000),
“Development of a generalized learning transfer system
inventory”, Human Resource Development Quarterly, 11, 333-
360.
Holton, E. F., III. (1996), “The flawed four-level evaluation model”,
Human Resource Development Quarterly, 7, 5-21.
Holton, E. F., III. (2003), What’s really wrong: Diagnosis for learning
transfer system change. In E. Salas et al. (Eds.), Improving
learning transfer in organizations (pp. 59-79). San Francisco,
CA: Jossey-Bass.
Hughes, J. (2000), The Philosophy of Social Research, Longman,
Essex.
Kirkpatrick, D. (1996), “Great ideas revisited”, Training and
Development, 50, 1, pp.54-60.
Kirkpatrick, D. L. (1976), “Evaluation of training”. In R. L. Craig (Ed.),
Training and Development Handbook (2nd ed.). New York:
McGraw-Hill.
Kraiger, K. (2002). Decision-based evaluation. In K. Kraiger (Ed.),
Creating, implementing, and managing effective training and
development (pp. 331-375). San Francisco, CA: Jossey-Bass.
Krueger, N., and Carsrud, A. (1993), “Entrepreneurial intentions:
applying the theory of planned behavior”, Entrepreneurship
and Regional Development, Vol 18, No 1, pp 5-21.
Lynton, R. and Pareek, U. (2000), Training for Organizational
Transformation – For Policy Makers and Change Managers,
Sage Publications, London.
Marchington, M. and Wilkinson, A. (2000), Core Personnel and
Development, CIPD, London.
Marshall, C., and Rossman, G.B. (2002), Designing Qualitative
Research (3rd ed.), Thousand Oaks, CA: Sage
Nickols, F. W. (2005), Advances in Developing Human Resources, 7,
121-134.
Patton, M. Q. (2002), Qualitative evaluation and research methods
(3rd ed.). Sage Publications, Inc., Thousand Oaks, CA
Punch, K.F. (2003), Survey Research: The Basics, Sage Publications,
London.
Torres, R. T., & Preskill, H. (2001), “Evaluation and organizational
learning: Past, present, and future”, American Journal of
Evaluation, 22, 387-395.
Torres, R.T., Preskill H, and Piontek, M. (1996). Evaluation strategies
for communicating and reporting: Enhancing learning in
organizations, Sage Publications, Thousand Oaks, CA.
Wang, G. G. and Wang, J. (2005), Advances in Developing Human
Resources, 7, 22-36.
Warr, P., Allan, C., & Birdi, K. (1999), “Predicting three levels of
training outcome”, Journal of Occupational and Organizational
Psychology, 72(3), 351-375.
Appendix
Questionnaire
Qualitative performance
Quantitative performance
8. Which dimension of Kirkpatrick Model training evaluation
requires the most focus from your firm in order to achieve the
desired results in relation to training evaluation?
Learning
Reactions
Behaviour
Results
Highly effective
Effective
Ineffective
Yes
No