IN THIS ARTICLE: HRD Program Evaluation, Evaluation Instruments

Great Ideas Revisited

THEN

Techniques for Evaluating Training Programs
By DONALD KIRKPATRICK

Here are the original four articles (condensed), published from November 1959 through February 1960 in T&D when it was the Journal of the American Society of Training Directors. The series introduced Kirkpatrick's four-level model of evaluation. These articles are designed to stimulate training directors to increase their efforts in evaluating training programs.

Step 1: Reaction

Reaction may best be defined as how well trainees like a particular training program. Evaluating in terms of reaction is the same as measuring trainees' feelings; it doesn't measure any learning that takes place. Because reaction is easy to measure, nearly all training directors do it. But in this writer's opinion, many of their attempts don't meet the following standards:
» Determine what you want to find out.
» Use a written comment sheet with the items determined in the task above.

NOW

Revisiting Kirkpatrick's Four-Level Model
By DONALD KIRKPATRICK

It has been more than 37 years since Kirkpatrick's classic four-level model was first published in the Journal of the American Society of Training Directors. Here, Kirkpatrick takes another look at his creation.

Beginning with the November 1959 issue of Training & Development (then called the Journal of the American Society of Training Directors), I published a series of four articles, "Techniques for Evaluating Training Programs." Since then, I've written many articles and book chapters on evaluation and compiled 20 years' worth of evaluation material in Evaluating Training Programs (American Society for Training and Development, 1975) and More Evaluating Training Programs (ASTD, 1986).

Over the years, a lot of things have changed in my writing about and teaching evaluation. But the content has remained basically the same.
I've made a few modifications in the guidelines for each of the four levels, as well as provided more and different forms and examples in my books. The four levels of reaction, learning, behavior, and results have remained constant.

THEN

» Design the sheet so that reactions can be tabulated and quantified.
» Obtain honest reactions by making the sheet anonymous.
» Allow trainees to write additional comments not covered by the questions designed to be tabulated and quantified.

It's important to determine how people feel about a program because training decisions by top management are frequently made on the basis of one or two comments from participants. For example, a supervisory training program may be canceled just because one supervisor told the plant manager that the program was "for the birds." People must like a training program to obtain the most benefit. Cloyd Steinmetz, past president of ASTD, says, "It's not enough to say, 'Here's the information, take it.' We must make it interesting and motivate people to want to take it."

It's important to measure participants' reactions in an organized fashion using written comment sheets that have been designed to obtain the desired reactions. The comments should also be designed so that they can be tabulated and quantified. The training coordinator, director, or other trained observer should make his own appraisal of the training in order to supplement participants' reactions. The combination of the two evaluations is more meaningful.

When training directors effectively measure reactions and find them favorable, they can feel proud. But they should also feel humble; the evaluation has only just begun. Even though a training director may have done a masterful job measuring trainees' reactions, that's no assurance that any learning has taken place. Nor is it an indication that participants' behavior will change because of training. And still further away is any indication of results that can be attributed to the training.

The comment sheet in the box was used to measure conferees' reactions at the 1959 American Society of Training Directors Summer Institute. The subsequent steps in the evaluation (learning, behavior, and results) will be discussed in the next three articles in the series.

Step 2: Learning

From an analysis of reactions, training directors can determine how well a program was accepted. They can also obtain comments and suggestions that will be helpful in improving future programs. It's important to obtain favorable reactions because decisions on future training activities are frequently based on the reactions of one or more key persons.

NOW

It all started in 1952, when I decided to write my dissertation on "evaluating a supervisory training program." In analyzing my goals for the paper, I considered measuring participants' reaction to the program, the amount of learning that took place, the extent of their change in behavior after they returned to their jobs, and any final results that were achieved by participants after they returned to work. I realized that the scope of the research should be restricted to reaction and learning and that behavior and results would have to wait. Thus, the concept of four levels was born.

In the November 1959 article, I used the term "four steps." But someone, I don't know who, referred to the steps as "levels." The next thing I knew, articles and books were referring to the four levels as the Kirkpatrick model.

Defining the four levels

In 1993, my friend and colleague Jane Halcomb urged me to write a book describing the model.
She said that many people were interested in it but had trouble finding details. The book, Evaluating Training Programs: The Four Levels (Berrett-Koehler, San Francisco, California, 1994), uses case studies from such companies as Motorola, Arthur Andersen, and Intel to show how the four levels can be implemented.

Some critics say that the four-level model is too simple. "The Flawed Four-Level Evaluation Model," written by Elwood F. Holton of Louisiana State University, will be published in Human Resource Development Quarterly this spring. Holton says that the model isn't a model at all but a taxonomy, or classification. Perhaps he is correct. I don't care whether it's a model or a taxonomy as long as training professionals find it useful in evaluating training programs.

People have asked me why the model is widely used. My answer: It's simple and practical. Many trainers aren't much interested in a scholarly, complex approach. They want something they can understand and use. The model doesn't provide details on how to implement all four levels. Its chief purpose is to clarify the meaning of evaluation and offer guidelines on how to get started and proceed.

For those of you who are unfamiliar with the four levels, it's time to describe them.

Level 1: Reaction. This is a measure of how participants feel about the various aspects of a training program, including the topic, speaker, schedule, and so forth. Reaction is basically a measure of customer satisfaction. It's important because management often makes decisions about training based on participants' comments. Asking for participants' reactions tells them, "We're trying to help you become more effective, so we need to know whether we're helping you." Another reason for measuring reaction is to ensure that participants are motivated and interested in learning.

Training & Development, January 1996

THEN

In addition, the more favorable the reactions to a program, the more likely trainees are to pay attention to the principles, facts, and techniques discussed. But favorable reactions don't assure learning. Most of us have attended meetings in which the speaker used enthusiasm, showmanship, visual aids, and illustrations to make his presentation well-accepted. But a careful analysis of the content would reveal that he said nothing of value, though he did it very well.

It's important to measure the amount of learning that takes place. For the purpose of this article, learning is defined in a rather limited way: What principles, facts, and techniques were understood and absorbed by trainees? We're not concerned with on-the-job use of the principles, facts, and techniques.

Here are some guideposts for measuring learning:
» Measure the learning of each trainee so that quantitative results can be determined.

Comment Sheet

The following evaluation was used to measure conferees' reactions at the 1959 American Society of Training Directors Summer Institute.

Leader ____    Subject ____    Date ____

1. Was the subject pertinent to your needs and interests?
   [ ] no    [ ] to some extent    [ ] very much so
2. How was the ratio of lecture to discussion?
   [ ] too much lecture    [ ] OK    [ ] too much discussion
3. How about the leader? (Rate each item: excellent, very good, good, fair, or poor.)
   a. How well did he state objectives?
   b. How well did he keep the session alive and interesting?
   c. How well did he use the blackboard, charts, and other aids?
   d. How well did he summarize during the session?
   e. How well did he maintain a friendly and helpful manner?
   f. How well did he illustrate and clarify the points?
   g. How was his summary at the close of the session?
   What is your overall rating of the leader?
   [ ] excellent    [ ] very good    [ ] good    [ ] fair    [ ] poor
4. What would have made the session more effective? ____
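The guidepost that a sheet like the one above be "tabulated and quantified" can be sketched in a few lines of code. This is an illustration only, not part of the original articles; the 5-to-1 point values assigned to the excellent-to-poor rating words, and the sample responses, are assumptions.

```python
from collections import Counter
from statistics import mean

# Assumed point values for the sheet's rating words (not from the article).
SCALE = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}

def tabulate(responses):
    """Return the count of each rating word and the mean point score."""
    counts = Counter(responses)
    average = round(mean(SCALE[r] for r in responses), 2)
    return counts, average

# Ten invented responses to one question on the sheet.
ratings = ["excellent", "very good", "very good", "good", "good",
           "good", "fair", "very good", "excellent", "good"]
counts, average = tabulate(ratings)
print(counts["good"], average)  # 4 "good" ratings; mean score 3.6
```

Counting the words preserves the distribution for reporting, while the mean gives a single quantified score per question that can be compared across sessions or leaders.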
NOW

If they don't like a program, there's little chance that they'll put forth an effort to learn.

Level 2: Learning. This is a measure of the knowledge acquired, skills improved, or attitudes changed due to training. Generally, a training course accomplishes one or more of those three things. Some programs aim to improve trainees' knowledge of concepts, principles, or techniques. Others aim to teach new skills or improve old ones. And some programs, such as those on diversity, try to change attitudes.

Level 3: Behavior. This is a measure of the extent to which participants change their on-the-job behavior because of training. It's commonly referred to as transfer of training.

Level 4: Results. This is a measure of the results that occur due to training, including increased sales, higher productivity, bigger profits, reduced costs, less employee turnover, and improved quality.

Evaluation becomes more difficult, complicated, and expensive as it progresses from level 1 to level 4, but it also becomes more important and meaningful. Some trainers want to bypass levels 1, 2, and 3 and go directly to level 4. Recently, I was asked by trainers in a consulting organization to skip a discussion of the first three levels and tell them how to do level 4 because that's what their customers want to know. I replied that understanding all four levels is necessary and that there are no easy answers for knowing how to measure results.

The guidelines (see box) were never intended to describe exactly what to do and how to do it. But they do provide an overview of the four levels and how to proceed.

Whether it's called "Techniques for Evaluating Training Programs" or Evaluating Training Programs: The Four Levels, it's essentially the same story. Each source describes the following reasons for evaluating training programs:
» to decide whether to continue offering a particular training program
» to improve future programs
» to validate your existence and job as a training professional.
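Level 2 lends itself to a pretest/posttest comparison, in line with the 1959 guidepost that each trainee's learning be measured so that quantitative results can be determined. A minimal sketch follows; the trainee names, scores, and the gain-score approach itself are illustrative assumptions, not something the articles prescribe.

```python
def learning_gain(pre, post):
    """Per-trainee gain between a pretest and a posttest, plus the group average.

    pre and post map each trainee to a score on the same 0-100 test,
    given before and after the program.
    """
    gains = {name: post[name] - pre[name] for name in pre}
    average = sum(gains.values()) / len(gains)
    return gains, average

# Invented scores for three trainees.
pre = {"Avery": 55, "Blake": 70, "Casey": 60}
post = {"Avery": 80, "Blake": 75, "Casey": 90}
gains, average_gain = learning_gain(pre, post)
print(gains["Avery"], average_gain)  # 25-point gain for Avery; group average 20.0
```

Measuring each trainee individually, rather than only the group average, shows who absorbed the material and who may need follow-up.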
If the time, money, and expertise are available, it's important to proceed through all four levels without skipping any.

In some organizations, senior managers pay little attention to the training function. As long as they don't get negative vibes, they tend not to interfere or ask questions. But during times of downsizing, management must terminate people. Sometimes, the trainers are deemed expendable. The benefits from training may outweigh the costs, but unfor-
