

Michael "ea#...$h% &andidate



Friday, November 26, 2010

20101127 - Evaluation Research (SocialResearchMethods.net)


One specific form of social research -- evaluation research -- is of particular interest here. The Introduction to Evaluation Research presents an overview of what evaluation is and how it differs from social research generally. We also introduce several evaluation models to give you some perspective on the evaluation endeavor. Evaluation should not be considered in a vacuum. Here we consider evaluation as embedded within a larger Planning-Evaluation Cycle. Evaluation can be a threatening activity. Many groups and organizations struggle with how to build a good evaluation capability into their everyday activities and procedures. This is essentially an organizational culture issue. Here we consider some of the issues a group or organization needs to address in order to develop an evaluation culture that works in their context.

Source: http://www.socialresearchmethods.net/kb/evaluation.php



Introduction to Evaluation
2$%1%201&

http:%%'ichael(eap.)logspot.co'%2010%11%2010112*-evaluation- esea ch.ht'l

Michael Yeap...PhD Candidate: 20101127 - Evaluation Resea ch !"ocialResea chMethods.net#

Page 2 of 7

2010112& 2 4o!0s :ro0p resear!+ "et+o/ (:ibbs) 2010112& 2 4o!0s :ro0p reporting ((o5a 6tate Uni$)... 2010112& 2 4o!0s :ro0p resear!+ "et+o/ ((o5a 6tate... 2010112& 2 4o!0s :ro0p 2 a <0alitati$e resear!+ (=... 2010112& > 4o!0s :ro0p resear!+ "et+o/ology (Up4ro... 2010112& 2 4o!0s :ro0p resear!+ "et+o/ology (Wi,ip... 20101122 2 ?an/ling -o""on 'as,s 7or iP+one 2 part... 20101121 2 -o""on 'as,s 7or iP+one 20101121 2 Usability & esign :0i/elines 7or iP+on... 20101121 > Apple8 iP+one ?0"an (nter7a!e :0i/eline... Prototype o7 Mobile Learning Portal@pls !riti!ise3... 20101120 2 6tart eAperiential.."obile apps & M2Lea... 20101120 2 'e!+*a/ar: Mobile 5eb /esign: plat7or" ... Mobile Learning portal [prototype] WAP Blog & )1 !lone 20101114 2 =oo8 9nline -ollaborati$e Learning... /B ay 1C #o$ 2010 Proposal3Wor, e7ense 6e"inar 2010110D 2 'ri7ono$a8 ..?oar/ing -ontent in Mobile... 5riting & s0b"itting Eo0rnal Paper...7ire2n27orget... Usability (nspe!tion -riteria 7or e2Learning Porta... Paper 7or Eo0rnal: Usability (nspe!tion -riteria 7... iE)' 2010

Evaluation is a methodological area that is closely related to, but distinguishable from, more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much. Here we introduce the idea of evaluation and some of the major terms and issues in the field.

Definitions of Evaluation
Probably the most frequently given definition is: Evaluation is the systematic assessment of the worth or merit of some object. This definition is hardly perfect. There are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few. Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation. For instance, one might say: Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object. Both definitions agree that evaluation is a systematic endeavor and both use the deliberately ambiguous term 'object', which could refer to a program, policy, technology, person, need, activity, and so on. The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit, because all evaluation work involves collecting and sifting through data, making judgements about the validity of the information and of inferences we derive from it, whether or not an assessment of worth or merit results.


The Goals of Evaluation


"he generic goal of most evaluations is to provide "useful feedback" to a variety of audiences including sponsors, donors, client#groups, administrators, staff, and other relevant constituencies. &ost often, feedback is perceived as 'useful' if it aids in decision#making. $ut the relationship between an evaluation and its impact is not a simple one ## studies that seem critical sometimes fail to influence short# term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. (espite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

Evaluation Strategies
'Evaluation strategies' means broad, overarching perspectives on evaluation. They encompass the most general groups or "camps" of evaluators, although, at its best, evaluation work borrows eclectically from the perspectives of all these camps. Four major groups of evaluation strategies are discussed here.

Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences -- especially the social sciences -- they prioritize the desirability of impartiality, accuracy, objectivity and the validity of the information generated. Included under scientific-experimental models would be: the tradition of experimental and quasi-experimental designs; objectives-based research that comes from education; econometrically-oriented perspectives including cost-effectiveness and cost-benefit analysis; and the recent articulation of theory-driven evaluation.

The second class of strategies are management-oriented systems models. Two of the most common of these are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method (a minimal sketch of the latter appears at the end of this section). Both have been widely used in business and government in this country. It would also be legitimate to include the Logical Framework or "Logframe" model developed at the U.S. Agency for International Development, and general systems theory and operations research approaches, in this category. Two management-oriented systems models were originated by evaluators: the UTOS model, where U stands for Units, T for Treatments, O for Observing Observations and S for Settings; and the CIPP model, where the C stands for Context, the I for Input, the first P for Process and the second P for Product. These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.

The third class of strategies are the qualitative/anthropological models. They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process. Included in this category are the approaches known in evaluation as naturalistic or "Fourth Generation" evaluation; the various qualitative schools; critical theory and art criticism approaches; and the "grounded theory" approach of Glaser and Strauss, among others.

Finally, a fourth class of strategies is termed participant-oriented models. As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology. Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems.

With all of these strategies to choose from, how to decide? Debates that rage within the evaluation profession -- and they do rage -- are generally battles between these different strategists, with each claiming the superiority of their position. In reality, most good evaluators are familiar with all four categories and borrow from each as the need arises. There is no inherent incompatibility between these broad strategies -- each of them brings something valuable to the evaluation table. In fact, in recent years

attention has increasingly turned to how one might integrate results from evaluations that use different strategies, carried out from different perspectives, and using different methods. Clearly, there are no simple answers here. The problems are complex and the methodologies needed will and should be varied.
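The Critical Path Method mentioned among the management-oriented models above is itself a small algorithm: find the chain of dependent tasks that determines the minimum project duration. The sketch below is an illustration only, not part of the original article; the task names, durations, and dependencies are invented.

```python
from functools import lru_cache

# Hypothetical tasks for a program rollout:
# task -> (duration in days, prerequisite tasks)
tasks = {
    "design":  (5,  []),
    "build":   (10, ["design"]),
    "train":   (3,  ["design"]),
    "pilot":   (4,  ["build", "train"]),
    "rollout": (2,  ["pilot"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """A task can start only after all of its prerequisites finish."""
    duration, prereqs = tasks[task]
    start = max((earliest_finish(p) for p in prereqs), default=0)
    return start + duration

def critical_path(task):
    """Walk back through the slowest prerequisite at each step."""
    _, prereqs = tasks[task]
    if not prereqs:
        return [task]
    bottleneck = max(prereqs, key=earliest_finish)
    return critical_path(bottleneck) + [task]

print("Minimum duration:", max(earliest_finish(t) for t in tasks), "days")  # 21
print(" -> ".join(critical_path("rollout")))  # design -> build -> pilot -> rollout
```

Tasks off the critical path (here, "train") have slack: delaying them within that slack does not delay the project as a whole.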

Types of Evaluation
There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated -- they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and estimating the relative costs associated with the object.

Formative evaluation includes several evaluation types:

- needs assessment determines who needs the program, how great the need is, and what might work to meet the need
- evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
- structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
- implementation evaluation monitors the fidelity of the program or technology delivery
- process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Summative evaluation can also be subdivided:

- outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
- impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole
- cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
- secondary analysis reexamines existing data to address new questions or use methods not previously employed


- meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question
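As a rough illustration of what the meta-analysis entry above involves, here is a minimal fixed-effect, inverse-variance pooling sketch. The five effect estimates and standard errors are hypothetical, and real meta-analyses would also examine heterogeneity and publication bias.

```python
import math

# Hypothetical effect estimates (say, standardized mean differences)
# and their standard errors from five studies of the same program.
studies = [
    (0.30, 0.12),
    (0.45, 0.20),
    (0.10, 0.15),
    (0.25, 0.10),
    (0.50, 0.25),
]

# Fixed-effect pooling: weight each study by the inverse of its
# variance, so more precise studies contribute more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```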

Evaluation Questions and Methods


Evaluators ask many different kinds of questions and use a variety of methods to address them. These are considered within the framework of formative and summative evaluation as presented above.

In formative research the major questions and methodologies are:


What is the definition and scope of the problem or issue, or what's the question? Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it? The most common method used here is "needs assessment", which can include: analysis of existing data sources, and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

How should the program or technology be delivered to address the problem? Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multiattribute utility theory (sketched below) or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.

How well is the program or technology delivered? Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.
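Several of the methods just listed reduce to straightforward arithmetic once the judgements behind them are made. As a hypothetical illustration of the multiattribute utility idea, the sketch below scores alternative delivery modes against weighted attributes; the options, attributes, weights, and utility scores are all invented for the example.

```python
# Weighted-sum sketch in the spirit of multiattribute utility theory.
# Attribute weights reflect stakeholder priorities and must sum to 1.
weights = {"reach": 0.4, "cost": 0.3, "fidelity": 0.3}

# Each option's utility on each attribute, scaled 0 (worst) to 1 (best).
options = {
    "classroom":  {"reach": 0.3, "cost": 0.4, "fidelity": 0.9},
    "mobile app": {"reach": 0.9, "cost": 0.7, "fidelity": 0.5},
    "mailout":    {"reach": 0.6, "cost": 0.9, "fidelity": 0.2},
}

def utility(scores):
    """Overall utility = sum of attribute utilities times their weights."""
    return sum(weights[a] * u for a, u in scores.items())

# Rank the delivery options from best to worst overall utility.
for name, scores in sorted(options.items(), key=lambda kv: -utility(kv[1])):
    print(f"{name:11s} {utility(scores):.2f}")
```

The hard part in practice is not the sum but eliciting defensible weights and utilities from stakeholders; the arithmetic simply makes those judgements explicit and comparable.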

The questions and methods addressed under summative evaluation include:


What type of evaluation is feasible?


Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.

What was the effectiveness of the program or technology? One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.

What is the net impact of the program? Econometric methods for assessing cost-effectiveness and cost/benefits would apply here (see the sketch below), along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.
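For the net-impact question, the arithmetic behind a basic cost-effectiveness and cost-benefit comparison is simple once costs and outcomes have been measured. A back-of-envelope sketch with invented figures; in a real evaluation these would come from the program's accounts and the outcome study:

```python
# Hypothetical totals for a program versus a status-quo alternative.
program = {"cost": 120_000.0, "outcome": 340}   # e.g., learners certified
baseline = {"cost": 45_000.0, "outcome": 150}

# Incremental cost-effectiveness ratio: extra dollars spent per
# extra unit of outcome gained, relative to the alternative.
icer = (program["cost"] - baseline["cost"]) / (program["outcome"] - baseline["outcome"])
print(f"Incremental cost per additional outcome unit: ${icer:,.2f}")

# Cost-benefit framing: if each outcome unit is valued at $500
# (an assumed shadow price), net benefit is monetized outcomes
# minus costs.
value_per_unit = 500.0
net_benefit = program["outcome"] * value_per_unit - program["cost"]
print(f"Net benefit of the program: ${net_benefit:,.2f}")
```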

Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many not mentioned, are supported by an extensive methodological research literature. This is a formidable set of tools. But the need to improve, update and adapt these methods to changing circumstances means that methodological research and development needs to have a major place in evaluation work.

Source: http://www.socialresearchmethods.net/kb/intreval.php
Posted by Michael "ISO" at 5:53 PM
Labels: evaluation research, research method, SocialResearchMethods.net
