
Evaluating Organizational Change: How and Why?
Dr Kate Mackenzie Davey
Organizational Psychology
Birkbeck, University of London
k.mackenzie-davey@bbk.ac.uk
Aims
Examine the arguments for evaluating organizational change
Consider the limitations of evaluation
Consider different methods for evaluation
Consider difficulties of evaluation in practice
Consider costs and benefits in practice

Arguments for evaluating organizational change
Sound professional practice
Basis for organizational learning
Central to the development of evidence-based practice
Widespread cynicism about fads and fashions
To influence social or governmental policy
Research and evaluation
Research focuses on relations between theory and empirical material (data)
Theory should provide a base for policy decisions
Evidence can illuminate and inform theory
Show what does not work as well as what does
Highlight areas of uncertainty and confusion
Demonstrate the complexity of cause-effect relations
Understand, predict, control

Pragmatic evaluation: what matters is what works
Why it works may be unclear
Knowledge increases complexity
Reflexive monitoring of strategy links to organizational learning (OL) and knowledge management (KM)
Evidence and cultural context
May be self-fulfilling
Tendency to seek support for policy
Extent of sound evidence unclear
Why is sound evaluation so rare?
Practice shows that evaluation is an extremely complex, difficult and highly political process in organizations.
Questions may be "how many?", not "what works?"

Evaluation models
1. Pre-evaluation
2. Goal-based (Tyler, 1950)
3. Realistic evaluation (Pawson & Tilley, 1997; Sanderson, 2002)
4. Experimental
5. Constructivist evaluation (Stake, 1975)
6. Contingent evaluation (Legge, 1984)
7. Action learning (Reason & Bradbury, 2001)

"A study should be technically sound, administratively convenient and politically defensible." (Alec Rodger)

1.1 Pre-evaluation (Goodman & Dean, 1982)
The extent to which it is likely that... A has an impact on B
Scenario planning
Evidence-based practice
All current evidence thoroughly reviewed and synthesised
Meta-analysis (see the pooling sketch below)
Systematic literature review
Formative vs summative (Scriven, 1967)
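To make the meta-analysis step concrete, below is a minimal sketch of fixed-effect pooling by inverse-variance weighting. The studies, effect sizes and standard errors are hypothetical; the snippet only illustrates the technique and is not taken from the original slides.

    # Minimal fixed-effect meta-analysis sketch: pool effect sizes from several
    # (hypothetical) evaluations of a change programme by inverse-variance weighting.
    import math

    # (effect size d, standard error) for each hypothetical study
    studies = [(0.42, 0.15), (0.10, 0.20), (0.35, 0.10)]

    weights = [1 / se ** 2 for _, se in studies]          # inverse-variance weights
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect size: {pooled:.2f} (SE {pooled_se:.2f})")
    print(f"95% CI: [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")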

1.2 Pre-evaluation issues
Based on theory and past evidence: not clear it will generalise to the specific case
Formative: influences planning
Argument: to understand a system you must intervene (Lewin)

2.1 Goal-based evaluation (Tyler, 1950)
Objectives used to aid planned change
Can help clarify models
Goals from benchmarking, theory or pre-evaluation exercises
Predict changes
Measure pre and post intervention
Identify the interventions
Were objectives achieved?
2.2 Difficulties with goal-based evaluation
Who sets the goals? How do you identify the intervention?
Tendency to managerialism (unitarist)
Failure to accommodate value pluralism
Over-commitment to scientific paradigm
What is measured gets done
No recognition of unanticipated effects
Focus on single outcome, not process

3.1 Realistic evaluation: conceptual clarity (Pawson & Tilley, 1997)
Evidence needs to be based on clear ideas about concepts
Measures may be derived from theory
Examine definitions used elsewhere
Consider specific examples
Ensure all aspects are covered

3.2 Realistic evaluation - towards a theory: what are you looking for?
Make assumptions and ideas explicit
What is your theory of cause and effect?
What are you expecting to change (outcome)?
How are you hoping to achieve this change (mechanism)?
What aspects of the context could be important?

3.3 Realistic evaluation: context-mechanism-outcome
Context: What environmental aspects may affect the outcome?
What else may influence the outcomes?
What other effects may there be?

3.4 Realistic evaluation: context-mechanism-outcome
Mechanism: What will you do to bring about this outcome?
How will you intervene (if at all)?
What will you observe?
How would you expect groups to differ?
What mechanisms do you expect to operate?
3.5 Realistic evaluation: context-mechanism-outcome
Outcome: What effect or outcome do you aim for?
What evidence could show it worked?
How could you measure it? (A small sketch of recording these assumptions follows below.)
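One way to make these context-mechanism-outcome assumptions explicit is simply to record them in a structured form before the intervention begins. The sketch below is a hypothetical illustration; the programme, field names and values are invented, and a paper form or table would serve the same purpose.

    # Hypothetical sketch: record realistic-evaluation assumptions as an explicit
    # context-mechanism-outcome (CMO) configuration before the change starts.
    from dataclasses import dataclass

    @dataclass
    class CMOConfiguration:
        context: list[str]     # environmental aspects that may affect the outcome
        mechanism: list[str]   # what you will do to bring about the outcome
        outcome: list[str]     # effects you aim for, and how they could be measured

    # Invented example: a new appraisal process
    appraisal_change = CMOConfiguration(
        context=["merger under way", "low trust in HR", "line managers short of time"],
        mechanism=["train managers in feedback skills", "quarterly appraisal meetings"],
        outcome=["appraisal completion rate", "staff survey item on feedback quality"],
    )

    print(appraisal_change)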

4.1 Experimental evaluation
Explain, predict and control by identifying causal relationships
Theory of causality makes predictions about variables, e.g. training increases productivity
Two randomly assigned matched groups: experimental and control
One group experiences intervention, one does not
Measure outcome variable pre-test and post-test (longitudinal)
Analyse for statistically significant differences between the two groups (see the sketch below)
Outcome linked back to modify theory
The gold standard
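To illustrate the analysis step named above, here is a minimal sketch of a two-group pre-test/post-test comparison using an independent-samples t-test on change scores (assuming SciPy is available). All the productivity figures are invented, and a real design would need larger samples and checks of assumptions.

    # Minimal sketch: compare pre-to-post change in a hypothetical productivity
    # measure between an experimental (trained) group and a control group.
    from scipy import stats

    # Hypothetical pre- and post-intervention scores
    experimental_pre  = [52, 48, 55, 50, 47, 53]
    experimental_post = [58, 54, 60, 57, 52, 59]
    control_pre       = [51, 49, 54, 50, 48, 52]
    control_post      = [52, 50, 55, 49, 49, 53]

    # Change score for each individual
    exp_change = [post - pre for pre, post in zip(experimental_pre, experimental_post)]
    ctl_change = [post - pre for pre, post in zip(control_pre, control_post)]

    # Independent-samples t-test on the change scores
    t, p = stats.ttest_ind(exp_change, ctl_change)
    print(f"Mean change: experimental {sum(exp_change) / len(exp_change):.1f}, "
          f"control {sum(ctl_change) / len(ctl_change):.1f}")
    print(f"t = {t:.2f}, p = {p:.3f}")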

4.2 Difficulties with experimental evaluation in organizations
Difficult to achieve in organizations
Unitarist view
Leaves out unforeseen effects
Problems with continuous change processes
Summative not formative
Generally at best quasi-experimental

5.1 Constructivist or stakeholder evaluation
Responsive evaluation (Stake, 1975) or fourth generation evaluation (Guba & Lincoln, 1989)
Constructivist, interpretivist, hermeneutic methodology
Based on stakeholder claims, concerns and issues
Stakeholders: agents, beneficiaries, victims

5.2 Response to an IT implementation (Brown, 1998)
The ward: Goal - improve quality to patients. Outcome - waste of time and energy on a pointless system.
Laboratory: Goal - improve quality for ward staff. Outcome - no improvement to adequate systems.
IT team: Goal - clinical and financial benefits. Outcome - a technically competent system, but a misconceived project.
5.3 Constructivist evaluation issues
No one right answer
Demonstrates complexity of issues
Highlights conflicts of interests
Interesting for academics
Difficult for practitioners to resolve

6. A contingent approach to evaluation (Legge, 1984)
Do you want the proposed change programme to be evaluated? (Stakeholders)
What functions do you wish its evaluation to serve? (Stakeholders)
What are the alternative approaches to evaluation? (Researcher)
Which of the alternatives best matches the requirements? (Discussion)
7. Action research (Reason & Bradbury, 2001)
Identify good practice
Responds to practical issues in organizations
Engages in collaborative relationships
Draws on diverse evidence
Value orientation - humanist
Emergent, developmental

Problems with realist models
Tendency to managerialise
Over-commitment to scientific paradigm
Context stripping
Over-dependence on measures
Coerciveness: truth as non-negotiable
Failure to accommodate value pluralism
Every act of evaluation is a political act; it is not tenable to claim it is value-free

Problems with the constructionist approach
Evaluation judged by whom, for whom and in whose interests?
Identify different views, then what?
Who has power?
Leaves decisions open
May lead to ambiguity

Why not evaluate?
Expensive in time and resources
De-motivating for individuals
Contradiction between scientific evaluation models and supportive, organizational learning models
Individual identification with activity
Difficulties in objectifying and maintaining commitment
Off-the-shelf external evaluation may be inappropriate and unhelpful
Why evaluate? (Legge, 1984)

Overt                    Covert
Aids decision making     Rally support/opposition
Reduce uncertainty       Postpone a decision
Learn                    Evade responsibility
                         Control
                         Fulfil grant requirements
                         Surveillance

Conclusion
Evaluation is very expensive, demanding and complex
Evaluation is a political process: need for clarity about why you do it
Good evaluation always carries the risk of exposing failure
Therefore evaluation is an emotional process
Evaluation needs to be acceptable to the organization

Conclusion 2
Plan and decide which model of evaluation is appropriate
Identify who will carry out the evaluation and for what purpose
Do not overload the evaluation process: judgment or development?
Evaluation can give credibility and enhance learning
Informal evaluation will take place whether you plan it or not

