Guidelines for Health Programme and Project Evaluation Planning
Copyright
1997
Evaluation Unit
Federal Office of Public Health
CH-3003 Bern
Tel. +41 (0)31 323 87 61 or 66
Fax +41 (0)31 323 88 05
E-mail: Marlene.Laeubli@BAG.admin.ch
Photocopying of Checklists only is authorised without written permission
Table of Contents

Introduction and Overview of Contents

Annexes:
1. References and Recommended Reading List
2. Glossary of Evaluation Terms
3. Characteristics of Conventional and Interpretive Approaches to Social Science Research and Evaluation
4. Evaluation Questions: An Example of Different Question Types
5. Guidelines for Developing or Assessing Medico-Social Training/Evaluation Projects, Swiss Federal Office of Public Health, 1995, Evaluation, Research and Training Section
N.B. A complete set of the Checklists is provided separately so that they can
be photocopied and used in contract negotiations with external partners.
Foreword
Acknowledgements
In the process of designing and writing these guidelines, we are deeply indebted to a multitude of people, all of whom in their own way contributed to this
final product.
Several organisations and individuals were asked to review the original drafts of
these Guidelines. We are indeed grateful to the many people who contributed
to turning the original ideas into the final version. First, our thanks go to our
many colleagues in the Federal Office of Public Health's Medical Division: the
Evaluation, Research and Further Education Section; and the
AIDS, Illegal and Legal Drugs Sections of the Health Promotion Division.
Our thanks too for the helpful comments received from our external partners:
the Prevention Programme Evaluation Unit of the Social and Preventive
Medicine Institute, University of Lausanne (UEPP de l'IUMSP, Lausanne); the
Drugs Research Institute in Zurich (ISF); and the Institute for Social and
Preventive Medicine, Zurich (ISPM).
The process of translating ideas into a published set of guidelines equally
involves the efficient and creative help of graphic designers and desk-top publishers. We are indebted to Satzart AG, Bern, who turned our basic word-processed document into its current polished, professional format.
Yet another book on evaluation principles and theory? No! That is certainly not
what you will find in this book! These Guidelines have been written to help staff
of the Federal Office of Public Health (FOPH) and its external partners reflect
on what needs to be included when planning an evaluation of their projects/programmes. They are therefore of interest to the following:
staff of FOPH health promotion, prevention and campaign sections;
project/programme planners;
project/programme managers and implementers;
external evaluators.
These guidelines deal with the evaluation of the FOPH's prevention and health
promotion activities and therefore address issues relevant to project and programme evaluation¹ only.
A reading list is provided for those interested in finding out more (Annex 1).
Readers are encouraged to discuss more complex needs with the Evaluation
Unit of the FOPH.
The manual provides practical information on how evaluations are best planned,
organised, commissioned and used. We accept that for reasons beyond our
control, putting these principles into practice will sometimes prove difficult. As
guidelines, however, they set the standards we should strive to attain.
¹ Refer to the Glossary of Evaluation Terms for definitions of Programme and Project. Despite differences, the evaluation principles outlined in this manual apply to both. For ease of reading, therefore, from now on we will use only the term 'project' for both.
Part Two
Part Three
Part Four
Part Five
Promoting and Using the Evaluation Results: The final part of the
Guidelines deals with identifying the key messages and the key audiences to be informed, and what we can do to use the results to their best effect.
Annexes
N.B. A complete set of the Checklists is provided separately so that they can
be photocopied and used in contract negotiations with external partners.
What is a Project?
What is a Programme?
What is Project/Programme Evaluation?
For our purposes we have defined evaluation as the systematic collection and
analysis of information, not necessarily routinely available, about a specific project or programme to enable its critical appraisal.
What is Global Evaluation?
This refers to the evaluation of a total package: the global aims and objectives,
and the strategy, measures and actions taken towards attaining the global policy objectives. Analysis covers:
the strategy, measures, structural support and actions used to achieve the
policy aims and objectives;
key environmental factors (the socio-political and economic context);
the end results on target populations and/or settings;
the inter-relationship between the above.
What is Meta-evaluation?
What is Meta-analysis?
Not all interventions need, or indeed should, be professionally evaluated by outside specialists. Every project manager is expected to provide an internal self-evaluation of the project's development and results as part of his/her management tasks, for example to justify the use of public funding. Was the project implemented according to plan? Were the objectives attained? Were there
differences between what the project set out to do compared with what clients
really needed? These are just some of the questions which project managers
need to address in their interim and final reports. Self-evaluation is therefore an
integral part of good management and project planners should include a budget
of approximately 5% of total project costs for evaluation. Where needed some
of this money can then be used to call upon professional evaluators to help set
it up (what to do and how to do it).
A checklist of the main elements to be included in project self-evaluation is
provided at the end of Part 2. A proposed list of headings for the reports is
also included.
Many of the questions to be addressed in self-evaluations are, of course, the
same as those needed for external evaluations. So what is the difference?
Essentially, the main differences between self and external evaluations are the
evaluators (those who do it) and the scope covered by the evaluation study.
Project staff conduct self-evaluations. But they are constrained by time since
their principal task is to develop the project. The scope of what can be reviewed
during self-evaluations is therefore likely to be limited. Managers' assessment
of the project's relevance, progress and effects is also likely to be influenced by
their interest in the success of the project itself. In other words, their closeness
to the project may make it difficult for them to provide an objective appraisal of
the situation. External evaluators, on the other hand, are specifically paid to
devote time and resources to this work. As externals, they should have no vested interests (e.g. financial, professional or personal) in the project being evaluated, and thus no self-interest in its outcome. In theory, therefore, they should
be more objective in their analysis.
Ideally, however, external and self-evaluations should be designed to help
answer both project specific and the more global evaluation questions. The latter, of course, will vary according to the specific prevention package and the
level of prevention/target audience. Thus each level of evaluation should in turn,
contribute towards answering some particular and global questions. In such a
way the external global evaluation team should be able to synthesise and
analyse the collated information from a range of project self and external
evaluations. The FOPH's Evaluation Unit should be consulted for more information.
Why evaluate?
Essentially we (the FOPH) require evaluation for four reasons:
1. to improve our strategies and actions so that they are meeting clients'
needs;
2. to help us make informed decisions in future planning;
3. to clarify the options available; and
4. to account for the expenditure of public funds.
Evaluation is not new, but it is being given greater emphasis and more systematic attention as part of the drive within the Civil Service world-wide to
improve accountability and financial management in general. Historically, professional programme evaluation developed in the USA during the 1960s. Its
major impetus came from the requirement for evaluation of the Great Society
Education Legislation in 1965. Similar demands were placed on other social reforms of the time.
With such ever-increasing demands, over the past three decades US evaluation
has moved from being a peripheral activity of academics, to a recognised profession and career with its own theories, standards and code of practice. A
similar trend can be traced in many other countries. The requirement for evaluation within the Swiss Civil Service has been a much more recent, but growing
phenomenon. A national research programme (no. 27) was funded to examine,
test and improve evaluation methods and methodologies relevant to the study
of state policy, strategies and measures, and their effects.
The need to assess and evaluate actions taken by the FOPH has now become
one of its guiding management principles (see Foreword). Within the Medical
Division, a range of projects are funded which aim at preventing disease, and
improving the population's general health by promoting healthy lifestyles. As
managers of public funds, we are responsible for ensuring that this money is
well spent. We are therefore accountable not only to the FOPH and the government, but equally to taxpayers, and more particularly, the specific clients
at whom our activities are targeted. This means that we need to be assured
that appropriate systems are established to supply us with information which
can help us assess the development, acceptability, effectiveness and impact of
the projects we fund.
Principal evaluation methodologies
There are many views on how evaluation should be approached ranging from
quantitative systems analysis to qualitative case studies. There is not just one
best way to evaluate health projects: some models are more appropriate
than others depending on the evaluation's purpose, and the questions to be
addressed. The FOPH Evaluation Unit in fact routinely commissions a range of
different evaluation approaches, each being selected to suit a particular need.
To a large degree, the choice of the evaluation design depends on the purpose, and therefore the types of questions to be answered.
Generally speaking, however, there is a lack of consensus within the evaluation
community about which approach yields the most useful results. In general this
relates to the various philosophical assumptions held by evaluators, theoreticians and practitioners. At one extreme there is a strong belief in the need for
'hard' data and statistical proof. This type of approach (sometimes referred to
as the 'conventional' approach) has its philosophical roots in logical positivism
and is characterised by quantitative measurement and experimental design to
test performance against objectives and a priori assumptions. Its strength is
that it can supply us with statistically measurable information from large population samples on a limited number of predetermined items. We therefore can
gain a broad set of findings from which, it is argued, we can draw generalisable conclusions.
The relevance and applicability of this approach to real social settings have,
however, been increasingly questioned (e.g. Guba and Lincoln, 1989; Patton,
1979, 1980, 1987; Parlett and Hamilton, 1974; etc.). Some of its major beliefs
have been found untenable within the philosophy of science. Proponents of an
alternative evaluation methodology emphasise the need to provide description
and understanding of social phenomena in their natural settings, with no manipulation, control or elimination of situational variables. This type of approach is
grounded in phenomenology and adheres to the principles of inductive strategies for developing theory from the data during the course of the research
(Glaser and Strauss, 1967). In contrast to deductive strategies based on testing
pre-conceived assumptions, key evaluation issues and hypotheses emerge
from intensive on-site, qualitative, case study investigation and are systematically worked out in relation to the data during the course of the research. This
model is often referred to as the 'interpretative' approach.
These two models, conventional (or positivistic) evaluation and interpretative evaluation, are therefore theoretical paradigms (methodologies). Whilst the
conventional paradigm largely employs quantitative methods (the tools,
not the methodology), the interpretative approach mainly draws on qualitative methods. However, neither necessarily relies on only quantitative or qualitative methods, and a combination of both can be, and is, used by both paradigms.
The distinction between these two paradigms is steeped in their philosophical
tenets: deductive versus inductive. Characteristics of these two approaches are
provided in table form as Annex 3.
Summary
In this section we have set out our working definition of what we believe evaluation to be in order to meet our specific purposes. We have indicated that
whilst there is an array of strategies which can be adopted for conducting evaluations, each with its own merits and shortcomings, ultimately the appropriateness of an approach is determined by the specific questions and issues we
want addressed.
Linking project, programme and global evaluation into evaluation planning
From our definitions you can see that each type of evaluation has its own focus.
Project evaluation concentrates on the project, programme evaluation on the
programme, and global on the more global issues. With careful planning, each
can be linked one to the other. Programme evaluation can benefit from
analysing what has been learned from associated project evaluations, and global evaluations from the project and programme evaluations. Evaluation planning
should therefore be co-ordinated to include some general as well as more specific evaluation questions to ensure that each level of evaluation can contribute
to another.
Relevant Checklists
Having looked at what evaluation is and why we do it, in the following paragraphs we have set out some points for you to consider when planning an
evaluation study. Our objective is to get timely yet comprehensive and integrated information on how a project or programme is working. To do this we
have to work out what we need, why we need it and when feedback will prove
to be of most benefit.
When to evaluate?
All projects funded by the Federal Office of Public Health (FOPH) are contractually obliged to review, assess and report on their achievements. Each year
managers are asked to submit an Annual Report of their activities. This report
should be considered as a self-assessment of the project's progress.
Sometimes, however, an internal or self-assessment is not enough. The
FOPH may decide that an external evaluation is needed. But how do we
determine whether an external evaluation is needed? What criteria should we
use for making this decision? Checklist 2.1 at the end of Part 2 helps you to decide whether an external evaluation is warranted. If yes, this should then be discussed with the FOPH's evaluation specialists at the earliest opportunity, preferably during the early stages of negotiating the intervention to be evaluated.
Ideally projects should incorporate some degree of monitoring and evaluation
right from their start-up. Some of the reasons for this are obvious:
everything is fresh in the mind of both project sponsors and managers, particularly which key elements of the project will need to be carefully looked at;
the purpose and expectations of the evaluation can be defined and the
appropriate funds can be budgeted;
arrangements for getting which information from which groups can be set
up early on;
questions relating to the overall prevention strategy, measures and actions
can be included;
a good feedback procedure can be planned.
Since the aim of evaluation is to help improve the planning, development and
management of FOPH strategies and interventions, ideally it should provide us
with ongoing feedback throughout the various stages of a project's life: conception, development and implementation. In this manner, it can prove instrumental in the following ways:
to reduce the risk of projects generating unrealisable or inappropriate objectives and/or strategies;
to ensure that target populations receive relevant and effective health care
programmes;
to secure sustained political and financial support.
Evaluation should therefore be regarded as a cyclical process which involves
responding to a variety of clients on different aspects of the project throughout
its life cycle. Monitoring and evaluation feed into all stages of this development
cycle to help improve, re-orient or even stop an unsuccessful intervention. The
feedback process is therefore vital. Figure 2.1 below illustrates the key features.
Figure 2.1 shows how ideally evaluation can help appraise the life cycle of a
project. A similar procedure can be put into effect for programme evaluation
(i.e. the evaluation of several projects which share matching goals). However,
as the project(s) are refined and repeated on a regular basis, there will be more
need for the regular monitoring of progress and achievements and less need
for intensive evaluation. Periodic evaluation should, however, take place to reassess what's happening.
[Figure 2.1: The project life cycle and evaluation. The diagram relates the stages of a project (formulation, implementation, progress) to the evaluation concerns of relevance, effectiveness, efficiency and accountability, plus: information dissemination and valorisation; networking (incl. lobbying); negotiating; co-ordinating. Stakeholders involved: politicians, public/taxpayers, media, target populations, fieldworkers, special interest groups, researchers and others.]
The context of a project includes information on where it is located, the political and social climate in which it is set (i.e. is it conducive or hostile towards the project), and the economic constraints, etc. An understanding of the context and its changes during the project's life, and its interrelationship with the project's development is essential. This helps audiences interpret evaluation
findings, and judge whether the project's context is similar to others, or quite specific. It therefore
reduces the likelihood of claiming wider applicability of findings than is otherwise justifiable.
Thus an analysis of the results, outputs and impacts grows in importance as the
intervention advances.
Whilst ideally evaluation should therefore be planned to take place in parallel
with the development of a project, this does not always happen. At worst
evaluations are often tagged on as an afterthought and expected to make a
summative appraisal of what has happened in hindsight. The focus of the evaluation should relate to the stage the project has reached.
As we said above, evaluations can serve different purposes according to different needs. We therefore need to consider the following:
SMART objectives are those which are relevant, of course, but also:
Specific,
Measurable,
Appropriate,
Realistic, and achievable within a defined
Time.
Far too often, objectives are not clearly set out, or are set out in so many different places within the text that it proves difficult to pick them out or rank
them in any rational order.
By the time a project's funding contract is drawn up, its objectives should have
been refined as far as circumstances permit. An evaluation at this stage can,
however, sometimes bring to light ambiguities or inconsistencies in the project
proposal which can then be corrected before the project is launched.
Determining the project's objectives at the outset need not necessarily prevent
their being changed later. Indeed it is to be expected that the external environment will change and that, in turn, the project's objectives may need to be modified. The existence of a plan enables such changes to be noted explicitly and
allows the evaluation to take them into account. It is very difficult to evaluate a
project whose objectives have shifted if the changes and reasons for such have
not been documented.
So, an evaluation can help clarify, re-define or even focus the project's aims
and objectives. Equally it can describe and analyse the reasons for such
change, and the consequences.
But even if the project's aims and objectives do not change, we may want to
commission an evaluation to learn more about how certain projects develop
under certain conditions. In other words, we may want an evaluation to document and analyse how projects actually get put into practice. Our purpose for
the evaluation in this case, is to increase our understanding so that we can
improve our future planning.
On the other hand, we may call upon an evaluation to help us make decisions.
For example, a project may not be doing what it set out to do. Or maybe it is
doing something different from what was expected. In such cases an evaluation could be asked to identify how and why this has happened so that we can
decide what to do. Is it relevant but too expensive? Is it operating in a hostile
climate? Is it an ineffective method of prevention for the particular setting or
target population? How could it be modified? Should it be closed down? Is the
contract due for renewal? What kind of decision will have to be made once we
have the evaluation results? Once we have clarified why we want to have an
evaluation, we can then determine what questions we need to ask.
But before looking at the evaluation questions, we should also remember that
the FOPH is not the only beneficiary of evaluations. The project planners, managers and implementers have as much interest as the FOPH in learning about
their prevention efforts. Equally, those who are directly or indirectly affected by
the work are also potential evaluation audiences.
Identifying evaluation audiences
Who will be involved and/or
affected by the project?
The full range of groups likely to be involved and/or affected should be identified in the evaluation proposal. This will help focus its purpose, scope and ultimate audience. Such groups will include those interested in using the evaluation results to make decisions as well as the users, present and potential, of
the intervention itself. Groups likely to be affected are:
What is each group likely to want to learn from the evaluation findings? Whilst
it is unlikely that we can consult all stakeholders when planning the evaluation, we should, at the very least, involve the key project staff. We should equally try to predict what type of information and findings will be of interest to other
potential evaluation audiences. In short, we should determine in advance not
only WHO will be interested in the evaluation findings, but also WHAT they are
likely to want the evaluation to inform them about.
Defining the evaluation questions
having protected sex with commercial sex workers, some turned their attention towards schoolgirls instead.³
But we should also be aware that the way the question is posed will determine
the type of information we can expect. Take, for example, the effectiveness of a project designed to train drug users as peer educators (mediators) amongst a subset of drug users, such as intravenous drug-using prostitutes. In this case we may want to
know what motivates the mediators to remain active for more than six months.
The question about motivation could be posed in a variety of ways. One could
look at the personality factors of those who remain and those who don't (e.g.
extrovert, shy, intelligent, domineering etc.). Equally one could consider the
social conditions of these two groups (e.g. social class, employment status, family life, schooling etc.). The type of data and data sources used for each would
be different and lead to a different type of analysis.
Formulating the right questions should start off by looking at a wide range of
potential questions, for example about different aspects of the project (such as
the aims and objectives, inputs, management, infrastructure, social context,
etc.). Broad questions about these aspects can then be considered in terms of
priorities, how feasible they would be to answer and indeed how expensive
that might be. The Evaluation Unit can help formulate the initial evaluation questions, pointing out what might be easily answered, what information would be
needed and what could and could not be expected from the answers to each.
However, we must remember that whilst we should determine the overall
questions we want addressed, ultimately it is the task of the evaluator to refine
these questions and identify the relevant sub-questions too. The evaluation
questions s/he poses will then help determine the design of the study and the
approach needed. Thus once an evaluator is appointed (see Part 3 for how this
is done), the questions we initially put forward can then be refined and further
discussed until mutual agreement is reached.
The scope of information to be collected should be tailored to address the
agreed evaluation questions. (But the questions should not be determined by the
scope of information available!). However, to some degree this will depend on:
³ Moodie R, Katahoire A, Kaharuya F, et al., An Evaluation Study of the Uganda National AIDS Control Program's (NACP) Information, Education and Communication Activities, NACP/WHO, Entebbe, December 1991.
Budgeting an evaluation
In general, between 10% and 15% of the total project's budget should be set aside
for financing an external evaluation study (approx. 5% for self-evaluations). In
exceptional cases, and in consultation with the Evaluation Unit, the budget
could even be set slightly higher. For example, an intensive, in-depth evaluation
study of a low budget project may be recommended to obtain the required
answers to questions. In this case, more time input from the evaluator will be
required and will therefore cost more. Budget items should include staff costs,
travel, equipment needs, and evaluation output costs (e.g. report translation and production charges).
The budget should reflect realistic costings of the proposed scope, methods
and procedure.
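Purely by way of illustration (this sketch is not part of the guidelines themselves: the function name and the franc figures are invented for the example), the rule-of-thumb percentages above can be expressed as a short Python calculation:

    def evaluation_budget_envelope(total_project_cost, external=True):
        # Rule of thumb from these guidelines: 10-15% of the total project
        # budget for an external evaluation study, approx. 5% for a
        # self-evaluation.
        if external:
            return 0.10 * total_project_cost, 0.15 * total_project_cost
        return 0.05 * total_project_cost, 0.05 * total_project_cost

    # Hypothetical example: a project budgeted at frs. 400 000 would reserve
    # roughly frs. 40 000 to frs. 60 000 for an external evaluation.
    low, high = evaluation_budget_envelope(400_000)
    print(f"External evaluation budget: frs. {low:,.0f} to frs. {high:,.0f}")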
A separate budget for projected valorisation activities should also be
itemised in the evaluation proposal budget. It should cover charges for the
evaluator's time and transportation costs only. This is needed because, once
the final report has been accepted, depending on the findings, the FOPH
may organise targeted activities to disseminate and discuss the results with
a range of stakeholders.
The evaluation will certainly have to take place within the limitations of budget,
time and resources available. Whilst it would be interesting to have all our questions answered, it is likely that such factors will constrain the scope and depth
of the study. That is why we have to prioritise and be aware of what can and
will not be accomplished.
But even when the study has been clearly delineated at the time of contract,
the evaluator may find that what seemed feasible in the beginning does not
prove possible in reality. For example, it may well be that the evaluator cannot
interview all the different levels of those involved in a project as originally planned
(absent, left the project, too busy etc.). Alternatives have to be considered, and
if no feasible options can be exploited, the limitations to the analysis should be
explained and discussed in reports (verbal or otherwise) to the FOPH.
Mistakes to be avoided
Only thinking about commissioning an evaluation for the first time when the
project is nearing its end. Evaluations are more helpful when they are
planned concurrently with the actual project.
Not discussing the evaluation purpose with the project staff.
Defining and prioritising the evaluation questions without consulting key
partner clients.
Neglecting to identify who are directly and indirectly involved and/or affected
by the evaluation results at the planning stage.
Expecting access to data and project co-operation without negotiating what
and how this can be feasibly achieved.
Over-burdening the workload of the project staff when gathering data. Data
gathering should cause minimal disruption to project activities.
Neglecting to discuss the purpose of the evaluation and negotiate access to
data with ALL relevant persons/groups. Planning and discussing the evaluation and data needs with the project manager does not mean that others
involved will be aware of what is happening!
Expecting too much from the evaluation with respect to the time and budget available.
Expecting the evaluation to make decisions. It is not the task of the evaluator to make decisions based on the evaluation findings: this is the contractor's responsibility, i.e. the FOPH's.
Contact Person:
Marlène Läubli Loud, Tel. +41 (0)31 323 87 61,
Fax +41 (0)31 323 88 05,
E-mail: Marlene.Laeubli@BAG.admin.ch
These Checklists are an integral part of the FOPH's Guidelines for Health Programme and Project Evaluation Planning.
Timely Feedback
3. Recommendations
WHAT recommendations/advice would you give
WHICH groups/people about
the future of your project?
setting up a similar project within the same
canton/setting?
setting up a similar project within a different
canton/setting?
In particular, what would you recommend to the
Federal Office of Public Health about supporting
a similar project?
Relevant Checklists
This section provides our partners with an overview of the principles used by
the Federal Office of Public Health (FOPH) when commissioning external
evaluations.
The Evaluation Unit provides FOPH staff with guidance on the step-by-step procedures used when commissioning evaluation studies, and is ultimately
responsible for the quality control of all external evaluation contracts. Its staff
help FOPH partners determine their evaluation needs and purposes, an appropriate evaluation approach and a useful feedback schedule. It also advises on the
choice of external evaluators.
The following paragraphs briefly describe the procedures which
should be adopted for setting up the contractual agreement.
Step 1: The Evaluation Mandate
Once the need for an evaluation has been agreed (see Criteria Checklist for
external evaluations, Part 2) the basic requirements for the evaluation study are
set out in writing. In some cases (e.g. evaluation budget = less than frs.
100 000), this may only need to be a brief outline of the purpose, the general
evaluation questions, potential evaluation audiences, time scale and budget.
For more substantial studies, it should be more detailed (see Checklist 3.1 for what information should be provided). The evaluation mandate is then discussed with the key partners involved (e.g. FOPH staff such as specialists from the relevant prevention section and the Evaluation Unit, and the project managers).
Once agreement is reached, a Call for Proposals for the evaluation may be
circulated to the evaluation community. The Call for Proposals package must
include not only the evaluation mandate, but also:
the major documents about the project;
an outline of the FOPH's relevant prevention strategy plus a short statement
on where the particular project to be evaluated is in relation to this;
copies of previous evaluation reports, research reports and/or theoretical
papers relevant to the proposed evaluation study (if available) or at least
details of where they can be obtained;
and the Evaluation Unit's 'Drafting a Proposal' checklist (Checklist 3.2 of
these Guidelines).
Evaluators should take care to refer to these documents when drawing up their initial evaluation proposal.
Potential evaluators will then be invited to submit their proposals. The evaluation proposal is a key component of the evaluation contract as it sets out the
main elements of what will be done, by when, and how this will be achieved.
It is therefore annexed to the contract and forms an integral part of the agreement.
A good evaluation design is the result of direct negotiation between the key
stakeholders and the evaluator. It is based on a sound knowledge of the project, its context and the differing stakeholders concerns. Once commissioned,
the evaluators should therefore spend an initial period getting to know more
about the project, the context in which it is set and the key partners. A wider
range of project documents should be reviewed, and discussions with all key
client groups should be held in order to help focus and prioritise the questions
to be addressed. This is likely to lead to the production of a refined evaluation
proposal and workplan. What can and cannot be addressed should be clearly
described to indicate the limitations of the study.
Any refinement to the original proposal needs to be submitted in writing. Once
agreed by all parties concerned, it will be used as an integral part of the contract and annexed to the original proposal.
Once the evaluation is underway, it is a serious mistake to believe that nothing
needs to be done until the interim or final report comes in. It is possible, and
indeed highly likely in the case of experimental and pilot projects, that the original evaluation plan, i.e. its purpose and procedures, will change during the
course of the evaluation. This may be due to, for example, unanticipated issues
which come to light only once the evaluation is underway, to significant
changes in the project's setting, or to a conflict of interests between key partners.
The contractors and project managers need to be kept informed! Any modification to, or reorientation of, the original evaluation plan should be mutually
agreed, preferably in writing, between the contractual partners. It is the FOPH's
responsibility to ensure that potential changes are fully discussed with the relevant partners, i.e. the relevant FOPH sections, such as the Prevention Section
and Evaluation Unit, and the project under study. Significant changes will
require a formal amendment to the contractual agreement, requiring the same
signatories as the original contract.
But we don't only want to hear about changes to the evaluation design: important findings, issues or concerns which come to light through the evaluation
process should also be fed back to us. Remember, we expect evaluations to
help us improve our prevention efforts: this means that we should be kept up-to-date on findings, even interim findings, as and when they become
available.
The project team is busy, the evaluator is busy and we, the FOPH, are also
busy. But keeping in touch is important! So, be sure that a regular contact time
schedule is set up between the various partners right from the beginning and
keep to it!
Mistakes to be avoided
Not identifying and consulting the key partners during the initial evaluation
planning. (FOPH)
Accepting the evaluation proposal without having referred to key partners for
comment. (FOPH)
Not identifying the range of potential evaluation audiences during evaluation
planning. (FOPH and external project partners)
Appointing an evaluator without prior assurance of his/her integrity and competence through e.g. reference to past evaluations and past contractors.
(FOPH)
Appointing an evaluator without first establishing mutual trust.
(FOPH)
Failing to ensure that the main investigator actually does the work, i.e. allowing an inexperienced assistant to be assigned the work rather than the person appointed.
(FOPH)
Assigning evaluations to those who have little knowledge and experience of
the specific study setting (e.g. prisons, school system, government administrations etc.). (FOPH)
Not establishing preliminary agreement between FOPH, evaluators and key
project staff for feedback arrangements: to whom, how and when?
Assuming that key partners will participate in the evaluation (and possibly
also provide evaluators with project collected data) without having first
secured their agreement. (FOPH and evaluators)
Assuming that the accepted evaluation proposal covers all aspects of the contractual agreement. The draft contract and the annexed evaluation proposal
Section 1:
Introduction and Background
A brief description of the project, including its aims
and objectives, budget, time period, and relationship
to the FOPH's global prevention strategy
Legal basis for commissioning the study
Section 2:
The Evaluation Mandate
The purpose of the evaluation, and intended use
of results according to which types of evaluation
audience groups
The initial evaluation questions as defined by the
FOPH and relevant project manager
(both the project specific and those of interest to the
relevant global evaluation study)
Major areas and levels of interest and concern for the
evaluation focus
Data currently being collected e.g. by project and what
other data and data sources are available
List of evaluation outputs expected
Section 3:
Time Plan
Time period for the evaluation study
Timetable of when critical decisions will be taken
about project development, or timing of other factors
which could affect the project (vital to help evaluator
organise study and project feedback schedule).
Section 4:
Diffusion and Valorisation of Evaluation Findings
List of intended evaluation audiences, i.e. key
audience groups to be informed of the evaluation
results grouped according to definite and potential
groups
Possible formats for report-back to which type of
groups
Section 5:
Organisation Chart of Evaluation management and
responsibilities
Section 6:
Budget
Budget guidelines including a separate and specific
budget for valorisation
Annexes:
All relevant documents should be attached to the
mandate to help the evaluator prepare his/her proposal
(e.g. reports on similar projects, evaluation studies,
etc.). If not available, at least reference to what these
are and where they may be found.
Also include Evaluation Guidelines Checklist 3.2: 'Drafting an Evaluation Proposal: An Evaluator's Checklist' and Checklist 3.3: 'Assessing the Evaluation Proposal'.
When responding to the FOPH's Call for Offers, evaluators should use this checklist to determine what to
include in their proposal, and, wherever possible, how
it should be set out. This list is based on information
set out in the FOPH Guidelines for Project and
Programme Evaluation Planning.
Use this list to check what your proposal does and
does not yet cover.
An FOPH review panel will assess all evaluation proposals submitted in response to a Call for Evaluation
Proposals. The standard criteria for scientific rigour
as well as the relevance of the design will be used as
the basis of the assessment.
An FOPH review panel will assess all evaluation proposals submitted in response to a Call for Evaluation
Proposals. The standard criteria for scientific rigour
as well as the relevance of the design will be used as
the basis of the assessment.
However, the panel will also take into account the
points set out in this checklist.
Relevant Checklists
The checklists provided with this section set out the principal requirements of
what should be covered in a technical evaluation report, and the criteria for its
overall appraisal.
In the main, we are interested in the following:
Does the report address the questions we asked (and if not, was there some good reason for not doing so)?
Does it present the information clearly?
Are the findings and recommendations useful to our future planning?
What the report should cover
The scientific rigour of the report is obviously an important criterion for judging
it; its relevance and utility are equally important. The evaluation should
report information clearly enough for it to be easily understood. The discussion
should be comprehensive, but direct and focused on the evaluation questions
and issues. A good balance between text and graphic representations should
be provided. However, graphics should not be included just for their own sake:
they should be used to add value and understanding to the descriptive text.
Messages about the weaknesses and strengths of the project and its implementation have to be easily identifiable, succinct and well defended. We need
to understand:
The lessons learned through the experience should be identified and discussed
to help us understand how to improve our future strategies and measures.
Mistakes to be avoided
Not addressing what the key partners want to know about the project.
Concentrating on theoretical rather than practical issues in the report.
Overuse of technical jargon rather than clear, simple explanations.
Not providing a description of the project, and the context in which it operates.
Not analysing how the operating context may have influenced/shaped its
development and ultimate results.
Using graphics, tables and figures that add little value or understanding to
the descriptive text (and/or vice versa!)
Providing insufficient information about the methodology and the strengths
and weaknesses of the methods used, as well as the effects on the analysis.
Not maintaining confidentiality of information when this had been agreed.
Confusing what is meant by anonymity and confidentiality.
Modifying conclusions or recommendations to suit partner interests when
not justified by the data.
Not providing a clear summary of what was addressed and how, what the
project's strengths and weaknesses were, and what were the main lessons
learned, particularly with respect to future project planning.
Making indefensible generalisations.
Not recognising or discussing the possible limitations of the study:
why these occurred, what alternatives were considered, what were the consequences and how might these have affected the analysis and overall findings.
Assuming that the whole report will be read in detail.
Use this list to check what has and has not been
covered in the report.
Part 1:
Introduction: What the Project was meant to do
What the Evaluation was asked to do and why
Brief description of projects aims and objectives and
operational context
Terms of reference for the evaluation (purpose of
evaluation, over what period of time, principal
questions to be addressed and if modified, the
questions and issues actually addressed)
Part 2:
Evaluation Methodology: What was evaluated; how
was it done; what data was obtained from which
sources.
A tabular summary of the data collected, sources,
frequency, methods used etc. should be included to
illustrate the scope and weight of data collected.
Which methods were used to answer which evaluation
questions should also be shown in table form
The limitations of the study should be discussed in
detail (e.g. implications for the analysis of restricted
data access etc.).
Part 3:
Results and Discussion
Part 4:
Conclusions and Summary of Main Lessons Learned
Part 5:
Recommendations
WHAT recommendations/advice would you give
WHICH groups/people about
the future of this project?
setting up a similar project within the same setting?
setting up a similar project within a different setting?
WHAT would you advise/recommend to the Federal
Office of Public Health about supporting a similar
project?
Annexes: These should include at least the following:
The original evaluation mandate and, where relevant,
the authorised changes
Examples of the evaluation tools used for data
collection and of data analysed, e.g. excerpts from
qualitative interviews, questionnaires used, etc.;
list of documents analysed
Examples of the project documents, especially those
referred to in the report e.g. brochures, teaching
materials etc.
Usefulness
Does it do what it said it would?
Were the right questions asked and answered?
Does the analysis of the project's strengths and
weaknesses improve our understanding?
Recommendations
Are they feasible, practical and useful for improving
this project/other projects?
Are they relevant to future developments at a
national level?
Do they help us determine what needs to be done to
improve our overall strategy, measures and actions?
Relevant Checklists
By now you will have understood that we encourage evaluations to be designed in consultation with the various partners involved or likely to be affected by
the results. The more this happens, the higher the chances of having the
results accepted and used by the various stakeholders.
From the design stage therefore, we have argued that the key partners and
evaluators need to identify the full range of possible evaluation audiences.
Ultimately this helps the evaluators reflect on which results have implications
for which target groups. Once results are available, evaluators together with
the FOPH and other key partners need to determine which findings need to be
brought to the attention of which target groups, and by which means. (What
are the key messages for which groups?) Who are the key decision-makers?
Who can best act on which findings?
When providing feedback on evaluation findings, evaluators need to review the
following:
Who needs to know?
Which groups?
Which key people?
Who can ultimately take action/decisions?
What information is likely to be relevant and/or of interest?
In what sequence?
In which type of format?
When?
What problems are likely to arise?
Can these be minimised?
How?
When should we be informed of evaluation findings?
In Part 1 we stated that essentially the FOPH requires project evaluation for
four reasons:
1. to improve our strategies and actions so that they are meeting clients' needs;
2. to help us make informed decisions;
3. to clarify the options available; and
4. to account for the expenditure of public funds.
For this we need feedback from evaluators at an appropriate time. For example, if our strategies or measures are proving ineffective, inappropriate or inefficient, we want to know as soon as possible so that we can modify our
actions, redress the problems or even cancel contracts wherever necessary.
Evaluators should therefore be encouraged to provide timely feedback and
highlight how these findings apply to and/or affect different participant groups
or audiences. The evaluation design should have taken into account when key
decisions about the project would be taken, and consequently planned to provide feedback (wherever possible and relevant) accordingly.
In our contractual agreement with evaluators we sometimes request an interim as well as a final report. However, during preliminary negotiations, we also
emphasise our need to have feedback reported as and when significant findings come to light.
We should ensure that we are available and interested in receiving feedback
and willing to assess with the evaluators alternative courses of action and likely consequences.
Who should be informed?
Who should be informed depends on what the findings are and what key messages emerge for each audience group. As a starting point,
however, we can say that all those who participated in the evaluation should,
wherever possible, be informed of the results. Too often reporting tends to be
restricted to the sponsors and project managers. Significant results/messages
should also be communicated to the various groups affected by the outcome
through, for example, the mass media, publications, etc. That is why we have
previously stressed the need to identify, during evaluation planning, the range of potential users of evaluation results as well as those likely to be affected.
This not only helps focus the evaluation study, but also serves as a point
of reference when reviewing the findings.
Evaluations should be designed to have an impact on our prevention strategies
and measures. In other words, they should provide us with useful, pertinent
and clear information based on the use of sound scientific method and analysis. The conclusions and recommendations should help different interest
groups see for example what achievements have been accomplished, where
improvements can be made, which more cost-beneficial approaches might be
employed or even that support for wasteful, unproductive efforts should be
withdrawn.
For example, the majority of FOPH-funded projects are aimed at preventing
health problems. As such they are directed at social and therefore behavioural
change. The relevant evaluations should highlight the favourable conditions
needed to bring about such change. To do justice to the work performed on our
behalf, the FOPH should consider the evaluation recommendations and act
upon the results in terms of:
What conditions does the evaluation suggest are needed to help bring about
change?
What can be done to create the conditions needed to bring about such
change?
Which groups/institutions/associations/organisations are in the best position
to help this process?
What measures might be adopted which are feasible and appropriate to
engage their co-operation and support?
Written articles alone may not be sufficient! Information should also be presented viva voce to target evaluation audiences. Workshops, meetings and seminars should be organised by the FOPH to get the message across to those who
can act on the information. This is where the valorisation budget provided in
the original evaluation contract can be finally exploited! Be imaginative about
using the evaluation to its best advantage!
Mistakes to be avoided
Assuming the evaluation work stops when the report is completed and delivered. (FOPH and evaluators)
Neglecting to determine which findings might benefit which target group:
what key messages are there for which audience groups? This should be
systematically worked out between the evaluators, FOPH staff and the key
project staff.
Neglecting to address the political audiences: which decision-makers
should know about which evaluation findings. (FOPH)
Not agreeing who will do what to promote the results. Evaluators should
provide the right material (e.g. articles, oral presentations etc.), but the sponsors and project partners need to organise, co-ordinate and ensure that
results get fed to the right decision-makers and other interested parties.
Thinking that publications alone will suffice. Promoting the findings means
adopting interactive strategies to present and discuss the evaluation findings
with those who can best act upon the results. (FOPH)
Neglecting to present and discuss findings in a manner appropriate to the
target audience. The messages need to be clear, to the point and in the
cultural style of the target group. For example, the style used to address a
scientific audience will not be appropriate when addressing, say, a parents'
association. (evaluators)
Have we determined what is important in the evaluation findings for prevention planning?
Have we resolved what is important for prevention
implementation?
Have we distinguished the likely effects on the work
of the different groups involved?
Have we considered what action might be taken as
a result?
Have we worked out what would need to be done
to have this happen?
Have we identified which people/groups would be
the most effective for getting things done?
Have we determined how best to get the message
over to each of these groups?
Have we identified who would be the most
appropriate person/group to convey the message?
Have we defined what help will be needed
(human/ financial resources)?
Have we prioritised which people/groups should be
approached (strategic planning from the ideal to the
feasible)?
Annex 1: References and Recommended Reading List

The Program Evaluation Standards, 2nd Edition, Sage Publications, NY, 1994
Accompagner et mettre au point avec succès les évaluations des mesures étatiques: Guide de réflexion, Editions Georg S.A., Geneva, 1995 (French version)
Evaluationen staatlicher Massnahmen erfolgreich begleiten und nutzen: Ein
Leitfaden, Verlag Rüegger AG, Chur/Zürich, 1995 (German version)
Bussmann, Werner
Empowerment Evaluation: Knowledge and Tools for Self-Assessment & Accountability, Sage Publications, NY, 1995
Workbook for Evaluation: A Systematic Approach, 5th Edition, Sage Publications, London, 1993
Fink, Arlene
Hammersley, Martyn
Professional Evaluation: Social Impact and Political Consequences, Sage Publications, London, 1993
House, Ernest R.
Imfeld, Josef et al
Scriven, Michael
Vogt, W. Paul
Annex 2: Glossary of Evaluation Terms

Given that EVALUATION is a relatively new discipline, there is as yet no single, widely agreed-upon set of EVALUATION terms. Yet meanings of words are critical
because they influence what we do and how we do it. We have therefore provided definitions for the key terms we use in the Evaluation Unit of the Federal
Office of Public Health to convey what we understand by EVALUATION: its tasks,
work and responsibilities. The glossary deals with evaluation terms only: it does
not deal with those of statistical METHODS since these are well-known, standardised terms.
For the most part, we have drawn upon existing definitions from a range of
sources, but mainly from those developed in the field of PROGRAMME EVALUATION.
(see Reading Reference List, Annex 1). These have, however, where necessary,
been adapted to suit the specific work of the Federal Office of Public Health.
To minimise confusion, EVALUATORS working under contract with the
Swiss Federal Office of Public Health are urged to base their use of
EVALUATION terms on the definitions supplied in this Glossary.
N.B. The terms which have been cross-referenced in the glossary are
indicated in small capital letters e.g. CROSS-REFERENCED TERMS.
Refers to what the intervention has been able to achieve overall: its OUTPUTS, its
RESULTS, IMPACT, etc.
Achievement(S)(of the
Intervention)
DE - Erreichtes
FR - ralisations
Aim
DE FR -
Gesamtziel/
bergeordnetes Ziel
but
Assessment
DELeistungsabschtzung/
Bewertung
FR - examen
An AUDIT checks that the means used to produce RESULTS were put into practice
according to professional rules and standards. It does not comment on, nor
question, the quality, RELEVANCE, EFFECTIVENESS, etc. of the IMPACTS or RESULTS of
a measure.
Audit
DE - Controlling
FR - audit, rvision
Synonymous with
Auto-evaluation
DE - Auto-Evaluation
FR - auto-valuation
SELF-EVALUATION
Bias
DE FR -
Content analysis
DE - Inhaltsanalyse
FR - analyse de contenu
An approach based on testing a pre-conceived hypotheses (very often experimental) in order to draw conclusions about its VALIDITY and/or GENERALISABILITY.
Deductive approach
DE - Deduktiver Ansatz
FR - approche dductive
Delphi technique/survey
DE - Delphi Technik
FR - mthode/technique delphi
Systematic analysis of the CONTENT of (written) documents e.g. memos, minutes, PROJECT descriptions, training curriculum, etc.
Documentary analysis
DE - Dokumentenanalyse
FR - analyse de documents
Effect
DE - Effekt
FR - effet
Any change, intended or unintended, which can be attributed to the intervention being evaluated. Synonymous with OUTCOME, RESULT, IMPACT. Examples of unintended EFFECTS are the ripple EFFECT, halo EFFECT, Hawthorne EFFECT, etc.

Effectiveness
DE - Effektivität
FR - effectivité

Efficiency
DE - Effizienz
FR - efficience/rendement
A measure of how well resources (human, financial, material, etc.) are used to produce desired OUTCOMES and/or OUTPUTS. Includes the analysis of the input-output cost ratio. Implies the absence of wastage in the process of achieving GOALS. Efficiency analysis tries to answer the question: is it possible to produce more OUTPUTS using fewer inputs, or using alternative, less expensive ones?
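As a rough illustration of this input-output reasoning, the following sketch compares the cost per unit of OUTPUT of two delivery options; the option names and all figures are invented purely for the example.

    # Illustrative sketch only: comparing the cost per participant trained
    # (cost per unit of OUTPUT) of two hypothetical delivery options.
    options = {
        "external trainers": {"cost_chf": 40000, "participants": 500},
        "train-the-trainer": {"cost_chf": 25000, "participants": 400},
    }
    for name, data in options.items():
        cost_per_participant = data["cost_chf"] / data["participants"]
        print(f"{name}: CHF {cost_per_participant:.0f} per participant trained")
    # A lower cost per OUTPUT suggests higher EFFICIENCY, provided the
    # quality and EFFECTIVENESS of the two options are comparable.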
Evaluation
DE - Evaluation
FR - évaluation

Evaluator
DE - Evaluator/in
FR - évaluateur/trice
Evaluability appraisal
DE - Machbarkeitsstudie (der Evaluation!)
FR - étude de faisabilité (de l'évaluation!)
Analysis of the feasibility of answering the EVALUATION questions using a proposed design or procedure, and/or the feasibility of answering the questions per se. In short, checking to see that what is planned can actually be done.
External evaluation
DE - Externe Evaluation
FR - évaluation externe
EVALUATION by EVALUATORS who are responsible neither for the financing, nor for the managing or implementation of the intervention under study. In short, EVALUATION by those who have no personal, financial or other self-interest in the object being evaluated.

Feasibility study
DE - Machbarkeitsstudie
FR - étude de faisabilité
Fields (of evaluation)
DE - Anwendungsbereiche
FR - domaines (d'évaluation)
The major …

Finding(s)
The sum total of what an EVALUATION finds out about the intervention under analysis, e.g. the PROJECT's context, EFFECTS/RESULTS, IMPACTS, processes, EFFICIENCY, etc.

Focus
DE - Fokus
FR - point focal/point de focalisation
The area or aspect(s) on which the EVALUATION and its analysis will concentrate, e.g. the EVALUATION of a school health education PROGRAMME may choose to FOCUS on the acceptability of the PROGRAMME to different groups rather than on its end RESULTS. Equally, it may focus on the RELEVANCE or EFFICIENCY of the PROGRAMME. It may well choose to FOCUS on a much wider SCOPE.
Formative evaluation
DE - Formative Evaluation
FR - évaluation formative
(no equivalent term in either German or French)

Generalisability
DE - Generalisierbarkeit
FR - généralisabilité
The degree to which information about a tested group or setting may be extrapolated to the greater POPULATION or to different settings. GENERALISABILITY is directly linked to external VALIDITY, in that non-valid data will produce non-generalisable FINDINGS.
Global evaluation
DE - Globalevaluation
FR - évaluation globale
This refers to the EVALUATION of a total prevention package: the global strategy, measures and actions taken towards obtaining the prevention package's overall AIMS and OBJECTIVES.

Goal
DE - Gesamtziel/übergeordnetes Ziel
FR - but
Synonymous with AIM.

Goal-free evaluation
In its pure form, the EVALUATOR is not told the AIM and OBJECTIVES of the PROGRAMME/PROJECT/activity etc. under EVALUATION, so that s/he is free to judge what is going on and what is being achieved without being influenced by any pre-determined criteria.
Holistic approach
DE - Holistischer Ansatz
FR - approche holistique

Impact
DE - Wirkung
FR - impact
In EVALUATION terms, this refers to the sum total of the individual RESULTS and EFFECTS/OUTCOMES of an intervention or measure, be they intended or unintended. IMPACT analysis can limit itself in time, e.g. to immediate EFFECTS, and in FOCUS, e.g. to the target POPULATION. It can, however, broaden its analysis in terms of (a) time, e.g. examining EFFECTS over the medium to longer term, and (b) FOCUS, e.g. going beyond the directly targeted POPULATION. (In market research, IMPACT EVALUATION, e.g. of a campaign, usually is restricted to …)
Indicator
DE - Indikator
FR - indicateur

Inductive approach
DE - Induktiver Ansatz
FR - approche inductive
Generates hypotheses from and during fieldwork (grounded theory: see Glaser & Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research, Weidenfeld & Nicolson, London, 1968). Hypotheses are formulated on the basis of the data gathered, as opposed to gathering data in order to test a pre-conceived hypothesis (DEDUCTIVE APPROACH).
Interview
DE - Interview
FR - interview/entretien
Technique used to draw verbal information from an individual/group about a pre-determined topic. Can be structured (i.e. asking standardised questions which elicit only responses which are pre-determined and of limited range), semi-structured (i.e. the range of questions is pre-determined but the way they are asked and/or the expected responses are not necessarily so), or unstructured (i.e. no standardisation of the open-ended questions, sequence and responses, but centred around a pre-determined topic(s)).

Meta-analysis
DE - Meta-Analyse
FR - méta-analyse
This is the overall analysis of information arising from several studies on a similar topic/field of interest, involving, as the first step, the standardisation of the relevant information. Analysis therefore takes place once the disparate information is standardised and thereby transformed into comparable values. To a large extent, it relies on a SYNTHESIS of other studies/EVALUATIONS.
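A minimal sketch of the standardisation step described above, assuming (purely for illustration) three hypothetical studies that report the same outcome on different scales:

    # Hypothetical example: each study's result is first standardised (here,
    # expressed as a proportion of its maximum possible score) so that the
    # disparate results become comparable values; they are then pooled into
    # one sample-size-weighted average.
    studies = [
        {"n": 120, "score": 42, "max_score": 60},
        {"n": 80, "score": 55, "max_score": 100},
        {"n": 200, "score": 7, "max_score": 10},
    ]
    total_n = sum(s["n"] for s in studies)
    pooled = sum(s["n"] * s["score"] / s["max_score"] for s in studies) / total_n
    print(f"Pooled standardised result: {pooled:.2f}")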
Meta-evaluation
DE - Meta-Evaluation
FR - méta-évaluation

Methodology
DE - Methodologie/Vorgehensweise
FR - méthodologie
The working plan (theoretical framework and design) for organising and conducting the selection, collection and analysis of data, including the approach/strategy (e.g. conventional, positivist, interpretative, naturalistic, phenomenological, etc.) and the choice of METHODS (e.g. INTERVIEWS, surveys, observation, etc.) to be used.

Method
DE - Methode
FR - méthode

Monitoring
DE - Monitoring
FR - monitoring/surveillance

Objective
DE - Ziel/Zielsetzung
FR - objectif
These are a set of discrete, specific and measurable sub-GOALS which need to be attained in order to achieve the end GOAL. They should be SMART, i.e. specific, measurable, appropriate, realistic and attainable within a defined time period.
Outcome (of the project/intervention)
DE - Resultat
FR - résultat
Synonymous with EFFECT when referring to the individual and/or sum of the EFFECTS/RESULTS (of the intervention). Mainly refers to immediate, post-treatment EFFECTS, but one should consider the medium- and longer-term OUTCOMES too. (See also IMPACT.)

Outputs
DE - Output/Produkt
FR - produits
These are the activities, goods and services directly produced by an intervention/EVALUATION, e.g. brochures, reports, workshops, a hotline service, a computer program, etc.

Pilot project/study
DE - Pilotprojekt/-studie
FR - projet/étude pilote
A PROJECT/study intended to trial its practicability in a real setting (not to be confused with a FEASIBILITY STUDY).
Population
DE - Population
FR - population

Process evaluation
DE - Prozessevaluation
FR - évaluation de processus

Programme
DE - Programm
FR - programme

Project
DE - Projekt
FR - projet

Qualitative data
DE - Qualitative Daten
FR - donnée qualitative

Quantitative data
DE - Quantitative Daten
FR - donnée quantitative
Numerical data.

Questionnaire
DE - Fragebogen
FR - questionnaire

Relevance
DE - Relevanz
FR - pertinence
Reliability
DE - Reliabilität/Zuverlässigkeit
FR - fiabilité
Refers to the consistency of the RESULTS yielded when the same process and METHODS are used during repeated applications and/or by different observers. Not to be confused with VALIDITY.
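For illustration only (the codings below are invented), consistency between different observers can be checked with a simple agreement count:

    # Two observers apply the same coding scheme to the same ten interview
    # transcripts; the share of identical codes gives a crude indicator of
    # inter-observer RELIABILITY.
    observer_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
    observer_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
    agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / len(observer_a)
    print(f"Inter-observer agreement: {agreement:.0%}")  # 80% in this example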
Representativeness
DE - Repräsentativität
FR - représentativité

Result(s)
DE - Resultat/Ergebnis
FR - résultat

Sample
DE - Stichprobe
FR - échantillon
Scope
DE - Reichweite
FR - portée
The breadth of what will be taken into account by the EVALUATION, e.g. what issues and aspects will be addressed, which (sub)groups will be observed/INTERVIEWED, and over what time period, etc.

Secondary analysis
DE - Sekundäranalyse
FR - analyse secondaire
The re-working and analysis of existing data and/or reconsideration of its interpretation and FINDINGS.

Secondary evaluation
DE - Sekundärevaluation
FR - évaluation secondaire
Self-evaluation
DE - Selbstevaluation
FR - auto-évaluation
EVALUATION carried out by those responsible for implementing and/or managing a PROJECT. Synonymous with AUTO-EVALUATION.
Stakeholders
DE - Beteiligte/Betroffene
FR - protagonistes (= the stakeholders directly involved; there is no universal French term that would also cover those not directly affected by the project)
Summative evaluation
DE - Bilanz-Evaluation
FR - évaluation sommative
(no exactly equivalent term in French)
An EVALUATION that is carried out during the concluding phase of a PROJECT/PROGRAMME/activity etc., with the intention of passing judgement intended to contribute towards decision-making about the future of the PROJECT etc. Compare and contrast with FORMATIVE EVALUATION.

Synthesis
DE - Synthese
FR - synthèse
Combining the FINDINGS of multiple studies into one overall picture. In EVALUATION this is most often done by compounding a set of criteria/INDICATORS/performances on several dimensions and attributing an overall judgement. (See also META-ANALYSIS.)
Target group/population
DE - Zielgruppe/Zielpopulation
FR - groupe cible/population cible

Triangulation
DE - Triangulation
FR - triangulation

Validity
DE - Validität
FR - validité
Refers to the degree to which whatever is claimed holds true. For example, a test is valid if it measures what it purports to measure. Valid EVALUATIONS are ones that take into account all relevant factors, given the whole context of the EVALUATION (particularly including the client's needs), and appraise them appropriately in the synthesis process (see Scriven, 1991).

Valorisation
DE - Valorisierung
FR - valorisation
The combination of activities used to make EVALUATION FINDINGS known (dissemination) and translated into practical use (thereby adding value).
Characteristic: Conventional Approach / Interpretive Approach

Associated phrases
Key concepts (conventional): variable, operationalisation*, hypothesis, reliability, validity, replication, statistical significance
Associated names (conventional): A. Comte, Emile Durkheim, Lee Cronbach, L. Guttman, Gene Glass, Fred Kerlinger, Edward Thorndike, Ralph Tyler, J. Mill, Donald Campbell, Peter Rossi, Thomas Cook, Robert Travers, Robert Bales, Julian Stanley
Associated names (interpretive): Dilthey, Max Weber, Charles Cooley, Everett Hughes, Margaret Mead, Rosalie Wax, George H. Mead, C. Wright Mills, Ray Rist, Egon Guba, Yvonna Lincoln, Howard Becker, H. Rickert, Estelle Fuchs, Herbert Blumer, Harold Garfinkel, Erving Goffman, Eleanor Leacock, Barney Glaser, William Filstead, Malcolm Parlett, Robert Stake, Robert Burgess
Associated disciplines (interpretive): anthropology, sociology, history (ethnography)

* The conventional approach to social scientific inquiry is still practised by many social scientists and still viewed as "real" science by many consumers of evaluation and research results. This is despite the fact that major tenets of conventional social science have been found untenable within the philosophy of science. The most important of these major tenets are asterisked in this handout.

Design:
Purpose
Basis
When developed: beginning of study (conventional); continuously evolving (interpretive)
Nature
Style: intervention, manipulation (conventional); selection, participation (interpretive)
Sample
Setting
Treatment: stable, fixed (conventional); variable, dynamic (interpretive)
Control (conventional): high, of antecedents, extraneous variables, possible outcomes
Examples (conventional): experiments, quasi-experiments, survey research

Methods:
Nature
Focus: reliability, replication (conventional); validity, meaning (interpretive)
Specification of data collection/analysis rules (conventional): before inquiry
Researcher/evaluator role
Researcher/evaluator relationship to data: distant, detached (conventional); close, involved (interpretive)
Researcher/evaluator relationship to subjects
Instruments/techniques (interpretive): researcher/evaluator, interviews, observations (tape recorder, transcriber)

Analysis:
Data, nature
Data, units: variable (conventional); patterns in context (interpretive)
Focus: uniformity (conventional); diversity (interpretive)
Communication of results:
When
How
Content

Paradigm:
Affiliated theories (interpretive): phenomenology, symbolic interactionism, ethnography (culture), ethnomethodology
Assumptions about:
Reality/truth (interpretive): idealism
Relationship between facts and values
Human nature
Human behaviour (interpretive): wholly context-dependent
Relationship between inquirer and subject of inquiry (conventional): independent, separable*
Postures about:
Quality criteria: rigor (conventional); relevance (interpretive)
Source of theory: a priori (conventional); grounded (interpretive)
Stance: reductionist (conventional); expansionist (interpretive)
Purpose of inquiry: verification*; facts, causes, explanation; establish laws that govern human behaviour and link laws into deductively integrated theory (conventional); discovery; understanding (verstehen); understand the process by which social reality is created by different people (interpretive)
Knowledge type: propositional (conventional); propositional, tacit (interpretive)
Value perspective: singular*, consensual (conventional); pluralistic, divergent (interpretive)
Values in research
1. Questions on Relevance
Is the health behaviour model on which the project's intervention is based appropriate for the target group/setting?
Are the project/programme's aims and objectives still relevant? Are they still of the same priority?
Is the intervention being targeted at the right audience?
Is the intervention appropriate for its different target groups?
Is the intervention meeting the target groups' needs?
2. Questions on Progress
Is the project/programme being put into operation as planned?
Is there any difference in the understanding of the project/programme's aims and objectives between the different groups involved? If so, how has this influenced the way the project is ultimately being put into practice?
To what extent have any unplanned side effects been taken into account during project/programme implementation?
Is the project/programme receiving positive support from all the various
groups concerned?
3. Questions on Effectiveness
Have the objectives been achieved in terms of quality, quantity and time?
To what extent was this achievement the effect of FOPH action?
Has FOPH stimulated actions and/or measures that would otherwise not
have occurred?
To what extent did changes in the environment affect the achievement of
project/programme objectives?
To what degree was the intervention implemented according to plan?
Was the project/programme effective in promoting itself to the targeted
groups?
4. Questions on Efficiency
Is the intervention the most cost-effective option? What alternatives
should be considered?
What are the constraints on using a more cost-effective method?
Do the human and financial resource costs compare favourably with those of related interventions, e.g. in another area of prevention?
Have the inputs been made according to planned amounts, timing and quality?
What hidden costs have not been taken into account in project/programme
budgeting and planning?
To: FOPH staff responsible for assessing training project proposals; and
To: persons and institutions submitting training project funding proposals to the FOPH.
General Principles
All FOPH funded medico-social training/education projects for professional or voluntary
workers in the field of HIV/AIDS and/or drug dependence should be designed in line with
public and community health principles.
Among other things, they should:
be developed to meet the needs of the community, the institutions and the individuals included in
the cultural, social and economic context,
take into account the prevailing health and social policies relating to the field to which the training
applies,
take into account future needs and challenges,
encourage interdisciplinary and interprofessional cooperation,
ensure optimal exchange of information between practitioners and researchers,
ensure at least regional coverage,
increase the number of trained practitioners and the quality of the services they provide.
A need or …
The (financial and social) cost of the training/educational project should be reasonably proportionate to
the funding available and the needs that are to be met.
The project should describe:
A the human and institutional resources available, including specialised teachers or experts in the field, other existing institutions or programmes in the same field;
B existing conceptual and theoretical resources. Methodological work may well have been done in part or in total in another language or within another context;
C existing material such as documentation, books, brochures, videos.1
The real overall cost of the training should be proportionate to the available funds of the organising institution. For example, a small institution should not contemplate investing all its funds in one project.
All possibilities of co-financing and subsidies (including through cantonal and local authorities, professional or consumer
associations or foundations) should have been systematically investigated. Do not forget the possibility of premises or logistical services being provided free of charge.
Given the limited funds available from the Confederation and the constant need for training, on no account can the FOPH finance costly projects: it has to assure continuity of training support. A training project does not need to be expensive to be good!
The proposal should include a detailed budget (see Detailed Budget, last page).
The registration fee should not put off potential participants. Employers should therefore be encouraged to pay all or a proportion of the fees as part of their contribution towards the further training/education of their employees. (Different rates
may be applied for employer-subsidised and self-paying registrations).
A The purpose of the training must be explained and must be relevant to the needs of the population.
For example: "The project contributes to reducing the incidence of work-related HIV infections transmitted through blood contact by the systematic application of preventive measures and the reduction of risk factors. It is aimed specifically at health carers providing home-based patient care."
Purpose: in terms of its anticipated effects on the target population (those in the care of the training participants);
Relevance: its relation to the health problems of the population at large, and its appropriateness in relation to the
resources available.
B The training objectives must be explicit and relevant to the skills required to carry out the function or task(s).
For example: "Participants will be able to provide basic care to patients, in the patients' homes, whilst at the same time respecting the application of universal precautions." Each of the measures needed towards this end will be described, explained and discussed. The conditions under which they will be applied will need to be systematically described and put into practice.
General aims: all the knowledge, skill and behavioural attitudes (changes in behaviour!) that the participants will have
acquired by the end of the course;
Relevance of the aims: their relevance to the tasks that professional staff will have to fulfil and to the problems with
which they will be confronted.
The knowledge and skills taught should be briefly described and explained. They should not be in conflict with:
the ethics of the professions concerned,
the doctrine upheld by the FOPH in the relevant field. 2
In principle, the cost of developing the course should not exceed 5% of the total cost of the project.
The method or teaching methods
What is important is what the adult has learnt, not what s/he has been taught.
As far as possible, the selection of teaching methods should be based on accepted knowledge and experience in the field of adult education, e.g.:
A focused on learners' needs and the group's existing knowledge,
B aimed at problem solving,
C methods and tools adapted to the learners' work situation and to whatever resources are available.
A good method is a method that: meets learners' needs,
is suitable to the knowledge to be imparted,
suits the skills of the instructor, and
is proportionate to the resources available.
There is no one ideal teaching method. Even straightforward lecturing can well be the best solution in certain cases. A combination of different methods is often the most successful.
A good teaching method is one that truly enables and encourages participants' learning; it is not necessarily the most fashionable method of the day or the one with which they are most familiar.
Evaluation or …
Evaluation is a dynamic process aimed at (i) improving the quality and the relevance of training/education projects; (ii) the on-going adaptation of training to meet current needs; and (iii) improving the conditions under which the projects are run.
An evaluation only makes sense if it is useful and has a subsequent practicable application; it is not intended merely to justify itself! In other words, don't just prove, but improve! Training project managers are also responsible for determining evaluation needs within the framework of their project. They, and/or the FOPH, may see a need for the project to be evaluated by an outside body. In this case the external evaluation will be planned and commissioned by the FOPH under a separate evaluation budget.
Before choosing an evaluation, it is very important that the following questions be answered:
A what is the point of the evaluation?
B what questions do we want answered?
C what exactly are we going to evaluate: knowledge, attitudes, an action, a strategy, the implementation process? Will this need quantitative/qualitative data?
D who are the stakeholders, i.e. who is the evaluation's target audience?
E how will the evaluation findings be disseminated and their practical application made evident?
There are several ways of evaluating a project; the choice should be based on the answers to the above questions and the means
and skills available.
For example, all of the following are evaluations but serve quite different purposes:
evaluation of the training completed by students with a view to awarding them a certificate (evaluation for certification),
evaluation of the training's relevance to the tasks needed in the field,
evaluation of the knowledge acquired by students with a view to modifying the course along the way,
participants' evaluation of the training with a view to improving the course in the future,
evaluation of observed changes in behaviour after one year's application in professional practice,
a teacher's self-evaluation as part of his/her teaching supervision,
evaluation of the project's overall impact to support applications for future funding, e.g. from cantonal authorities,
estimation of the degree of satisfaction among the students, etc.
Only in exceptional cases may the cost of evaluation, as shown in the budget, exceed 5% of the total.
For more detailed information see the Swiss Federal Office of Public Health's Guide to Project and Programme Evaluation Planning, 1996.
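As a simple worked illustration of the two 5% guidelines above (course development and, save in exceptional cases, evaluation), with figures invented purely for the example:

    # Hypothetical budget check; all figures are invented for the example.
    total_cost_chf = 60000
    budget_lines = {"course development": 2500, "evaluation": 3500}
    for line, amount in budget_lines.items():
        share = amount / total_cost_chf
        status = "within" if share <= 0.05 else "exceeds"
        print(f"{line}: CHF {amount} ({share:.1%}) {status} the 5% guideline")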
Detailed budget:
For further information see Teaching Guidelines in the Field of Health Care by J.-J. Guilbert, published by the WHO, 1990, or contact us at the Federal Office of Public Health:
FOPH Training Unit
Marie-Claude Hofner
René Stamm
Ellen Dobler-Kane
Tel. 031 323 88 06 / 031 323 87 83 / 031 323 80 20 / 031 323 87 61 / 031 323 87 65 / 031 323 88 03