
Guidelines for Health Programme & Project Evaluation Planning

Swiss Federal Office of Public Health


Copyright 1997
Evaluation Unit
Federal Office of Public Health
CH-3003 Bern
Tel. +41 (0)31 323 87 61 or 66
Fax +41 (0)31 323 88 05
E-Mail: Marlene.Laeubli@BAG.admin.ch
Only the Checklists may be photocopied without written permission.

Motto

I know of no safe depository of the ultimate powers of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion.
Thomas Jefferson
Letter to William Charles Jarvis
September 28, 1820

Evaluation is an investment in people and in progress.


(Egon Guba and Yvonna Lincoln, 1989)

Table of Contents

Page

Introduction and Overview of Contents

Part 1: Introducing Evaluation  11

Part 2: Planning an Evaluation  17
Checklist 2.1 Assessing the Need for External Evaluation  25
Checklist 2.2 Planning Evaluation of a Health Project  27
Checklist 2.3 Self-Evaluation: Basic Elements to be Included  29
Checklist 2.4 Training and Continuing Education Courses:
Self-Evaluation Basic Elements to be Included  31

Part 3: Commissioning and Monitoring Evaluation Contracts  33
Checklist 3.1 Drafting the Evaluation Mandate:
An FOPH Checklist  37
Checklist 3.2 Drafting an Evaluation Proposal:
The Evaluator's Checklist  39
Checklist 3.3 Assessing the Evaluation Proposal  41
Checklist 3.4 Assessing the Evaluator  43
Checklist 3.5 Monitoring the Evaluation Contract  45

Part 4: Assessing the Technical Evaluation Report  47
Checklist 4.1 What the Technical Evaluation Report Should Cover  51
Checklist 4.2 Assessing the Technical Evaluation Report  53

Part 5: Promoting and Using Evaluation Findings  55
Checklist 5.1 Identifying Key Messages, Key Target Groups
and Appropriate Ways to Tell Them  59

Annexes:
1. References and Recommended Reading List  61
2. Glossary of Evaluation Terms  62
3. Characteristics of Conventional and Interpretive
Approaches to Social Science Research and Evaluation  68
4. Evaluation Questions:
An Example of Different Question Types  72
5. Guidelines for Developing or Assessing Medico-Social
Training/Education Projects, Swiss Federal Office of Public Health,
1995 (Evaluation, Research and Training Section)  73

N.B. A complete set of the Checklists is provided separately so that they can
be photocopied and used in contract negotiations with external partners.

Foreword

"The Swiss Federal Office of Public Health is committed to assuring a high standard of health for the country's population. Its work is continually assessed by measuring the overall impact on the health of the nation."
Corporate Philosophy of the Swiss Federal Office of Public Health (FOPH), 1992

Evaluation of its activities is an integral part of the corporate philosophy of the Swiss Federal Office of Public Health (Guiding Principles, 1992). Its objective is to help improve the planning and implementation processes.
To this end, a special unit for evaluation was established within the Federal Office of Public Health (FOPH) in 1992. Its main function is to ensure that the FOPH's strategies, measures and activities in terms of disease prevention, health promotion and health protection are evaluated on a continuous basis.
These Guidelines were written by Marlène Läubli Loud of the FOPH Evaluation Unit in collaboration with colleagues from this Unit. They are intended to assist FOPH staff and project/programme partners in effectively planning an evaluation study.
We hope that this manual will foster a stimulating and fruitful collaboration
between the FOPH and its external partners. We welcome any comments that
may help improve the ideas and checklists contained in these Evaluation
Guidelines.
Bern, 1996
The Evaluation Unit
Federal Office of Public Health

Acknowledgements

In the process of designing and writing these guidelines we have become deeply indebted to a multitude of people, all of whom in their own way contributed to this final product.
Several organisations and individuals were asked to review the original drafts of these Guidelines. We are indeed grateful to the many people who contributed to turning the original ideas into the final version. First, our thanks go to our many colleagues in the Federal Office of Public Health Medical Division: the Evaluation, Research and Further Education Section; and the AIDS, Illegal and Legal Drugs Sections of the Health Promotion Division.
Our thanks too for the helpful comments received from our external partners: the Prevention Programme Evaluation Unit of the Social and Preventive Medicine Institute, University of Lausanne (UEPP de l'IUMSP, Lausanne); the Drugs Research Institute in Zurich (ISF); and the Institute for Social and Preventive Medicine, Zurich (ISPM).
The process of translating ideas into a published set of guidelines equally involves the efficient and creative help of graphic designers and desk-top publishers. We are indebted to Satzart AG, Bern, who turned our basic word-processed document into its current polished, professional format.

To all the above, our many thanks.

Introduction and Overview of Contents

Aims and objectives of the Guidelines
Where to look for what information

Why we have produced these Guidelines and what they are about

Yet another book on evaluation principles and theory? No! That is certainly not what you will find in this book! These Guidelines have been written to help staff of the Federal Office of Public Health (FOPH) and its external partners reflect on what needs to be included when planning an evaluation of their projects/programmes. They are therefore of interest to the following:
staff of FOPH health promotion, prevention and campaign sections;
project/programme planners;
project/programme managers and implementers;
external evaluators.
These guidelines deal with the evaluation of the FOPH's prevention and health promotion activities and therefore address issues relevant to project and programme evaluation¹ only.
A reading list is provided for those interested in finding out more (Annex 1).
Readers are encouraged to discuss more complex needs with the Evaluation
Unit of the FOPH.
The manual provides practical information on how evaluations are best planned,
organised, commissioned and used. We accept that for reasons beyond our
control, putting these principles into practice will sometimes prove difficult. As
guidelines, however, they set the standards we should strive to attain.

¹ Refer to the Glossary of Evaluation Terms for definitions of Programme and Project. Despite differences, the evaluation principles outlined in this manual apply to both. For ease of reading, therefore, from now on we will use only the term Project for both.

The purpose is therefore:


to learn more about what evaluation is and how it can best be used;
to set standards for planning, commissioning, presenting and using evaluation results;
to provide practical guidelines on how this can be done.
These Guidelines are therefore not intended to be a do-it-yourself evaluation kit. Rather, their purpose is to draw your attention to a range of questions and issues to consider when planning project evaluation.
We have attempted to summarise the most pertinent aspects of what evaluation is about and how it should be used to yield the best results. For this reason, we urge you to read these Guidelines through from start to finish.

The contents are divided into the following five parts:


Part One
An Introduction to Evaluation: covering the basic principles of what evaluation is about, and why the FOPH wants its projects evaluated.

Part Two
Planning an Evaluation: This part prompts readers to identify why an evaluation may be needed and what they want the evaluation to examine. It then provides some general guidelines on when and how evaluation should be planned to come up with the required answers. A set of Checklists is included as pointers to the appropriate questions and issues to be addressed.

Part Three
Commissioning External Evaluation: This section prompts readers to make a critical assessment of what needs to be addressed before an external evaluation is commissioned. What items should be included in the contract itself, and what can be expected during the course of the evaluation, are also discussed. Again, Checklists are included as reminders of the key points.

Part Four
Assessing the Technical Evaluation Report: provides hints on what the Evaluation Report should cover and how to judge its findings and conclusions. Checklists are provided for assistance.

Part Five
Promoting and Using the Evaluation Results: The final part of the Guidelines deals with identifying the key messages, the key audiences to be informed and what we can do to use the results to their best effect.

Annexes

Here you will find:
A reference list of the literature used to produce this manual. (Those keen to find out more about the theoretical aspects of evaluation are recommended to refer to this list.)
A glossary of evaluation terms as we understand and define them.
A table comparing characteristics of different theoretical paradigms relevant to evaluation.
An example of the different evaluation questions which can be asked about health projects.
Guidelines for Developing or Assessing Medico-Social Training/Education Projects (FOPH, 1995).

N.B. A complete set of the Checklists is provided separately so that they can be photocopied and used in contract negotiations with external partners.

Part 1: Introducing Evaluation

Evaluation helps you to take the right decisions for your mission

Some key concepts defined:
What is a Project?
What is a Programme?
What is the difference between Monitoring and Evaluation?
What is Project/Programme Evaluation?
What is Global Evaluation, Meta-evaluation and Meta-analysis?
External vs. Self-Evaluation
Why evaluate? or the role of evaluation in planning and managing health projects and programmes
Principal Evaluation Methodologies
Linking project, programme and global evaluation into evaluation planning
Contents of this Part

The terminology used in the professional world of evaluation can be confusing as there are often subtle differences in the way the same term is used and applied. For this reason we have set out our own definitions in the Glossary of Evaluation Terms (Annex 2) to help the reader understand what we mean by each.
In this section we explain some of the key concepts used throughout these Guidelines.

What is a Project?

A Project consists of activities aimed at achieving pre-defined goals during a defined period of time. Often it is a means of testing an innovative approach or measure ultimately to be used as part of a wider programme.

What is a Programme?

We define a Programme as a collection of co-ordinated projects aimed at achieving a set of common objectives. A programme is also delimited in terms of time, scope and budget.

Evaluation and Monitoring

It is the inquisitive, probing aspect of evaluation that demarcates monitoring from evaluation. Monitoring implies keeping a watchful eye on what is happening through reference to data collected specifically for this purpose. It is the routine checking of progress against plan, often contributing much useful information, and is in fact an essential part of the evaluation process. Evaluation, however, suggests that there is an analytical, interpretative process involved when reviewing the data, which in turn may require the collection of additional, more evaluation-specific data. Evaluation requires a critical and detached look at what is happening. For example, a range of data can be regularly collected on health training courses: numbers of participants; types of professional groups, such as nurses, doctors, social workers etc., who attend; number of training days per course, and so on. This will help us monitor the progress of the course and detect trends, but it won't give us much evaluative information about how well the course is received, why it attracts certain groups of participants and not others, its effectiveness and other such information. Once the monitoring data is reviewed it is likely to throw up many questions for which other, more specific evaluative data will need to be collected. (Refer to the Glossary section for definitions of monitoring and evaluation.)
So, as we have defined, evaluation implies examining, interpreting and analysing. It can take place at the same time and throughout the life of a particular prevention project, training course etc., or towards the end, or even after it has ended. To a large degree, this depends on its purpose. (More about this in Part 2.)
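To make the distinction concrete, here is a minimal sketch in Python (not part of the original Guidelines; the records and categories are invented for illustration) of the kind of routine frequency counts a monitoring system produces. The counts show progress and trends; the evaluative questions noted in the comments need additional, evaluation-specific data.

    from collections import Counter

    # Hypothetical attendance records for a training course:
    # (profession of participant, quarter in which the course was attended)
    records = [
        ("nurse", "Q1"), ("nurse", "Q1"), ("doctor", "Q1"),
        ("social worker", "Q2"), ("nurse", "Q2"), ("nurse", "Q3"),
    ]

    # Routine monitoring: simple frequency counts and trends over time.
    by_profession = Counter(profession for profession, _ in records)
    by_quarter = Counter(quarter for _, quarter in records)

    print("Participants per professional group:", dict(by_profession))
    print("Participants per quarter (trend):", dict(by_quarter))

    # Evaluation goes further: WHY does the course attract nurses but few
    # doctors? How well is it received? Is it effective? Answering these
    # requires interpretative analysis and extra, evaluation-specific data.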

What is Project/Programme
Evaluation?

For our purposes we have defined evaluation as the systematic collection and analysis of information, not necessarily routinely available, about a specific project or programme to enable its critical appraisal.

What is Global Evaluation?

This refers to the evaluation of a total package: the global aims and objectives,
and the strategy, measures and actions taken towards attaining the global policy objectives. Analysis covers:
the strategy, measures, structural support and actions used to achieve the
policy aims and objectives;
key environmental factors (the socio-political and economic context);
the end results on target populations and/or settings;
the inter-relationship between the above.

What is Meta-evaluation?

Meta-evaluation is the evaluation of others' evaluations. It provides a critical analysis of HOW WELL evaluation studies have been conducted and is therefore a form of quality control which the Federal Office of Public Health (FOPH) commissions from time to time. It should not be confused with meta-analysis.

What is Meta-analysis?

Meta-analysis is an overall analysis of the CONTENT of several studies on a similar topic/field of interest. To a large extent it is a synthesis and secondary analysis of others' work.

What's External Evaluation? and Self-Evaluation?

Not all interventions need, or indeed should, be professionally evaluated by outside specialists. Every project manager is expected to provide an internal self-evaluation of the project's development and results as part of his/her management tasks, for example to justify the use of public funding. Was the project implemented according to plan? Were the objectives attained? Were there differences between what the project set out to do compared with what clients really needed? These are just some of the questions which project managers need to address in their interim and final reports. Self-evaluation is therefore an integral part of good management, and project planners should include a budget of approximately 5% of total project costs for evaluation. Where needed, some of this money can then be used to call upon professional evaluators to help set it up (what to do and how to do it).
A checklist of the main elements to be included in project self-evaluation is provided at the end of Part 2. A proposed list of headings for the reports is also included.
Many of the questions to be addressed in self-evaluations are, of course, the same as those needed for external evaluations. So what is the difference? Essentially, the main differences between self and external evaluations are the evaluators (those who do it) and the scope covered by the evaluation study. Project staff conduct self-evaluations. But they are constrained by time, since their principal task is to develop the project. The scope of what can be reviewed during self-evaluations is therefore likely to be limited. Managers' assessment of the project's relevance, progress and effects is also likely to be influenced by their interest in the success of the project itself. In other words, their closeness to the project may make it difficult for them to provide an objective appraisal of the situation. External evaluators, on the other hand, are specifically paid to devote time and resources to this work. As externals, they should have no vested interests (e.g. financial, professional or personal) in the project being evaluated, and thus no self-interest in its outcome. In theory, therefore, they should be more objective in their analysis.
Ideally, however, external and self-evaluations should be designed to help answer both project-specific and the more global evaluation questions. The latter, of course, will vary according to the specific prevention package and the level of prevention/target audience. Thus each level of evaluation should in turn contribute towards answering some particular and global questions. In such a way the external global evaluation team should be able to synthesise and analyse the collated information from a range of project self- and external evaluations. The FOPH's Evaluation Unit should be consulted for more information.
Essentially we (the FOPH) require evaluation for four reasons:

Why evaluate?

1. to improve our strategies and actions so that they are meeting clients' needs;
2. to help us make informed decisions in future planning;
3. to clarify the options available; and
4. to account for the expenditure of public funds.
Evaluation is not new, but it is being given greater emphasis and more systematic attention as part of the drive within the Civil Service world-wide to
improve accountability and financial management in general. Historically, professional programme evaluation developed in the USA during the 1960s. Its
major impetus came from the requirement for evaluation of the Great Society
Education Legislation in 1965. Similar demands were placed on other social reforms of the time.
With such ever-increasing demands, over the past three decades US evaluation
has moved from being a peripheral activity of academics, to a recognised profession and career with its own theories, standards and code of practice. A
similar trend can be traced in many other countries. The requirement for evaluation within the Swiss Civil Service has been a much more recent, but growing
phenomenon. A national research programme (no. 27) was funded to examine,
test and improve evaluation methods and methodologies relevant to the study
of state policy, strategies and measures, and their effects.
The need to assess and evaluate actions taken by the FOPH has now become
one of its guiding management principles (see Foreword). Within the Medical
Division, a range of projects are funded which aim at preventing disease, and
improving the population's general health by promoting healthy lifestyles. As managers of public funds, we are responsible for ensuring that this money is well spent. We are therefore accountable not only to the FOPH and the government, but equally to taxpayers and, more particularly, to the specific clients at whom our activities are targeted. This means that we need to be assured
that appropriate systems are established to supply us with information which
can help us assess the development, acceptability, effectiveness and impact of
the projects we fund.
Principal evaluation
methodologies

There are many views on how evaluation should be approached, ranging from quantitative systems analysis to qualitative case studies. There is not just one "best way" to evaluate health projects: some models are more appropriate than others depending on the evaluation's purpose and the questions to be addressed. The FOPH Evaluation Unit in fact routinely commissions a range of different evaluation approaches, each being selected to suit a particular need. To a large degree, the choice of the evaluation design depends on the purpose, and therefore on the types of questions to be answered.
Generally speaking, however, there is a lack of consensus within the evaluation community about which approach yields the most useful results. In general this relates to the various philosophical assumptions held by evaluators, theoreticians and practitioners. At one extreme there is a strong belief in the need for "hard" data and statistical proof. This type of approach (sometimes referred to as the conventional approach) has its philosophical roots in logical positivism and is characterised by quantitative measurement and experimental design to test performance against objectives and a priori assumptions. Its strength is that it can supply us with statistically measurable information from large population samples on a limited number of predetermined items. We can therefore gain a broad set of findings from which, it is argued, generalisable conclusions can be drawn.
The relevance and applicability of this approach to real social settings have, however, been increasingly questioned (e.g. Guba and Lincoln, 1989; Patton, 1979, 1980, 1987; Parlett and Hamilton, 1974; etc.). Some of its major beliefs have been found untenable within the philosophy of science. Proponents of an alternative evaluation methodology emphasise the need to provide description and understanding of social phenomena in their natural setting, with no manipulation, control or elimination of situational variables. This type of approach is grounded in phenomenology and adheres to the principles of inductive strategies for developing theory from the data during the course of the research (Glaser and Strauss, 1967). In contrast to deductive strategies based on testing pre-conceived assumptions, key evaluation issues and hypotheses emerge from intensive on-site, qualitative, case-study investigation and are systematically worked out in relation to the data during the course of the research. This model is often referred to as the interpretative approach.
These two models, conventional or positivistic evaluation and interpretative evaluation, are therefore theoretical paradigms (methodologies). Whilst the conventional paradigm largely employs quantitative methods (the tools, not the methodology), the interpretative approach mainly draws on qualitative methods. However, neither necessarily relies only on quantitative or only on qualitative methods, and a combination of both can be and is used by both paradigms. The distinction between these two paradigms is steeped in their philosophical tenets: deductive versus inductive. Characteristics of these two approaches are provided in table form as Annex 3.

Summary

In this section we have set out our working definition of what we believe evaluation to be in order to meet our specific purposes. We have indicated that
whilst there is an array of strategies which can be adopted for conducting evaluations, each with its own merits and shortcomings, ultimately the appropriateness of an approach is determined by the specific questions and issues we
want addressed.

Linking project, programme and global evaluation into evaluation planning

From our definitions you can see that each type of evaluation has its own focus.
Project evaluation concentrates on the project, programme evaluation on the
programme, and global on the more global issues. With careful planning, each
can be linked one to the other. Programme evaluation can benefit from analysing what has been learned from associated project evaluations, and global evaluations from the project and programme evaluations. Evaluation planning
should therefore be co-ordinated to include some general as well as more specific evaluation questions to ensure that each level of evaluation can contribute
to another.

With careful planning, each type of


evaluation can be linked one to the
other



Part 2: Planning an Evaluation

Evaluation helps planning by foreseeing future problems (1)

(1) Murphy's Law says that if things can go wrong, they will!

When should you evaluate? Integrating evaluation into planning
What can be evaluated?
Clarifying the purpose of the evaluation
Identifying evaluation audiences
Defining the evaluation questions
What to evaluate? Scope and selection of information needs
Recognising what it can and cannot do: limitations of the evaluation
When feedback on the evaluation's progress and results is needed: timeliness of report-back
Budgeting
Mistakes to be avoided

Contents of this Part

Assessing the need for external evaluation: the criteria
Planning an external evaluation
Internal, self-evaluation: basic elements to be included

Relevant Checklists


Having looked at what evaluation is and why we do it, in the following paragraphs we have set out some points for you to consider when planning an
evaluation study. Our objective is to get timely yet comprehensive and integrated information on how a project or programme is working. To do this we
have to work out what we need, why we need it and when feedback will prove
to be of most benefit.
When to evaluate?

All projects funded by the Federal Office of Public Health (FOPH) are contractually obliged to review, assess and report on their achievements. Each year managers are asked to submit an Annual Report of their activities. This report should be considered as a self-assessment of the project's progress.
Sometimes, however, an internal or self-assessment is not enough. The FOPH may decide that an external evaluation is needed. But how do we determine whether an external evaluation is needed? What criteria should we use for making this decision? Checklist 2.1 at the end of Part 2 helps you to decide whether an external evaluation is warranted. If yes, this should then be discussed with the FOPH's evaluation specialists at the earliest opportunity, preferably during the early stages of negotiating the intervention to be evaluated.
Ideally, projects should incorporate some degree of monitoring and evaluation right from their start-up. Some of the reasons for this are obvious:
everything is fresh in the mind of both project sponsors and managers, particularly which key elements of the project will need to be carefully looked at;
the purpose and expectations of the evaluation can be defined and the appropriate funds can be budgeted;
arrangements for collecting which information from which groups can be set up early on;
questions relating to the overall prevention strategy, measures and actions can be included;
a good feedback procedure can be planned.

Evaluation: a cyclical feedback process

Since the aim of evaluation is to help improve the planning, development and management of FOPH strategies and interventions, ideally it should provide us with ongoing feedback throughout the various stages of a project's life: conception, development and implementation. In this manner, it can prove instrumental in the following ways:
to reduce the risk of projects generating unrealisable or inappropriate objectives and/or strategies;
to ensure that target populations receive relevant and effective health care programmes;
to secure sustained political and financial support.
Evaluation should therefore be regarded as a cyclical process which involves responding to a variety of clients on different aspects of the project throughout its life cycle. Monitoring and evaluation feed into all stages of this development cycle to help improve, re-orient or even stop an unsuccessful intervention. The feedback process is therefore vital. Figure 2.1 below illustrates the key features.
Figure 2.1 shows how, ideally, evaluation can help appraise the life cycle of a project. A similar procedure can be put into effect for programme evaluation (i.e. the evaluation of several projects which share matching goals). However, as the project(s) are refined and repeated on a regular basis, there will be more need for the regular monitoring of progress and achievements and less need for intensive evaluation. Periodic evaluation should, however, take place to reassess what's happening.


The role of evaluation in the policy cycle

[Figure: the policy cycle of agenda-setting, formulation and implementation, with evaluation at its centre feeding back on relevance, progress, effectiveness, efficiency and accountability.]

Fig. 2.1 The Optimal Evaluation Feedback Process

Evaluation tasks: Know your customer!
Research (analysing issues, problems, context, past)
Development (of solution options and decision criteria)
Formulating objectives and measures
Monitoring and assessing process as well as outcomes, impact
Controlling and accounting (cause and effect; cost-effectiveness)
plus:
Information dissemination & valorisation
Networking (incl. lobbying)
Negotiating
Co-ordinating

Stakeholders involved:
politicians
public/taxpayers
media
target populations
fieldworkers
special interest groups
researchers
others


What can be evaluated? Aspects and focal points

Evaluation is principally concerned with four basic aspects of the intervention/project, namely its:
relevance: Is the project doing the right thing? For the right people?
progress: Is it being put into effect as planned?
efficacy/effectiveness: Is it being done in an appropriate way? and is it having an effect?
efficiency: Is it doing it economically?

Evaluations can focus on the outputs of an intervention, for example counting products such as how many calls were received by an AIDS hotline over a specific period, how many leaflets a project produced, etc. Equally, they can concentrate on the processes, taking account of such levels as infrastructural and financial support, management and organisational structure and support, etc. Or they can look at the final results, the outcomes and impacts.
However, projects do not operate in a vacuum. Thus the context² in which a project is set should also be examined to see if and how it influences the project's development (processes) and results.
Ideally, the evaluation might take a holistic approach and be able to take account of all these different focal levels.
In the absence of a holistic evaluation model, the choice of evaluation focus should depend to a large degree on the developmental stage of the project itself. For instance:
During the initial stage of project planning and development, evaluation is often needed to help define the needs.
Who needs what, in what situation? Once these have been identified, planning can then tackle which strategy and measures are likely to be effective.
What type of intervention is needed?
Via which methods can the audience best be reached, and which might be the most cost-effective way?
During the pilot stage, evaluation looks at how intervention methods and activities are being put into effect, and with what results.
Is it reaching its audience? how many? etc.
Is it being put into effect as planned?
Is it an appropriate method for reaching its objectives?
What are the supports and constraints?
How could it be improved?
Do the aims and objectives need to be modified?
What unexpected outcomes are there?
What changes are taking place? How? Why?
An evaluation also provides a preliminary assessment of the project's immediate effects and the likelihood that it can ultimately meet its aims and objectives.
As the project becomes established, we pay more attention to such questions as:
Is it meeting its aims and objectives? and under what conditions?
How has it changed from the original plan?
What impact is the project having?

² The context of a project includes information on where it is located, the political and social climate in which it is set (i.e. is it conducive or hostile towards the project), the economic constraints, etc. An understanding of the context and its changes during the project's life, and its interrelationship with the project's development, is essential. This helps audiences interpret evaluation findings and judge whether the project's context is similar to others, or quite specific. It therefore reduces the likelihood of claiming wider applicability of findings than is otherwise justifiable.

Thus an analysis of the results, outputs and impacts grows in importance as the intervention advances.
Whilst ideally evaluation should therefore be planned to take place in parallel with the development of a project, this does not always happen. At worst, evaluations are tagged on as an afterthought and expected to make a summative appraisal, in hindsight, of what has happened. The focus of the evaluation should relate to the stage the project has reached.
As we said above, evaluations can serve different purposes according to different needs. We therefore need to consider the following:

Clarifying the purpose of the evaluation

What is the project trying to achieve?
What do we want to use the evaluation results for? e.g. to re-orient, to modify or to focus the project's aims and objectives, or to make decisions about the continuation of the project?
The starting point in planning any evaluation, be it of an innovative or a well-established project, is the review of its aims and objectives. In order to understand what a project is doing and how, it is vital that the evaluator(s) understands what the project is attempting to achieve. Because of the importance of aims and objectives, therefore, we shall devote the next few paragraphs to describing what they are. The aim(s) is a general statement about what the project is attempting to achieve. In other words, it is the project's end goal. Objectives, on the other hand, are much more specific and outline the step-by-step processes needed to achieve the overall aim. They should be realistic, and expressed in such a way that it will later be possible to tell whether they have been achieved.

Aims and Objectives

SMART objectives are those which are relevant of course, but also:
Specific,
Measurable,
Appropriate,
Realistic, and achievable within a defined
Time.
Far too often, objectives are not clearly set out, or are set out in so many different places within the text that it proves difficult to pick them out or rank them in any rational order.
By the time a project's funding contract is drawn up, its objectives should have been refined as far as circumstances permit. An evaluation at this stage can, however, sometimes bring to light ambiguities or inconsistencies in the project proposal which can then be corrected before the project is launched.
Determining the project's objectives at the outset need not necessarily prevent their being changed later. Indeed it is to be expected that the external environment will change and that, in turn, the project's objectives may need to be modified. The existence of a plan enables such changes to be noted explicitly and allows the evaluation to take them into account. It is very difficult to evaluate a project whose objectives have shifted if the changes and the reasons for them have not been documented.
So, an evaluation can help clarify, re-define or even focus the Project's aims and objectives. Equally it can describe and analyse the reasons for such change, and the consequences.
But even if the Project's aims and objectives do not change, we may want to commission an evaluation to learn more about how certain projects develop under certain conditions. In other words, we may want an evaluation to document and analyse how projects actually get put into practice. Our purpose for the evaluation in this case is to increase our understanding so that we can improve our future planning.


On the other hand, we may call upon an evaluation to help us make decisions.
For example, a project may not be doing what it set out to do. Or maybe it is
doing something different from what was expected. In such cases an evaluation could be asked to identify how and why this has happened so that we can
decide what to do. Is it relevant but too expensive? Is it operating in a hostile
climate? Is it an ineffective method of prevention for the particular setting or
target population? How could it be modified? Should it be closed down? Is the
contract due for renewal? What kind of decision will have to be made once we
have the evaluation results? Once we have clarified why we want to have an
evaluation, we can then determine what questions we need to ask.
But before looking at the evaluation questions, we should also remember that
the FOPH is not the only beneficiary of evaluations. The project planners, managers and implementers have as much interest as the FOPH in learning about
their prevention efforts. Equally, those who are directly or indirectly affected by
the work are also potential evaluation audiences.
Identifying evaluation audiences
Who will be involved and/or
affected by the project?

The full range of groups likely to be involved and/or affected should be identified in the evaluation proposal. This will help focus its purpose, scope and ultimate audience. Such groups will include those interested in using the evaluation results to make decisions as well as the users, present and potential, of
the intervention itself. Groups likely to be affected are:

the project planners


project funders and sponsors
project implementers
the targeted population(s)
those who work directly and indirectly with the target group(s)
other prevention planners
decision-takers

What is each group likely to want to learn from the evaluation findings? Whilst it is unlikely that we can consult all stakeholders when planning the evaluation, we should, at the very least, involve the key project staff. We should equally try to predict what type of information and findings will be of interest to other potential evaluation audiences. In short, we should determine in advance not only WHO will be interested in the evaluation findings, but also WHAT they are likely to want the evaluation to inform them about.
Defining the evaluation questions

We expect evaluators to provide us with some answers, so we need to take this task seriously and allocate time to reflect on the following: what do we want to know and what do we really need to know (i.e. what is a priority, e.g. for decision-making, and what is feasible within the constraints of time and available resources)? Equally we must consider what we expect to do with the results. For instance, are we looking for conclusive answers which can allow for national generalisability? Or do we want to learn more about how a specific project performs and develops over a given time, under what conditions, and which factors help and which don't? Different types of evaluations are needed to provide different types of decision-making information.
Basically, the evaluation questions we ask should be about the project's relevance, progress, effectiveness and efficiency and, at the same time, co-ordinate with the overall global evaluation questions on the prevention strategy concerned. For this, we might want to learn about what processes were used by the project to achieve its results, and how, if at all, the context, e.g. the social and political setting, influenced the project's development and results. The overall impact or outcome could also be taken into account. Such evaluation (impact/outcome evaluation) should be encouraged to look out for the unplanned as well as the planned outcomes.
For example, in an evaluation study of AIDS Prevention in Uganda, it was found that one unintended outcome of a protective awareness campaign for sex workers and their clients was an unplanned shift in client behaviour: instead of having protected sex with commercial sex workers, some turned their attention towards schoolgirls instead.³
But we should also be aware that the way a question is posed will determine the type of information we can expect. Consider, for example, the effectiveness of a project designed to train drug users as peer educators (mediators) amongst a sub-set of drug users, such as i.v. drug-using prostitutes. In this case we may want to know what motivates the mediators to remain active for more than six months. The question about motivation could be posed in a variety of ways. One could look at the personality factors of those who remain and those who don't (e.g. extrovert, shy, intelligent, domineering etc.). Equally one could consider the social conditions of these two groups (e.g. social class, employment status, family life, schooling etc.). The type of data and data sources used for each would be different and lead to a different type of analysis.
Formulating the right questions should start with a wide range of potential questions, for example about different aspects of the project (such as the aims and objectives, inputs, management, infrastructure, social context, etc.). Broad questions about these aspects can then be considered in terms of priorities, how feasible they would be to answer and indeed how expensive that might be. The Evaluation Unit can help formulate the initial evaluation questions, pointing out what might be easily answered, what information would be needed and what could and could not be expected from the answers to each.
However, we must remember that whilst we should determine the overall questions we want addressed, ultimately it is the task of the evaluator to refine these questions and identify the relevant sub-questions too. The evaluation questions s/he poses will then help determine the design of the study and the approach needed. Thus once an evaluator is appointed (see Part 3 for how this is done), the questions we initially put forward can then be refined and further discussed until mutual agreement is reached.
The scope of information to be collected should be tailored to address the agreed evaluation questions. (But the questions should not be determined by the scope of information available!) However, to some degree this will depend on:

What to evaluate? The scope of the study

what information needs to be analysed to answer the questions;
what relevant data sources exist already which can be used. For example, are there evaluation reports on similar projects which can be analysed? Does the project itself collect relevant data? Is survey data available about the same target group, about similar questions?
what else needs to be collected;
what information can and cannot be feasibly collected;
what implications this has for the analysis.
In principle, the evaluation focus should not be narrowed too much too quickly. (See more on this in Part 3.)
Thinking about when evaluation results will be needed and by whom, should
take place at the planning stage so that a feedback schedule can be written
into the evaluation contract. Obviously, as a general guideline, findings should
be communicated to intended users at times when the information can best be
used, and in an appropriate format. Once again, we need to emphasise the
importance of identifying the potential users of the evaluation in advance. This
will help determine the different types of reporting formats and approaches
which are appropriate for the different intended user audiences. Authority to
fulfil this responsibility needs to be determined and written into the contractual agreement with the evaluators at the outset of the evaluation.

When do we need feedback?

³ Moodie R, Katahoire A, Kaharuya F, et al. An Evaluation Study of the Uganda National AIDS Control Program's (NACP) Information, Education and Communication Activities. NACP/WHO, Entebbe, December 1991.

Budgeting an evaluation

In general, between 10% and 15% of the total project budget should be set aside for financing an external evaluation study (approx. 5% for self-evaluations). In exceptional cases, and in consultation with the Evaluation Unit, the budget could even be set slightly higher. For example, an intensive, in-depth evaluation study of a low-budget project may be recommended to obtain the required answers to questions. In this case, more time input from the evaluator will be required and the evaluation will therefore cost more. Budget items should include staff costs, travel, equipment needs, and evaluation output costs (e.g. report translation and production charges).
The budget should reflect realistic costings of the proposed scope, methods and procedure.
A separate budget for projected valorisation activities should also be itemised in the evaluation proposal budget. It should cover charges for the evaluator's time and transportation costs only. This is needed because, once the final report has been accepted, and depending on the findings, the FOPH may organise targeted activities to disseminate and discuss the results with a range of stakeholders.
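As a purely illustrative aid (not part of the original Guidelines, and using invented figures), the short Python sketch below applies the budgeting rule of thumb given above: roughly 10-15% of total project costs for an external evaluation and about 5% for self-evaluation.

    def evaluation_budget(total_project_cost_chf,
                          external_share=0.10,   # 0.10 to 0.15 per the guideline
                          self_eval_share=0.05):
        """Indicative evaluation budget lines for a given total project cost."""
        return {
            "external evaluation": total_project_cost_chf * external_share,
            "self-evaluation": total_project_cost_chf * self_eval_share,
        }

    # Example: a hypothetical project costing sFr. 200 000
    for item, amount in evaluation_budget(200_000).items():
        print(f"{item}: sFr. {amount:,.0f}")

    # Staff costs, travel, equipment and output costs (e.g. report translation
    # and production) would then be itemised within the external evaluation
    # amount; projected valorisation activities are budgeted as a separate item
    # in the evaluation proposal, as described above.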

Recognising what the evaluation


can and cannot do

The evaluation will certainly have to take place within the limitations of the budget, time and resources available. Whilst it would be interesting to have all our questions answered, it is likely that such factors will constrain the scope and depth of the study. That is why we have to prioritise and be aware of what can and will not be accomplished.
But even when the study has been clearly delineated at the time of contract, the evaluator may find that what seemed feasible in the beginning does not prove possible in reality. For example, it may well be that the evaluator cannot interview all the different levels of those involved in a project as originally planned (people are absent, have left the project, are too busy etc.). Alternatives have to be considered, and if no feasible options can be exploited, the limitations to the analysis should be explained and discussed in reports (verbal or otherwise) to the FOPH.

Mistakes to be avoided


Only thinking about commissioning an evaluation for the first time when the project is nearing its end. Evaluations are more helpful when they are planned concurrently with the actual project.
Not discussing the evaluation purpose with the project staff.
Defining and prioritising the evaluation questions without consulting key partner clients.
Neglecting to identify, at the planning stage, who is directly and indirectly involved in and/or affected by the evaluation results.
Expecting access to data and project co-operation without negotiating what can feasibly be achieved and how. Data gathering should be achieved without over-burdening the project staff and with minimal disruption to project activities.
Neglecting to discuss the purpose of the evaluation and negotiate access to data with ALL relevant persons/groups. Planning and discussing the evaluation and data needs with the project manager does not mean that others involved will be aware of what is happening!
Expecting too much from the evaluation with respect to the time and budget available.
Expecting the evaluation to make decisions. It is not the task of the evaluator to make decisions based on the evaluation findings: this is the contractor's responsibility, i.e. the FOPH's.

Checklist 2.1: Assessing the Need for External Evaluation - The Criteria

All Federal Office of Public Health funded projects are expected to produce an annual report on their activities. This report should describe and discuss the project's development, progress and achievements: it therefore serves as a self-evaluation of the project.
The following questions should help determine whether a project needs an independent evaluation, that is, by evaluation experts who are not employed by the project. The possible need should then be discussed with the Federal Office of Public Health Evaluation Unit before a final decision is reached.

Does the project offer new measures, and/or a new way of dealing with an FOPH priority health problem?
Are wider measures likely to be adopted as a result of this project?
Does it have potential national importance?
Does the project share characteristics or have objectives which are similar to other past/present projects?
Is the project politically sensitive?
Is the project being reviewed due to, e.g.:
staff problems such as conflicts, absences etc.;
falling behind schedule;
out-of-date aims and objectives;
going off target, i.e. not sticking to agreed aims and objectives?
Does the project require an external evaluation for reasons such as:
project expansion;
threatened termination;
legitimisation (e.g. political/economic);
sponsorship (e.g. continued sponsorship at local/cantonal level)?
Is the project's budget more than sFr. 100 000?
Is the project relevant to comparative international studies?
Is the project relevant to global evaluation priorities (e.g. in terms of target group/situation, strategic method)?

These Checklists are an integral part of the FOPH's Guidelines for Health Programme and Project Evaluation Planning.

Swiss Federal Office


of Public Health

For further information please contact:


Swiss Federal Office of Public Health Evaluation Unit
Sägestrasse 65
CH-3098 Köniz

Contact Person:
Marlène Läubli Loud Tel. +41 (0)31 323 87 61,
Fax +41 (0)31 323 88 05,
E-Mail: Marlene.Laeubli@BAG.admin.ch

Checklist 2.2: Planning Evaluation of a Health Project

What should be evaluated, what questions will be answered and how it will be done are determined by the following: (1) the nature of the project itself; (2) its stage of development; and (3) why the evaluation is being commissioned (the purpose of the evaluation).
The following checklist is intended to help the Federal Office of Public Health and its project partners collate the information needed to formulate an Evaluation Mandate.

The Project to be Evaluated
Is it a project or a programme (i.e. is it a specific intervention or a set of interventions/activities with common characteristics and objectives)?
At what stage of development is it (i.e. how long has it been running)?
Is the target group(s)/target setting clearly defined?
Are the overall aims and objectives clearly stated? i.e. is it clear what the project is trying to achieve?
Are the operational objectives SMART? i.e. specific? measurable? appropriate? realistic? and delineated in time?
Has the project identified a set of criteria for assessing its progress, results and medium- to longer-term effects (process, results and impact indicators)?
Has the project established routine monitoring and assessment procedures for its self-evaluation?
Has the purpose of an external evaluation been discussed with the project staff? at which levels?
Have project staff's support and co-operation for the evaluation been secured?
The Evaluation
The Purpose
What is the main purpose of the evaluation? e.g.:
to help clarify/redefine the project's aims and objectives;
to re-orient the project;
to analyse whether or not the intervention works, and under what conditions;
to make decisions about the project's continuation;
to make a retrospective judgement of achievements and performance;
to measure what impact has been achieved.
Time Framework
Should an external evaluation therefore be designed to take place (and in what time framework) as:
a) Formative Evaluation (takes place concurrently with the project's development and/or implementation to highlight how/where improvements might be made)
b) Summative Evaluation (looks back over work achieved; judgmental rather than developmental)?

p. t. o
These Checklists are an integral part of the FOPH's
Guidelines for Health Programme and
Project Evaluation Planning.

Swiss Federal Office


of Public Health

For further information please contact:


Swiss Federal Office of Public Health Evaluation Unit
Sägestrasse 65
CH-3098 Köniz

Contact Person:
Marlène Läubli Loud Tel. +41 (0)31 323 87 61,
Fax +41 (0)31 323 88 05,
E-Mail: Marlene.Laeubli@BAG.admin.ch

The Evaluation Focus and Questions

Have reports on similar projects/evaluations been reviewed to help identify what needs to be studied?
Have the main evaluation questions been defined (e.g. in relation to project progress, relevance, effectiveness, and efficiency)?
Have the main evaluation questions been checked with the Evaluation Unit, e.g. to include global evaluation questions too?

Timely Feedback

Have we defined when we (FOPH and project managers) need to know about evaluation findings? e.g.
in relation to project changes under planning;
in relation to the project's future planning;
to help modify the project as and where necessary (and earlier rather than later);
in relation to decisions being taken in the operational context which may affect the project (changes in budget, local support, etc.);
in relation to the development of other projects sharing similar characteristics and/or objectives.
Have we identified when other potential evaluation audiences are likely to need feedback on the results? If yes, what information? When? Why?

Support Needs for Conducting Evaluation
Has the evaluation been discussed with the project managers?
Have all other key partners been consulted, e.g. about the purpose of the evaluation and the questions to be addressed?
Did project staff help formulate the evaluation questions?
Is the project collecting data for self-evaluation?
Has the project agreed that the evaluator can have access to these data?
If none, what data could be collected internally (i.e. by the project) which could then be used by the external evaluation?
Evaluation Audiences and Timely Feedback
(N.B. the best time to provide evaluation feedback is when the information will prove most useful to the project)
Have we identified which groups are likely to be directly and indirectly interested in the evaluation results?
Definite Evaluation Audiences
project funders, e.g. the FOPH
project planners
project managers/implementers
the evaluation team of the relevant global evaluation programme
Potential Evaluation Audiences
the population(s) targeted by the project
those who work directly and indirectly with the target group(s)
prevention planners, implementers, funders of similar projects
relevant policy makers (e.g. at federal, cantonal, communal level)
Have we considered if there might be any reason to restrict feedback on the evaluation results to certain audiences only, e.g. political sensitivity etc.? If yes, this needs to be specified in our Evaluation Mandate (see Checklist 3.1).


Checklist 2.3: Self-Evaluation - Basic Elements to be Included

All Federal Office of Public Health funded projects are expected to produce an annual report on their activities. This report should describe and discuss the project's development, progress and achievements: it serves as a self-evaluation of the project. It is important, therefore, that right from the start the project's development should be systematically documented. A log book should be kept of what actions, decisions and changes were made, when, by whom and why. Similarly, records of what services or goods were produced, how many, for whom and, for example, at what unit cost should also be systematically kept and summarised in the self-evaluation.
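By way of illustration only (this sketch is not part of the Checklist, and all field names and values are invented), the log book and output records described above can be kept as simple structured entries, for example in CSV files that are easy to summarise in the annual report:

    import csv
    import os

    # One hypothetical log book entry: action/decision, when, by whom, and why.
    log_entry = {
        "date": "1996-03-15",
        "action_or_decision": "Revised course brochure",
        "made_by": "Project manager",
        "reason": "Feedback from pilot participants",
    }

    # One hypothetical record of goods/services produced, for whom, at what
    # unit cost (a similar CSV file would be kept for these output records).
    output_record = {
        "item": "Information leaflet",
        "number_produced": 500,
        "target_group": "General practitioners",
        "unit_cost_chf": 1.20,
    }

    # Appending entries as they occur keeps the record systematic; the header
    # row is written only when the file is first created.
    path = "project_logbook.csv"
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=log_entry.keys())
        if write_header:
            writer.writeheader()
        writer.writerow(log_entry)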
A self-evaluation budget is included in every FOPH
project contract. This money is provided to support
evaluation activities. It can be used, for example, to
purchase evaluation consultancy to help establish
recording methods and procedures.
The following should be used as a checklist of
items which should be systematically assessed
and reported in the self-evaluation (annual report):

1. Description of the project design and development


What the Project planned to do.
what did it set out to do? (aims and objectives)
for whom? for which situation?
how was it going to do this?
how did it plan its structure, organisation and
management?
What the Project actually did
This should include tables and/or graphs to show for
example:
a chronological list of major actions, decisions and
changes;
a table of the contacts made, e.g. with WHOM (type of group/institution etc.), nature of contact, purpose;
a description of which goods/services were produced, for whom etc.
simple statistics/frequency counts of e.g.:

the number of goods/services produced;

how many were used by which type of


groups, over what period of time;
for what purpose etc.
N.B. examples of project documents e.g. on training,
brochures etc., should be annexed to the report.
What were the major problems met, e.g. unforeseen events, and how were these resolved?
How was it actually structured, organised, managed,
and with which organisations/groups did it work?
(provide organigram of management structure and
relationships between project and key external
partners)
What human and financial resources were actually
made available?
2. Discussion and Lessons Learned

Did the project achieve what it tried to do?


How?
Why? or Why not? and
What helped and what didn't?
What were the strengths of the project?
What were its weaknesses?
Summary of main lessons learned

p. t. o
These Checklists are an integral part of the FOPH's
Guidelines for Health Programme and
Project Evaluation Planning.

Swiss Federal Office


of Public Health

For further information please contact:


Swiss Federal Office of Public Health Evaluation Unit
Sägestrasse 65
CH-3098 Köniz

Contact Person:
Marlène Läubli Loud Tel. +41 (0)31 323 87 61,
Fax +41 (0)31 323 88 05,
E-Mail: Marlene.Laeubli@BAG.admin.ch

3. Recommendations
WHAT recommendations/advice would you give
WHICH groups/people about
the future of your project?
setting up a similar project within the same
canton/setting?
setting up a similar project within a different
canton/setting?
In particular, what would you recommend to the
Federal Office of Public Health about supporting
a similar project?


Checklist 2.4: Training and Continuing Education Courses – Self-Evaluation: Basic Elements to be Included

All managers of Federal Office of Public Health funded Training and Continuing Education projects are expected to produce an annual report on their activities. This report should describe and discuss the project's development, progress, and achievements: it serves as a self-evaluation of the overall Project.
It is important therefore that, right from the start, the project's development should be systematically documented. A log book should be kept of what actions, decisions and changes were made, when, by whom and why. Records on course provision and attendance should also be systematically maintained. (Refer to the Guidelines for Developing or Assessing Medico-Social Training/Education Projects published by the Federal Office of Public Health, included here as Annex No. 5.)
A self-evaluation budget is included in every FOPH
project contract. This money is provided to support
evaluation activities. It can be used, for example, to
purchase evaluation consultancy to help establish
recording methods and procedures.
The following should be used as a checklist of
items which should be systematically assessed
and reported in the self-evaluation (annual report)
on Training and Continuing Education Projects.

1. Description of the Project's design and


development
i) What did the Training/Continuing Education Project
look like during Planning?
what did it set out to do? (aims and objectives)
for whom (which group of health carers)?
for which situation?
how was it going to do this?
how long would it take to become self-funded?
what sources of funds were to be used over short,
medium, and longer term?
how did it plan its structure, organisation and
management?
ii) What did the Project look like in action?
This part should COMPARE PLANS with what actually
took place. It should include:
A description of the major problems met by the
project e.g. unforeseen events, and how these were
resolved?
A description of how the project was actually
structured, organised, and managed. With which
organisations/groups did it work? (provide organigram
of management structure and relationships between
project and key external partners);
Details of the human and financial resources actually
made available, and from which sources;
A description of which courses took place, for whom
etc.
A description of which courses DID NOT take place,
and why
A description of how the courses were evaluated
(e.g. see section 5 in the Guidelines for the
Development or Review of Training/Continuing
Education Projects published by the Federal Office of
Public Health included here as Annex No. 5).
N.B. Examples of course documents, e.g. on training, brochures etc., should be annexed to the report.
A description and analysis of course costs to show the cost per course day and per participant (see the illustrative example below), e.g.:
number of course days by number of participants;
cost ratio of inputs to the number of course days per year and to the number of participants.
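A minimal worked illustration of such a cost ratio, using purely hypothetical figures (the items actually included depend on the project's own accounts), might read:

\[
\text{cost per participant-day} \;=\; \frac{\text{total course inputs}}{\text{course days} \times \text{participants}}
\;=\; \frac{\text{frs. }12\,000}{4 \times 20} \;=\; \text{frs. }150
\]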


This section (describing the project in action)


should be supplemented by tables and/or graphs to
show for example:
a chronological list of major actions, decisions and
changes;
a table of the contacts made e.g. with WHOM
(type of group/institutions/ etc.) NATURE of contact,
PURPOSE;
simple statistics/frequency counts of e.g.:
the number of courses planned compared with
the number of courses held;
the number who registered per course, compared
with the numbers who actually attended.
aggregated characteristics of participants per
course e.g.
professional occupations (e.g. by current
employment)
attendance on previous courses/on same course at
this institute (and/or at any other institute)
cantons of employment
employer institutes
course fees paid by whom e.g. self-funded or
paid by employer

2. Discussion and Lessons Learned


Did this Training/Continuing Education Project achieve
what it tried to do?
How?
Why? or Why not? and
What helped and what didn't?
What were the strengths of the project?
What were its weaknesses?
Summary of main lessons learned
3. Recommendations
WHAT recommendations/advice would you give
WHICH groups/people about
the future of this project?
setting up a similar project within the same
canton/setting?
setting up a similar project within a different
canton/setting?
In particular, what would you recommend to the
Federal Office of Public Health about supporting
a similar project in the future?


Part 3: Commissioning and Monitoring Evaluation Contracts

Continual contact with the external evaluators helps you understand what's happening in good time

Contents of this Part
Drawing up an evaluation mandate
Identifying potential evaluators
Reviewing the evaluation proposal
Setting up the Contractual Agreement
Refining the evaluation en route
Keeping in touch during the evaluation process
Mistakes to be avoided

Relevant Checklists
Drafting the FOPH evaluation mandate – the FOPH's checklist
Drafting the evaluation proposal – the evaluator's checklist
Assessing the evaluation proposal
Assessing the evaluator
Monitoring the evaluation contract

This section provides our partners with an overview of the principles used by
the Federal Office of Public Health (FOPH) when commissioning external
evaluations.
The Evaluation Unit provides FOPH staff with guidance on the step-by-step procedures used when commissioning evaluation studies, and is ultimately responsible for the quality control of all external evaluation contracts. Its staff help FOPH partners determine their evaluation needs and purposes, an appropriate evaluation approach and a useful feedback schedule. It also advises on the choice of external evaluators.
The following paragraphs briefly describe the procedures which
should be adopted for setting up the contractual agreement.
Step 1 The Evaluation Mandate

Once the need for an evaluation has been agreed (see Criteria Checklist for
external evaluations, Part 2) the basic requirements for the evaluation study are
set out in writing. In some cases (e.g. evaluation budget = less than frs.
100 000), this may only need to be a brief outline of the purpose, the general
evaluation questions, potential evaluation audiences, time scale and budget.
For more substantial studies, it should be more detailed (see Checklist 3.1 for what information should be provided). The evaluation mandate is then discussed with the key partners involved (e.g. FOPH staff such as specialists from the relevant prevention section and the Evaluation Unit, and the project managers).

Step 2 Identifying potential evaluators

Once agreement is reached, a Call for Proposals for the evaluation may be
circulated to the evaluation community. The Call for Proposals package must
include not only the evaluation mandate, but also:
the major documents about the project;
an outline of the FOPH's relevant prevention strategy plus a short statement
on where the particular project to be evaluated is in relation to this;
copies of previous evaluation reports, research reports and/or theoretical
papers relevant to the proposed evaluation study (if available) or at least
details of where they can be obtained;
and the Evaluation Unit's Drafting a Proposal checklist (Checklist 3.2 of these Guidelines).
Evaluators should take care to refer to these documents in drawing up their initial evaluation proposal.
Potential evaluators will then be invited to submit their proposals. The evaluation proposal is a key component of the evaluation contract as it sets out the
main elements of what will be done, by when, and how this will be achieved.
It is therefore annexed to the contract and forms an integral part of the agreement.

Step 3 Reviewing evaluation proposals

Evaluation proposals will be assessed by the Evaluation Unit according to a set


of criteria which include the feasibility, ethics, and relevance of the described
procedures and approach in relation to the general evaluation purpose and
questions. At the discretion of the Evaluation Unit, other specialists/expert
committees may also be invited to give a critical review of the proposals.

Step 4 Setting up the evaluation contract

The Evaluation Unit draws up the necessary contractual agreement. The


accepted evaluation proposal sets out the purpose, methods, procedure, time
scale, budget, and plans for the feedback and dissemination of findings to
potential audiences. This is then annexed to the contract. Items such as ownership of data, obligations of contractor and contracted, payment procedures etc.
are detailed in the actual contract.
On average it takes 6–8 weeks to have a contract processed.

Step 5 Refining the evaluation en route


A good evaluation design is the result of direct negotiation between the key
stakeholders and the evaluator. It is based on a sound knowledge of the project, its context and the differing stakeholders' concerns. Once commissioned,
the evaluators should therefore spend an initial period getting to know more

about the project, the context in which it is set and the key partners. A wider
range of project documents should be reviewed, and discussions with all key
client groups should be held in order to help focus and prioritise the questions
to be addressed. This is likely to lead to the production of a refined evaluation
proposal and workplan. What can and cannot be addressed should be clearly
described to indicate the limitations of the study.
Any refinement to the original proposal needs to be submitted in writing. Once
agreed by all parties concerned, it will be used as an integral part of the contract and annexed to the original proposal.
Once the evaluation is underway, it is a serious mistake to believe that nothing
needs to be done until the interim or final report comes in. It is possible, and
indeed highly likely in the case of experimental and pilot projects, that the original evaluation plan, i.e. its purpose and procedures, will change during the
course of the evaluation. This may be due to, for example, unanticipated issues
which come to light only once the evaluation is underway, to significant
changes in the project's setting, or to a conflict of interests between key partners.

Keeping in touch during the evaluation

The contractors and project managers need to be kept informed! Any modification to, or reorientation of the original evaluation plan should be mutually
agreed, preferably in writing, between the contractual partners. It is the FOPH's
responsibility to ensure that potential changes are fully discussed with the relevant partners, i.e. the relevant FOPH sections, such as the Prevention Section
and Evaluation Unit, and the project under study. Significant changes will
require a formal amendment to the contractual agreement, requiring the same
signatories as the original contract.
But we don't only want to hear about changes to the evaluation design: important findings, issues or concerns which come to light through the evaluation
process should also be fed back to us. Remember, we expect evaluations to
help us improve our prevention efforts: this means that we should be kept up-to-date on findings, even interim findings, as and when they become
available.
The project team is busy, the evaluator is busy and we, the FOPH, are also
busy. But keeping in touch is important! So, be sure that a regular contact time
schedule is set up between the various partners right from the beginning and
keep to it!
Mistakes to be avoided
Not identifying and consulting the key partners during the initial evaluation planning. (FOPH)
Accepting the evaluation proposal without having referred to key partners for
comment. (FOPH)
Not identifying the range of potential evaluation audiences during evaluation
planning. (FOPH and external project partners)
Appointing an evaluator without prior assurance of his/her integrity and competence through e.g. reference to past evaluations and past contractors.
(FOPH)
Appointing an evaluator without having first established mutual trust.
(FOPH)
Failing to ensure that the work is carried out by the competent main investigator, i.e. allowing an inexperienced assistant to be assigned the work rather than the person appointed.
(FOPH)
Assigning evaluations to those who have little knowledge and experience of
the specific study setting (e.g. prisons, school system, government administrations etc.). (FOPH)
Not establishing preliminary agreement between FOPH, evaluators and key project staff for feedback arrangements: to whom, how and when.
Assuming that key partners will participate in the evaluation (and possibly
also provide evaluators with project collected data) without having first
secured their agreement. (FOPH and evaluators)
Assuming that the accepted evaluation proposal covers all aspects of the
contractual agreement. The draft contract and the annexed evaluation proposal together should be carefully checked by both parties (the FOPH


Evaluation Unit and the evaluators themselves) to ensure that all details are
included, understood and agreed.
Neglecting to provide for periodic review and amendment of the contract.
(FOPH and evaluators)
Failing to include time within contract period for checking draft copy of final
evaluation reports. (FOPH and evaluators)
Neglecting to itemise potential valorisation plans and budget within the
contract. (evaluators and FOPH)
Failing to secure agreement to changes to the original evaluation purpose
and procedure from key partners. (FOPH)
Planning a data collection process which is over-demanding and disruptive to
the project. (evaluators)
Failing to take into account the ethics of proposed data collection. (all key
project partners and evaluators)
Neglecting to incorporate within work plan the time needed for discussing
draft reports with FOPH /projects before printing the final product. (FOPH
and evaluators)
Guaranteeing anonymity and/or confidentiality when this is not possible.
(evaluators)
Not checking from time to time the FOPH's and Project's information requirements. (evaluators)

Checklist 3.1: Drafting the Evaluation Mandate – An FOPH Checklist

The FOPH's Evaluation Mandate should describe what we need, and be set out according to the sections described in this checklist (see Part 3 of the FOPH Guidelines for Project and Programme Evaluation Planning).
Use this list to check what the evaluation mandate
does and does not yet cover.

Section 1:
Introduction and Background
A brief description of the project, including its aims
and objectives, budget, time period, and relationship
to the FOPH's global prevention strategy
Legal basis for commissioning the study
Section 2:
The Evaluation Mandate
The purpose of the evaluation, and intended use
of results according to which types of evaluation
audience groups
The initial evaluation questions as defined by the
FOPH and relevant project manager
(both the project-specific questions and those of interest to the
relevant global evaluation study)
Major areas and levels of interest and concern for the
evaluation focus
Data currently being collected e.g. by project and what
other data and data sources are available
List of evaluation outputs expected
Section 3:
Time Plan
Time period for the evaluation study
Timetable of when critical decisions will be taken
about project development, or timing of other factors
which could affect the project (vital to help evaluator
organise study and project feedback schedule).
Section 4:
Diffusion and Valorisation of Evaluation Findings
List of intended evaluation audiences, i.e. key
audience groups to be informed of the evaluation
results grouped according to definite and potential
groups
Possible formats for report-back to which type of
groups


Section 5:
Organisation Chart of Evaluation management and
responsibilities
Section 6:
Budget
Budget guidelines including a separate and specific
budget for valorisation
Annexes:
All relevant documents should be attached to the
mandate to help the evaluator prepare his/her proposal
(e.g. reports on similar projects, evaluation studies,
etc.). If not available, at least reference to what these
are and where they may be found.
Also include Evaluation Guidelines Checklist 3.2 'Drafting an Evaluation Proposal: The Evaluator's Checklist' and Checklist 3.3 'Assessing the Evaluation Proposal'.


Checklist 3.2: Drafting an Evaluation Proposal: The Evaluator's Checklist

When responding to the FOPH's Call for Offers, evaluators should use this checklist to determine what to include in their proposal, and, wherever possible, how
it should be set out. This list is based on information
set out in the FOPH Guidelines for Project and
Programme Evaluation Planning.
Use this list to check what your proposal does and
does not yet cover.

i) Cover Page with title of proposed evaluation, and evaluator's name, address etc. and date of submission.
ii) Summary of Main Points of Proposed Study.
Section 1:
Introduction and Background
A brief description of the project, including the aims
and objectives and its relationship to the FOPH's
global prevention strategy.
Why the evaluation is being commissioned and
how it is intended to be used (e.g. decision, project
expansion, etc.).
Intended evaluation audiences, i.e. key audience
groups to be informed of the evaluation results, grouped according to definite and potential audiences.
Major areas and levels of interest and concern for the
evaluation focus.
Section 2:
Proposed Evaluation Design
(Methodology, Approach and Methods)
The preliminary evaluation questions to be answered
as set out in the FOPH mandate.
What is already known about previous work relevant
to the project (status quo in research/evaluation of this
area – the theoretical reference framework for the proposal).
The proposer should draw on her/his own experience
and/ or knowledge of the field.
Evaluator's proposed evaluation questions (these should include the FOPH's and Project's questions but may introduce others and/or modifications to the original FOPH questions).
Where relevant, it may note that additional questions and unanticipated information could come to light during the course of the study. If so, it should propose that a refined evaluation design will be produced, and indicate by when this should be expected.


The methodological framework, and


proposed data collection methods,
proposed samples and sampling techniques,
proposed methods of analysis, and
support needed to conduct the proposed study e.g.
what type of clearance will be needed for data
access, from whom and to collect what type of data.
What alternatives would be possible if this could not
be guaranteed.
Brief summary of the evaluation outputs/products we
can expect (e.g. types of reports (oral/written;
interim/final/abridged), guidelines, posters, publications, etc.).
Description of the proposed evaluation team, and
who will do what (e.g. supervision, main investigator,
data collector, secretarial support). Details of what
infrastructural support is available, and what may be
needed to support the evaluation proposed (e.g. hardware/software, recording equipment, etc.).
Section 3:
Workplan and Time Schedule
Table of proposed workplan and time scale for completion of the study (to include overall length of study, start and finish dates according to the date the contract is signed, proposed reporting times, and planned time for diffusion and valorisation of findings).
In cases where, in keeping with the methodology, the
proposed evaluation design may need to be refined
after a certain period of fieldwork, the time period
needed should be specified.
Please note that a period of one month is needed for the
FOPH/Project to review the DRAFT of the final report. The
FOPH/Project's comments have to be taken into account
in producing the final report.
Section 4:
Valorisation Plans
Proposals for the feedback process – what kind, for whom, and when:
during the evaluation
at study completion
Section 5:
Budget
i) For conducting the evaluation
Personnel Costs: no. of people in evaluation team,
at what % of time, and at what salary grade.
Operation Costs: capital equipment needs
(e.g. soft/hardware, recorders etc.);
travel and subsidies, etc.
Production Costs: e.g. for production of
questionnaires, reports, slides etc., and
translation costs
ii) For dissemination and valorisation of results: a separate and specific budget for the evaluator's time needs for such activities as writing non-academic articles and contributing to oral presentations.

N.B. A curriculum vitae and statement highlighting the evaluator's (and other proposed team members') qualifications, experience, and interest relevant to the project should be annexed to the proposal. An example of his/her
work should be included wherever possible, as well as the
name, address and telephone/fax details of previous
clients for reference.

Checklist 3.3: Assessing the Evaluation Proposal

An FOPH review panel will assess all evaluation proposals submitted in response to a Call for Evaluation
Proposals. The standard criteria for scientific rigour
as well as the relevance of the design will be used as
the basis of the assessment.

However, the panel will also take into account the


points set out in this checklist.


Is the proposal set out according to FOPH guidelines?


Are all items covered? If not, is this explained?
Is the proposal clear, easy to read?
Does it show a sound understanding of how the
project complements the relevant FOPH prevention
strategy?
Does it demonstrate a sound knowledge of relevant
prevention projects and/or evaluation studies?
Are the questions to be addressed clearly stated,
relevant and appropriate? i.e. are our needs well
understood?
Does the evaluator offer additional questions/new
insights into ways of assessing the project?
Has an initial prioritisation of evaluation questions
been made?
Is the proposed methodology appropriate to the evaluation questions?
Is it likely that the evaluation questions can be
adequately answered by using the proposed methodology?
Is the proposed evaluation approach likely to succeed?
(i.e. not too disruptive to project activities, not too
demanding of project staff, likely to engage cooperation etc.).
Are the described scope, approach and expected
outcomes realistic and achievable within the budget
and time constraints?
Is there some discussion about possible valorisation
plans?
Does the work plan identify when the initial proposal
and evaluation strategy, questions etc. will be
reviewed and refined?
Does the work plan include time for valorisation?
Does the work plan include extra time (approximately
2 months) to discuss draft reports with FOPH/others
and make amendments where needed)?
Do the initial proposals for feedback meet our needs?
(i.e. in terms of timeliness, and format/type of feedback relevant to different evaluation audiences' needs).
Is the proposed budget fair, realistic and comprehensive? (e.g. does it cover all necessary costs, such as report translation and production, materials, equipment, personnel, travel and subsistence costs relevant to the proposed work? Does it also include money for valorisation, e.g. communicating the results to the range of evaluation audiences: publications, workshops, seminars etc.? Does it do this at a fair price?)


Checklist 3.4: Assessing the Evaluator

An FOPH review panel will assess all evaluation proposals submitted in response to a Call for Evaluation
Proposals. The standard criteria for scientific rigour
as well as the relevance of the design will be used as
the basis of the assessment.
However, the panel will also take into account the
points set out in this checklist.


Has the evaluator(s) done similar work before?


Have we referred to examples of his/her past
evaluation work?
Have we referred to former clients for their opinion?
Is the evaluator familiar with relevant work done by
others? (e.g. to draw on major lessons learned, areas
of priority concern, etc.).
Does the evaluator have the relevant competence, and
experience? If not, is there an evaluation team? Do
members together offer all the necessary skills? e.g.
technical competence
relevant experience
languages
public relations and communications
Has the evaluator sufficient infrastructural support to
do the evaluation? (equipment, secretarial assistance
etc.).
Has the evaluator got sufficient time to complete
this study?
Is the evaluator (all team members) totally independent from the project? (i.e. has no personal/material interest in its success, no relationship with key project staff, etc.).
Is the evaluator likely to engage the co-operation of
project staff?


Checklist 3.5: Monitoring the Evaluation Contract

Even when an evaluation contract has been carefully


negotiated and satisfactorily agreed, it is still necessary to monitor its implementation attentively: (a) because there may be unanticipated events or changes that will affect the evaluation – as a consequence the original evaluation design may need to be re-negotiated and modified; (b) because either the Office or the evaluator may become dissatisfied with contract implementation – if this starts to happen it is important to identify and redress the problem before any breach of contract occurs, which may ultimately lead to contract termination.
This checklist prompts you to think about which
elements you will need to look at when monitoring the implementation of your evaluation contract. These concern the procedures and expectations set out in the contract and evaluation proposal. In addition, you will have to consider the
quality of the work and performance. Checklists
3.3 and 3.4, and 4.1 and 4.2 provide help on what
should be considered for making such a judgement.


Have the agreed procedures for data collection been


put into practice? (including collecting data from all
target groups/activities identified in the evaluation
design)
Has the evaluator met the agreed deadlines for each
stage of the work?
Has the Office kept to the agreed timetable in, for
example, providing information, attending meetings
etc.?
Has the evaluator provided feedback on important
concerns, and/or findings AS AND WHEN they
become available?
Did the Office keep the evaluators informed of
any changes likely to affect the Project and/or
the evaluation?
Has the evaluator delivered the contracted evaluation
products/outputs?
Were these products/outputs delivered on time, that is
according to the agreed deadlines?
If NO to any of the above, was the Office kept
informed of any problems with, for example data
collection procedures, meeting agreed deadlines,
producing evaluation products/outputs?
Were new agreements satisfactorily negotiated?


Part 4: Assessing the Technical Evaluation Report

The evaluation report should communicate the key findings in a clear and simple way

Contents of this Part
What it should cover
How it should be presented
Reviewing the results and recommendations

Relevant Checklists
What the report should include
Assessing the technical evaluation report


The checklists provided with this section set out the principal requirements of
what should be covered in a technical evaluation report, and the criteria for its
overall appraisal.
In the main, we are interested in the following:
Does the report address the questions we asked, (and if not, was there
some good reason for not doing so)?
Does it present the information clearly?
Are the findings and recommendations useful to our future planning?
What the report should cover

The technical evaluation report needs to provide a clear description of the


project itself and the context in which it operates. This is important for our
understanding of what the project is trying to achieve and whether or not the
climate is suitably conducive towards this end. It is important therefore that
the evaluation should collect examples of the projects documents (such as
brochures produced, copy of the teaching/training programme, questionnaires
produced by the project, etc.). Such documentation about the project and its
work should be appended to the evaluation report.
Having set the scene, the report then needs to provide us with an adequate
description of how the evaluation went about its task: what it did, how it did it,
what problems were met and how these were addressed. A clear description
of what was taken into account and what was not helps identify the potential
limitations of the analysis. In addition, we need sufficient information on the
reliability and validity of the data collected. An explanation of the methods used
to analyse the different data sets is equally important for the same reasons.
Examples of the different data collection tools and results should also be included to show clearly how the work was executed. For example, in the case of
qualitative analysis, an extract of verbatim responses to a particular question (presented in such a way as to safeguard anonymity wherever this has been agreed) should be given to help us understand the full range of opinions and understanding of the particular issue.
The conclusions and recommendations should be defended by reference to the
data obtained.

Criteria for assessing the report

The scientific rigour of the report is obviously an important criterion for judging
the report; its relevance and utility are equally important. The evaluation should
report information clearly enough for it to be easily understood. The discussion
should be comprehensive, but direct and focused on the evaluation questions
and issues. A good balance between text and graphic representations should
be provided. However, graphics should not be included just for their own sake:
they should be used to add value and understanding to the descriptive text.
Messages about the weaknesses and strengths of the project and its implementation have to be easily identifiable, succinct and well defended. We need to understand:

what worked well,


what did not,
what were helpful factors,
which were inhibiting factors,
how these factors ultimately influenced and shaped the project's development.

The lessons learned through the experience should be identified and discussed
to help us understand how to improve our future strategies and measures.


Mistakes to be avoided
Not addressing what the key partners want to know about the project.
Concentrating on theoretical rather than practical issues in the report.
Over use of technical jargon rather than clear, simple explanations.
Not providing a description of the project, and the context in which it operates.
Not analysing how the operating context may have influenced/shaped its
development and ultimate results.
Using graphics, tables and figures that add little value or understanding to
the descriptive text (and/or vice versa!)
Providing insufficient information about the methodology and the strengths
and weaknesses of the methods used, as well as the effects on the analysis.
Not maintaining confidentiality of information when this had been agreed.
Confusing what is meant by anonymity and confidentiality.
Modifying conclusions or recommendations to suit partner interests when
not justified by the data.
Not providing a clear summary of what was addressed and how, what the
Project's strengths and weaknesses were, and what were the main lessons
learned, particularly with respect to future project planning.
Making indefensible generalisations.
Not recognising nor discussing the possible limitations of the study:
why these occurred, what alternatives were considered, what were the consequences and how might these have affected the analysis and overall findings.
Assuming that the whole report will be read in detail.


Checklist 4.1: What the Technical Evaluation Report Should Cover

(N.B. Report-back should be in varied forms to meet different audience needs – below we describe the essentials to include in a written, technical report.)
An FOPH review panel will assess all evaluation reports submitted as part of the evaluation contract. The standard criteria for scientific rigour as well as the relevance of the design will be used as the basis of the assessment.

The Report should cover the following items:


List of Contents
Acknowledgements
Glossary of technical terms, abbreviations etc. used in
the report
Summary of purpose, methods and the principal
conclusions (particularly the lessons learned) and
recommendations

Use this list to check what has and has not been
covered in the report.
Part 1:
Introduction – What the Project was meant to do
What the Evaluation was asked to do and why
Brief description of project's aims and objectives and
operational context
Terms of reference for the evaluation (purpose of
evaluation, over what period of time, principal
questions to be addressed and if modified, the
questions and issues actually addressed)
Part 2:
Evaluation Methodology – What was evaluated; how
was it done; what data was obtained from which
sources.
A tabled summary of data collected, sources,
frequency, methods used etc. should be included to
illustrate the scope and weight of data collected.
Which methods were used to answer which evaluation
questions should also be shown in table form
The limitations of the study should be discussed in
detail (e.g. implications for the analysis of restricted
data access etc.).
Part 3:
Results and Discussion

How did the Project actually develop?


Were changes made to the original plan? if yes
What were the changes?
Why were these changes made?
What did the project achieve?


Part 4:
Conclusions and Summary of Main Lessons Learned

Did the project achieve what it tried to do?


How?
Why? or Why not? and
What helped and what didn't?
What were the strengths of the project?
What were its weaknesses?
How did this affect the project e.g. for meeting its
objectives?

Part 5:
Recommendations
WHAT recommendations/advice would you give
WHICH groups/people about
the future of this project?
setting up a similar project within the same setting?
setting up a similar project within a different setting?
WHAT would you advise/recommend to the Federal
Office of Public Health about supporting a similar
project?
Annexes: These should include at least the following:
The original evaluation mandate and, where relevant,
the authorised changes
Examples of the evaluation tools used for data
collection and of data analysed e.g. excerpts from
qualitative interviews, questionnaires used etc.,
list of documents analysed
Examples of the project documents, especially those
referred to in the report e.g. brochures, teaching
materials etc.


Checklist 4.2: Assessing the Technical Evaluation Report

An FOPH review panel will assess all evaluation


reports submitted as part of the evaluation contract.
The standard criteria for scientific rigour as well as the relevance of the design will be used as the basis of the assessment.
However, the points set out in this checklist will
also be taken into consideration.

Structure and presentation of the report


Is it clear, well structured, easy-to-read and
comprehensive?
Does it provide both text and graphics to convey
messages?
Is it focused on, and structured to reply to the main
evaluation questions?
Scientific content
Was the methodology well applied?
Is there sufficient information on the project,
the evaluation and their operational contexts?
Does it discuss the influence of contextual
conditions on the development of the project?
Is it technically competent?
Does it meet the criteria of scientific rigour?
Are the conclusions credible?
Is a good overview provided of scope, type and
sources of data collected?
Are the limitations of the study identified, explained
and their implications discussed?

Usefulness
Does it do what it said it would?
Were the right questions asked and answered?
Does the analysis of the project's strengths and
weaknesses improve our understanding?

Recommendations
Are they feasible, practical and useful for improving
this project/other projects?
Are they relevant to future developments at a
national level?
Do they help us determine what needs to be done to
improve our overall strategy, measures and actions?


Part 5: Promoting and Using Evaluation Findings

To attract an audience, we should determine what findings will be of interest to which stakeholder group, and in which manner these will be best communicated

Contents of this Part
When we should be informed
Who should be informed
Suitable modes of reporting
Making effective use of evaluation findings
Mistakes to be avoided

Relevant Checklists
Identifying key messages
Identifying who is affected/interested
Deciding the best way each should be informed


By now you will have understood that we encourage evaluations to be designed in consultation with the various partners involved or likely to be affected by
the results. The more this happens, the higher the chances of having the
results accepted and used by the various stakeholders.
From the design stage therefore, we have argued that the key partners and
evaluators need to identify the full range of possible evaluation audiences.
Ultimately this helps the evaluators reflect on which results have implications
for which target groups. Once results are available, evaluators together with
the FOPH and other key partners need to determine which findings need to be
brought to the attention of which target groups, and by which means. (What are the key messages for which groups?) Who are the key decision makers?
Who can best act on which findings?
When providing feedback on evaluation findings, evaluators need to review the
following:
Who needs to know?
Which groups?
Which key people?
Who can ultimately take action/decisions?
What information is likely to be relevant and/or of interest?
In what sequence?
In which type of format?
When?
What problems are likely to arise?
Can these be minimised?
How?
When should we be informed of evaluation findings

In Part 1 we stated that essentially the FOPH requires project evaluation for
four reasons:
1. to improve our strategies and actions so that they are meeting clients' needs;
2. to help us make informed decisions;
3. to clarify the options available; and
4. to account for the expenditure of public funds.

For this we need feedback from evaluators at an appropriate time. For example, if our strategies or measures are proving ineffective, inappropriate or inefficient, we want to know as soon as possible so that we can modify our
actions, redress the problems or even cancel contracts wherever necessary.
Evaluators should therefore be encouraged to provide timely feedback and
highlight how these findings apply to and/or affect different participant groups
or audiences. The evaluation design should have taken into account when key
decisions about the project would be taken, and consequently planned to provide feedback (wherever possible and relevant) accordingly.
In our contractual agreement with evaluators we sometimes request an interim as well as a final report. However, during preliminary negotiations, we also
emphasise our need to have feedback reported as and when significant findings come to light.
We should ensure that we are available and interested in receiving feedback
and willing to assess with the evaluators alternative courses of action and likely consequences.
Who should be informed


Deciding who should be informed is determined by what the findings are and
what key messages are detected for which audience group. As a starting point,
however, we can say that all those who participated in the evaluation should,
wherever possible, be informed of the results. Too often reporting tends to be
restricted to the sponsors and project managers. Significant results/messages
should also be communicated to the various groups affected by the outcome
through, for example, the mass media, publications etc. That is why we have
previously stressed the need to identify the range of potential users of evaluation results as well as those likely to be affected during evaluation planning.

This not only helps focus the evaluation study, but also serves as a point of reference when reviewing the findings.
Evaluations should be designed to have an impact on our prevention strategies
and measures. In other words, they should provide us with useful, pertinent
and clear information based on the use of sound scientific method and analysis. The conclusions and recommendations should help different interest
groups see for example what achievements have been accomplished, where
improvements can be made, which more cost-beneficial approaches might be
employed or even that support for wasteful, unproductive efforts should be
withdrawn.

Suitable modes of reporting

The FOPH, project managers, researchers and evaluators will be interested in


receiving written, technical reports. Assuming that the conclusions are fully justified and well defended, various audiences will be interested in learning what
was discovered and in some cases, what should be done next.
Key partners, together with the evaluators, should identify the various audiences to be addressed, which messages are likely to be of interest, and,
based on this information, devise the most appropriate formats for transmitting the messages to the targeted groups.
A range of different modes should be planned to report evaluation findings to
meet the needs of the different audiences. It is a mistake to believe that all
audiences will read reports. It may be more appropriate to communicate findings to targeted groups via workshops, seminars, oral presentations, etc.
Articles can also serve as a useful medium to reach a wide range of targeted
readers. But obviously different types will be needed for different groups. For
instance, an article in the FOPH bulletin requires a different style from that of
an article placed in a popular journal or newspaper.
We have previously argued that evaluation is a cyclical process which starts and
ends at the planning stage. Work does not stop once the evaluation results are
to hand. On the contrary, it is the starting point for the FOPH to reflect on its
overall prevention strategy, measures and actions, and then determine how
best to proceed.

Using evaluation results effectively – the task of the Federal Office of Public Health

For example, the majority of FOPH funded projects are aimed at preventing
health problems. As such they are directed at social and therefore behavioural
change. The relevant evaluations should highlight the favourable conditions
needed to bring about such change. To do justice to the work performed on our
behalf, the FOPH should consider the evaluation recommendations and act
upon the results in terms of:
What conditions does the evaluation suggest are needed to help bring about
change?
What can be done to create the conditions needed to bring about such
change?
Which groups/institutions/associations/organisations are in the best position
to help this process?
What measures might be adopted which are feasible and appropriate to
engage their co-operation and support?
Written articles alone may not be sufficient! Information should also be presented viva voce to target evaluation audiences. Workshops, meetings, seminars should be organised by the FOPH to get the message over to those who
can act on the information. This is where the valorisation budget provided in
the original evaluation contract can be finally exploited! Be imaginative about
using the evaluation to its best advantage!


Mistakes to be avoided


Assuming the evaluation work stops when the report is completed and delivered. (FOPH and evaluators)
Neglecting to determine which findings might benefit which target group – what key messages are there for which audience groups? This should be
systematically worked out between the evaluators, FOPH staff and the key
project staff.
Neglecting to address the political audiences – which decision-makers
should know about which evaluation findings. (FOPH)
Not agreeing who will do what to promote the results. Evaluators should
provide the right material (e.g. articles, oral presentations etc.), but the sponsors and project partners need to organise, co-ordinate and ensure that
results get fed to the right decision-makers and other interested parties.
Thinking that publications alone will suffice. Promoting the findings means
adopting interactive strategies to present and discuss the evaluation findings
with those who can best act upon the results. (FOPH)
Neglecting to present and discuss findings in a manner appropriate to the
target audience. The messages need to be clear, to the point and in the
cultural style of the target group. For example, the style used to address a
scientific audience will not be appropriate when addressing for example a
parents association. (evaluators)

Checklist 5.1: Identifying Key Messages – Key Target Groups and Appropriate Ways to Tell Them
If evaluations are to be useful, they need to highlight (a) what has been learned to improve prevention planning and implementation, and (b) how it affects the work of all those involved in this process. The key messages should not be buried in the text, but brought out clearly in the conclusions, recommendations and/or options proposed.
Once the evaluation report is written, it is up to
the Federal Office of Public Health to translate
the evaluation messages into effective action. To
do this, we have to decide what follow-up action
is needed, by whom, and when. We should therefore consider carefully the points set out in this
checklist.


Have we determined what is important in the evaluation findings for prevention planning?
Have we resolved what is important for prevention
implementation?
Have we distinguished the likely effects on the work
of the different groups involved?
Have we considered what action might be taken as
a result?
Have we worked out what would need to be done
to have this happen?
Have we identified which people/groups would be
the most effective for getting things done?
Have we determined how best to get the message
over to each of these groups?
Have we identified who would be the most
appropriate person/group to convey the message?
Have we defined what help will be needed
(human/ financial resources)?
Have we prioritised which people/groups should be
approached (strategic planning from the ideal to the
feasible)?


Annex 1: References and Recommended Reading List
Joint Committee on Standards for Educational Evaluation: The Program Evaluation Standards, 2nd Edition, Sage Publications, NY, 1994

Berk, Richard A. and Rossi, Peter H.: Thinking About Program Evaluation, Sage Publications, London, 1990

Bussmann, Werner: Accompagner et mettre au point avec succès les évaluations des mesures étatiques: Guide de réflexion, Editions Georg S.A., Geneva, 1995 (French version); Evaluationen staatlicher Massnahmen erfolgreich begleiten und nutzen: Ein Leitfaden, Verlag Rüegger AG, Chur/Zurich, 1995 (German version)

Fitz-Gibbon, Carol Taylor and Morris, Lynn Lyons (Eds): Program Evaluation Kit, 10th Edition, Sage Publications, London, 1995

Fetterman, David M.; Kaftarian, Shakeh J. and Wandersman, Abraham (Eds): Empowerment Evaluation: Knowledge and Tools for Self-Assessment & Accountability, Sage Publications, NY, 1995

Freeman, Howard E.; Rossi, Peter H. and Sandefur, Gary D.: Workbook for Evaluation: A Systematic Approach, 5th Edition, Sage Publications, London, 1993

Fink, Arlene: Evaluation Fundamentals: Guiding Health Programs, Research and Policy, Sage Publications, London, 1993

Guba, Egon G. and Lincoln, Yvonna: Fourth Generation Evaluation, Sage Publications, London, 1989

Hammersley, Martyn: Social Research: Philosophy, Politics and Practice, Sage Publications, London, 1994

House, Ernest R.: Professional Evaluation: Social Impact and Political Consequences, Sage Publications, London, 1993

Imfeld, Josef et al.: Manuel de l'auto-évaluation / Externe Evaluation von Entwicklungsprojekten, Direction de la coopération au développement et de l'aide humanitaire, Service Evaluation, Bern, 1990 (German version) and 1994 (French version)

Parlett, Malcolm and Hamilton, David: Evaluation as Illumination: A New Approach to the Study of Innovatory Programmes, Univ. of Edinburgh, Occasional Paper, 1972

Patton, Michael Quinn: Utilization-Focused Evaluation, 2nd Edition, Sage Publications, London, 1986

Patton, Michael Quinn: Qualitative Evaluation and Research Methods, 2nd Edition, Sage Publications, London, 1990

Rossi, Peter H. and Freeman, Howard E.: Evaluation: A Systematic Approach, 5th Edition, Sage Publications, London, 1993

Scrimshaw, Nevin and Gleason, Gary (Eds): Rapid Assessment Methodologies for Planning and Evaluation of Health Related Programmes, INFDC, New York, 1992

Scriven, Michael: Evaluation Thesaurus, 4th Edition, Sage Publications, London, 1991

Vogt, W. Paul: Dictionary of Statistics and Methodology: A Nontechnical Guide for the Social Sciences, Sage Publications, NY, 1993

Working Party on Policy and Program Evaluation: Multi-language Evaluation Glossary, IIAS, English Edition, 1994; originating from the European programme MEANS (Méthodes d'Evaluation des Actions de Nature Structurelle)

Worthen, Blaine R. and Sanders, James R.: Educational Evaluation: Alternative Approaches and Practical Guidelines, Longman Inc., NY, 1987

Annex 2: Glossary of Evaluation Terms

Given that EVALUATION is a relatively new discipline, there is, as yet, no single widely agreed-upon set of EVALUATION terms. Yet meanings of words are critical
because they influence what we do and how we do it. We have therefore provided definitions for the key terms we use in the EVALUATION Unit of the Federal
Office of Public Health to convey what we understand by EVALUATION; its tasks,
work and responsibilities. The glossary deals with evaluation terms only: it does
not deal with those of statistical METHODS since these are well-known, standardised terms.
For the most part, we have drawn upon existing definitions from a range of
sources, but mainly from those developed in the field of PROGRAMME EVALUATION.
(see Reading Reference List, Annex 1). These have, however, where necessary,
been adapted to suit the specific work of the Federal Office of Public Health.
To minimise confusion, EVALUATORS working under contract with the
Swiss Federal Office of Public Health are urged to base their use of
EVALUATION terms on the definitions supplied in this Glossary.
N.B. The terms which have been cross-referenced in the glossary are
indicated in small capital letters e.g. CROSS-REFERENCED TERMS.

62

Achievement(s) (of the intervention)
DE - Erreichtes
FR - réalisations
Refers to what the intervention has been able to achieve overall: its OUTPUTS, its RESULTS, IMPACT, etc.

Aim
DE - Gesamtziel/übergeordnetes Ziel
FR - but
This is the general statement about what the PROJECT/PROGRAMME/activity etc. is attempting to achieve. It is the overall, often long-term, end GOAL.

Assessment
DE - Leistungsabschätzung/Bewertung
FR - examen
The process of judging performance/RESULTS based on predetermined criteria. Similar to EVALUATION, but narrower in FOCUS. For example: assessing student performance through such measures as specific assignments and/or the RESULTS of standardised tests.

Audit
DE - Controlling
FR - audit, révision
An AUDIT checks that the means used to produce RESULTS were put into practice according to professional rules and standards. It does not comment on, nor question, the quality, RELEVANCE, EFFECTIVENESS, etc. of the IMPACTS or RESULTS of a measure.

Auto-evaluation
DE - Auto-Evaluation
FR - auto-évaluation
Synonymous with SELF-EVALUATION.

Bias
DE - Bias/Verzerrung
FR - biais
The extent to which a measurement (e.g. use of a t-test), a sampling technique or an analytic METHOD (e.g. analysis of QUALITATIVE DATA) systematically yields non-VALID RESULTS. This can arise from errors occurring during any of the processes from the planning of a study through to the interpretation of its RESULTS and the conclusions reached. Errors arise through subjectivity, prejudice, technical and/or METHODological mistakes.

Content analysis
DE - Inhaltsanalyse
FR - analyse de contenu
The process of systematically examining the content of a volume of material (written documents, films, pictures etc.) or procedures and practices (e.g. tasks and performance in the classroom, doctors' consultation rooms etc.) in order to determine key characteristics. (See also DOCUMENTARY ANALYSIS.)

Deductive approach
DE - Deduktiver Ansatz
FR - approche déductive
An approach based on testing a pre-conceived hypothesis (very often experimental) in order to draw conclusions about its VALIDITY and/or GENERALISABILITY.

Delphi technique/survey
DE - Delphi Technik
FR - méthode/technique delphi
Procedure used for problem solving/EVALUATION based on achieving consensus through the repeated drafting of a written paper by members of an expert group. The process is characterised by the co-ordinated work of a group of recognised experts who provide individual (i.e. non-peer influenced), written feedback on an initial document; the SYNTHESIS, ordering and ranking of responses; and a repeated redrafting of the background paper for re-circulation by a co-ordinator. The process continues until consensus is reached.

Documentary analysis
DE - Dokumentenanalyse
FR - analyse de documents
Systematic analysis of the CONTENT of (written) documents, e.g. memos, minutes, PROJECT descriptions, training curriculum, etc.

Effect
DE - Effekt
FR - effet
Any change, intended or unintended, which can be attributed to the intervention being evaluated. Synonymous with OUTCOME, RESULT, IMPACT. Examples of unintended EFFECTS are the ripple EFFECT, halo EFFECT, Hawthorne EFFECT, etc.

Effectiveness
DE - Effektivität
FR - effectivité
A measure of the extent to which an activity, strategy, PROJECT etc. induces change that may or may not be part of the original AIMS and OBJECTIVES. It is essentially a measure of GOAL ACHIEVEMENT.

Efficiency
DE - Effizienz
FR - efficience/rendement
A measure of how well resources (human, financial, material etc.) are used to produce desired OUTCOMES and/or OUTPUTS. Includes the analysis of the input-output cost ratio. Implies the absence of wastage in the process of achieving GOALS. Efficiency analysis tries to answer the question: is it possible to produce more OUTPUTS using fewer inputs or using alternative, less expensive ones?
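A minimal sketch of the input-output cost ratio mentioned under EFFICIENCY, using invented figures and option names (Python is used here purely for illustration; it is not prescribed by these Guidelines):

    # Hypothetical illustration of an input-output (efficiency) comparison.
    # Option names and figures are invented for the example only.
    options = {
        "classroom course":  {"total_cost_chf": 48_000, "participants_trained": 320},
        "train-the-trainer": {"total_cost_chf": 36_000, "participants_trained": 400},
    }

    for name, o in options.items():
        cost_per_output = o["total_cost_chf"] / o["participants_trained"]
        print(f"{name}: CHF {cost_per_output:.2f} per participant trained")

    # The option with the lower cost per OUTPUT is the more efficient one,
    # provided the quality of the outputs is comparable in both cases.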


Evaluation
DE - Evaluation
FR - évaluation
The systematic collection and analysis of information, not necessarily routinely available, on various aspects of a given object of study (such as a specific PROJECT, PROGRAMME, intervention etc.) to enable its critical appraisal. In short, the process of determining the value, merit, justification and/or worthiness of something.

Evaluator
DE - Evaluator/in
FR - évaluateur/trice
The person/team conducting the EVALUATION.

Evaluability appraisal
DE - Machbarkeitsstudie (der Evaluation!)
FR - étude de faisabilité (de l'évaluation!)
Analysis of the feasibility of answering the EVALUATION questions using a proposed design or procedure, and/or the feasibility of answering the questions per se. In short, checking to see that what is planned can actually be done.

External evaluation
DE - Externe Evaluation
FR - évaluation externe
EVALUATION by EVALUATORS who are responsible neither for the financing, nor for the managing or implementation of the intervention under study. In short, EVALUATION by those who have no personal, financial or other self-interest in the object being evaluated.

Feasibility study
DE - Machbarkeitsstudie
FR - étude de faisabilité
The analysis of the likelihood that an intervention can be implemented as planned. Includes the study of contextual conditions as well as the characteristics and resources of the intervention under plan. A FEASIBILITY STUDY does not test for EFFECTIVENESS, EFFICIENCY nor desirability. (See PILOT STUDY.)

Fields (of evaluation)
DE - Anwendungsbereiche
FR - domaines (d'évaluation)
The major FIELDS of EVALUATION are: product, personnel, performance, PROJECT/PROGRAMME and POLICY EVALUATIONS. Whilst each has been practised for some decades, the development and discussion of METHODological issues is much more recent. PROGRAMME EVALUATION is one of the most developed in terms of METHODOLOGY and theory.

Findings (of evaluation)
DE - Befunde
FR - résultats/constatations de l'évaluation
The sum total of what an EVALUATION finds out about the intervention under analysis, e.g. the PROJECT's context, EFFECTS/RESULTS, IMPACTS, processes, EFFICIENCY etc.

Focus
DE - Fokus
FR - point focal/point de focalisation
The area or aspect(s) on which the EVALUATION and its analysis will concentrate. For example, the EVALUATION of a school health education PROGRAMME may choose to FOCUS on the acceptability of the PROGRAMME by different groups rather than on its end RESULTS. Equally, it may focus on the RELEVANCE or EFFICIENCY of the PROGRAMME. It may well choose to FOCUS on a much wider SCOPE.

Formative evaluation
DE - Formative Evaluation
FR - évaluation formative
(pas de terme équivalent ni en allemand ni en français)
An EVALUATION conducted during the development of an intervention or strategic measure with the intent to improve performance by means of continuous feedback to key STAKEHOLDERS. Usually produces reports for internal use. FORMATIVE EVALUATION is contrasted with SUMMATIVE EVALUATION.

Generalisability
DE - Generalisierbarkeit
FR - généralisabilité
The degree to which information about a tested group or setting may be extrapolated to the greater POPULATION or to different settings. GENERALISABILITY is directly linked to external VALIDITY in that non-valid data will produce non-generalisable FINDINGS.

Global evaluation
DE - Globalevaluation
FR - évaluation globale
This refers to the EVALUATION of a total prevention package: the global strategy, measures and actions taken towards obtaining the prevention package's overall AIMS and OBJECTIVES.

Goal
DE - Gesamtziel/übergeordnetes Ziel
FR - but
Synonymous with AIM.

Goal-free evaluation
DE - Zieloffene Evaluation
FR - évaluation sans objectifs déclarés
In its pure form, the EVALUATOR is not told the AIM and OBJECTIVES of the PROGRAMME/PROJECT/activity etc. under EVALUATION, so that s/he is free to judge what is going on and what is being achieved without being influenced by any predetermined criteria.

Holistic approach
DE - Holistischer Ansatz
FR - approche holistique
Philosophical tenet arguing the necessity to consider the whole. Grounded in the belief that the whole is greater than, and different from, the sum of its parts. It rejects the feasibility and usefulness of breaking down the whole into isolated parts (as in Gestalt psychology).

Impact
DE - Wirkung
FR - impact
In EVALUATION terms, this refers to the sum total of the individual RESULTS and EFFECTS/OUTCOMES of an intervention or measure, be they intended or unintended. IMPACT analysis can limit itself in time, e.g. to immediate EFFECTS, and in FOCUS, e.g. to the target POPULATION. It can, however, broaden its analysis in terms of (a) time, e.g. examining EFFECTS over the medium to longer term, and (b) FOCUS, e.g. going beyond the directly targeted POPULATION. (In market research, IMPACT EVALUATION, e.g. of a campaign, is usually restricted to an analysis of its visibility, acceptability, recall etc.)

Indicator
DE - Indikator
FR - indicateur
An INDICATOR serves as a proxy measure for information about a phenomenon which in itself is not directly measurable. For example: the total amount of alcohol consumed per capita in a country over a year indicates the degree of alcohol abuse. In general, INDICATORS represent one class of data only.
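As a purely illustrative sketch of how such a proxy measure might be computed from routine data (all figures invented):

    # Hypothetical illustration: per-capita alcohol consumption as an INDICATOR.
    # Sales figure and population are invented for the example.
    litres_pure_alcohol_sold = 72_500_000   # litres of pure alcohol sold in one year
    resident_population = 7_100_000         # resident population of the country

    per_capita = litres_pure_alcohol_sold / resident_population
    print(f"Indicator: {per_capita:.1f} litres of pure alcohol per capita per year")

    # The figure does not measure alcohol abuse directly; it is read as a proxy
    # and is usually tracked over several years rather than interpreted in isolation.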

Inductive approach
DE - Induktiver Ansatz
FR - approche inductive
Generates hypotheses from, and during, field work (grounded theory; see Glaser & Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research, Weidenfeld & Nicolson, London, 1968). Hypotheses are formulated on the basis of the data gathered, as opposed to gathering data in order to test a preconceived hypothesis (DEDUCTIVE APPROACH).

Interview
DE - Interview
FR - interview/entretien
Technique used to draw verbal information from an individual/group about a pre-determined topic. Can be structured (i.e. asking standardised questions which elicit only responses which are pre-determined and of limited range), semi-structured (i.e. the range of questions is pre-determined but the way they are asked and/or the expected responses are not necessarily so), or unstructured (i.e. no standardisation of open-ended questions, sequence and responses, but centred around a pre-determined topic(s)).

Meta-analysis
DE - Meta-Analyse
FR - méta-analyse
The overall analysis of information arising from several studies on a similar topic/field of interest, involving, as the first step, the standardisation of the relevant information. Analysis therefore takes place once the disparate information is standardised and thereby transformed into comparable values. To a large extent, it relies on a SYNTHESIS of other studies/EVALUATIONS.
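A minimal sketch of the standardisation step described under META-ANALYSIS: three hypothetical studies reporting on different scales are converted into standardised mean differences and then pooled with a simple unweighted average (real meta-analyses would weight the studies, e.g. by sample size):

    # Hypothetical illustration: standardising study results before pooling them.
    # Each invented study reports the intervention-group mean, control-group mean
    # and pooled standard deviation on its own measurement scale.
    studies = [
        {"name": "Study A", "mean_int": 14.2, "mean_ctl": 12.6, "sd": 4.0},
        {"name": "Study B", "mean_int": 71.0, "mean_ctl": 64.0, "sd": 18.0},
        {"name": "Study C", "mean_int": 3.4,  "mean_ctl": 3.1,  "sd": 0.9},
    ]

    effect_sizes = []
    for s in studies:
        d = (s["mean_int"] - s["mean_ctl"]) / s["sd"]   # standardised mean difference
        effect_sizes.append(d)
        print(f"{s['name']}: d = {d:.2f}")

    print(f"Unweighted mean effect size: {sum(effect_sizes) / len(effect_sizes):.2f}")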

Meta-evaluation
DE - Meta-Evaluation
FR - méta-évaluation
EVALUATION of others' EVALUATIONS, i.e. an analysis of other EVALUATION studies. It provides a critical analysis of how well EVALUATION studies have been conducted.

Methodology
DE - Methodologie/Vorgehensweise
FR - méthodologie
The working plan (theoretical framework and design) for organising and conducting the selection, collection and analysis of data, including the approach/strategy (e.g. conventional, positivist, interpretative, naturalistic, phenomenological etc.) and the choice of METHODS (e.g. INTERVIEWS, survey, observation, etc.) to be used.

Method
DE - Methode
FR - méthode
Formalised technique used for collecting, organising or processing data, e.g. semi-structured INTERVIEW, QUESTIONNAIRE, observation, CONTENT ANALYSIS, multiple regression analysis, multi-criteria analysis, etc.

Monitoring
DE - Monitoring
FR - monitoring/surveillance
In the context of PROJECT/PROGRAMME EVALUATION, the routine checking of progress against plan, for example the annual counting of participants on a given course. Normally MONITORING activities do not question the OBJECTIVES, processes or actions involved.

Objective
DE - Ziel/Zielsetzung
FR - objectif
These are a set of discrete, specific and measurable sub-GOALS which need to be attained in order to achieve the end GOAL. They should be "SMART", i.e. specific, measurable, appropriate, realistic and attainable within a defined time period.

Outcome (of the project/intervention)
DE - Resultat
FR - résultat
Synonymous with EFFECT when referring to the individual and/or sum of the EFFECTS/RESULTS (of the intervention). Mainly refers to immediate, post-treatment EFFECTS, but one should consider the medium- and longer-term OUTCOMES too. (See also IMPACT.)

Outputs
DE - Output/Produkt
FR - produits
These are the activities, goods and services directly produced by an intervention/EVALUATION, e.g. brochures, reports, workshops, hotline service, computer program etc.

Pilot project/study
DE - Pilotprojekt/-Studie
FR - projet/étude pilote
A PROJECT/study intended to trial its practicability in a real setting (not to be confused with FEASIBILITY).

Population
DE - Population
FR - population
The total number of subjects/elements from which a SAMPLE is drawn, or about which a conclusion is stated.

Process evaluation
DE - Prozessevaluation
FR - évaluation de processus
Concentrates specifically on the implementation aspects of an intervention. It is less concerned with inputs, OUTPUTS, OUTCOMES etc. than with the procedures, practices and operations used to attain the project's OBJECTIVES.

Programme
DE - Programm
FR - programme
A collection of co-ordinated PROJECTS, measures, processes or services aimed at achieving a set of common OBJECTIVES. A PROGRAMME is delimited in terms of time, SCOPE and budget.

Project
DE - Projekt
FR - projet
A PROJECT consists of a set of similar, co-ordinated activities aimed at achieving pre-defined GOALS. It is also limited to take place within a defined period of time, SCOPE and budget. Often it is a means of testing an innovative approach or measure ultimately to be used as part of a wider PROGRAMME.

Qualitative data
DE - Qualitative Daten
FR - donnée qualitative
Data in the form of words, images, maps, sounds.

Quantitative data
DE - Quantitative Daten
FR - donnée quantitative
Numerical data.

Questionnaire
DE - Fragebogen
FR - questionnaire
A list of questions to be asked, often with pre-determined wording and sequence. The respondent may or may not be required to give structured responses. Answers may be given in writing or orally. Can be structured or semi-structured. (See INTERVIEW.)

Relevance
DE - Relevanz
FR - pertinence
The degree to which a measure or action etc. matches an identified need.

Reliability
DE - Reliabilität/Zuverlässigkeit
FR - fiabilité
Refers to the consistency of the RESULTS yielded when the same process and METHODS are used during repeated applications and/or by different observers. Not to be confused with VALIDITY.

Representativeness
DE - Repräsentativität
FR - représentativité
The degree to which an observation made on a SAMPLE applies to the system/the POPULATION as a whole.

Result(s)
DE - Resultat/Ergebnis
FR - résultat
Refers to the intended/unintended changes resulting from an intervention. Similar to EFFECTS, OUTCOME. See also FINDINGS (for EVALUATION RESULTS).

Sample
DE - Stichprobe
FR - échantillon
A group of subjects/items selected from a larger group. Studying the smaller group (the SAMPLE) is intended to reveal important things about the larger group (the POPULATION).
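A minimal, purely illustrative sketch of the SAMPLE-to-POPULATION logic, with an invented population and survey question:

    # Hypothetical illustration: estimating a POPULATION value from a random SAMPLE.
    import random

    random.seed(1)
    population_size = 120_000    # e.g. course participants over several years (invented)
    # Invented "true" behaviour, unknown in practice: 35% satisfied
    population = [1 if random.random() < 0.35 else 0 for _ in range(population_size)]

    sample = random.sample(population, 400)               # simple random sample of 400
    p = sum(sample) / len(sample)                         # proportion observed in the sample
    margin = 1.96 * (p * (1 - p) / len(sample)) ** 0.5    # approx. 95% margin of error

    print(f"Sample estimate: {p:.2%} (plus or minus {margin:.2%})")
    # The estimate is only as good as the sampling: a non-random or self-selected
    # sample may be biased, however large it is (see REPRESENTATIVENESS, BIAS).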

Scope
DE - Reichweite
FR - portée
The breadth of what will be taken into account by the EVALUATION, e.g. what issues and aspects will be addressed, which (sub)groups will be observed/INTERVIEWED and over what time period etc.

Secondary analysis
DE - Sekundäranalyse
FR - analyse secondaire
The re-working and analysis of existing data and/or the reconsideration of its interpretation and FINDINGS.

Secondary evaluation
DE - Sekundärevaluation
FR - évaluation secondaire
Similar to SECONDARY ANALYSIS (re-analysis of original EVALUATION data and FINDINGS) but integrates new data as and when needed. Its aim is to produce a new EVALUATION of a particular PROJECT, often by broadening the SCOPE or depth of the original analysis. (Compare with META-EVALUATION.)

Self-evaluation
DE - Selbstevaluation
FR - auto-évaluation
An EVALUATION by those who are administering and/or managing a PROGRAMME/PROJECT/intervention etc. in the field.

Stakeholders
DE - Beteiligte/Betroffene
FR - protagonistes = les stakeholders directement impliqués (pas de terme universel qui engloberait aussi ceux qui ne sont pas directement impliqués par le projet)
Individuals, groups or organisations who have a defined interest in the activity being evaluated and are therefore held to be to some degree at risk with it (e.g. PROGRAMME staff, sponsors and others not necessarily involved in its day-to-day operation). Can also include the intervention's direct and indirect recipients (e.g. TARGET GROUP, family of the TARGET GROUP, taxpayers, etc.).

Summative evaluation
DE - Bilanz-Evaluation
FR - évaluation sommative (pas de terme équivalent en français)
An EVALUATION that is carried out during the concluding phase of a PROJECT/PROGRAMME/activity etc., with the intention of passing judgement intended to contribute towards decision making regarding its future. Compare and contrast with FORMATIVE EVALUATION.

Synthesis
DE - Synthese
FR - synthèse
Combining the FINDINGS of multiple studies into one overall picture. In EVALUATION this is most often done by compounding a set of criteria/INDICATORS/performances on several dimensions and attributing an overall judgement. (See also META-ANALYSIS.)

Target group/population
DE - Zielgruppe/Zielpopulation
FR - groupe cible/population cible
Those groups/individuals at which the health intervention, measure, strategy etc. is aimed.

Triangulation
DE - Triangulation
FR - triangulation
The use of several different instruments (e.g. observation, INTERVIEWS, tests, etc.) and/or classes of respondents (e.g. managers, participants, sponsors, etc.) to obtain information about the same aspect/subject (e.g. the acceptability/RELEVANCE etc. of a PROJECT).

Validity
DE - Validität
FR - validité
Refers to the degree to which whatever is claimed holds true. For example, a test is valid if it measures what it purports to measure. Valid EVALUATIONS are ones that take into account all relevant factors, given the whole context of the EVALUATION (particularly including the client's needs), and appraise them appropriately in the synthesis process. (See Scriven, 1991.)

Valorisation
DE - Valorisierung
FR - valorisation
The combination of activities used to make EVALUATION FINDINGS known (dissemination) and translated into practical use (thereby adding value).


Annex 3: Characteristics of Conventional and Interpretive Approaches to Social Scientific Research and Evaluation

Credit due to: Yvonna Lincoln & Egon Guba, Bogdan & Biklen, Michael Quinn Patton and Ray Rist

Associated phrases
  Conventional: experimental, quantitative, outer (etic) perspective, social facts, statistical, predictive, a priori, deductive
  Interpretive: ethnographic, field work, qualitative, inner (emic) perspective, descriptive, ecological, phenomenological, emergent, inductive, holistic

Key concepts
  Conventional: variable, operationalisation*, hypothesis, reliability, validity, replication, statistical significance
  Interpretive: understanding, meaning, social construction, everyday life, verstehen, confirmability, working hypotheses

Associated names
  Conventional: A. Comte, Emile Durkheim, Lee Cronbach, L. Guttman, Gene Glass, Fred Kerlinger, Edward Thorndike, Ralph Tyler, J. Mill, Donald Campbell, Peter Rossi, Thomas Cook, Robert Travers, Robert Bales, Julian Stanley
  Interpretive: Dilthey, Max Weber, Charles Cooley, Everett Hughes, Margaret Mead, Rosalie Wax, George H. Mead, C. Wright Mills, Ray Rist, Egon Guba, Yvonna Lincoln, Howard Becker, H. Rickert, Estelle Fuchs, Herbert Blumer, Harold Garfinkel, Erving Goffman, Eleanor Leacock, Barney Glaser, William Filstead, Malcolm Parlett, Robert Stake, Robert Burgess

Associated disciplines
  Conventional: behavioural psychology, economics, sociology, political science (experimental physics)
  Interpretive: anthropology, sociology, history (ethnography)

* The conventional approach to social scientific inquiry is still practised by many social scientists and is still viewed as "real science" by many consumers of evaluation and research results. This is despite the fact that major tenets of conventional social science have been found untenable within the philosophy of science. The most important of these major tenets have been asterisked in this handout.

Design:

Purpose
  Conventional: develop and test theories; establish the facts; explain, predict and control phenomena
  Interpretive: describe multiple realities; develop experiential understanding; develop grounded theory

Basis
  Conventional: goals, objectives, hypotheses
  Interpretive: subject/audience concerns and issues, activities and interactions

When developed
  Conventional: beginning of study
  Interpretive: continuously evolving

Nature
  Conventional: pre-ordinate, structured, formal, specific
  Interpretive: emergent, evolving, flexible, general

Style
  Conventional: intervention, manipulation
  Interpretive: selection, participation

Sample
  Conventional: large, stratified, randomly selected
  Interpretive: small, non-representative, theoretically or purposively selected

Setting
  Conventional: laboratory (context unrelated*)
  Interpretive: nature, field (context relevant)

Treatment
  Conventional: stable, fixed
  Interpretive: variable, dynamic


Control
  Conventional: high (of antecedents, extraneous variables, possible outcomes)
  Interpretive: low (holistic understanding sought)

Examples
  Conventional: experiments, quasi-experiments, survey research
  Interpretive: case studies, life histories, ethnographies

Methods:

Nature
  Conventional: predetermined, structured, standardised, objective, quantitative
  Interpretive: open-ended, unstructured, variable, subjective, qualitative

Focus
  Conventional: reliability, replication
  Interpretive: validity, meaning

Specification of data collection/analysis rules
  Conventional: before inquiry
  Interpretive: during and after inquiry

Researcher/evaluator role
  Conventional: stimulator of subjects to test critical performance, taking readings
  Interpretive: stimulated by subjects and their activities, negotiations and interactions

Researcher/evaluator relationship to data
  Conventional: distant, detached
  Interpretive: close, involved

Researcher/evaluator relationship to subjects
  Conventional: circumscribed, short-term, detached, distant
  Interpretive: empathetic, trustworthy, egalitarian, intense contact

Instruments/techniques
  Conventional: paper-and-pencil or physical devices, e.g. questionnaires, checklists, scales, computers, tests, structured interviews and observations
  Interpretive: the researcher/evaluator, interviews, observations (tape recorder, transcriber)

Data
  Conventional: numerical, coded, counts and measures, succinct, systematic, standardised
  Interpretive: descriptions, records and documents, observational field notes, photographs, people's own words, direct quotations

Analysis:

Nature
  Conventional: componential, explanatory, reductionist
  Interpretive: holistic, descriptive, depth and detail

Units
  Conventional: variable
  Interpretive: patterns in context

Analysis
  Conventional: statistical, deductive, conducted at end
  Interpretive: analytical, inductive, ongoing, evolving

Focus
  Conventional: uniformity
  Interpretive: diversity


Communication of results:

When
  Conventional: usually once, at end of study
  Interpretive: ongoing, continuous, as needed

How
  Conventional: formal, written reports
  Interpretive: informal, oral and written portrayals, case studies

Content
  Conventional: identification of variables and their interrelationships, numerical plus interpretation
  Interpretive: portraying what experience is like, narrative, direct quotations, negotiated constructions

Paradigm:

Affiliated theories
  Conventional: structuralism, functionalism, realism, positivism*, behaviourism, logical empiricism*, systems theory
  Interpretive: phenomenology, symbolic interactionism, ethnography (culture), ethnomethodology, idealism

Assumptions about:

Reality/truth
  Conventional: singular, convergent, fragmentable, exists "out there", can be empirically verified and then predicted and controlled
  Interpretive: multiple, divergent, interrelated, socially constructed, can be understood through verstehen

Nature of truth statements and generalisations
  Conventional: time- and context-free* generalisations, nomothetic statements, focus on similarities
  Interpretive: time- and context-bound working hypotheses, idiographic statements, focus on differences

Relationship between facts and values
  Conventional: separable*, facts constrain beliefs*, inquiry is value-free*
  Interpretive: inextricably intertwined, beliefs determine what should count as facts, inquiry is value-bound

Human nature
  Conventional: humans are engaged in a continuing process of interacting with the environment; humans and environment influence each other in ways that are law-governed and thus predictable
  Interpretive: humans are intentional beings, directing their energy and experience in ways that enable them to constitute the world in a meaningful form

Human behaviour
  Conventional: law-governed, the result of the concentration of many antecedent variables
  Interpretive: wholly context-dependent

Relationship between inquirer and subject of inquiry
  Conventional: independent, separable*
  Interpretive: interrelated, interactive, not separable

Nature of causal linkages
  Conventional: real causes, temporally precedent with effects
  Interpretive: mutually simultaneous shaping of and by all entities; all are causes and effects


Postures about:

Quality criteria
  Conventional: rigor
  Interpretive: relevance

Source of theory
  Conventional: a priori
  Interpretive: grounded

Stance
  Conventional: reductionist
  Interpretive: expansionist

Purpose of inquiry
  Conventional: verification*; facts, causes, explanation; establish laws that govern human behaviour and link laws into deductively integrated theory
  Interpretive: discovery; understanding, verstehen; understand the process by which social reality is created by different people

Knowledge type
  Conventional: propositional
  Interpretive: propositional, tacit

Value perspective
  Conventional: singular*, consensual
  Interpretive: pluralistic, divergent

Values in research
  Conventional: value-free*; objectivity is critical to reduce bias and the influence of extraneous variables, and to enhance replicability
  Interpretive: values are an inevitable part of social reality; objectivity is commonality of perspective; subjectivity is important to get involved with subjects and to use personal insight


Annex 4: Evaluation Questions: An Example of Different Question Types

1. Questions on Relevance
Is the health behaviour model on which the project's intervention is based appropriate for the target group/setting?
Are the project/programme's aims and objectives still relevant? Are they still of the same priority?
Is the intervention being targeted at the right audience?
Is the intervention appropriate for its different target groups?
Is the intervention meeting the target group's needs?
2. Questions on Progress
Is the project/programme being put into operation as planned?
Is there any difference in the understanding of the project/programme's aims and objectives between the different groups involved? If so, how has this influenced the way the project is ultimately being put into practice?
To what extent have any unplanned side effects been taken into account during project/programme implementation?
Is the project/programme receiving positive support from all the various
groups concerned?
3. Questions on Effectiveness
Have the objectives been achieved in terms of quality, quantity and time?
To what extent was the achievement the effect of FOPH action?
Has FOPH stimulated actions and/or measures that would otherwise not
have occurred?
To what extent did changes in the environment affect the achievement of
project/programme objectives?
To what degree was the intervention implemented according to plan?
Was the project/programme effective in promoting itself to the targeted
groups?
4. Questions on Efficiency
Is the intervention the most cost-effective option? What alternatives
should be considered?
What are the constraints on using a more cost-effective method?
Do the human and financial resource costs compare favourably with related
interventions e.g. in another area of prevention intervention?
Have the inputs been made according to planned amounts, timing and quality?
What hidden costs have not been taken into account in project/programme
budgeting and planning?
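The following minimal sketch, with invented objectives, targets and costs, illustrates the kind of simple calculation that can underpin the effectiveness and efficiency questions above (achievement against target, and cost per unit achieved); Python is used purely for illustration:

    # Hypothetical illustration for the effectiveness and efficiency questions above.
    # All objectives, targets and costs are invented.
    objectives = [
        {"objective": "counsellors trained",        "target": 150,  "achieved": 138,  "cost_chf": 60_000},
        {"objective": "information packs sent out", "target": 5000, "achieved": 5600, "cost_chf": 30_000},
    ]

    for o in objectives:
        achievement = o["achieved"] / o["target"]      # goal achievement rate
        unit_cost = o["cost_chf"] / o["achieved"]      # cost per unit produced
        print(f"{o['objective']}: {achievement:.0%} of target, CHF {unit_cost:.2f} per unit")

    # Such figures only become meaningful when set against the plan, against
    # comparable interventions, and against the quality of what was achieved.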


Annex 5: Guidelines for Developing or Assessing Medico-Social Training/Education Projects
Swiss Federal Office of Public Health, 1995

Federal Office of Public Health


Evaluation/Research/Education Section
Copyright OFSP/BAG, 1995

To FOPH staff responsible for assessing training project proposals, and to persons and institutions submitting training project funding proposals to the FOPH.

Guidelines for Developing or Assessing Medico-Social Training/Education Projects

General Principles
All FOPH funded medico-social training/education projects for professional or voluntary
workers in the field of HIV/AIDS and/or drug dependence should be designed in line with
public and community health principles.
Among other things, they should:
be developed to meet the needs of the community, the institutions and the individuals included in
the cultural, social and economic context,
take into account the prevailing health and social policies relating to the field to which the training
applies,
take into account future needs and challenges,
encourage interdisciplinary and interprofessional cooperation,
ensure optimal exchange of information between practitioners and researchers,
ensure at least regional coverage,
increase the number of trained practitioners and the quality of the services they provide.

A training/education project should provide answers to the following questions:

1. Is there a need for the type of training proposed?
2. Does the project take into account potential resources and available means?
3. Are the purpose, objectives and content of the project relevant?
4. Does the proposed teaching method take into account the principles of adult training?
5. Is evaluation a clearly integral part of the project?


A need, or: "Is training really necessary?"

Needs can be identified in various ways:
A. They can be measured:
- by analysing the results of a specific survey, as in a needs assessment (this is a long and exacting task and a project in itself: it requires special skills and quite considerable funds),
- by studying and analysing the available literature and on-going research (not forgetting the results of statistical and demographic surveys);
B. They can be defined:
- by specialists in one field or discipline (development of new techniques, new concepts or methods, a new health problem confronting operational staff, etc.);
C. They can be expressed:
- by professional associations or unions, by a planning institute, by educationalists, consumers, the trainees themselves or any other person involved.

Resources and means, or: "Use what you have first"

The (financial and social) cost of the training/educational project should be reasonably proportionate to the funding available and the needs that are to be met.
The project should describe:
A. the human and institutional resources available, including specialised teachers or experts in the field, and other existing institutions or programmes in the same field;
B. existing conceptual and theoretical resources (methodological work may well have been done, in part or in total, in another language or within another context);
C. existing material such as documentation, books, brochures, videos.¹
The real overall cost of the training should be proportionate to the available funds of the organising institution. For example, a small institution should not contemplate investing all its funds in one project.
All possibilities of co-financing and subsidies (including through cantonal and local authorities, professional or consumer associations or foundations) should have been systematically investigated. Do not forget the possibility of premises or logistical services being provided free of charge.
Given the limited funds available from the Confederation and the constant need for training, on no account can the FOPH finance costly projects: it has to assure continuity of training support. A training project does not need to be expensive to be good!
The proposal should include a detailed budget (see Detailed Budget, last page).
The registration fee should not put off potential participants. Employers should therefore be encouraged to pay all or a proportion of the fees as part of their contribution towards the further training/education of their employees. (Different rates may be applied for employer-subsidised and self-paying registrations.)

¹ The proposal should include key references.

Purpose, aims and content: "A simple and comprehensible description"

A. The purpose of the training must be explained and must be relevant to the needs of the population.
For example: "The project contributes to reducing the incidence of profession-related HIV infections transmitted through blood contact by the systematic application of preventive measures and reduction of risk factors. It is aimed specifically at health carers providing patient home-based care."
Purpose: in terms of its anticipated effects on the target population (those in the care of the training participants);
Relevance: its relation to the health problems of the population at large, and its appropriateness in relation to the resources available.
B. The training objectives must be explicit and relevant to the skills required to carry out the function or task(s).
For example: "Participants will be able to provide basic care to patients, in the patients' home, whilst at the same time respecting the application of universal precautions. Each of the measures needed towards this end will be described, explained and discussed. The conditions under which they will be applied will need to be systematically described and put into practice."
General aims: all the knowledge, skills and behavioural attitudes (changes in behaviour!) that the participants will have acquired by the end of the course;
Relevance of the aims: their relevance to the tasks that professional staff will have to fulfil and to the problems with which they will be confronted.
The knowledge and skills taught should be briefly described and explained. They should not be in conflict with the ethics of the professions concerned, nor with the doctrine upheld by the FOPH in the relevant field.²
In principle, the cost of developing the course should not exceed 5% of the total cost of the project.

The method, or: "What is important is what the adult has learnt and not what s/he has been taught."

As far as possible, the selection of teaching methods should be based on accepted knowledge and experience in the field of adult education, e.g.:
A. focused on learners' needs and the group's existing knowledge,
B. aimed at problem solving,
C. methods and tools adapted to the learners' work situation and to whatever resources are available.
A good method is one that: meets learners' needs, is suitable to the knowledge to be imparted, suits the skills of the instructor, and is proportionate to the resources available.
There is no one ideal teaching method. Even straightforward lecturing can well be the best solution in certain cases. A combination of different methods is often the most successful.
A good teaching method is one that truly enables and encourages participants' learning; it is not necessarily the most fashionable method of the day or the one with which they are most familiar.

Evaluation, or: "Choose a really useful evaluation method."

Evaluation is a dynamic process aimed at (i) improving the quality and the relevance of training/education projects; (ii) the on-going adaptation of training to meet current needs; and (iii) improving the conditions under which the projects are run.
An evaluation only makes sense if it is useful and has subsequent practicable application. It is not intended merely to justify itself! In other words, don't just prove, but improve! Training project managers are also responsible for determining evaluation needs within the framework of their project. They, and/or the FOPH, may see a need for the project to be evaluated by an outside body. In this case the external evaluation will be planned and commissioned by the FOPH under a separate evaluation budget.
Before choosing an evaluation, it is very important that the following questions be answered:
A. what is the point of the evaluation?
B. what questions do we want answered?
C. what exactly are we going to evaluate: knowledge, attitudes, an action, a strategy, the implementation process? Will this need quantitative/qualitative data?
D. who are the stakeholders? i.e. who is the evaluation's target audience?
E. how will the evaluation findings be disseminated and their practical application made evident?
There are several ways of evaluating a project; the choice should be based on the answers to the above questions and the means and skills available.
For example, all of the following are evaluations but serve quite different purposes:
- evaluation of the training completed by students with a view to awarding them a certificate (evaluation for certification),
- evaluation of the training's relevance to the tasks needed in the field,
- evaluation of the knowledge acquired by students with a view to modifying the course along the way,
- participants' evaluation of the training with a view to improving the course in the future,
- evaluation of observed changes in behaviour after one year's application in professional practice,
- the teacher's self-evaluation as part of his/her teaching supervision,
- evaluation of the project's overall impact to support applications for future funding, e.g. from cantonal authorities,
- estimation of the degree of satisfaction among the students, etc.
The cost of evaluation as shown in the budget may exceed 5% of the total only in exceptional cases.
For more detailed information see the Swiss Federal Office of Public Health's Guide to Project and Programme Evaluation Planning, 1996.

² For HIV/AIDS see Prevention of HIV/AIDS in Switzerland, Swiss FOPH, 1993; for drug dependence, see the Federal Regulation of 20.2.91.
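Several of the evaluation purposes listed above, such as evaluating the knowledge acquired by students, rest on very simple data handling. A minimal sketch with invented pre- and post-course test scores (Python, purely illustrative):

    # Hypothetical illustration: knowledge gain from pre- and post-course test scores.
    # Scores (out of 20) are invented; in practice they would come from the course tests.
    pre_scores  = [8, 11, 9, 13, 10, 7, 12, 9]
    post_scores = [14, 15, 12, 17, 15, 11, 16, 13]

    mean_pre = sum(pre_scores) / len(pre_scores)
    mean_post = sum(post_scores) / len(post_scores)
    print(f"Mean score before the course: {mean_pre:.1f} / 20")
    print(f"Mean score after the course:  {mean_post:.1f} / 20")
    print(f"Mean gain: {mean_post - mean_pre:.1f} points")

    # A gain in test scores says something about knowledge acquired, not about
    # changes in professional practice; those require follow-up (see the list above).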


Federal Office of Public Health


Evaluation/Research/Education Section

Training Projects and Programmes

Drafting the Proposal


Proposals should be structured as follows:

- Description of the setting in which the programme will take place
- Identified needs
- Human and institutional resources
- Conceptual and theoretical resources
- Material resources (teaching and financial)
- Purpose of the training
- Target group(s) for which the course is designed; what are the criteria/conditions for enrolment?
- Educational aims and summary of course content
- Teaching methods
- Planned evaluation(s): of the learning, of the project itself
- Dissemination of evaluation results and plans to make their practical application evident
- Type of certification
- Practical organisation and timetable:
  - number of hours' teaching, dates, premises, etc.,
  - advertising/publicity,
  - other
- Detailed budget:
  - total cost and estimated cost per participant,
  - salaries for instructors (hourly remuneration if possible),
  - administrative costs,
  - percentage of total to be spent on course development, publicity and evaluation,
  - income from registration fees, subsidies.
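A minimal worked illustration of such a detailed budget (all amounts invented), computing the cost per participant and checking the percentage guidelines mentioned earlier (course development and, normally, evaluation each within about 5% of the total); Python is used purely for illustration:

    # Hypothetical illustration of a detailed training-project budget (amounts in CHF).
    budget = {
        "instructor salaries":   24_000,
        "administration":         6_000,
        "course development":     1_800,
        "publicity":              1_200,
        "evaluation":             1_500,
        "premises and materials": 5_500,
    }
    expected_participants = 80
    registration_fee = 150
    other_income = 25_000          # e.g. subsidies, employer contributions (invented)

    total_cost = sum(budget.values())
    income = expected_participants * registration_fee + other_income

    print(f"Total cost: CHF {total_cost}")
    print(f"Cost per participant: CHF {total_cost / expected_participants:.0f}")
    print(f"Course development: {budget['course development'] / total_cost:.1%} of total")
    print(f"Evaluation: {budget['evaluation'] / total_cost:.1%} of total")
    print(f"Projected income: CHF {income}")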

For further information see "Teaching Guidelines in the Field of Health Care" by J.-J. Guilbert, published by the WHO, 1990, or contact us at the Federal Office of Public Health:

FOPH Training Unit
Marie-Claude Hofner   031 323 88 06
René Stamm            031 323 87 83
Ellen Dobler-Kane     031 323 80 20

FOPH Evaluation Unit
Marlène Läubli-Loud   031 323 87 61
Margret Rihs-Middle   031 323 87 65
Marianne Gertsch      031 323 88 03

Copyright OFSP/BAG, 1995

