Authors:
Team Lead: Noaman Saeed,
Team members: Fida Muhammad & Khurram Jilani
2.0 Purpose:
The purpose of the intended evaluation is to provide an estimate of the programme's impact for accountability purposes and to make suitable recommendations for further improving the benefits of the on-going phases of the programme. The evaluation is intended to:
- Provide DFID and its partners with an initial assessment of the contributions that the VTEI Programme has made towards the socio-economic development of the Malakand Region, in line with its stipulated Goal1 and Purpose2;
- Suggest any improvements that can be made to inform the implementation of the remaining bridges to be built in the region; and
- Introduce the application of a mixed-method approach to the impact evaluation of the VTEI programme, thereby amplifying the beneficial impacts of the programme interventions.
4.0 Constraints on the availability and quality of baseline data, and attribution
1. Some baseline surveys were undertaken before the launch of the programme, and regular beneficiary feedback surveys as well as traffic counts are being undertaken to monitor key indicators for the programme. There is not, however, a comprehensive assessment of the baseline data in terms of its coverage of all programme sites (schools and bridges) or its quality (in terms of reliability and validity). Establishing the availability and robustness of baseline data could therefore become an issue.
2. It is deemed difficult to create a valid control group of citizens who would not have access to the rebuilding benefits of schools and bridges (at the locations where these have already been provided). The locations where bridges are yet to go up, or where schools still do not exist (or have not been repaired or replaced after the flood damage), would need to be assessed for the extent to which they are comparable and could therefore act as a pre-construction scenario. It is likely, however, that they will be systematically different across the range of socio-economic indicators, making comparisons invalid.
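Before treating any candidate site as a pre-construction comparator, the degree of baseline imbalance can be screened numerically. The following is a minimal sketch using the standardized mean difference; the indicator names and values are illustrative placeholders, not VTEI data:

```python
# Sketch: screening candidate comparison sites for baseline comparability.
# Indicator names and values are illustrative, not VTEI data.
from statistics import mean, stdev

def smd(treated, comparison):
    """Standardized mean difference for one indicator across two site groups."""
    pooled_sd = ((stdev(treated) ** 2 + stdev(comparison) ** 2) / 2) ** 0.5
    return (mean(treated) - mean(comparison)) / pooled_sd if pooled_sd else 0.0

# Illustrative baseline indicators per site (e.g. household income, enrolment rate)
bridge_sites = {"income": [110, 95, 120, 105], "enrolment": [0.62, 0.58, 0.65, 0.60]}
candidate_sites = {"income": [80, 70, 85, 75], "enrolment": [0.40, 0.38, 0.45, 0.41]}

for indicator in bridge_sites:
    d = smd(bridge_sites[indicator], candidate_sites[indicator])
    # |SMD| > 0.25 is a common rule of thumb for problematic imbalance
    flag = "imbalanced" if abs(d) > 0.25 else "comparable"
    print(f"{indicator}: SMD = {d:.2f} ({flag})")
```

Large standardized differences across several indicators would confirm that the candidate sites cannot serve as a valid counterfactual without further adjustment.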
1. Goal is the term DFID used at the time when the VTEI programme was designed, prior to adopting 'Impact' in current logframes and Theories of Change.
2. Purpose is the term DFID used at the time when the VTEI programme was designed, prior to adopting 'Outcome' in current logframes and Theories of Change.
5.0 Definition of Impact Evaluation (IE)
The OECD-DAC (Organisation for Economic Co-operation and Development – Development Assistance Committee) definition of impact3, with its emphasis on long-term effects, can be taken as the basis for defining Impact Evaluation for this report as well. This definition of IE looks to establish cause-and-effect relationships between development interventions and development results, although not exclusively through counterfactual-based methods.
Emphasis on 'contributory' causes is a major theme of this study, consistent with the broad consensus that development aid interventions work best in combination with other, non-aid factors. This contribution-based logic is also consistent with the complex and multi-dimensional nature of the VTEI development interventions as specified in the ToRs.
Contributory causality is relevant when there is likely to be more than one possible cause, i.e. the intervention is just one part of a causal package. If there are several different but similar programmes operating in the area, then various methodologies coupled with a comparative 'case' design approach may also be considered relevant. If VTEI is the only programme operating in the area, then unpacking case details through within-case (VTEI) analysis, by reviewing the existing 'Theory of Change', applying 'process tracing' and finally eliminating alternative explanations, may be considered appropriate4.
Note: This approach is to be considered subject to the literature review and in conjunction with the
analysis of existing qualitative and quantitative data based on ground realities.
As the ToRs for this study emphasise, with the growing importance of 'qualitative' methods the distinction between quantitative and qualitative approaches is both clarified and challenged. This provides the opportunity to combine different methods, quantitative and qualitative (e.g. quantifying participatory survey results through ranking-based perception matrices or similar scales encompassing investigative variables, and triangulating survey results with the findings of Focus Group Discussions). The approach proposed here:
- does not rely only on measuring quantitative changes in indicators over time; and
- does not rely on comparison groups alone to establish a counterfactual (although the option of forming them will still be investigated wherever possible).
3. 'Broadening the range of designs and methods for impact evaluations', report of a study commissioned by the Department for International Development, April 2012; Stern et al.
4. 'Eliminating alternative explanations is a standard way to leverage causal inference in science', formalised by Michael Scriven (1976) in his General Eliminative Method (GEM) approach.
ii- Combining designs and methods – even within the same design ‘approach’ –
strengthens causal claims
Good evaluations are almost invariably "mixed-method evaluations". Qualitative information informs both the design and the interpretation of quantitative data. In a "theory-based approach", as rightly favoured in the ToRs for this study, reliance on qualitative data provides vital context and a means to verify quantitative data, or to strengthen it where it is not robust or representative enough, as appears to be the case for the VTEI study so far. The "mixed methodology" approach being suggested will have to match the activities mentioned in the ToRs to ensure line-by-line compliance with all the objectives and constraints identified for both Phases 1 and 2, i.e.:
As specified in the detailed action plan, the Phase 1 activity would span a period of one month, in which the team would endeavour to establish first contact with all the stakeholders relevant to the VTEI initiative. These would include the DFID-CRHR team, members of the relevant GoKP ministries/departments of education and construction/works, and the TA team responsible for construction of bridges.
One of the major objectives of the start-up meetings is to address the concept of "attribution" and improve the understanding of the situation on the ground. The consultants will sit down with stakeholders, primarily the DFID-CRHR team, to identify and verify a set of 'attributes' that are likely to have design implications. These attributes could include identifying:
This phase will also see the team designing the data collection tools that would ensure collection of data in line with the investigation themes, and going through the available data for the two components of the project.
The team will also look into the available literature on the various reported variables to establish an investigative "baseline". The main purposes of any QA system should be to ensure that impact evaluations can be defended and justified, whilst at the same time encouraging good practice among evaluators. As highlighted in the ToRs, the quality and relevance of the available baseline data lack a comprehensive evaluation. Under these circumstances the following options, in line with the two major activities identified for Phase 1, can be considered:
- Identify core data points relevant to the links of the existing "Theory of Change" diagram and investigate the relevance and robustness of the available data (utilising the three Annual Reports, regular beneficiary/stakeholder feedback, the baselines conducted earlier at the inception of the programme and the on-going traffic counts). All qualitative and quantitative data will need to be evaluated as part of the initial "desk review" to identify gaps and areas for improvement.
- Ensure that the various data quality assurance norms are fulfilled. For that, Lincoln & Guba's naturalistic criteria could be a good starting point, as evidenced by the following matrix:
Preferred tools to be recommended for use include the OECD-DAC Quality Standards and the EuropeAid 'checklist'; both aim to 'improve the quality of development evaluation processes and products' and reflect a wider concern to improve practice by influencing what evaluators do.
- VERIFY rapidly 1) the quality of reported data for key indicators at selected sites; and 2) the ability of data-management systems to collect, manage and report quality data.
- IMPLEMENT corrective measures, with action plans for strengthening the data management and reporting system and improving data quality.
- MONITOR capacity improvements and the performance of the data management and reporting system to produce quality data.
The team of consultants intends to refine the line of questioning during the Phase 1 meetings with the identified stakeholders, where it is expected that more areas of investigation will emerge and a more thorough investigation of the newly identified areas of interest will be required.
5. This tool was developed with input from a number of people from various organizations. Those most directly involved in the development of the tool included teams from the World Health Organization, the Global Fund to Fight AIDS, Tuberculosis and Malaria, MEASURE Evaluation, the Office of the Global AIDS Coordinator, PEPFAR, USAID and UNAIDS. The tool directly benefited from feedback on the DQA for Auditing from a number of participants in workshops and meetings held in Nigeria, Ghana, Senegal, South Africa, Vietnam, Switzerland and the U.S.
Inputs

Theory of Change: Inputs

VTEI Programme (June 2010 to October 2014) - Numbers/Value (million £):
- DFID original contribution: 23
- 66 steel bridge kits for Malakand Division: 12.9
- 20 more bridges in the Malakand region: 3.5
- Government of Khyber Pakhtunkhwa (GoKP) civil works for the bridges: 3.5
- Engineering services TA (Halcrow and Sarhad Rural Support Programme (SRSP)): 2.1
- 40 semi-permanent schools: 1

Questions for inception meetings:
1. Are the inputs (£ value of the grant, conditions for the use of the grant, TA, implementation and supervision partner selection) relevant and sufficient to carry out the VTEI Programme activities?
2. Was the programme launched in a timely manner (i.e. June 2010) and was it needed at that time?
3. Is/was the proportion of support significant with respect to other rehabilitation and reconstruction programmes?
4. Do the programme inputs match expectations, i.e. activities, outputs, outcomes and impact?
5. Has the programme experienced any delays or inefficiencies in the allocation and distribution of resources/inputs?

Consideration during existing data collection: Identify the rationale for, and documented evidence of, responses given during inception meetings.

Process for the review of documents/data:
- Compare the allocation of resources with actual distributions.
- Compare the planned timelines for distribution of resources and inputs with actual times.
- Review aid support and Government budgetary allocations for the reconstruction and rehabilitation of education and transport infrastructure in Malakand Division.

Activities

(Same review columns as above; no row content provided.)

Outputs

Theory of Change:
- Outputs - Education: improved access and better educational facilities; employment during the construction phase.
- Outputs - Bridges: cost of transport reduced; increased vehicle traffic; increased pedestrian traffic.
- Outputs - Schools & Bridges: schools and bridges maintained and protected.
- Indicators and baseline values.

Questions for inception meetings:
1. Which indicators have been considered/used to assess the changes in these outputs?
2. Do you have baseline values for these output indicators? If yes, what is the source of the data, and are these sources verifiable?
3. Have you tracked progress on these indicators during the programme life? If yes, what does it show?
4. Do you think there could be any other indicator(s) (currently not part of the results framework/ToC) that reflect or evidence the achievements of the programme?

Consideration during existing data collection:
- Assess whether progress on outputs is properly reflected through the indicators.
- Assess whether the indicators are a good reflection of the outputs.
- Assess whether output indicators were properly tracked and monitored during programme implementation.
- Identify whether indicator tracking and monitoring guided the programme (course correction and managing programme priorities).

Process for the review of documents/data:
- Review the strength of the indicators used to measure progress on these outputs.
- Assess the quality and reliability of the data on output indicators.
- Identify whether there are other, more appropriate indicators to reflect progress on these outputs, and whether data is readily available on those.

Outcomes

Theory of Change:
- Immediate Outcomes - Education: more responsible and gainfully employed citizens of the future.
- Immediate Outcomes - Bridges: cheaper access to markets, schools and clinics; cheaper and better availability of food staples; increased profitability of local products; quicker access to markets, schools and clinics.
- Immediate Outcome - Overall: accelerated reconstruction and service delivery improvements in all sectors.
- Indicators and baseline values.

Questions for inception meetings:
1. How strongly, significantly and effectively have the outputs contributed to changes in these outcomes?
2. Which indicators have been considered/used by the programme to assess the changes in these outcomes?
3. Are baseline values recorded for these outcome indicators? If yes, what is the source of the data, and are these sources verifiable?
4. Has progress on these indicators been tracked during the programme life? If yes, what does it show?
5. Could there be any other indicator(s) (currently not part of the results framework/ToC) that reflect or evidence the achievements of the programme?

Consideration during existing data collection:
- Identify the relationship of outputs with the immediate outcomes.
- Assess whether progress on immediate outcomes is properly reflected through the indicators.
- Assess whether outcome indicators were properly tracked and monitored during programme implementation.
- Identify whether indicator tracking and monitoring guided the programme (course correction and managing programme priorities).

Process for the review of documents/data:
- Review the correlation between outputs and outcomes.
- Review the strength of the indicators used to measure progress on these immediate outcomes.
- Assess the quality and reliability of the data on outcome indicators.
- Assess whether the indicators are a good reflection of the immediate outcomes.
- Identify whether there are other, more appropriate indicators to reflect progress on these outcomes, and whether data is readily available on those.
Through the synthesis of existing research evidence and reviews of project documentation:
- Review of the existing literature on socio-economic development related to infrastructure projects (particularly in fragile states).
- Identifying and determining appropriate indicators to assess progress on socio-economic development.
- Reviewing the coverage and quality of existing baseline and trend data on the appropriate indicators of socio-economic development.
- Reviewing the evaluation questions in conjunction with the evaluation users and stakeholders.
- Identifying and reaching consensus on a detailed evaluation method that sets out a final set of questions.
- Designing data collection tools to meet the evaluation requirements, balanced against the availability and quality of the existing data.
- Reviewing the options for establishing a counterfactual, including comparison sites without schools or bridges as well as methods that could be used without any comparison groups.
- Identifying a mechanism to identify unexpected outcomes.
- Using the findings from the above to produce an inception report that sets out the feasibility of evaluating the programme. The inception report would demonstrate how the different forms of data will be integrated into a single coherent analysis. A clear cost and resources plan would be included, with a timetable for delivery.
A- Developing a qualitative module, possibly after conducting a small survey of the bridges' and schools' target areas, to select a "qualitative investigation sample" of beneficiaries (a random stratified sample of beneficiary sites can be taken).
a. Dividing beneficiaries into "target groups" from within the sample, wherever possible, to achieve disaggregation based on gender, ethnicity or any other entity groups, in order to identify the conflict-sensitive delivery of the programme.
B- Re-verifying "priority issues" and comparing them with the existing "Theory of Change" linkages to check for relevance and to identify knowledge gaps to be filled by the surveys.
D- Incorporating recalibrated "impact and process" indicators in thematic questionnaires for FGDs, and designing various ranking matrices for the perception mapping activities later in the field surveys.
Conducting a small preliminary survey would allow the team to identify the priority issues to be covered in detail by the qualitative methods that follow, and to target beneficiaries based on gender, ethnicity or other groups to ensure all are catered for.
It will also allow the team to address any knowledge gaps, as well as to identify categories of responses to be administered through a mix of perception ranking matrices and an observable-change approach.
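The stratified site sampling described above can be sketched as follows; the site register, strata and field names are hypothetical placeholders rather than actual VTEI records:

```python
# Sketch: drawing a random stratified sample of beneficiary sites for the
# preliminary survey. Site names, districts and strata are hypothetical.
import random

sites = [
    {"name": "Site-01", "type": "bridge", "district": "Swat"},
    {"name": "Site-02", "type": "bridge", "district": "Shangla"},
    {"name": "Site-03", "type": "school", "district": "Swat"},
    {"name": "Site-04", "type": "school", "district": "Buner"},
    {"name": "Site-05", "type": "school", "district": "Shangla"},
    {"name": "Site-06", "type": "bridge", "district": "Buner"},
]

def stratified_sample(sites, key, per_stratum, seed=42):
    """Sample up to `per_stratum` sites from each stratum defined by `key`."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = {}
    for site in sites:
        strata.setdefault(site[key], []).append(site)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# Stratify by intervention type so both components are represented
chosen = stratified_sample(sites, key="type", per_stratum=2)
print([s["name"] for s in chosen])
```

Stratifying by intervention type (or by district, gender-access constraints, etc.) guarantees each stratum is represented even when one dominates the register.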
It is proposed that this section of the impact assessment be undertaken using the mixed-methodology approach, with a heavy tilt towards the "participatory approach" and contributions from "quantitative methods" wherever necessary, so that the two are complementary. What is being suggested here is:
Quantitative data as a point of departure for qualitative research, i.e. a quantitative data set serves as a starting point for framing a study that is primarily qualitative, which is relevant in the case of the VTEI ToRs.
14.1 Making use of the "Participatory Impact Assessment" (PIA) contextual technique6, it will be possible to evaluate the sample of beneficiaries without having to form control groups to establish a counterfactual. This will be confirmed in the Phase 1 deliberations, but the techniques being proposed can deliver sound evaluations even in the absence of control groups.
The option of forming comparison groups from within the beneficiaries will still be investigated in detail. It depends on the type and quality of the data available, and will be strongly affected by the possibility of outreach to those communities given various challenges on the ground, such as the security situation, the availability of sufficient numbers to form a group, willingness to participate, relevance to the area of intervention, etc.
Note: Participatory evaluation is usually a module within an overall design rather than an overarching principle7.
14.2 Rationale - In common with qualitative research, participatory research tends to employ more contextual methods and elicit more qualitative and interpretive information, but brings an important additional commitment to respect local (emic) knowledge and to facilitate local ownership and control of data generation and analysis (Chambers, 1994, 1997). This aspect of ownership and control in participatory research is intended to provide space for local people to establish their own analytical framework and to be in a position to challenge 'development from above' (Mukherjee, 1995, 27). Participatory methods generate both qualitative and quantitative data. 'Participatory numbers' can be generated and used in context, but have also been taken to scale, most notably through participatory surveys or through the aggregation of group-based scoring and ranking activities. Participatory methods can be quick and efficient, producing data in a timely fashion for evidence-based analysis and action.
6. Participatory Impact Assessment (PIA) is an extension of Participatory Rural Appraisal (PRA) and involves the adaptation of participatory tools combined with more conventional statistical approaches specifically to measure the impact of humanitarian assistance and development projects on people's lives. (Participatory Impact Assessment - A Guide for Practitioners; Catley et al., 2010; Feinstein International Center, Tufts University.)
7. Stern et al.
14.3 A key requirement is to produce results from a representative sample, which can be generalised in order to reach conclusions for the population of interest. This implies working in a larger number of sites than is common for most studies that use participatory methods.
This raises the question of who will participate. In the VTEI case, those participating will certainly include beneficiaries but may also include country-based officials and decision makers. Different patterns of participation will have different implications; for example, the participation of decision makers may have implications for implementation efficiency and sustainability. In order for the measurement of qualitative impacts not to become too reductionist, it will be sequenced with qualitative analysis. In this way the evaluation, and subsequent policy learning, can be enriched by qualitative analytical studies.
Quantification of the PIA findings would involve developing and/or applying indicators or indexes that measure changes in qualitative impacts, including both perception-scoring data and observable changes in behaviour. These indicators will allow for the measurement and aggregation of non-material and often complex, multi-dimensional impacts.
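One way such an index might be constructed is sketched below; the dimensions, weights and scores are illustrative assumptions, not programme-defined values:

```python
# Sketch: a simple composite index aggregating multi-dimensional perception
# scores (1-5 scale) into a single 0-100 value. Dimensions and weights are
# illustrative assumptions, not programme-defined.

weights = {"access": 0.4, "cost": 0.3, "safety": 0.3}  # must sum to 1.0

def composite_index(scores, weights):
    """Rescale each 1-5 score to 0-1, weight, and express on a 0-100 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    total = sum(weights[d] * (scores[d] - 1) / 4 for d in weights)
    return round(total * 100, 1)

# One community's mean perception scores per dimension
scores = {"access": 4.2, "cost": 3.1, "safety": 3.8}
print(composite_index(scores, weights))
```

The weighting scheme itself would need to be agreed with communities and stakeholders, so that the index reflects 'their indicators' rather than only externally imposed ones.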
Communities have their own priorities for improving their lives, and their own ways of identifying impact indicators and measuring change. Often these priorities and indicators are different from those identified by external actors. Traditional M&E systems tend to over-emphasise 'our indicators', not 'their indicators'.
Firstly, a qualitative module can be developed that will add to an existing longitudinal survey instrument (the existing baseline surveys, the data from the three Annual Reviews and other baseline data) that was applied at the initial stage of the project, presumably to a relatively large sample of the targeted population and, later, to a comparator population.
If enough similarities are found between a newly selected comparison group and the earlier targeted beneficiary group, the formation of a comparison group can be investigated. This can be done for infrastructural interventions that are yet to take off.
This sample will be the basis of all outreach and investigation in the target areas, and will be formulated using the advice of all stakeholders, the availability of resources and ease of outreach, keeping in mind community nuances and sensitivities (e.g. gender exposure, etc.).
A random stratified sample of sites will be taken to generate and aggregate indicator data on qualitative impacts. This data can be collected using a mix of individual and group-based scoring, often described as a community score card (CSC). A community score card was designed and implemented in Jamaica, for example, as part of a community-based monitoring and evaluation of social policy impacts on police-youth relations in a cross-section of communities. The score card included indicators of empowerment, designed to measure the existence of choice, the exercise of choice and the impact of choice for young people in their interactions with the police. This tool is well suited to gauging perceptions and will therefore be helpful for finding out the targeted communities' perceptions of government service delivery in the aftermath of a disaster.
This community score card can be further enriched using additional investigative variables relevant to the "Theory of Change" map, as well as some taken from the UNAIDS and UNGASS indicator lists for socio-economic development.
This will enrich the data and analysis, owing to the presence of a qualitative module that can be quantified using various ranking and scoring methods to investigate perceptions.
The quantitative data thus generated will comprise perception scores of specific qualities of service provision in the project so far, usually scored on a 4- or 5-point scale. These can be complemented through "ranking matrices" involving investigative variables that can be triangulated with the survey data. The scores will then be aggregated from all the focus group discussions held, and can be compared across groups and over time. The key to a successful CSC session, in contrast with a survey module, is that the scores are not simply elicited as an end in themselves but feed qualitative discussion. The scoring will be used to prompt a discussion of three questions: (a) defining the problem/issue; (b) diagnosing the problem; and (c) identifying solutions. Follow-up action might involve service users taking action, or engaging with service providers to resolve some of the problems identified during the CSC session.
8. Holland et al., 2007
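The aggregation of CSC scores across focus groups and survey rounds could be sketched as follows; the group names, rounds, indicator and scores are illustrative, not VTEI data:

```python
# Sketch: aggregating community score card (CSC) results from several FGDs
# so indicator scores can be compared across groups and over time.
# Group names, rounds and scores are illustrative, not VTEI data.
from statistics import mean

# (group, round, indicator) -> list of participant scores on a 1-5 scale
fgd_scores = {
    ("women", "baseline", "access_to_school"): [2, 3, 2, 2],
    ("women", "endline",  "access_to_school"): [4, 4, 3, 4],
    ("men",   "baseline", "access_to_school"): [3, 2, 3, 3],
    ("men",   "endline",  "access_to_school"): [4, 5, 4, 4],
}

def aggregate(fgd_scores):
    """Mean score per (group, round, indicator) cell, rounded for reporting."""
    return {cell: round(mean(scores), 2) for cell, scores in fgd_scores.items()}

summary = aggregate(fgd_scores)
for (group, rnd, indicator), score in sorted(summary.items()):
    print(f"{indicator} | {group:5s} | {rnd:8s} | mean = {score}")
```

In practice the scores would remain a prompt for discussion within each session; the aggregation is only the reporting layer laid over that qualitative exchange.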
In other words, scoring is intrinsically useful to the qualitative exercise because the act of being required to score something subjective sharpens the qualitative analysis that follows. Participants have to justify their (relatively precise) scores. This process reveals variance in opinion, and the facilitator can call upon those who have scored differently from the majority and ask them why.
One-on-one interviews and key informant interviews with the identified stakeholders will also be utilised, as they are an important part of the research. Each sector team (bridges/schools) would undertake a thorough review of the available documentation prior to visiting the area of interest. The aim is to engage with a full range of stakeholders while minimising the transaction costs of their involvement.
Experiments and statistical studies are still favoured as complementary to the contextual methods being proposed, but they require sufficient numbers (beneficiaries, households, etc.) for statistical analysis. The possibility of their inclusion, and their share of the overall evaluation methodology, will largely depend on the data furnished after the Phase 1 feedback.
Similarly, a critique of the "Theory of Change" at this point would be premature, owing to the lack of availability of earlier-collected data identifying gaps and areas of improvement that could be explored when designing this new impact assessment exercise. Many permutations exist for the selection and hybridisation of techniques available to conduct a QCA (qualitative comparative analysis), and these are subject to debate regarding their relevance for the job in hand.
15.0 Stakeholder-wise implementation methodology and proposed tools of evaluation

Phase II: Implementation phase based on the proposed mixed-method methodologies, subject to the findings of Phase 1.

Stakeholders: Students, teachers and parents, and overall community members
Assessment methodology:
(1) Review the option of setting up a counterfactual, or else utilise primary data from a "memory recall" methodology, asking beneficiary children and teachers to deliberate on various impact variables based on a before/after scenario.
(2) Conducting key informant interviews of representatives of all beneficiary groups.
(3) Making use of the "community scorecard" method for quantification of before/after findings using ranking and scoring methods.
(4) Conducting focus group discussions (FGDs) with beneficiary parents on various enriched variables of investigation.
(5) Using "perception scoring" to quantify observable changes during exercises (1-4).
(6) Based on convenience of outreach, a closed-ended questionnaire based on the findings of steps (1-5) can be administered to quantify findings using SPSS.
(7) Triangulation of steps (1-6) to converge findings.
Type of methodology:
A- (Qualitative) Participatory assessments + key informant interviews + FGDs.
B- (Quantitative) Closed-ended questionnaire to be administered in identified survey areas + community score cards, and developing perception-ranking matrices for perception mapping.
C- Participatory/group-based data to be triangulated with survey data.
Intended benefits:
1) Increased enrolment/retention in schools.
2) Reduction in the costs of getting access to schools.
3) Improved access to schools/education services for children.
4) Reduction in the time taken to get to schools.
5) Increased savings from children's education for other household expenses.

Stakeholders: (Government) Construction & Works - C&W
Assessment methodology: Key informant interviews with C&W department staff to assess their learning and experience from VTEI programme school construction activities.
Type of methodology: (Qualitative) Participatory interviews and the possibility of conducting FGDs, coupled with perception surveys.
Intended benefits:
1) Construction support.
2) Introduction of a model utilising community-based construction in education.
3) Improved accountability system.

Stakeholders: Overall general population of the target area (bridges and schools)
Assessment methodology:
1) Conduct a perception survey of communities that have experienced reconstruction and the provision of education services.
2) Review, through "case studies" of sample school construction activities, the number of labour days employed during school infrastructure development activities.
3) Stability index measurement - to identify the communities' perceptions of whether economic revival improved because of school and bridge construction (new shops opened, new commercial brands introduced, private sector investment in the education sector (private schools and learning academies), etc.).
Type of methodology: (Qualitative) Case studies of sample schools; focused interviews at the Chamber of Commerce and Industries. (Quantitative) Perception surveys, with quantification through perception measurement indices and changes in behaviour; triangulation of the two sets of data in the end.
Intended benefits:
1) Improved employment opportunities during the construction phase.
2) Cost of transport reduced.
3) Increased vehicle traffic.
4) Increased pedestrian traffic.
5) Cheaper and better availability of food staples.
6) Improved access to markets.
7) Increased profitability of local produce.
8) Increased evidence of the incidence of poverty reduction.
9) Accelerated speed of reconstruction and service delivery in all sectors.
10) Increased level of overall economic activity.
The methodology proposed above is a mixed-method approach, primarily making use of "contextual methods" and relying on the participatory approach wherever applicable to qualitatively assess the impacts of the VTEI programme. Quantification of the various qualitative methods adopted will be attempted by utilising the community-based scorecard approach and designing ranking matrices containing the indicators under investigation.
Ample use of point-contact methodologies, such as focal-point interviews and focus group discussions, will be made to substantiate the findings from the perception matrices, as well as data from any longitudinal "quantitative survey" if the team finds it feasible to conduct one in the targeted areas.
To quantify data on gender, and on the individual entities or ethnicity-based groups that the project benefited, target groups will be identified from within the randomly stratified sample of target areas established earlier, so as to capture the various dimensions affecting those entities.
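A minimal sketch of such a disaggregation check is shown below; the respondent records and the minimum-quota threshold are hypothetical:

```python
# Sketch: checking that target groups drawn from the stratified sample give
# adequate disaggregation by gender and ethnicity. Respondent records and the
# minimum-quota threshold are hypothetical.
from collections import Counter

respondents = [
    {"id": 1, "gender": "female", "ethnicity": "A"},
    {"id": 2, "gender": "male",   "ethnicity": "B"},
    {"id": 3, "gender": "female", "ethnicity": "B"},
    {"id": 4, "gender": "male",   "ethnicity": "A"},
    {"id": 5, "gender": "female", "ethnicity": "A"},
]

def coverage(respondents, dimension, minimum=2):
    """Count respondents per category and flag under-represented groups."""
    counts = Counter(r[dimension] for r in respondents)
    return {cat: (n, "ok" if n >= minimum else "boost sample") for cat, n in counts.items()}

print(coverage(respondents, "gender"))
print(coverage(respondents, "ethnicity"))
```

Groups flagged as under-represented would prompt a top-up draw from the relevant stratum before field work proceeds.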
To sum up, all findings will be triangulated to identify any "unexpected outcomes" that may arise, especially with the participation of identified community members as part of the research team.