There are some basic steps that most evaluation processes will go through, as follows:
Scoping the evaluation
- purpose / objectives of evaluation;
- limits / boundaries (e.g. timescale, budget, subjects to be covered or not);
- approach (audit or learning approach);
- level of engagement in the evaluation (e.g. getting data from participants, testing results, setting up an advisory group, involvement in deciding key themes for the evaluation, control over findings, e.g. what is said and how it is reported);
- confidentiality of results (e.g. is the process to be open to full public scrutiny?);
- main themes and questions to be covered by the evaluation (what will it look at: see ‘what should it cover’ section above).
Collecting data
Getting baseline, ongoing and/or completion data on the engagement process, through methods such as:
- desk research (e.g. reviewing all documentation produced by the programme);
- observation (e.g. attendance at workshops; listening in to online debates);
- interviews (e.g. with participants, consultant team, commissioners of the work);
- questionnaires to participants (e.g. by telephone or online);
- group working (e.g. group reflections on progress);
- online (e.g. feedback on progress through various online discussion groups).
You will need to work out when best to collect the data (a simple comparison of baseline and end-of-process data is sketched after this list), for example:
- at the beginning of the process to benchmark
- at the end of each public event (if more than one)
- at the end of the whole process
- later … depending on the long-term objectives of the exercise.
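For example, here is a minimal Python sketch of comparing benchmark data from the beginning of the process with data collected at the end. The file and column names are hypothetical, assuming questionnaire responses have been saved as CSV files with a 1-5 rating column:

import csv

def mean_score(path, column="rating"):
    # Read 1-5 ratings from a (hypothetical) CSV file of questionnaire responses.
    with open(path, newline="") as f:
        scores = [int(row[column]) for row in csv.DictReader(f)]
    return sum(scores) / len(scores)

# Compare the benchmark taken at the start with the end-of-process results.
baseline = mean_score("baseline_responses.csv")
final = mean_score("end_of_process_responses.csv")
print(f"Mean rating moved from {baseline:.1f} to {final:.1f} over the process.")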
You will also need to decide who you want to collect data from. In a public engagement exercise you will generally want to get data from the following:
- the public participants
- the policy-makers who are being influenced by the process
- whoever commissioned the process
- whoever designed and implemented the process (could be different)
- facilitators.
And, finally, consider what data you want (a sketch of how such data might be collated follows this list), which is likely to include:
- quantitative data, i.e. actual statistics, or data that can be converted to statistics
- qualitative data on specific questions that can be analysed according to views on specific issues
- quotes
- specific examples to back up general points
- personal / organisational stories
- photos, charts, etc.
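As a rough illustration of how quantitative and qualitative data sit together, here is a minimal Python sketch; the response records and themes are invented for the example:

# Each record mixes a quantitative rating with a qualitative free-text comment,
# tagged with a theme so quotes can later back up general points.
responses = [
    {"rating": 4, "comment": "The workshops felt genuinely open.", "theme": "process"},
    {"rating": 2, "comment": "Hard to see how our views changed the plan.", "theme": "influence"},
    {"rating": 5, "comment": "Facilitators kept everyone involved.", "theme": "process"},
]

# Quantitative: convert the ratings into simple statistics.
ratings = [r["rating"] for r in responses]
print(f"Mean rating: {sum(ratings) / len(ratings):.1f} (n={len(ratings)})")

# Qualitative: group comments by theme, ready for analysis and for picking quotes.
by_theme = {}
for r in responses:
    by_theme.setdefault(r["theme"], []).append(r["comment"])
for theme, quotes in by_theme.items():
    print(f'{theme}: {len(quotes)} comment(s), e.g. "{quotes[0]}"')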
Analysing data
The data collected can be assessed against various analytical frameworks, including testing the data:
- against the stated aims and objectives of the engagement process
- against agreed qualitative and quantitative indicators (see the sketch after this list)
- by surfacing, clarifying and articulating ‘assumptions’ about aims and objectives among participants and commissioners (from baseline feedback, interviews etc), and testing achievements against these
- against agreed principles of good practice in participatory working (e.g. those promoted by The Environment Council on stakeholder dialogue, or bodies such as the International Association of Public Participation and Involve; see Annex 4).
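As an illustration of testing against agreed quantitative indicators, here is a minimal Python sketch; the indicators, targets and achieved values are invented for the example:

# Agreed indicators as (description, target, achieved); all values illustrative.
indicators = [
    ("Participants rating the process as fair (%)", 75, 82),
    ("Invited stakeholders who responded (%)", 60, 48),
    ("Public events held", 4, 4),
]

for description, target, achieved in indicators:
    status = "met" if achieved >= target else "not met"
    print(f"{description}: target {target}, achieved {achieved} -> {status}")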
Testing findings
The initial findings from the data collection and analysis can be tested with various stakeholders in the evaluation process through, for example, setting up an advisory group (with experts, participants etc.), holding workshops with participants, or running an electronic consultation on draft reports.
Report writing
This is a crucial step. It usually starts with producing a draft report for testing with those commissioning the evaluation and with stakeholders, followed by a final report for publication. Full evaluation reports can be very dense and packed with statistics, so it is often necessary to produce a summary report for wider circulation, including to participants who will not necessarily want to read the full report. It is often useful to make the summary report relatively accessible in style, appealing to a general audience, with illustrations, quotes, etc.