by Dougie Lonie



3 weeks, 10 events, and 600 people later... so what's the outcome?

After talking to a lot of people about how to demonstrate impact, I'm left wondering just how important it is to quantify our achievements.

Sorry to be so outputs-focused in the title of this blog, but, you know, big numbers get people's attention.

Or do they? 

I've just spent 15 days travelling around the country talking to people about how Youth Music is becoming more outcomes-focused, rather than outputs-focused. It's still difficult to get away from those big numbers, though.

The most common question that came up in the six training sessions and four launch events was 'how can we quantify what we are achieving?' This seems to stem from the impression that no-one will be interested in the impact of a project unless it can be shown on a graph. That impression doesn't help anyone trying to communicate real impact, because it usually leads to people reporting only the number of children, the number of sessions, the number of staff, the number of pens used, the number of forms filled out, the number of outcomes training sessions attended, and so on ad infinitum.

The biggest challenge we all face is explaining the way we work, and the impact it has, in a way that is engaging but doesn't necessarily fit on a graph. Statistics are useful, but they tell us only part of the story, not everything we need to know about the way we work, and certainly not 'proof' of the effect we're having. That's especially true since most of us work with smallish groups of children and young people, where any attempt at 'statistical significance' or 'generalisation' to wider populations is futile.

Yes, numerical data from questionnaires and surveys can provide useful indicators of the intended impact of our projects. If a young person goes from scoring themselves a 1 for musical ability to a 10 after a year of participation, then we have an impression of positive change. If ten young people report the same thing, this might also indicate positive change for the wider group, but it isn't absolute proof of effectiveness and shouldn't be treated as such.
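To make that concrete, here's a minimal sketch (in Python, with entirely invented scores) of how a small set of pre/post self-ratings might be summarised. The result is exactly the kind of useful indicator described above: worth reporting, but nothing like proof.

```python
# A minimal sketch using invented scores: summarising pre/post
# self-ratings (1-10) for a hypothetical group of ten young people.
pre_scores = [1, 2, 2, 3, 1, 4, 2, 3, 2, 1]    # self-rated musical ability at the start
post_scores = [7, 8, 6, 9, 10, 8, 7, 9, 6, 8]  # self-rated ability after a year

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)

print(f"Mean change in self-rated ability: {mean_change:+.1f} points (n={len(changes)})")
# With only ten participants this is an indicator of positive change
# for the group - not statistical proof of effectiveness.
```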

I've really enjoyed the opportunity to discuss the outcomes approach with colleagues over these past few weeks as we all work on developing a range of practical ways to measure and demonstrate impact (some quantitative, some qualitative).

What we really need are sound evaluation principles applied throughout project planning, delivery and evaluation.  Intended outcomes should be set based on what changes we know need to happen, not on what is easy to measure or quantify.

We do need to collect output statistics and they are useful indicators of reach and volume, but not everything can or should be quantified, and that's ok.

We can only take on the statistical fetishists with a convincing alternative, so please have a look at the outcomes and evaluation guidance elsewhere on the site and join in the discussion. Ultimately, numbers have their place, but there are many other ways we can measure the changes that take place because of our work, so long as measurement gets a little thought and planning.