Ade from GDS has been blogging about her work on online policy making, initially setting out the reasons for consultation, and following that through with a request for further examples in the categories that she’s developed to assist thinking around how and when Government engages. There’s just too much to say in the space below a blog, so I decided I’d post my response here instead.
So, to respond directly to Ade, I am a big fan of the case study approach, and seeing that formalised would be a real step forward in embedding digital engagement. One of the things we learned during our social media week activities was that – although I can assure you government isn’t quite as siloed as you think it is – people don’t always get the time and space to see outside their immediate and associated policy areas. So if they haven’t been ‘doing digital’ they tend to assume that the department is lagging behind. Being able to show them examples, from consultations to ongoing engagement exercises, was effective at changing that perception. So having a bank of case studies – and giving those a prominent home – will go some way towards redressing that balance. We all need to get better at doing that in a systematic way.
I like the way that she’s tried to categorise the types of case study, moving beyond traditional consultation to include projects for:
- Generating ideas
- Testing out new ideas
- Creating/designing a document/service or delivering a project in collaboration with relevant stakeholders
- Drawing on dispersed knowledge to inform policymaking
- Getting detailed and focused feedback within a tightly defined framework
- Addressing misconceptions and clarifying objectives through discussion and engagement
She gives some great case study examples from some well-known (and not so well-known) exercises in the categories where she has been able to identify them – I’ll talk a little bit more about what else I think she could have included below, but I’d add another couple of examples to those she’s used:
- In category 3, delivering a project in collaboration with stakeholders: that was the intention behind the approach taken during development of the Science and Society Expert Group action plans – with varying degrees of success and participation depending on the groups themselves. That site has actually been through a number of iterations over a four-year period – from its use as an initial consultation site (with online engagement driven by the digital team within DIUS), to collaboration, and back to testing out new ideas – each with varying levels of engagement from the relevant community. Again, I’ll confess an interest in this site, as I was in the science and society team for the first two stages of the site’s development.
- In category 4, drawing on dispersed knowledge, I’d include Focus on Enforcement, a younger cousin of the Red Tape Challenge. As a project, it’s the online face of a series of reviews into how regulations are enforced by local and national authority regulators, enabling people to share their experiences with those regulators. It also veers into category 1, generating ideas, through its use of IdeaScale, which enables interested parties to suggest (and vote on) ideas that future reviews could examine. Like the Red Tape Challenge, the subject focus changes regularly, which brings its own challenges for building relationships with online communities. We’ve been less successful at driving traffic to the IdeaScale suggestion site – perhaps because it requires registration to contribute, or simply because it hasn’t been as visible as it should be.
So what should case studies include?
People are probably bored of me referencing the Sciencewise programme – the non-digital public dialogue programme I used to support in my previous job. Its approach to evaluation is a valuable one, however: case studies follow a template that measures the immediate impact of dialogue and engagement interventions, and they are routinely updated in recognition that impact can take place long after the initial engagement exercise has completed. For me it also gives clues about how we should go beyond metrics to look at impacts on the people involved, the policy makers and the actual policy.
And I think that gives a clue to how some of the case studies featured in Ade’s blog could be further developed (with apologies to those who have taken the time to get them this far). I’ve picked one here to illustrate what I mean.
- The Red Tape Challenge case study: this feels quite hurried and defensive, and tends to dwell on the negative aspects of running the site, without (as we do with Focus on Enforcement) acknowledging the challenges of regularly changing the focus and audience targeted. The study doesn’t reflect the involvement of the various departments over time, nor does it consider how the comments received have been used to develop and shape approaches to policy making. Off the top of my head, I can think of at least two consultations and subsequent changes to policy that have happened as a result. I think it needs to give the time period over which the 29,000 responses were received, and acknowledge the value of having such a large evidence base. I’d also like to hear more about the approaches used to reach the relevant audience for each challenge, and any online partnerships or techniques used to encourage discussion.
What would I add?
So, for me (and I’m half wearing my ex-policy hat here too), as well as the really useful headings that Ade has used in her case examples, I’d like to see evidence included on:
- Impact on respondents
- Was this the first time that respondents had been involved in a government consultation?
- How many new people were brought into the debate as a result of this new approach?
- If the exercise hasn’t generated as many comments as hoped, the reasons for that. That’s particularly important in a current exercise where we’re effectively asking individuals and businesses to comment on people who they’re likely to meet again, and whose relationship with the state puts them in a relative position of power.
- How did individuals feel about participating, was it easy for them to get involved, and did they feel their voices and opinions were heard?
- The policy team
- Was this the first time the policy team, or any of its members, had engaged in this way? This could serve as a proxy for measuring how far digital engagement has been embedded.
- Would they be likely to take the digital approach again?
- How likely are those policy makers to engage digitally on an ongoing basis?
- The policy
- Was the eventual policy shaped as a result of online interventions and suggestions?
- What are the ongoing impacts of the engagement?
- Joint learning
- What the digital and/or policy team would avoid if starting the process from scratch
- Whether the approach has been modified or used again in subsequent projects
- The Process
- Initial mapping and analysis of how best to reach the desired audiences, and the eventual success in achieving that aim
- Online partnerships developed to help promote the exercise, and to facilitate discussion and active engagement around it. For example, in the BIS HE White Paper consultation, we partnered with The Student Room, while policy teams were active in engaging on external online spaces.
Some of these are implied or included in Ade’s approach already, of course, and I’m sure that many won’t agree with some of my additions, or will think that some of them don’t necessarily need to be shared beyond the digital team.
And some of these are big asks, which would mean building in extra levels of monitoring and evaluation at the beginning of any engagement exercise: including these types of qualitative measures takes the focus off the digital element, focusing instead on the engagement itself. More importantly, though, it speaks some of the language of policy makers, selling the benefits of the exercise in their own terms. And including a commitment to evaluation right from the beginning of any project could – as with Sciencewise – form part of an informal contract between digital and policy that emphasises the former’s importance, and begins to really embed digital thinking. It’s easy to say this in a blog, but it’s proved less easy to do in practice.