Outcome Based Government

By Toby Eccles and Sarah Doyle

A few weeks back we were at a retreat for Social Impact Bond developers on an island on the Muskoka Lakes in Canada. It was a sensational venue (thank you to the Breuninger and BMW Foundations, and the MaRS Centre for Impact Investing) and a really interesting few days. One of the topics that a small group of us spent some time thinking about was: what does Outcome Based Government look like? Social Impact Bonds are a really interesting model for a range of opportunities, but they are still a specialist subject that will only ever affect a modest amount of overall funding. There is, however, a need for a much greater proportion of government spending to use some of the elements that SIBs encourage. In particular:

  • Deciding what you are trying to achieve and how you are going to measure it before you start;
  • Benefiting from the combination of outcomes data and flexible contracting to use rapid iteration and adaptation models to test, learn and improve services on a continual basis;
  • Transparency on what has been achieved;
  • Explicitly analysing interactions between the outcomes you are trying to achieve and other parts of government expenditure, and measuring their impact on each other.

These would have some fairly profound implications. Examples might be:

  • The expectation that, wherever feasible, any policy idea would be treated as a hypothesis or series of hypotheses to be tested in an experimental way, informing future decisions on the potential to expand funding or commit to a wider roll out. This would potentially allow more policy ideas to be tested, with only a selection implemented on a wider scale;
  • Government would, over time, learn the actual cost at which its own services and external providers produce particular outcomes, making comparison easier;
  • There would be a much clearer understanding of what works and whether a given initiative had been successful;
  • Budgets would be set to include both funds and measurable outcomes (tied to specific funding streams), at a sufficient level of granularity to demonstrate success or failure. This could happen at multiple levels of government, both for departments and within departments.
  • Service providers would benefit from longer-term contracts, increased flexibility to adapt over the course of the contract with a view to delivering on outcomes, and a reduced reporting burden regarding day-to-day operations and interim outputs.
  • New partnerships would be incentivized within government, cutting across traditional silos, and outside of government, where a consortium of service providers (generally coordinated by a prime contractor) may have a better chance of delivering on target outcomes.
  • For this to work there would need to be, at the least, an independent entity defining, assessing, or auditing the outcomes that government uses, to ensure rigour and accountability and to avoid politicisation.

This is in part about better measurement, but it is also about understanding that we live in a complex world, that we can’t produce predictable outcomes simply by better planning, and that instead we need to create feedback loops, gather information, and adapt accordingly. In short, we need social services to have a rigorous model of learning that generates knowledge that can be built upon and improved. The path to getting there is not straightforward. It strikes us that there are at least four products that, alongside the SIB, can help move government along this path:

  • Support bringing outcomes into the budgeting and financial planning and management processes;
  • Help bringing outcome elements to contract renewals in such a way as to drive improved services;
  • Procurement and contracting models that build in feedback loops and an expectation that the contract will adapt over time, rather than stay the same;
  • A better model for outcome work at scale than the present large-scale national or state-wide procurement processes we are seeing with the likes of the Work Programme or Transforming Justice. This would involve developing a model that allowed feedback and learning, starting experimentally in two or three areas. For example, one could create a framework of sought-after outcomes and the maximum values that can be paid for them, a referral methodology, and an initial community of providers. Thereafter, providers could be added, say annually, and the outcome values adjusted according to what outcomes are generated for what value. There would be transparency in terms of what providers are doing and the outcomes they are achieving. A rough sketch of what such a framework might look like follows this list. Toby will be writing more on this soon.
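
Below is a minimal, hypothetical sketch of what such an outcomes framework might look like as a simple data structure: a set of sought-after outcomes with maximum payable values, a referral rule, and a provider community revised at an annual review. All names and figures are invented for illustration.

```python
# A minimal, hypothetical sketch of an outcomes framework for commissioning at
# scale: sought-after outcomes with maximum payable values, a referral rule,
# and a provider community revised at an annual review. All names and figures
# below are invented, not drawn from any real programme.

from dataclasses import dataclass


@dataclass
class Outcome:
    name: str            # e.g. "sustained employment at 6 months"
    max_payment: float   # maximum value payable per outcome achieved


@dataclass
class OutcomesFramework:
    outcomes: list       # list of Outcome
    providers: list      # the approved provider community
    referral_rule: str   # plain-language description of who gets referred

    def annual_review(self, new_providers, revised_payments):
        """Add providers and adjust outcome values according to what outcomes
        were generated for what value over the past year."""
        self.providers.extend(new_providers)
        for outcome in self.outcomes:
            if outcome.name in revised_payments:
                outcome.max_payment = revised_payments[outcome.name]


framework = OutcomesFramework(
    outcomes=[Outcome("sustained employment at 6 months", 3000.0),
              Outcome("stable accommodation at 12 months", 2000.0)],
    providers=["Provider A", "Provider B"],
    referral_rule="adults leaving custody after sentences of under 12 months",
)
framework.annual_review(new_providers=["Provider C"],
                        revised_payments={"stable accommodation at 12 months": 1800.0})
```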

Would this work? What else is needed? All thoughts welcome!

PS This is intended as a starter for discussion. There are plenty of areas of government where an outcomes approach may be inappropriate. There are also plenty of poor ways of introducing outcomes that simply become target cultures with strange perverse incentives, or the creation of meaningless numbers outside of government control. Previous attempts have tended to create top down targets as a way of managing from the centre, rather than as a way of creating feedback loops from the outside. We hope to explore some of these challenges and issues in future blogs.

Thanks to Peter Barth @ Third Sector Capital, and Caitlin Reimers @ Social Finance US for a great conversation…

 

Peterborough SIB – a success or a failure?

If you are a Social Impact Bond (SIB) aficionado, you will already know by now that the Peterborough SIB will no longer be a SIB for its third and final cohort. It will most likely be turned into a fee-for-service model due to the arrival of Transforming Rehabilitation (TR).

For more information on this change, have a look at: http://socialfinanceuk.wordpress.com/2014/04/24/social-finance-statement-peterborough-social-impact-bond/. In that blog post you can also find links to the RAND report on the inner workings of the Peterborough SIB to date, and to the latest reoffending data from the project, which shows a decline of 11% against an increase nationally of 10%.

Should this be a moment of great hand-wringing? The death of SIBs? The proof they’re not needed? Certainly a few tweets today might suggest so. I beg to differ.

To understand this a little better, it is worth going back to why we developed the SIB model and set up the Peterborough pilot in the first place.

  • Enable innovation: It was felt that government struggled to try out new models or areas of public service delivery for fear of failure and the perception that if they spent money on something that didn’t work they would be accused of wasting it.
  • Enable flexibility and focus on outcomes: Organisations were frustrated that they were being procured to provide a series of defined inputs and processes, and then held to account to deliver exactly and only that. This produced two problems: firstly, those providing a more holistic service were often more expensive than organisations that bid simply to deliver the limited brief they were asked for; and secondly, programmes provided little room for learning. Success was to deliver what was pre-agreed, regardless of what was discovered during delivery.
  • Bring rigour to prevention: Preventative work was commissioned, but often only for short periods of time, or unsustainably. Preventative services were perceived as a good idea, but usually weren’t being measured effectively. So they were at risk at the low end of the budget cycle, when money was tight, as they hadn’t built the evidence of their effectiveness.
  • Better alignment: Foundations often had quite a negative view of government. They fund things in the belief that, if successful, government will take them on, only to find that too often this is not the case and they are left with a dependent organisation on their hands. The SIB was designed to give government and foundations or social investors each a clear role in the structure, allowing them to work together.
  • Investment in social change: We felt that creating positive social change at scale needed more than grant money. We thought that creating a structure that enabled investment in positive social change could, if successful, create an engine for further change and improvement to society.

When first considering the SIB model we looked at a range of areas, including reducing reoffending. When analysing reoffending it quickly became apparent that the lack of support for short sentence offenders, the lack of any attempt to move people onto a different path, was simply wrong. So we gained a new objective: to demonstrate that we, as a country, should be working with short sentence offenders, rather than simply watching them go round and round the system, doing more and more damage to themselves and others.

So how is Peterborough doing against those objectives?

  • Enable innovation: Success. The model was implemented when, without the SIB structure, the project would not have been put in place.
  • Enable flexibility and focus on outcomes: Success. As can be read in the RAND report, there are numerous citations from stakeholders that the model has enabled, indeed required, the service to adapt to the needs of service users and improve over time.
  • Bring rigour to prevention: Success. What other prevention pilot do we know of that has had detailed figures published while it was ongoing? Admittedly, actual results according to the payment metric aren’t out yet, but this level of rigour simply isn’t normally seen.
  • Better alignment: Success. Investors in the pilot were keen, and investors in further SIBs have been keener to fund programmes than they would have been had these been traditional grant-funded projects with no connection to government.
  • Investment in social change: Still building… We need more SIBs before we can claim that we have created a new investment community, though Bank of America Merrill Lynch distributing Social Finance’s New York State SIB to their wealthy clients is a significant step in this direction.

And finally, most importantly, have we made a difference to short sentence offenders? YES!!! There is a new statutory obligation to work with short sentence offenders across the country. Whatever the merits or otherwise of the wider TR agenda, this is a very significant change on which many have been campaigning for a long time. Obviously it wasn’t just the Peterborough SIB that caused this change, but it clearly played a significant role.

So what is the problem? Clearly a nationally implemented programme is going to have an impact on any small scale pilots in the same area. It is hard to see what else might have happened. The one beef left is that the learning from the Peterborough SIB has not made the TR programme better. For example, it has not made the case for greater investment in rehabilitation, with the savings to come from the prisons budget not the probation budget (see http://tobyecc.wordpress.com/2013/05/10/a-step-in-the-right-direction-but-not-enough/). But Peterborough did not create TR. TR is about wider privatisation and cost reduction, as well as rehabilitation.

Peterborough was not designed to be a test case for a national payment-by-results programme, but to enable innovation, to demonstrate the value of flexibility and focusing on outcomes, to bring greater rigour, and most importantly to shine a light on the woeful situation this country has with short sentence offenders. Against these objectives it has been and remains an iconic success, and a cause for celebration.

The role of procurement in (not) reshaping public services

Below is a paper I used as an intro to a working group looking at issues around procurement and its role in holding back service improvement. We’ll be writing up some conclusions or ideas next, so all thoughts gratefully received.

Put yourself in the place of a Director of Adults’ or Children’s Services in a Local Authority in the UK. In both areas you have rising demand for services. In adults’ services, an ageing population and more disabled children surviving to adulthood are long-term trends adding to the numbers. In children’s services, the Southwark Judgement and the recent increase in the fostering age to 21 are creating upward pressures on demand without significant funding to support them. Each awful childcare case we read about, whether Baby Peter or Victoria Climbie, puts further pressure on the service, as social workers seek placements for more children who are cause for concern.

Into this context you have demands for cash savings of 20% or more over the next few years. So what can you do?

Traditional belt tightening

You can tighten access to services and ensure that significant cost decisions, to take a child into care for example, are not taken by individual social workers but go through resource panels that look at the case in comparison with others.

You can cut any non-essential or non-statutory services, those that you don’t have a legal obligation to provide. These are often the “softer” services, those that may prevent cases from occurring in the future. Their effectiveness is often uncertain, in part because the cases they have supported are not effectively monitored for long periods afterwards so the data on what happens next is not available. So the services are vulnerable.

Both of these have already been done in many places.

Wider reshaping

Or you could try something more radical. You could invest more, not less, in prevention or “demand management”. By providing more support for families earlier, you could keep more of them together, providing children with a home to go to in the long term. Or you could develop new models of care, such as the shared lives model in adult services, or supported living in the community rather than institutional care. These types of changes have the potential to create a more sustainable, lower cost model with better outcomes for everybody.

So how would you go about it? As an example here are some of the challenges in putting in place a preventative programme in children’s services:

  • Finance departments are wary of “invest to save” arguments. They know they will see the “invest”, but will they receive the “save”? They will need some convincing.
  • How would you target the families you might support? How many families are in trouble, in comparison to how many end up in such trouble that children need to come into care? If you don’t get your referral criteria right, you’ll be supporting a bunch of families who may benefit, but who wouldn’t have cost you money down the line. You’d just be spending money (a rough illustration of this targeting arithmetic follows this list).
  • What support programmes or interventions would you use? Which ones work? What evidence do you have that they do?
  • Who would provide the support? Should you build an in house team, or find a charity, social or commercial enterprise to fill the role?
  • If you do free up placements, will worried social workers start referring other children, knowing there are now other places available?
  • How would you know if you were being successful? Can you monitor those you have worked with for long enough to know what happened? Do you need to not work with some, to see what happened to them, determine what might have been, and therefore whether you are working with the right families or not?
  • Having established what services would make a difference, you find that they aren’t available locally. How do you enable the internal or external investment required to set them up?
  • How do you set up your new system flexibly enough that you can learn and adapt as you go, shifting resources to where the work seems most effective?
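
As a rough way of putting numbers on the targeting question above: the value of a preventative programme depends heavily on what share of the families supported would actually have generated costs later. The figures in the sketch below are entirely invented.

```python
# A back-of-the-envelope sketch of the targeting problem above. If referral
# criteria are loose, much of the spend goes to families who would never have
# generated care costs anyway. All figures are invented.

def net_saving(families_supported, cost_per_family, share_truly_at_risk,
               success_rate, avoided_cost_per_case):
    programme_cost = families_supported * cost_per_family
    cases_prevented = families_supported * share_truly_at_risk * success_rate
    return cases_prevented * avoided_cost_per_case - programme_cost

# Tight referral criteria: half of the supported families were genuinely
# heading for a care placement.
print(net_saving(100, 5_000, 0.50, 0.40, 50_000))   # 1,000,000 - 500,000 = 500,000

# Loose criteria: only one in ten would ever have cost money down the line.
print(net_saving(100, 5_000, 0.10, 0.40, 50_000))   # 200,000 - 500,000 = -300,000
```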

Most of the issues outlined above are soluble. Effective analysis of the population, intervention evidence and available market of providers will provide a strong starting point for version one of the change programme. Monitoring systems can be set up to provide the feedback loops needed, and outcome oriented contracts can enable the flexibility to end up with a system that can adapt and learn.

One of the trickiest issues left is procurement. Building a new system as outlined above requires partnership between government, the private sector and civil society, and the use of feedback, adaptation and learning in order to get to the right answer over time. Procurement models, by contrast, start with a set of implicit assumptions that reliably get in the way of developing the right answer. Here are some of them:

  • We can, with enough thought, plan out a complete answer at the beginning and then procure it.
  • The commissioner knows enough about what it needs to specify it in the procurement process.
  • We can effectively split the design phase and the implementation phase.
  • There is an effective market of provision available for whatever we want to procure.
  • Service providers will be willing to contribute all their ideas, and invest considerably in the process, before we run an exercise where we may have to exclude them if they have any perceived advantage from the investment they put into the design phase.
  • Contracts should specify exactly what is to be done and for how much, and can’t be adapted based on learning thereafter, particularly if that adaptation might mean that, in retrospect, another provider should have won the competition for the service.

In other words procurement is stuck in a world of linear strategic planning, while services exist in a complex environment with a variety of interdependencies and unexpected shocks. Redesign needs reasonably rapid iteration, feedback, and adaptation in order to be effective. If one of the reasons for government to outsource is to enable innovation and wider development of the provider market, then it seems a pity if the method for doing so leads to a rigid, unadaptable supply chain which has little ability or incentive to innovate in order to generate social outcomes more effectively.

There may be good news. New European procurement directives may help. As I said at the top, we’re writing on this over the next few weeks, so we should be back soon.

What could be wrong with the humble grant?

Grants are what make the social world go round. They are the standard product of exchange between funders (aka grant makers) and social organisations. But grants have a problem – a feedback problem.

The grant recipient, who might give the feedback, is usually going to be seeking more money from the grant provider. That can get in the way of complete frankness. Grant feedback is usually rose-tinted and sometimes becomes a matter of how many times you can fit lovely, wonderful, creative, thoughtful and brilliant into each sentence. I remember being very disconcerted when I was a grant maker for a while. My jokes were suddenly funny, my thoughts pearls of wisdom, and my ideas boundless creativity. I found this uncomfortable. Most grant makers I know do too, and while they generally see through it and work hard to do the right thing, it is sometimes a difficult bubble from which to escape.

So here, with as much honesty as I dare, is a list of grant features and their issues. Thereafter I will put down some thoughts on possible alternatives or improvements.

1. Feature: Grantmakers have carefully thought-through programmes to which organisations apply. These ensure cohesion and impact.
   Bug: Applicants try to squeeze their work into ill-fitting boxes, refocusing on new measures and creating mission creep. Over time this can create an insidious battle for strategic control, with charities paying cynical lip-service to funder wishes and funders becoming more and more controlling.

2. Feature: Grants are for a set period of time, up to three years, to avoid dependency.
   Bug: This means that charities have to fill a funding void of at least one third every year, independent of effectiveness or impact. Continuity of funding is effectively completely un-meritocratic.

3. Feature: Grants are for a set amount of money, to pay for specific things, and are paid according to a clear schedule to ensure the money is used effectively.
   Bug: This is fine when the plan is clear, but when doing anything new one learns on the job and should adjust accordingly. Does business investment tell you exactly how and when to spend the money? In that culture, change and adaptation are expected, not cause for concern and negotiation.

4. Feature: Applicants should demonstrate that they need the money, as otherwise the grant money could be more effectively used elsewhere.
   Bug: This can create an insidious reverse meritocracy. An effective organisation that looks impressive and organised, with effective financial planning, may struggle to get funded. A disorganised one, in clapped-out buildings and with a hand-wringing story, may be more successful.

5. Feature: Grantees should not use grants for commercial gain or exploitation; results and any intellectual property should be made public.
   Bug: Many social impact ideas can be set up as social enterprises and seek to compete in a commercial market, but with a social product. They can’t get commercial funding, due to their social focus, but they may also struggle to get grant support because someone, one day, might make some money.

6. Feature: Grantmakers want to support innovation.
   Bug: So a charity with a long track record of programming excellence can struggle to get its core programme funded. Also, the required innovation needs to fit within a pre-defined programmatic area. In other words, we would like to support innovation we have already thought of…

7. Feature: Grantmakers want to support the frontline, as their money is for saving children, not administration.
   Bug: So charities and social organisations are under-managed, with poor IT and data systems, and have trouble retaining senior staff, who leave to do other things if they want to start or support a family.

Put together, these features can:

  • Promote bureaucracy over innovation;
  • Promote mission creep or even fudging;
  • Build in unsustainability from the start;
  • Be unmeritocratic, in fact supporting the poorer or more needy over the more effective.

Many of these features seem originally designed to fit the needs of foundation trustees or creators, who would like a series of ideas that feel fresh and exciting, that directly affect people’s lives, and that can be cheaply administered. Complicating issues such as commerciality are regarded as too fiddly, and are therefore excluded.

Grants 1.1

Many grantmakers, traditional ones as well as new types such as venture philanthropists, have tried to break at least some of these moulds. For example, many grant makers in the US provide follow-on funding and develop longer term relationships. Esmeé Fairbairn’s social investment fund allows them to be thoughtful around commerciality and social enterprise. The Big Lottery Fund’s Better Start programme aims to provide 10 year funding. Venture philanthropists often allow for more adaptation and provide support to enable the business plan rather than a pre-designed programme. Here are some further experimental grant structures designed to get round some of these issues.

  1. Some grantmakers could simply accept that they want to support the strongest organisations in a given area, and then continue to do so. They can then build long-term, customer-style relationships, developing monitoring and feedback systems to ensure that the charity focuses on the needs of those it serves and seeks to improve the service it provides. This may sound dull, but it would have a considerable impact.
  2. A success top-up grant. A grant maker provides a three-year grant. If the grant recipient is succeeding against its impact measures, an additional 50% of the grant value becomes available automatically after, say, two or two and a half years. The grant maker continues to measure the same impact measures on the new money as on the three-year grant, but doesn’t specify how and when the money should be spent. The grant maker can then compare the impact of the follow-on grants it is making with that of the initial grants. Grant makers should publish the proportion of grants that roll over, to set expectations: the aim might be, say, two thirds rolling over, or half if feeling more aggressive. A rough sketch of this mechanism appears after this list.
  3. A grant maker could decide an amount of money to tackle a social issue, but then provide a complete mix of different funding, grants, loans, commercial funding, to organisations tackling different aspects of the problem. By doing so, the grant maker learns about the issue from different angles and can add value to those it supports and become a genuine partner. The grant maker can become part of the dialogue around that issue and an agent of change. It could measure itself by the overall impact that it has on the issue, rather than individual grant success, enabling more risk taking and a more holistic strategy.
  4. A grant that converts to equity. To resolve issues of commerciality, grant makers can structure the grant so that, in the event the entity supported receives commercial funding, the grant maker can expect to join that first round at the same valuation. They will have taken initial risk, but that is what the grant was for. This idea would need tinkering with, depending on circumstances, but should allay the commerciality concerns.
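
A minimal sketch of the success top-up idea in point 2 above; the review timing, thresholds and amounts are hypothetical.

```python
# A minimal sketch of the "success top-up grant" in point 2 above. The review
# timing, thresholds and amounts are hypothetical illustrations only.

def top_up_amount(grant_value, measured_impact, impact_target, top_up_share=0.5):
    """Return the automatic top-up released at the mid-grant review (say, two
    to two and a half years in) if the impact measures are being met."""
    if measured_impact >= impact_target:
        return grant_value * top_up_share
    return 0.0

# A £300,000 three-year grant whose recipient is hitting its impact target
# unlocks an automatic £150,000 of flexible follow-on funding.
print(top_up_amount(grant_value=300_000, measured_impact=1.1, impact_target=1.0))

# A grantmaker can also publish the share of its grants that roll over, to set
# expectations (e.g. aiming for roughly two thirds earning the top-up).
rolled_over = [True, True, False, True, False, True]   # one flag per grant
print(sum(rolled_over) / len(rolled_over))             # published roll-over rate
```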

What are other people’s ideas? I would be grateful for any thoughts on improved grant making structures as I’m not sure I’m being very imaginative here.

Further data on the Peterborough Social Impact Bond

The Office for National Statistics provided further data on Peterborough at the end of July, this time on the complete first cohort of 1,000 prisoners.

While this is largely confirmatory information, the Ministry of Justice found a closer-matching baseline by focusing on local prisons rather than all national prisons. This responds to the concern that Peterborough may be hard to emulate, or unrepresentative, because it is a local prison and therefore returns more of its prisoners to the local area.

The updated data looks like this:

Peterborough (and national equivalent) interim re-conviction figures of cohort 1 with a 6 month re-conviction period

Discharge Period   Cohort size   Peterborough          National local prisons
                                 Binary     Frequency  Binary     Frequency
Sep05-Jun07        837           40.4%      74         36.6%      66
Sep06-Jun08        1028          40.6%      81         37.8%      71
Sep07-Jun09        1170          41.0%      85         38.3%      74
Sep08-Jun10        1088          40.3%      84         37.3%      75
Sep10-Jun12        1006          38.6%      78         39.3%      84

Binary: Reconviction rate over six months
Frequency: Frequency of reconviction events per 100 offenders within six months
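
For readers less familiar with these two metrics, here is a minimal sketch of how they could be computed from per-offender reconviction counts. The small cohort below is invented, not Peterborough data.

```python
# A minimal sketch of the two metrics in the table above, computed from
# per-offender reconviction counts over the six-month window. The small
# example cohort below is invented, not Peterborough data.

def binary_rate(reconvictions_per_offender):
    """Share of offenders reconvicted at least once in the period (%)."""
    reconvicted = sum(1 for n in reconvictions_per_offender if n > 0)
    return 100.0 * reconvicted / len(reconvictions_per_offender)

def frequency_per_100(reconvictions_per_offender):
    """Reconviction events per 100 offenders in the period."""
    return 100.0 * sum(reconvictions_per_offender) / len(reconvictions_per_offender)

cohort = [0, 0, 3, 1, 0, 0, 2, 0, 0, 1]   # reconvictions for ten offenders
print(binary_rate(cohort))        # 40.0  -> the "binary" measure
print(frequency_per_100(cohort))  # 70.0  -> the "frequency" measure
```

The frequency measure gives credit for reductions in offending even where an individual has not stopped offending entirely.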

A few topics to cover:

- Is this a better baseline and therefore does it give us greater confidence in the effect that Peterborough is having?

- Is this data good, or mixed as some have reported?

1. Is this a better baseline?
It should be, as it better matches the Peterborough cohort. As an experiment, I thought I would put together similar graphs to the ones before and compare them.

Data to March, with national baseline:
[Chart: Rebased reoffending data]

Data to June, with national local baseline:
[Chart: Rebased reoffending data]

And now the relative change graphs.

Data to March, with national baseline:
[Chart: Peterborough relative to national]

Data to June, with national local baseline:
[Chart: Peterborough relative to national]

What this shows visually is that the new baseline appears to be a better fit. Movements in the baseline prior to the intervention are closer to the movements in the Peterborough cohort; in other words, the baseline appears to explain more of the movement in the Peterborough data. So it gives us greater confidence that we are seeing an intervention effect.

It also gives us somewhat greater confidence that we will get paid. The previous data ended with Peterborough’s frequency number equalling the national average. This one ends with Peterborough at least improving upon it. The propensity score matching process should bring out a comparison cohort that is even more similar, but of course we still haven’t tried it.

So, is it time to pop open the champagne and celebrate? Not yet. This is good news, but it is still only on six month data. We will be measured on whether we reduce offending over twelve months. What we can say is that our intervention appears to at least delay reoffending behaviour.

We should also say, this is only the first cohort of the first Social Impact Bond. It is incredibly early days so drawing significant conclusions at this stage is premature. On the other hand, we are learning and developing all the time, so the fact that we see a significant impact on such an early group is clearly exciting.

2. So is this data good, or mixed as some have reported?

We are cautious, because this is early days and early data. It isn’t a randomised controlled trial, sure. Nor is it the formal comparison cohort that will be developed for payment purposes using propensity score matching. But this is very positive data, on the best available information.

In the first set of results, which were also good, one of the caveats people put forward was that the Peterborough frequency had only come down to the national average. On this closer baseline, that is no longer the case.

Another concern was that the jump in re-offending frequency in the national data should be treated with caution. I understand that, and see the potential for regression to the mean, but comparison with national data is more precise than comparison with historical data. Thus the 20% relative decline is a more useful figure than the 8% decline against historical figures, particularly given the strong historical correlation between the local prison data and the Peterborough data.

It is important to draw a distinction between responding with caution, on the basis of the caveats outlined above, and saying that results are “mixed” as we have seen in a few quarters. They’re not mixed, they’re surprisingly strong – but early and indicative at this stage.

First indications from Peterborough – what do they tell us?

Last week was a big week for Social Finance, as reoffending data on the Peterborough pilot was published by the Office for National Statistics. This gives the first early sense of how our first Social Impact Bond is doing. In this blog I want to explore the results a little, along with some of their implications.

So, first, the numbers, or to give it its full title:

Peterborough (and national equivalent) interim re-conviction figures using a partial (19 month) cohort and a 6 month re-conviction period

Discharge Period   Cohort size   Peterborough          National
                                 Binary     Frequency  Binary     Frequency
Sep05-Mar07        725           39.7%      72         36.6%      61
Sep06-Mar08        870           40.3%      81         37.8%      64
Sep07-Mar09        1031          40.7%      84         38.3%      68
Sep08-Mar10        981           41.6%      87         37.3%      69
Sep10-Mar12        844           39.2%      81         39.3%      79

Binary: Reconviction rate over six months
Frequency: Frequency of reconviction events per 100 offenders within six months

Three topics to cover:

- Is the Peterborough SIB working?

- What do these numbers tell us about whether investors are likely to get paid?

- Do they have any implications for developing the national recidivism PbR work?

1.  Is the Peterborough SIB working?

Put simply, it would appear so. The best way to show this is to index the results so that you can see them together and then to plot Peterborough relative to the national data:

[Chart: Rebased reoffending data]

So, the key measure for us is the one that we will be paid on: the percentage change in the frequency of reoffending against a comparison group, in this instance the national cohort.

[Chart: Peterborough relative to national]

On that basis Peterborough has shown a 23% decline relative to the national data. On a sample size of 844 this is likely to be statistically significant, so on reoffending within six months, rather than a year, it appears we are making a difference.
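
For the curious, here is a rough sketch of the rebasing behind these charts, using the frequency columns from the table above. The published 23% figure comes from the project’s own baseline and methodology, which this sketch does not attempt to reproduce.

```python
# A rough sketch of the rebasing behind the charts above, using the frequency
# columns from the table. Rebasing both series to 100 at the first cohort lets
# them be plotted together; the published headline figure depends on the
# baseline period and methodology used, which this does not try to reproduce.

peterborough_freq = [72, 81, 84, 87, 81]   # reconviction events per 100 offenders
national_freq     = [61, 64, 68, 69, 79]

def rebase(series, base_index=0):
    """Index a series to 100 at the chosen baseline cohort."""
    return [round(100.0 * x / series[base_index], 1) for x in series]

print(rebase(peterborough_freq))   # [100.0, 112.5, 116.7, 120.8, 112.5]
print(rebase(national_freq))       # [100.0, 104.9, 111.5, 113.1, 129.5]

# Peterborough relative to the national series (the second chart): values
# falling below 100 in the final, intervention cohort indicate a relative decline.
relative = [round(100.0 * p / n, 1)
            for p, n in zip(rebase(peterborough_freq), rebase(national_freq))]
print(relative)
```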

Any caveats? A number. This is on the basis of six-month reoffending, not 12 months, so one could argue that the impact of our programme may lessen over time. The comparison group, of wider national reoffending, is not as carefully defined as the comparison group that we have developed in the Peterborough model proper, where the reoffending rates of a matched cohort from the police national computer are used. Given this, the comparator group’s 16% increase over a two-year period is something of an outlier, but it is all we have to go on.

So, plenty of caveats. But however indicative these figures are, and however tentative and careful we are being, for a programme in its infancy and on its first cohort this is a great start.

2. What do we know about whether investors will get paid?

So, two completely different numbers to note here:

a) the 23% relative decline discussed above; and

b) the fact that after this change the frequency and binary metrics for Peterborough are now in line with the national average.

In other words, what we have achieved so far is to move Peterborough from its historically higher rate of recidivism to the national average. Through one lens we have done tremendously well. Through another lens, Peterborough did (almost) exactly the same as the national average. Which lens will be reflected in the comparison group drawn from the police national computer?

If the prisoners in Peterborough are different and thus reoffend more, then this should be picked up in the comparison group as each individual is matched to one as similar as possible.

If the local environment is different (the prison, for example, or the courts system, or the police), then it is much less clear whether that will be picked up by the comparison group. It could be, in part: if prisoners going through Peterborough are relatively local (and about 70% are), then those factors could be picked up to some extent in their criminal history and reflected in matches to prisoners from similar environments. For those who are more transient, for example those coming through from London, such effects are unlikely to be picked up.
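
For readers unfamiliar with the matching approach referred to here, below is a minimal, generic sketch of one-to-one propensity score matching. The covariates and data are invented, and this is not the actual methodology used to build the Peterborough comparison group.

```python
# A minimal, generic sketch of one-to-one propensity score matching, the idea
# referred to above: each pilot-cohort prisoner is matched to the most similar
# individual from a national pool, based on observed characteristics. The
# covariates and data are invented; this is not the actual methodology used
# to build the Peterborough comparison group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented covariates: [age, number of previous convictions]
treated = rng.normal(loc=[30, 8], scale=[8, 4], size=(50, 2))    # pilot cohort
pool    = rng.normal(loc=[32, 6], scale=[8, 4], size=(500, 2))   # national pool

X = np.vstack([treated, pool])
y = np.array([1] * len(treated) + [0] * len(pool))   # 1 = in the pilot cohort

# Propensity score: estimated probability of being in the pilot cohort,
# given the covariates.
scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
treated_scores, pool_scores = scores[:len(treated)], scores[len(treated):]

# Nearest-neighbour match on the propensity score (with replacement).
matches = [int(np.argmin(np.abs(pool_scores - s))) for s in treated_scores]
comparison_group = pool[matches]

# The matched group's reoffending would then be the benchmark against which
# the pilot cohort's reoffending is measured.
print(comparison_group.shape)   # (50, 2)
```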

Locally there has been speculation for a number of years around why Peterborough’s recidivism rate is higher than the national average, and most of that speculation has focused on prisoner mix. But I don’t believe anybody has any evidence to back that up.

So this all adds up to there probably being greater uncertainty about whether we will be paid for outcomes than about whether the programme is generating them.

3. Any implications for the development of the national PbR programme?

These numbers probably complicate the development of the national PbR programme in one significant way: they give the impression of an increasing trend in reoffending, while the wider crime statistics, both the amount of reported crime and the British Crime Survey, have generally been going down.

The key requirement this creates is that the Ministry of Justice needs to be completely transparent with the data and analysis it is using to develop the counterfactual. It simply cannot credibly develop it on its own and then tell people the answer. Regional variation needs to be understood, historical variance needs to be understood, and a dialogue is needed to develop an acceptable answer.

Secondly, it increases the potential, in my view, for a proportion of outcomes payments to come from a fixed-size pot that is shared out amongst providers according to relative performance. This would resolve some of the uncertainties in bidders’ minds and show them that, while they may be taking a risk, there is a defined amount of outcomes payment that will be made if they perform better than some of their peers.

For such a pot to work, there should be a requirement to commit to a minimum spend on rehabilitation in the bidding process. Open-book accounting thereafter can ensure that bidders keep to their promises, and the amount that bidders are willing to invest in reducing reoffending can then be used as part of their assessment. This can help counter the issue seen in the Work Programme: that for profit-maximising providers, the outcomes payments for harder-to-reach groups are insufficient to justify investing in trying to get them back into work. A rough sketch of how such a pot might be shared out follows below.
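
As an illustration of the fixed-pot idea, the weights, threshold and figures below are invented, not a proposed design.

```python
# A sketch of the fixed-pot idea above: a defined outcomes pot shared amongst
# providers in proportion to relative performance, with a minimum committed
# spend on rehabilitation as a condition of taking part. The weights, threshold
# and figures are invented, purely to illustrate the mechanism.

def share_outcomes_pot(pot, performance, committed_spend, min_spend):
    """Split a fixed pot in proportion to outcome performance, restricted to
    providers that committed at least the minimum rehabilitation spend."""
    eligible = {p: perf for p, perf in performance.items()
                if committed_spend[p] >= min_spend}
    total = sum(eligible.values())
    return {p: pot * perf / total for p, perf in eligible.items()}

performance = {          # e.g. relative reduction in reoffending frequency
    "Provider A": 0.12,
    "Provider B": 0.08,
    "Provider C": 0.02,
}
committed_spend = {"Provider A": 1_200_000, "Provider B": 1_000_000, "Provider C": 400_000}

payments = share_outcomes_pot(pot=10_000_000, performance=performance,
                              committed_spend=committed_spend, min_spend=1_000_000)
print(payments)   # Provider C misses the minimum spend and receives nothing
```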

So the idea that bidders demonstrate a minimum commitment is vital to maintaining the programme’s credibility: that it is about rehabilitation, as opposed to only being about cost cutting and privatisation.

A Step In the Right Direction But Not Enough

This blog was also posted on Social Finance’s blog here

Slowly we are getting to know more about the plans for probation and prisoner rehabilitation reform. We can see that some effort is being made to make the model work better following the consultation, but is it enough to allow a level playing field for all providers to take part? Will it achieve the ultimate goal of reducing crime?

The key positive changes are as follows:

  • The fact that rehabilitation of prisoners, and short sentence prisoners in particular, has moved to centre stage and become a key government policy is terrific and long overdue. Many should feel embarrassed that this has taken so long.
  • The idea of resettlement prisons, which will have a requirement to work with those providing rehabilitation services, and to which prisoners will be transferred at least three months before release. This addresses a key issue: effective rehabilitation needs to start in prison, rather than after release.
  • The acceptance that a mix of binary and frequency measurement is required to make this system work. This may sound an esoteric point but is vital. A binary measure only pays in the event that a prisoner stops committing crime for twelve months. A frequency measure pays for reductions in offending across a cohort. Some people go through the prison system 10 or more times a year. If you have only a binary measure, as was originally suggested, the only rational financial decision when faced with such an individual (and required to give them a service) would be to give them a leaflet and tell them to go away. Further investment would invariably be loss-making, as intense effort and money over a period of time are needed both to gain trust and thereafter to help move their lives in the right direction. Being paid on frequency, and therefore acknowledging “distance travelled”, will make it worthwhile financially.
  • The shift in number of contract areas from 16 to 21, and the different area sizes, are positive changes. This should mean that social organisations acting as primes, or a probation trust mutual, can bid.
  • But the positive impact of other elements in the response is less clear. The idea of standard contracts is interesting, but will they have to be finalised before the last round of bidding? In other words is there wriggle room?
  • The transparency piece sounds like a step in the right direction, but only a step. Input cost data and outcome payments should be transparently available across the market. In the public sector we see how much is spent on what and hopefully also get an idea (not always!) of the outcomes generated from that money. In the circumstances where you are building a new market, this data is even more important, not less. There will be enough benefit to incumbency without letting providers keep hold of this data. This will also make it clearer if a provider is not finding it economic to work with a particular cohort and is parking them.
  • The comments about women in prison were good to see, but they didn’t seem to imply that anything would be done to make the model work for women.

And there are areas where we simply don’t know anything:

  • What are the potential pricing expectations?
  • Will there be significant segmentation of the cohort and the pricing that goes with it?
  • Will men and women be priced the same? Needs and complexity are very different.
  • Is there room for alterations of pricing for specific groups as we learn over time? It seems deeply unlikely that it will be right first time.

So, at the end of this, what are the concerns?

1. This is an incredibly complex, risky and ambitious programme of change. Tom Gash at the Institute for Government has written on this issue in his blog, with sensible recommendations for reducing the risk.

2. Bidding process and pace will favour incumbents

I was told by a private sector provider considering bidding for prisons that they had understood they should expect to bid in one round in order to learn how to win in a later round. In other words, spend £1 million plus on a learning process before you stand a chance. Social organisations or probation trust mutuals don’t have that luxury. Those who know what the MoJ expects in large contracts will score better than those who are learning through the process. So the answer that it’s a level playing field simply doesn’t wash.

If charities are going to invest upwards of £250,000 of charitable funds and a considerable proportion of senior management time on a bidding process, they need more substantive assurance from the MoJ that they stand a realistic chance.

In addition, while there are some limited resources available to help test the mutual option, developing such a strategy and capacity takes time. So would developing a social prime and investment for it. The focus on the timetable above all else is in danger of defining the answer.

It should be a strategic imperative for the MoJ to end up with a mixed economy of private and social provision (and not just in the supply chain, but at the prime level too). There will be more learning, more constructive competitive tension, and probably greater investment in rehabilitation. European law should allow the MoJ to actively manage the market, and they should do so, explicitly.

3. There is still room for gaming in the bidding

Gaming the bidding is where an entity bids with little intention of doing much rehabilitation, and makes money from input revenues without generating very many outcomes. Some of this seems to have occurred in the Work Programme, particularly around harder-to-reach groups. There are a number of ways the MoJ can guard against this; examples include:

  • Requiring a certain level of investment in rehabilitation and monitoring it.
  • Scoring bidders on how much they say they will invest in rehabilitation and monitoring it.
  • Requiring transparency on input and outcome data, and stating that bidders’ fidelity to what they said they would do will be assessed, with those falling below a certain threshold not allowed to bid again.

Without such measures, a sense will prevail that a low-cost, gaming bid is likely to do better than a higher-cost, rehabilitative bid.

4. I’m not convinced the numbers add up

I can’t see into the probation numbers, so I don’t know whether it works to take out 20% of cost and provide an effective rehabilitation service for a wider community on top. But my instinct is that real rehabilitation will require real money. This money is presently tied up in prisons. There should be a sense that these contracts can, if very successful, eat into the prison budget. What is fundamentally up for grabs here is the right allocation of resources between processing and punishing people, and trying to stop them doing it again. I wrote about this more substantially on another occasion. Read it here. My view is that the right allocation is the one that gets the number of future victims of crime as low as possible. In other words, this is not about being nice to prisoners, or not nice to prisoners. It is about stopping crime and helping to avoid further victims.

In conclusion, the MoJ is making some effort to allow this to work for a wider community than simply its incumbent private sector providers, but not enough. The perceived need for speed and the inaccurate perception that they are building a level playing field are likely to undermine social sector interest in bidding at the top tier. The rehabilitation revolution should be about creating social value, reducing crime and reducing the costs of justice overall, and not simply about providing a lower-cost privatised probation service. It would be a shame if, at the end of the process, this was how it was perceived.