Further data on the Peterborough Social Impact Bond

The Office for National Statistics provided further data on Peterborough at the end of July, this time on the complete first cohort of 1,000 prisoners.

While this is largely confirmatory, the Ministry of Justice has also found a closer-matching baseline by focusing on local prisons rather than all national prisons. This responds to the concern that Peterborough may be unrepresentative, or hard to emulate, because it is a local prison and therefore returns more of its prisoners to the local area.

The updated data looks like this:

Peterborough (and national equivalent) interim re-conviction figures of cohort 1 with a 6 month re-conviction period

| Discharge period | Cohort size | Binary (Peterborough) | Frequency (Peterborough) | Binary (national local prisons) | Frequency (national local prisons) |
|---|---|---|---|---|---|
| Sep05-Jun07 | 837 | 40.4% | 74 | 36.6% | 66 |
| Sep06-Jun08 | 1028 | 40.6% | 81 | 37.8% | 71 |
| Sep07-Jun09 | 1170 | 41.0% | 85 | 38.3% | 74 |
| Sep08-Jun10 | 1088 | 40.3% | 84 | 37.3% | 75 |
| Sep10-Jun12 | 1006 | 38.6% | 78 | 39.3% | 84 |

Binary: Reconviction rate over six months
Frequency: Frequency of reconviction events per 100 offenders within six months

Two topics to cover:

– Is this a better baseline and therefore does it give us greater confidence in the effect that Peterborough is having?

– Is this data good, or mixed as some have reported?

1. Is this a better baseline?

It should be, as it better matches the Peterborough cohort. As an experiment, I thought I would put together graphs similar to the ones before and compare them.

Data to March, with National baseline:
[Figure: Rebased reoffending data]

Data to June, with National local baseline:
[Figure: Rebased reoffending data]

And now the relative change graphs:

Data to March, with National baseline:
[Figure: Peterborough relative to national]

Data to June, with National local baseline:
[Figure: Peterborough relative to national]

What this shows visually is that the new baseline appears to be a better fit. Movements in the baseline prior to the intervention are closer to the movements in the Peterborough cohort; in other words, the baseline appears to explain more of the movement in the Peterborough data. This gives us greater confidence that we are seeing an intervention effect.

It also gives us a greater degree of confidence that we will get paid. The previous data ended with Peterborough's frequency figure merely equalling the national average; this set ends with Peterborough at least improving upon it. The propensity score matching process should produce a comparison cohort that is even more similar, but of course we have not run it yet.

So, is it time to pop open the champagne and celebrate? Not yet. This is good news, but it is still based only on six-month data; we will be measured on whether we reduce offending over twelve months. What we can say is that our intervention appears to at least delay reoffending behaviour.

We should also say that this is only the first cohort of the first Social Impact Bond. These are incredibly early days, so drawing firm conclusions at this stage is premature. On the other hand, we are learning and developing all the time, so the fact that we see a significant impact on such an early group is clearly exciting.

2. So is this data good, or mixed as some have reported?

We are cautious, because these are early days and this is early data. It isn't a randomised controlled trial, certainly. Nor is it the formal comparison cohort that will be developed for payment purposes using propensity score matching. But it is very positive data, based on the best available information.

In the first set of results, which were also good, one of the caveats people put forward was that the Peterborough frequency had only come down to the national average. Against this closer baseline that is no longer the case.

Another concern was that the jump in reoffending frequency in the national data should be treated with caution. I understand that, and I can see the potential for regression to the mean, but a comparison with contemporaneous national data is more informative than a comparison with Peterborough's own history. Thus the 20% relative decline is a more useful figure than the 8% decline against historical figures, particularly given how strongly the local prison data and the Peterborough data have correlated historically.
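To make that distinction concrete, here is a minimal Python sketch, using the frequency figures from the table above, of the two ways of measuring the change. The exact cohorts and formula behind the published 8% and 20% figures are not spelled out here, so the numbers it prints are illustrative rather than a reproduction of the official calculation.

```python
# Frequency of reconviction events per 100 offenders, taken from the table above:
# the last pre-SIB cohort (Sep08-Jun10) and the first SIB cohort (Sep10-Jun12).
peterborough = {"Sep08-Jun10": 84, "Sep10-Jun12": 78}
national_local = {"Sep08-Jun10": 75, "Sep10-Jun12": 84}

def pct_change(before, after):
    """Percentage change from 'before' to 'after'."""
    return (after - before) / before * 100

# 1. Peterborough against its own history (in the spirit of the
#    "decline against historical figures").
own_history = pct_change(peterborough["Sep08-Jun10"], peterborough["Sep10-Jun12"])

# 2. Peterborough relative to the baseline over the same window (in the spirit of
#    the "relative decline"): the baseline rose while Peterborough fell, so the
#    baseline-adjusted figure is larger than the raw historical one.
baseline_change = pct_change(national_local["Sep08-Jun10"], national_local["Sep10-Jun12"])
relative = own_history - baseline_change  # simple difference of percentage changes

print(f"Peterborough vs its own history:   {own_history:+.1f}%")
print(f"National local-prisons baseline:   {baseline_change:+.1f}%")
print(f"Peterborough relative to baseline: {relative:+.1f} percentage points")
```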

It is important to draw a distinction between responding with caution, on the basis of the caveats outlined above, and saying that the results are "mixed", as we have seen from a few quarters. They are not mixed; they are surprisingly strong, but early and indicative at this stage.

First indications from Peterborough – what do they tell us?

Last week was a big week for Social Finance as reoffending data on the Peterborough pilot was published by the Office for National Statistics. This gives a first, early sense of how our first Social Impact Bond is doing. In this blog I want to explore the results and some of their implications.

So, first, the numbers, or to give the release its full title:

Peterborough (and national equivalent) interim re-conviction figures using a partial (19 month) cohort and a 6 month re-conviction period

| Discharge period | Cohort size | Binary (Peterborough) | Frequency (Peterborough) | Binary (national) | Frequency (national) |
|---|---|---|---|---|---|
| Sep05-Mar07 | 725 | 39.7% | 72 | 36.6% | 61 |
| Sep06-Mar08 | 870 | 40.3% | 81 | 37.8% | 64 |
| Sep07-Mar09 | 1031 | 40.7% | 84 | 38.3% | 68 |
| Sep08-Mar10 | 981 | 41.6% | 87 | 37.3% | 69 |
| Sep10-Mar12 | 844 | 39.2% | 81 | 39.3% | 79 |

Binary: Reconviction rate over six months
Frequency: Frequency of reconviction events per 100 offenders within six months

Three topics to cover:

– Is the Peterborough SIB working?

– What do these numbers tell us about whether investors are likely to get paid?

– Do they have any implications for developing the national recidivism PbR work?

1. Is the Peterborough SIB working?

Put simply, it would appear so. The best way to show this is to index the results so that you can see them together and then to plot Peterborough relative to the national data:

[Figure: Rebased reoffending data]

The key measure for us is the one we will be paid on: the percentage change in the frequency of reoffending against a comparison group, in this instance the national cohort.

[Figure: Peterborough relative to national]

On that basis Peterborough has shown a 23% decline relative to the national data. On a sample size of 844 this is likely to be statistically significant, so on reoffending within six months, rather than twelve, it appears we are making a difference.
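As a rough illustration of that indexing, the Python sketch below rebases both frequency series to 100 at the first cohort and then expresses Peterborough relative to the national series. The release does not spell out the exact rebasing or comparison periods behind the published charts and the 23% figure, so treat the output as indicative of the method rather than of the headline number.

```python
# Frequency of reconviction events per 100 offenders, from the table above.
periods      = ["Sep05-Mar07", "Sep06-Mar08", "Sep07-Mar09", "Sep08-Mar10", "Sep10-Mar12"]
peterborough = [72, 81, 84, 87, 81]
national     = [61, 64, 68, 69, 79]

def rebase(series):
    """Index a series to 100 at its first observation."""
    return [100 * x / series[0] for x in series]

pb_index  = rebase(peterborough)
nat_index = rebase(national)

# Peterborough relative to national: the ratio of the two rebased series,
# itself indexed to 100 at the first cohort. A falling line means Peterborough
# is improving faster (or worsening more slowly) than the national comparator.
relative = [100 * p / n for p, n in zip(pb_index, nat_index)]

for period, p, n, r in zip(periods, pb_index, nat_index, relative):
    print(f"{period}:  Peterborough {p:6.1f}   National {n:6.1f}   Relative {r:6.1f}")
```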

Any caveats? A number. This is on the basis of six-month reoffending, not twelve-month, so one could argue that the impact of our programme may lessen over time. The comparison group, based on wider national reoffending, is not as carefully defined as the comparison group we have developed in the Peterborough model proper, where the reoffending rates of a matched cohort drawn from the Police National Computer are used. Given this, the comparator group's 16% increase over a two-year period is something of an outlier, but it is all we have to go on.

So, plenty of caveats. But however indicative these figures are, and however tentative and careful we are being, for a programme in its infancy and on its first cohort this is a great start.

2. What do we know about whether investors will get paid?

So, two completely different numbers to note here:

a) the 23% relative decline discussed above; and

b) the fact that, after this change, the frequency and binary metrics for Peterborough are now in line with the national average.

In other words, what we have achieved so far is to move Peterborough from its historically higher rate of recidivism to the national average. Through one lens we have done tremendously well; through another, Peterborough did (almost) exactly the same as the national average. Which lens will be reflected in the comparison group drawn from the Police National Computer?

If the prisoners in Peterborough are different and thus reoffend more, then this should be picked up in the comparison group as each individual is matched to one as similar as possible.

If the local environment is different, the prison for example, or the courts system, or the police, then it is much less clear whether that will be picked up by the comparison group. It could be, in part: if prisoners going through Peterborough are relatively local (and about 70% are), then those factors could be reflected to some extent in their criminal history, and they would be matched to prisoners from similar environments. For those who are more transient, for example those coming through from London, such effects are unlikely to be picked up.
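For readers unfamiliar with how such a matched cohort is built, here is a minimal propensity-score-matching sketch in Python. The covariates are invented for illustration and merely stand in for the Police National Computer fields actually used; the real matching specification is considerably more involved than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical covariates (say: age, prior convictions, sentence length in months),
# generated at random purely to make the sketch runnable.
n_treated, n_pool = 1_000, 20_000
X_treated = rng.normal(loc=[30, 8, 4], scale=[8, 5, 3], size=(n_treated, 3))
X_pool    = rng.normal(loc=[32, 6, 4], scale=[9, 5, 3], size=(n_pool, 3))

# 1. Estimate propensity scores: P(being in the Peterborough cohort | covariates).
X = np.vstack([X_treated, X_pool])
y = np.concatenate([np.ones(n_treated), np.zeros(n_pool)])
propensity = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# 2. Match each Peterborough prisoner to the comparison prisoner with the
#    closest propensity score (1-nearest-neighbour, with replacement).
p_treated = propensity[:n_treated].reshape(-1, 1)
p_pool    = propensity[n_treated:].reshape(-1, 1)
_, matched = NearestNeighbors(n_neighbors=1).fit(p_pool).kneighbors(p_treated)

# The reconviction frequency of this matched cohort is the counterfactual
# against which Peterborough's frequency would be measured.
print("distinct comparison prisoners matched:", len(np.unique(matched)))
```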

Locally there has been speculation for a number of years around why Peterborough’s recidivism rate is higher than the national average, and most of that speculation has focused on prisoner mix. But I don’t believe anybody has any evidence to back that up.

So this all adds up to probably greater uncertainty about whether we will be paid for outcomes than about whether the programme is generating them.

3. Any implications for the development of the national PbR programme?

These numbers probably complicate the development of the national PbR programme in one significant way: they give the impression of an increasing trend in reoffending, while the wider crime statistics, both recorded crime and the British Crime Survey, have generally been falling.

The key requirement this creates is that the Ministry of Justice needs to be completely transparent about the data and analysis it is using to develop the counterfactual. It simply cannot credibly develop this on its own and then tell people the answer. Regional variation and historical variance both need to be understood, and a dialogue is needed to arrive at an acceptable answer.

Secondly, it increases the potential, in my view, for a proportion of outcomes payments to come from a fixed-size pot that is shared out among providers according to relative performance. This would resolve some of the uncertainty in bidders' minds and show them that, while they may be taking a risk, there is a defined amount of outcomes payment that will be made if they perform better than some of their peers.
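The post does not prescribe how such a pot would be divided; purely as a hypothetical illustration of what "shared out according to relative performance" could mean, a rule along the following lines would give bidders a defined upside for beating their peers.

```python
def allocate_pot(pot, reductions):
    """Split a fixed outcomes pot among providers.

    reductions: provider -> relative reduction in reoffending frequency (%).
    Providers at or below the group average receive nothing; the rest share
    the pot in proportion to how far they beat the average. This is a made-up
    rule for illustration, not a proposed Ministry of Justice formula.
    """
    average = sum(reductions.values()) / len(reductions)
    excess = {p: max(r - average, 0.0) for p, r in reductions.items()}
    total_excess = sum(excess.values())
    if total_excess == 0:
        return {p: 0.0 for p in reductions}
    return {p: pot * e / total_excess for p, e in excess.items()}

# Example: a £10m pot shared among three providers with different reductions.
# Providers A and C beat the average (9%) and split the pot 3:1; B receives nothing.
print(allocate_pot(10_000_000, {"Provider A": 12.0, "Provider B": 5.0, "Provider C": 10.0}))
```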

For such a pot to work, the bidding process should require bidders to commit to a minimum spend on rehabilitation. Open-book accounting thereafter can ensure that bidders keep to their promises, and the amount they are willing to invest in reducing reoffending can then form part of their assessment. This would help counter the issue seen in the Work Programme, where, for profit-maximising providers, the outcome payments for harder-to-reach groups are too small to justify investing in getting them back into work.

So the idea that bidders demonstrate a minimum commitment is vital to maintaining the programme's credibility: that it is about rehabilitation, as opposed to being only about cost cutting and privatisation.