Last weekend the Observer ran a story on welfare reform and homelessness. A senior civil servant at CLG had written to the Prime Minister warning that the Government’s proposed welfare reforms could result in – among other negative consequences – 40,000 additional homeless households (as I discussed here). This raised questions about a Government willing to ignore its own evidence, and about the accuracy or otherwise of Ministerial statements to Parliament. Grant Shapps has since dismissed the 40,000 figure on the grounds that it was based upon “out of date” information and didn’t relate to current government policy. He also announced a £20m fund for integrating homelessness prevention services, rolling out a model that has worked in London to the rest of the country.
A passage in yesterday’s blog by the Guardian Housing Network editor caught my eye:
What this week’s events demonstrate … is that the government and the public welcome evidence-based policy …
The evidence behind the projected number of homeless households following welfare reform is less solid. This represents a worst case scenario and, while useful for early modelling of the policy, should not be leapt upon by the housing sector if it cannot be certain of its veracity.
… When lobbying the government, the housing sector must look to produce its own, unbiased, quantitative and peer reviewed evidence to back up the arguments.
This raises at least a couple of issues.
First, it isn’t at all clear that evidence-based policy is top of the list for either the government or the public. Much current policy on housing and welfare reform is quite clearly driven by belief rather than evidence. And much of it is welcomed by large sections of the public. Evidence beyond that produced by the Government indicates these policies are likely to have significant negative consequences. But it hasn’t had much impact. Evidence-based critique meets with Government denials that any such negative consequences will happen. Denial doesn’t really constitute a rational rebuttal. But it has so far been effective.
The Government’s attitude towards evidence is somewhat more contingent and political. Evidence will be invoked where it buttresses the case for pursuing a proposed course of action. But if the evidence is rather less sympathetic to their cause then it can be – sometimes rather blatantly – ignored. That isn’t unusual. Pretty much all policy making is like that.
It isn’t even entirely clear what Mr Shapps’ dismissal actually means. The CLG memo was produced at the start of 2011. What is it about the policy that has changed to make the outcomes forecast in the memo less likely? The reforms to LHA, for example, have not been modified significantly since then. The only change of substance is the removal of the punitive 10% cut in LHA for those who have been unemployed for 12 months.
Second, embedded in the statement above is a controversial perspective on the nature of evidence, particularly on what sort of weight can be placed upon forecasts. We can all agree that peer-reviewed evidence is highly desirable. Unfortunately, it isn’t always available on a timescale that can actually influence policy. But the idea that it is possible to produce ex ante forecasts of demonstrable veracity is implausible.
Any forecast of policy impact must make assumptions about what is going to happen – in particular, about how people are going to behave. Some of those assumptions may be grounded in well-evidenced causal relations. Others are less well-attested. We know very little as yet about the behavioural responses to rent or benefit changes among tenants in either the private or the social rented sector. We know very little about the behavioural responses of private – or indeed public – landlords, although we are starting to get a sense of how LHA changes are being received. The relevant impact assessments produced by the Government acknowledge that there is much we don’t know that we’d want to know if we aspire to accurate forecasts.
So there are a lot of heroic assumptions embodied in all the modelling that is going on. That is inevitable. It applies to any subsequent modelling by CLG, DWP or anyone outside the government.
Trivially, you can only know whether a forecast is accurate after the event. And even then it will only turn out to be accurate if all the assumptions made in producing the forecast turn out to hold. But by the time we know which forecast has turned out to be accurate, it is too late for it to have much influence over the direction of policy.
None of this is to say that evidence is irrelevant to the debate. But it is to say that modelling and forecasts of what will happen following policy implementation can at best inform thinking. They can influence debate and policy making. There are more or less robust forecasts, but none represents the Truth upon which policy must be based. It cedes too much power to the technocrats to think otherwise.