I had not heard this corruption of ‘evidence based policy making’ until I read this post by John Springford discussing the Gerard Lyons (economic advisor to London Mayor Boris Johnson) report on the costs and benefits of the UK leaving the EU. The idea is very simple. Policy makers know a policy is right, not because of any evidence, but because they just know it is right. However, they feel the need to create the impression that their policy is evidence based, if only because those who oppose the policy keep quoting evidence. So they set about concocting evidence that supports their policy.
So how do people (including journalists) who are not experts tell whether evidence is genuine or manufactured? There is no foolproof way of doing this, but here are some indicators that should make you at least suspicious that you are looking at policy based evidence making.
1) Who commissioned the research? The reasons for suspicion here are obvious, but this - like all the indicators discussed here - is not always decisive on its own. For example the UK government in 2003 commissioned extensive research on its five tests for joining the euro, but that evidence showed no sign of bias in favour of the eventual decision. In that particular case none of the following indicators were present.
2) Who did the research? I know I’ll get it in the neck for saying this, but if the analysis is done by academics you can be relatively confident that it is of reasonable quality and not overtly biased. In contrast, work commissioned from, say, an economic consultancy is less trustworthy. This follows from the incentives each group faces: academics stake their reputations on the quality and objectivity of their work, while a consultancy’s future business depends on keeping the client who pays for it happy.
What about work done in house by a ‘think-tank’? Not all think tanks are the same, of course. Some organisations given this label are really more like branches of academia: in economics UK examples are the Institute for Fiscal Studies (IFS) or the National Institute (NIESR), and Brookings is the obvious US example. They have longstanding reputations for producing unbiased and objective analysis. There are others that are more political, with clear sympathies to the left or right (or for a stance on a particular issue), but that alone does not preclude quality analysis that can be fairly objective. An indicator that I have found useful in practice is whether the think tank is open about its funding sources (a variant of (1)). If it is not, what are they trying to hide?
3) Where do key numbers come from? If numbers come from some model or analysis that is not included in the report, or is unpublished, you should be suspicious. See, for example, the modelling of the revenue raised by the bedroom tax that I discussed here. Be even more suspicious if numbers seem to have no connection to evidence of any kind, as in the case of some of the benefits assumed for Scottish independence that I discussed here.
4) Is the analysis comprehensive, or does it only consider the policy’s strong points? For example, does the analysis of a cut in taxes on petrol ignore the additional pollution, congestion and carbon costs caused by extra car usage (see this study)? If the analysis is partial, are there good reasons for this (apart from getting the answer you want), and how clearly do the conclusions of the study point out the consequential bias?
A variant of this is where analysis is made to appear comprehensive by either assuming something clearly unrealistic, or by simply making up numbers. For example, a study may assume that the revenue lost from cutting a particular tax is made up by raising a lump sum tax, even though lump sum taxes do not exist. Alternatively, tax cuts may be financed by unspecified spending cuts - sometimes called a ‘magic asterisk budget’.
5) What is the counterfactual? By which I mean, what is the policy compared to? Is the counterfactual realistic? An example might be an analysis of the macroeconomic impact of austerity. It would be unrealistic to compare austerity with a policy where the path for debt was unsustainable. Equally it would be pointless to look at the costs and benefits of delaying austerity if constraints on monetary policy are ignored. (Delaying austerity until after the liquidity trap is over is useful because its impact on output can be offset by easier monetary policy.)
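To see why the monetary policy constraint matters for the counterfactual, here is a stylised back-of-the-envelope sketch. The multiplier values are purely illustrative assumptions on my part, not estimates from any study:

$$\Delta Y \approx -\, m \times F$$

where $F$ is the fiscal consolidation as a share of GDP and $m$ is the fiscal multiplier. In a liquidity trap, with interest rates stuck at their lower bound, $m$ might plausibly be around 1.5; in normal times, when the central bank can cut rates to offset the hit to demand, it might be 0.5 or less. On those (assumed) numbers, a consolidation of 1% of GDP costs three times as much output if undertaken now as it would if delayed until monetary policy can respond - which is exactly why a counterfactual that ignores the monetary policy constraint is misleading.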
Any further suggestions on how to spot policy based evidence making?