Poxy Means Testing: it’s Official!

(“A prox on both your houses”)[i]

The World Bank has recently – and some would say belatedly – undertaken a critical review of the Proxy Means Test (PMT)[ii], the approach to targeting that it has been advocating, uncritically, for the past decade.

The results are astonishing. Disguised beneath a splendidly econometric veneer, the raw findings that emerge demonstrate that the PMT is a wholly unsatisfactory targeting mechanism. Based on rigorous analysis of PMTs in nine sub-Saharan African countries (Burkina Faso, Ethiopia, Ghana, Malawi, Mali, Niger, Nigeria, Tanzania and Uganda), it finds the following: when using “Ordinary Least Squares results for Basic PMT” (the most common PMT approach), with a fixed poverty line of 20% of the population, “On average, the rate of inclusion errors implies that 48% of those identified as poor by the Basic PMT method are in fact non-poor”; and “The average exclusion error is sizeable, with 81% of those who are in the poorest 20% in terms of survey-based consumption being incorrectly identified as non-poor by the PMT method”.

Let’s just stop and think about this. What this means is that, if a country is encouraged to establish a poverty-targeted social assistance programme targeting the poorest 20% of its population, then its policy-makers will need to accept two facts: that almost half of the actual beneficiaries of the programme would be from outside the intended sub-group; and that fully four out of every five households intended to benefit would in reality be excluded from the programme. What kind of policy-maker would accept such lamentable targeting performance? In Mali, incidentally, not one single ultra-poor household was correctly identified by the PMT as being ultra-poor: an exclusion rate of 100%!
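To make the two error rates concrete, here is a minimal sketch of how they are defined, using a toy population of 100 households with hypothetical numbers chosen only to illustrate the definitions (the paper's actual averages are 48% and 81%):

```python
def inclusion_error(selected, actual_poor):
    """Share of PMT-selected households that are in fact non-poor."""
    non_poor_selected = len(selected - actual_poor)
    return non_poor_selected / len(selected)

def exclusion_error(selected, actual_poor):
    """Share of truly poor households that the PMT fails to select."""
    missed = len(actual_poor - selected)
    return missed / len(actual_poor)

# Hypothetical example: households 0..19 are the poorest 20% by
# survey-based consumption; suppose the PMT flags 8 households as
# poor, of which only 4 are truly poor.
actual_poor = set(range(20))
selected = {0, 1, 2, 3, 50, 51, 52, 53}

print(f"Inclusion error: {inclusion_error(selected, actual_poor):.0%}")  # 50%
print(f"Exclusion error: {exclusion_error(selected, actual_poor):.0%}")  # 80%
```

Note that the two rates need not sum to anything in particular: because the PMT predicts consumption, the number of households it flags as poor can differ from the true 20%, which is how an inclusion error of around half can coexist with an exclusion error of four in five.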

The paper goes on to suggest that certain refinements can improve the accuracy of econometric targeting. But the improvements are small, and the necessary refinements range from the unlikely to the wholly impractical. At the unlikely end of the scale, one suggestion is to increase the coverage of such poverty-targeted programmes to 40% of the population. Yet there are practically no examples of this in Africa, and the reality is that the vast majority of PMT-based programmes cover even less than the poorest 20%. At a more common level of 10% coverage, the targeting errors are likely to be significantly higher, especially since, as the paper states: “econometric targeting may have difficulty in identifying those who are very poor” and “PMT is missing many of the poorest households in all countries”. At the wholly impractical end of the scale, the proposal is to run an “Expanded PMT” with “far more data”. But here the paper itself accepts that: (a) “the improvement would have to be judged as modest”; and (b) “the field implementation of a PMT formula with many variables is expensive and difficult”.

Remember too that these underwhelming reported outcomes reflect only the inherent statistical inaccuracy of the PMT approach. As other papers have emphasised[iii], the overall performance of a PMT will inevitably be further compromised by a range of other factors. Many of these are touched on, but not explored, in the World Bank paper. Actually implementing the complex and unintuitive PMT approach is bound to introduce further errors (as the paper coyly admits “Field implementation may introduce idiosyncratic mistakes”, and “Most likely the methods will perform less well than our calculations suggest.”). And there are still further problems: (a) with a PMT’s perverse incentives (e.g. households not wanting to acquire assets or improve their dwelling for fear of being excluded); (b) with moral hazard (e.g. households being encouraged to lie about their situation in order to qualify); (c) with the actual costs involved in the targeting process; and (d) with the damage to social cohesion of an improperly understood and seemingly arbitrary selection procedure. As the paper acknowledges, “We present and compare the best-case results for the various methods reviewed, unaffected by potentially differential costs, ease of implementation, and susceptibility to manipulation and corruption”.

One final reason for the PMT’s inaccuracy – which the paper does explore – is its inability to respond to the dynamics of poverty. There is always a degree of churning in and out of poverty, and a PMT is very static: most PMTs are only re-run every five to ten years. The paper examines the implications of this for targeting accuracy by using panel data and running the analysis with lags of one to two years. This shows that – even with such a small lag – inclusion error increases from 48% to 55%, and exclusion error from 81% to 90%. On this basis (and matters would become still worse over a longer time-lag), we would now need to be telling our putative policy-maker that his or her poverty-targeted social assistance will consequently include more unintended than intended beneficiaries, and that nine out of ten of the intended beneficiaries will be excluded from the programme. This is crazy: imagine trying to persuade a policy-maker to adopt a criminal justice system in which more than 50% of all jail inmates were innocent, and nine out of ten criminals were found not guilty!

So what are the alternatives to PMTs? Well, the paper helpfully goes on to explore some options. It looks at various permutations of simpler, more transparent and more intuitive targeting approaches, premised on a basic income transfer either to all, or to selected categories of, the population (children, the elderly, widowed, disabled or orphaned). It assumes the same overall budget for all the options (though it doesn’t allow for the additional costs involved in running a PMT), and it looks at the comparative poverty impacts of each. The verdict: “even under seemingly ideal conditions, the ‘high-tech’ solutions to the targeting problem with imperfect information do not do much better than age-old methods using state-contingent transfers or even simpler basic income schemes. We find that an especially simple demographic ‘scorecard’ method can do almost as well as econometric targeting in terms of the impacts on poverty. Indeed, on allowing for likely lags in implementing PMT, the simpler categorical targeting methods perform better on average in bringing down the current poverty rate. This conclusion would undoubtedly be strengthened once the full costs of fine targeting are taken into account”.

The paper thus demonstrates conclusively that, in terms of poverty reduction in the real world, PMT performs worse than simpler categorical approaches or even basic income schemes…as well as being administratively costly, morally reprehensible and socially divisive.

Hurrah! But why has this taken so long? And what are the implications for those countries that the World Bank has already persuaded to sign up to such an execrable model?

Nicholas Freeland is an independent consultant with over thirty years’ professional experience in social protection, food security, and poverty reduction.

[i] To misquote Mercutio in “Romeo and Juliet” by William Shakespeare.

[ii] Brown, C, Ravallion, M and van de Walle, D (December 2016), “A Poor Means Test? Econometric Targeting in Africa”, World Bank Policy Research Working Paper 7915, Washington DC.

[iii] See for example, Kidd, S and Wylde, E (September 2011), “Targeting the Poorest: An assessment of the proxy means test methodology”, AusAID, Canberra; and Kidd, S, Gelders, B and Bailey-Athias, D (2017), “Exclusion by design: An assessment of the effectiveness of the proxy means test poverty targeting mechanism”, International Labour Office, Social Protection Department (SOCPRO), Geneva.

Also published on Medium.

6 Responses to “Poxy Means Testing: it’s Official!”

  1. I much appreciate this positive review of my research paper with Cait Brown and Dominique van de Walle. But I don’t think you are being fair to the World Bank. If the Bank were the monolithic, sluggish institution, resistant to criticism and change, that you paint in your blog post, why did it support this research and allow the paper to be published in its own working paper series? The Bank’s researchers (of whom I was one for many years, before leaving the Bank in 2012) are continually trying to assess and improve the data and methods used in its operations, and by policy-makers.

  2. Stephen Kidd

    Dear Martin,

    Thanks for this comment. I agree with you that there are alternative thinkers in the World Bank who do interesting work, such as this. But this paper came from the Research Group in the Bank, which tends to do some great work; it was not from the Social Protection Group. In the latter we do find simplistic mantras, such as the push for all countries to use PMTs while introducing CCTs, public works and now the dreaded Social Registry, irrespective of the country context. I work in many countries and see the same recipe almost everywhere (there are occasionally one or two distinct voices from the Bank, but they are rare). Many of the publications coming out of the Social Protection Group are little more than propaganda, and often of low quality: see for example my critique of a WB paper on targeting, which you can find at http://bit.ly/2gCZyYt . Amazingly, that paper was even quality assured. And in our latest paper on the PMT, we pointed out recent World Bank SP Group publications that had claimed that the PMT is accurate (which we all know is nonsense). In 2011, we published a paper on PMTs (which you cited), but this apparently had no impact on the WB SP Group, which continued to promote the PMT, causing heartache and damage to the well-being of millions of people worldwide. I’m glad that the Research Group is now beginning to challenge the SP Group, and I wish them success (but, believe me, it will be a challenge to change the worldview of the SP Group ideologues). As you recently said in a talk, targeting the poor has become a fetish. So, how do we de-programme these guys from the poverty-targeting cult that they find themselves in and can’t seem to escape from?

  3. Yvonne Nawila Mwale

    It is really interesting for me to see this kind of debate. The finding is long overdue; nevertheless, I am grateful that the World Bank has come to terms with this. The PMT approach has contributed to low budgets for social development. In the context of leaving no one behind, it is important that universal approaches be advocated for. As Jimi Adesina loves putting it, there has been a diminishing of vision from social policy, to social protection, to social cash transfers, with so many years spent on piloting. Scaling up equally still leaves many of the intended beneficiaries behind: the number on social protection programmes is increased on the basis of how many were covered the previous year, not by looking at the population of those in poverty. There is much more to be done in de-programming and changing the mindset of policy-makers towards poverty reduction, and I will be looking forward to how this will be done. Worse still is the promotion of the single registry, which aims at avoiding the much-talked-about double dipping. I love the idea that poverty is multidimensional and is not equivalent to a lack of income/money/cash; there is nothing wrong in one person accessing three or more different social services to enhance their livelihoods. In Zambia now, when you hear social protection, you hear social cash transfers loudest. PMTs have caused more harm than good, leaving communities disunited and broken between those receiving and those not, those intended and those not, and full of lies, with claims that the 14 dollars received bimonthly has been responsible for building houses, among other things.

  4. thomas gabrielle

    Dear Both,

    Thank you for this interesting discussion. I come to SP from food security situation analysis and find many similarities in terms of concerns over how to target households. At WFP in Haiti we are using a modified (and simplified) version of the WB PMT, but are having many problems with the methodology. Firstly, the algorithm was so complex that even the international expert (WB) who wrote it made mistakes in coding, which caused about 20% of households to be mis-grouped. Secondly, there is always the complaint from local authorities (who are usually left out of the entire process due to fears of corruption) that many households do not ‘deserve’ to be included. Our government partner has also become quite sceptical. Personally (I am the guy who looks at the data), I am suspicious of such methods to identify the ‘most vulnerable’ 15% when the poverty rate in Haiti is over 50% (under 1.50 a day). In the past couple of years, new methodologies have been tried in Haiti (SPI, Freq Listing) which we are most likely going to try. At least the Freq Listing will give us a way of including local knowledge to identify a first-level grouping of ‘vulnerable households’, which we will then survey with a socio-economic tool. This will reduce the door-to-door from 100% to about 25-30%, and should (a) lower costs (making it more sustainable for the government to run) and (b) include local knowledge in a cross-checking manner.
    There have also been many discussions over the Single Registry, which seems to me a very difficult system to achieve in Haiti where there is very low government involvement, funding, or real ownership of the strategy. I hope to read more from your individual and organizational work on this topic.

