Third strike and you’re out? Rich data source reveals proxy means test failures

What do you do when your most important poverty measurement tool on a poverty outreach programme looks as if it is not working? Guest blogger Julie Lawson-McDowall writes.

Living standards in Zambia: Sources of data were not matching up


Not so long ago, we at CRS’s Expanding Financial Inclusion (EFI) project had a bit of a moment: it looked as if our main poverty measurement tool wasn’t working very well. We were using the ‘Progress out of Poverty’ Index (PPI) to measure poverty. For those who haven’t come across it, the PPI is a poverty measurement tool that uses the answers to ten questions about a household’s characteristics and asset ownership to compute the likelihood that the household is living below different national and international poverty lines. The PPI draws on national living conditions survey data to estimate a household’s poverty level and is thus a type of proxy means test (PMT). Since the EFI project’s objective was to reach the poorest people and include them in our savings groups (Savings and Internal Lending Communities), we had a problem if we couldn’t measure whether we were reaching poor people or not.
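To make the mechanics concrete, here is a minimal sketch of how a PPI-style scorecard works. The questions, point values and lookup table below are invented for illustration; real PPI scorecards are country-specific and calibrated against national survey data by the PPI team.

```python
# Illustrative sketch of a PPI-style scorecard (NOT a real scorecard).
# Each answer to a household question carries a point value; the answers
# are summed into a score, and the score is converted to a poverty
# likelihood via a lookup table calibrated against national survey data.

answer_points = {
    "roof_material": {"thatch": 0, "iron_sheet": 6, "tiles": 11},
    "owns_radio": {"no": 0, "yes": 4},
    "household_size": {"7+": 0, "5-6": 5, "1-4": 9},
}

def ppi_score(answers):
    """Sum the point values for a household's answers."""
    return sum(answer_points[q][a] for q, a in answers.items())

# Hypothetical score-band -> % likelihood of living below $1.25/day.
likelihood_below_125 = {
    range(0, 10): 95,
    range(10, 20): 89,
    range(20, 101): 60,
}

def poverty_likelihood(score):
    """Look up the poverty likelihood for a given score."""
    for band, pct in likelihood_below_125.items():
        if score in band:
            return pct

household = {"roof_material": "thatch", "owns_radio": "yes", "household_size": "5-6"}
s = ppi_score(household)          # 0 + 4 + 5 = 9
print(s, poverty_likelihood(s))   # prints: 9 95
```

Note that the output is a *likelihood* of being below a given poverty line, not an income estimate: this is what makes the PPI a proxy means test rather than a direct income measure.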

So, why did we think we had a problem? The EFI project reached over half a million people across four countries (Burkina Faso, Senegal, Uganda and Zambia).  But, uniquely, for one group of 255 households in Zambia, we had two sources of data – and these two sources were not lining up.  In addition to the PPI survey data, for these 255 households we had data from a two-year financial diaries study.  While the PPI data told us that around three-quarters of our group members in Zambia had at least an 89% chance of living on less than $1.25 a day, the income and expenditure records in our financial diaries suggested otherwise.  What to do?

Thanks again to the diaries, we had a third – and unusually rich – data source for poverty measurement to hand: the 14 field officers who had spent two years living in the communities, each visiting the same 25 households weekly to collect financial data.  These officers knew ‘their’ households inside out.  We pulled them in for an in-depth wealth and wellbeing ranking exercise and reviewed each household’s situation in terms of income, resilience, assets and livelihoods.

When we compared the results from the three data sources (see Table 1, below), they all followed the same broad trend (which was a relief): for example, households that the wealth ranking placed as ‘well-off’ were also less likely to be poor according to the PPI likelihoods and had the highest expenditures, as measured by the Financial Diaries.

[Table 1: Comparison of wealth ranking categories, PPI poverty likelihoods and Financial Diaries expenditure]

But, when we looked more closely, discrepancies began to emerge: for example, households classified as ‘managing’ spent over one and a half times more than households classified as ‘poor’, but this difference was not reflected in the PPI scores, where the two categories were only 7 points apart.  In short, the PPI was not sensitive to differences across households that were all highly likely to be poor but where some were much poorer than others.

It was clear that the PPI was struggling to measure differences in the depth of poverty.  In other words, we had an absolute/relative poverty issue: nearly everyone in our Financial Diaries study was below the poverty line, but some households’ expenditure was around 80% of the poverty line while others’ was only 40%.  Nonetheless, in relative terms, the differences in wealth, assets, resilience and livelihoods between these households were important enough for local ‘experts’ to class some as managing well and others as poor.  This suggested that the PPI was not doing a great job of distinguishing between households with different income levels where most were below the poverty line.

But, as we dug deeper into the data, another problem became clear: PPI poverty likelihood and income were not matching up.  Households falling into the higher poverty likelihood categories should have lower per capita expenditures, and the relationship should be monotonic: the higher the poverty likelihood according to the PPI – in other words, the greater the chance of being poor – the lower the per capita expenditure should be.  Yet we found several cases where households in a lower PPI poverty likelihood category – i.e. supposedly better-off households – had lower per capita expenditure than supposedly poorer households.  This is illustrated in Figure 1, below, where each mark shows the number of households in a poverty likelihood category and that category’s poverty likelihood percentage.  Generally, the graph slopes from left to right, which is what one would expect: as the poverty likelihood decreases, per capita expenditure should increase.  But the devil is in the details: at poverty likelihoods above 80 percent, as measured by the PPI, the line sometimes turns back on itself, sloping from right to left, suggesting that some households with a greater likelihood of poverty according to the PPI actually had higher per capita household expenditures than households with lower poverty likelihoods.
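The consistency check described above can be sketched in a few lines: sort the categories by poverty likelihood and flag any adjacent pair where expenditure rises instead of falling. The figures below are invented for illustration, not the actual EFI diaries data.

```python
# Sketch of the monotonicity check: if the PPI is working, mean per
# capita expenditure should fall as the PPI poverty likelihood rises.
# The data points below are INVENTED for illustration.

# (poverty likelihood %, mean daily per capita expenditure in $)
categories = [
    (50, 1.40),
    (70, 1.10),
    (82, 0.90),
    (89, 0.95),  # higher likelihood but HIGHER expenditure: a violation
    (95, 0.70),
]

def monotonicity_violations(cats):
    """Return adjacent category pairs where expenditure rises even
    though the poverty likelihood also rises."""
    cats = sorted(cats)  # order by increasing poverty likelihood
    return [
        (lo, hi)
        for lo, hi in zip(cats, cats[1:])
        if hi[1] > lo[1]  # expenditure went up where it should go down
    ]

print(monotonicity_violations(categories))
# prints: [((82, 0.9), (89, 0.95))]
```

Any pair the check returns is a ‘line turning back on itself’ of the kind visible in Figure 1 above the 80 percent likelihood mark.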

[Figure 1: Per capita expenditure by PPI poverty likelihood category]

What can we conclude?  Overall, the PPI did a reasonable job of tracking broad trends in poverty for the households in the Financial Diaries study.  But it struggled to distinguish among households living below the poverty line – and in Zambia, around 40% of the population is below the extreme poverty line and around 64% is below the basic poverty line.

What are the consequences for programming if the PPI is insensitive to depth of poverty differences?  First, it suggests that the PPI – a PMT – should not be used for targeting when the goal of targeting is to extend the depth of outreach among the poor, not simply to reach the poor. This means that, in Zambia at least, a PMT alone is not going to identify the poorest where so many are poor.

For those still wanting to use the PPI for targeting, we would recommend complementing it with other tools in future (although which tools would, as ever, depend on the objectives and the budget available).  Second, the data presented here suggest that the PPI should not be used to measure progress from one level of poverty to another – in other words, it should not be used to measure progress within poverty, only more general progress out of poverty – because it is not precise enough for that.

Finally, what was great about this piece of work was that we were able to share and discuss our findings with the PPI team and listen to their ideas about how they are planning to continue to refine and improve this tool, for example, by making it more sensitive to sub-national poverty variations.

Anyone wanting to read the brief or full report for this study can click here.

The Expanding Financial Inclusion in Africa Project (2013-2017) was a Catholic Relief Services project, funded by the MasterCard Foundation.  This research was carried out with Samuel Beecher and Benjamin Allen from CRS and Guy Stuart of Microfinance Opportunities. 

Julie Lawson-McDowall works within some of the most effective recent programming responses to extreme poverty and gender inequality: cash based interventions, social protection, financial inclusion and women’s empowerment programming. She was the Research Coordinator for the Expanding Financial Inclusion in Africa Project run by CRS. Having undertaken long term work in Zambia, Somalia, Kenya, Malawi, Bangladesh and Pakistan, Julie is currently Oxfam’s Global Advisor on Social Protection, Cash and Resilience. 

One Response to “Third strike and you’re out? Rich data source reveals proxy means test failures”

  1. Thanks for this post, Julie, and it was indeed good to discuss these challenges with you and your team. We continue to improve the tool based on ongoing experience and feedback we get from users like CRS, so your work is critical. Our Technical Director responded about some of the changes we are making in a comment to this post on the PPI blog –

    One thing I’d like to caution against, though, is making a generalization from one application of the PPI (for a sample of 255 households in one part of one country) to Proxy Means Tests in general. Otherwise, points are well taken and we welcome more testing of the PPI in other contexts, which can help us continue to adapt and improve the tool. Reach out to us at
