Tool Development

This page provides information on the development and methodology of the Poverty Assessment Tools.

Ongoing Tool Development

Two general types of tools have been developed and certified:

  • Tools constructed from existing survey data, such as household expenditure surveys
  • Tools constructed from data collected firsthand through field surveys

In either case, the data must include a household consumption expenditure or income benchmark by which to classify households as truly very poor or not very poor, along with indicators about the household (demographics, housing characteristics, consumer durables or assets owned, and other categories). From this initially large pool of indicators, statistical methods are used to identify which 10 or 15 provide the strongest clues to the poverty status of each household. The resulting short list of indicators provides the basis for the poverty assessment tool for that country.
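As a rough sketch of this screening step, the hypothetical Python snippet below classifies households against a per-capita expenditure poverty line and ranks candidate indicators by their association with that benchmark status. The column names, the simple correlation-based ranking, and the function itself are illustrative assumptions, not the project's actual statistical procedure.

    import pandas as pd

    def screen_indicators(households: pd.DataFrame, poverty_line: float, n_keep: int = 15) -> list:
        """Rank candidate indicators against an expenditure benchmark (illustrative only)."""
        df = households.copy()
        # Classify each household as "very poor" (1) or not (0) using the
        # per-capita expenditure benchmark and the relevant poverty line.
        df["very_poor"] = (df["pc_expenditure"] < poverty_line).astype(int)
        # Rank the remaining columns (candidate indicators, assumed to be coded
        # numerically) by the strength of their association with the benchmark
        # status; plain correlation is a stand-in for the project's more
        # rigorous selection methods.
        candidates = [c for c in df.columns if c not in ("pc_expenditure", "very_poor")]
        scores = df[candidates].corrwith(df["very_poor"]).abs().sort_values(ascending=False)
        # Keep the short list that would form the basis of a country tool.
        return scores.head(n_keep).index.tolist()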

The accuracy of a particular set of indicators (that is, of a USAID Poverty Assessment Tool) is assessed by comparing the predicted poverty status with the "true" poverty status established by the benchmark national survey data. This technical note [PDF:55KB] explains the project's measure of accuracy and the different econometric techniques the team has developed to increase the accuracy of each country's Poverty Assessment Tool.
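As a simplified illustration of such a comparison (not the certification criterion itself, which is described in the technical note), the sketch below tallies how often a tool's predictions agree with the benchmark classification, along with the share of very-poor households missed and the share of not-very-poor households misclassified. All names are hypothetical.

    import pandas as pd

    def accuracy_summary(true_status: pd.Series, predicted_status: pd.Series) -> dict:
        """Compare predicted poverty status with the benchmark ("true") status.

        Both series are coded 1 for "very poor" and 0 for "not very poor".
        These are generic classification measures, not necessarily the exact
        accuracy criterion used to certify the tools.
        """
        very_poor = true_status == 1
        not_poor = true_status == 0
        return {
            # Share of all households whose status the tool predicts correctly.
            "total_accuracy": float((true_status == predicted_status).mean()),
            # Share of truly very-poor households that the tool misses.
            "undercoverage": float((very_poor & (predicted_status == 0)).sum() / very_poor.sum()),
            # Share of not-very-poor households misclassified as very poor.
            "leakage": float((not_poor & (predicted_status == 1)).sum() / not_poor.sum()),
        }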

It is important to recognize that these indicators are selected because of their collective ability to predict poverty levels. For this reason, a particular question on a PAT may not individually seem like a good predictor of poverty, and a question that local experts consider a good predictor may not appear on the survey at all.

The prioritization of new country tools is generally governed by levels of USAID microenterprise funding and the availability of high-quality existing data. As tools are certified and posted to the website, the results from the accuracy tests [PDF:125KB] are updated to include the newer tools.

The PAT Methodology

The approach used to construct new poverty assessment tools is built on the lessons learned and methods refined during the original AMAP Developing Poverty Assessment Tools Project. During that project, two types of field tests were run sequentially: tests of accuracy, run by survey firms in 2004, and tests of practicality, run by microenterprise practitioners in 2005 and 2006.

Input from local practitioners and the microenterprise community at large was sought throughout the test process to enhance ownership of the results. Practitioners submitted tools for testing and were involved in meetings and workshops where they offered feedback on the overall design of the test methodology and, later, on the findings. Most importantly, they were the implementers of the tests of practicality (see below), which highlighted the cost and ease-of-use implications of the sets of indicators that the accuracy tests found to be good poverty predictors.

The Test of Accuracy

In the first phase of testing (the tests of accuracy), households in four countries across the four main USAID regions were surveyed to test the predictive capacity of a variety of poverty measurement indicators. The selected countries were Bangladesh, Peru, Uganda, and Kazakhstan. They were selected on the basis of a number of criteria, including: the size of USAID microenterprise funding, both globally and within each region; the intensity of microenterprise activities; the existence of pre-existing expenditure data (such as that provided by the LSMS, SDA-IS, or another survey that includes an expenditure module) for calibration of the poverty line; and linguistic variation.

In each field test, representative samples of both client and non-client households in urban and rural settings answered questions from a composite survey [PDF:162KB] that covered a large range of potential poverty measurement indicators, including such categories as assets, food security, education, transportation, utilities and sanitation, social capital, and savings. The surveyors returned to the same households exactly 14 days later to implement an LSMS-based expenditure survey that provided the "benchmark" [PDF:112KB] of poverty levels. This two-week interval creates a bounded-recall situation (for example, "since our last visit, how much rice did you buy?") that increases the accuracy of this benchmark survey.

The composite survey contained groups of indicators submitted by microenterprise practitioners in 2003 and 2004. It was not a prototype of the tools that would be submitted to USAID for certification, but instead a "tool incubator" through which a large number of combinations of indicators were tested for accuracy. The composite survey instrument was adapted during a visit to each country by members of the poverty assessment team, who worked with a local survey firm to revise, pilot, and then implement the questionnaire.

The foremost methodological concern was to conduct the tests of accuracy in a controlled environment that was as similar as possible across countries. The key elements of this controlled environment included:

  1. The selection of the survey firms that undertook the tests. This involved developing criteria for the selection process that were applied uniformly across the different countries participating in the tests.

  2. The design and selection of the sample for the test. This required the development of guidelines that could be applied in each of the test countries.

  3. Training of the enumerators, supervisors, and management staff. A uniform training program was developed and implemented, so that differences in preparedness and in familiarity with the test questions would not bias the results.

  4. Rules and an implementation schedule for the field work. The schedule and workload of enumerators and the extent and type of supervision were harmonized across the different tests.

  5. Data entry system. The same software was used for data entry, and the data entry operators received the same amount of training.

Once the data were collected, IRIS identified the set of indicators most closely associated with a household being "very poor". The project team did so using a rigorous selection process and a series of statistical methods. First, it identified the 15 indicators that most closely track per-capita household expenditures (or income). Second, its experts performed statistical analysis using eight different estimators. In this manner, project staff identified the best-performing model of 15 indicators (with associated weights) for accurately identifying the poverty status of households in each country. This analysis also allowed the team to examine the stability of predictors, i.e., whether the indicators are consistent across countries in predicting poverty levels.
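The sketch below gives the flavor of that second step under loose assumptions: it cross-validates a couple of off-the-shelf estimators on the 15 selected indicators and reports how well each predicts the benchmark status. The estimators shown are placeholders; the project's actual eight estimators and its accuracy criterion are described in the technical note.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def compare_estimators(X: np.ndarray, very_poor: np.ndarray) -> dict:
        """Cross-validate candidate estimators on the selected indicators.

        X holds the 15 indicator values per household; very_poor is the
        benchmark classification (1 = very poor, 0 = not). The estimators are
        illustrative stand-ins, not the project's actual set of eight.
        """
        candidates = {
            "logit": LogisticRegression(max_iter=1000),
            "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        }
        # Mean 5-fold cross-validated accuracy for each estimator; the
        # best-scoring model (with its fitted weights) would become the tool.
        return {
            name: cross_val_score(model, X, very_poor, cv=5, scoring="accuracy").mean()
            for name, model in candidates.items()
        }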

The team also obtained LSMS household data from eight additional countries to ensure robust conclusions on tool construction and poverty accuracy.  Analyzed through the method described above, these data sets allowed for the development of poverty assessment tools for 12 countries in total.

As the analysis of the results from the accuracy tests progressed, a number of issues related to the measurement and improvement of accuracy had to be addressed by the IRIS team. This technical note [PDF:55KB] explains the evolution of the team's thinking about the most appropriate ways to measure accuracy, and the different econometric techniques that have been developed to increase the accuracy of the indicators.

Additional information about the field tests of accuracy (analysis of poverty accuracy by country, notes on the implementation of the composite and expenditure surveys by local survey firms, and field notes from IRIS consultants) can be found here.

Tests of Practicality

In the second phase of testing (the tests of practicality), local microenterprise practitioners tested draft tools constructed from indicators selected during accuracy testing, in order to provide information about a variety of criteria, especially cost (time, money, infrastructure, etc.) and process/implementation issues. The IRIS team considered a range of factors in assessing the tools for practicality. These included:

  • what data collection methodologies work best for which types of practitioners;
  • which indicators are easy or difficult to adapt, collect, and analyze;
  • how questionnaire length affects implementation;
  • how tools can be integrated into current data collection processes;
  • what level of expense is acceptable;
  • how staff can be adequately trained to implement such tools;
  • which data entry techniques can be implemented for the tools;
  • what quality control measures will be necessary; and
  • what the MIS requirements are.

Tests of practicality were implemented by 14 partner microenterprise development organizations, which collected information on the cost and ease of use of each tool. This information, along with information collected by project team members during field debriefs, was compiled into a report submitted to USAID. Read about the experience of practitioners in the Notes from the Field submitted to microLINKS from Senegal, Peru, Tanzania, and Uganda.

Indicators that appeared among the "best 15" in at least one of the twelve countries were included in the draft tools and tested for their practicality. Each question was rated on whether respondents found it sensitive or difficult, or whether the enumerator perceived that the answer had been falsified. The lessons learned from the practicality testing were used to remove impractical indicators from consideration for the final poverty assessment tools.

 
