The PEARL problem: when district data is all you have

In a country where the census has not been updated since 2011, how do you estimate what is happening at the block level? A note on small-area estimation and the politics of granularity.

India’s last census was conducted in 2011. It is now 2026. Every policy decision, every programme allocation, every baseline survey that references “population” is working with data from a country that no longer exists in the same shape.

The district is where most development programmes land. A District Collector sits at a desk, receives a budget, and tries to allocate it across blocks and sub-blocks. But the data she has — from the census, from NFHS, from HMIS — is almost always at the district level. She knows the district average. She does not know which blocks are doing well and which ones are quietly falling behind.

This is the problem PEARL was built to solve.

What PEARL does

PEARL is a small-area estimation methodology. It takes district-level survey data — the kind that already exists in NFHS rounds, HMIS reports, and various programme MIS systems — and uses statistical modelling to produce block-level estimates. The estimates are honest about what they are interpolating and what they are guessing. Each estimate comes with a confidence interval.

The technique is borrowed from survey statistics (the Fay-Herriot model and its extensions), adapted for Indian administrative geography. The key assumption: blocks within a district share structural characteristics (infrastructure, health system staffing, road connectivity) that can be modelled, but they differ enough that the district average hides real variation.
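The basic Fay-Herriot machinery can be sketched in a few lines. The sketch below is the generic area-level model — a Prasad-Rao moment estimate of the between-area variance, a GLS fit, then shrinkage of each noisy direct estimate toward the regression prediction — run on synthetic data. It is not PEARL's actual implementation; every variable name and value here is illustrative, and the MSE line is the first-order approximation only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (all values hypothetical): m small areas ("blocks"),
# one administrative covariate each, a noisy direct survey estimate per
# block, and known sampling variances D for those direct estimates.
m = 25
x = rng.uniform(0.0, 1.0, m)                    # auxiliary covariate
X = np.column_stack([np.ones(m), x])            # design matrix with intercept
D = rng.uniform(0.02, 0.10, m)                  # known sampling variances
theta = 0.3 + 2.0 * x + rng.normal(0, 0.2, m)   # true block-level values
theta_hat = theta + rng.normal(0, np.sqrt(D))   # direct survey estimates

def fay_herriot_eblup(theta_hat, X, D):
    m, p = X.shape
    # Step 1: OLS residuals and the Prasad-Rao moment estimate of the
    # between-area variance sigma2_v, truncated at zero.
    beta_ols, *_ = np.linalg.lstsq(X, theta_hat, rcond=None)
    resid = theta_hat - X @ beta_ols
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    sigma2_v = max(0.0, (resid @ resid - np.sum(D * (1 - np.diag(H)))) / (m - p))
    # Step 2: GLS regression with weights 1 / (sigma2_v + D_i).
    w = 1.0 / (sigma2_v + D)
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * theta_hat))
    # Step 3: shrink each direct estimate toward its regression prediction.
    # Areas with noisier direct estimates (large D_i) get more shrinkage.
    gamma = sigma2_v / (sigma2_v + D)
    eblup = gamma * theta_hat + (1 - gamma) * (X @ beta)
    # First-order MSE, ignoring uncertainty in beta and sigma2_v.
    mse = gamma * D
    return eblup, mse, sigma2_v

eblup, mse, sigma2_v = fay_herriot_eblup(theta_hat, X, D)
```

The shrinkage factor `gamma` is where the honesty lives: when a block's direct estimate is very noisy, the model leans on the regression; when it is precise, the model leaves it alone — and the reported MSE reflects which case you are in.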

Why this matters now

The 2011 census is fifteen years old. The 2021 census was postponed indefinitely. NFHS-6 is in the field but will not produce block-level estimates. HMIS data is facility-level but not population-representative.

A programme manager in Khunti district (Jharkhand) or Dhubri (Assam) needs to know which blocks to prioritise for a maternal health intervention. The district average tells her nothing useful — it smooths over the blocks where the problem is worst.

PEARL fills that gap. It does not replace a census. It does not claim to. What it does is give a programme manager a defensible number at a sub-district level, with a clear statement of how much the estimate can be trusted.

The politics of granularity

Measurement is a political act. The choice to measure at the district level rather than the block level is a choice about who gets seen. A district-level indicator can show that “things are improving” even when specific blocks are getting worse — because the average hides the variation.
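The arithmetic of that masking is easy to show. A toy illustration with hypothetical numbers — three blocks, one indicator, two years:

```python
# Hypothetical coverage figures (%) for three blocks in one district.
year1 = {"Block A": 40, "Block B": 60, "Block C": 80}
year2 = {"Block A": 35, "Block B": 75, "Block C": 85}

avg1 = sum(year1.values()) / len(year1)   # 60.0
avg2 = sum(year2.values()) / len(year2)   # 65.0

# The district average rises (60.0 -> 65.0), so the district-level
# report says "improving" -- while Block A has fallen (40 -> 35).
```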

PEARL makes the variation visible. That is its real contribution. The statistics are straightforward (multilevel models, area-level random effects, auxiliary data from administrative sources). The politics is harder: once you show that Block A is doing twice as badly as Block B, someone has to explain why the budget was split evenly.

Where this sits

PEARL was developed for a climate and health fund working across four districts in India. The methodology is documented in a technical paper. The approach is generalisable — any country with district-level surveys and no recent census faces the same problem, and the same estimation approach applies.

The field kit companion to this work — The Measurement Checklist — asks practitioners twelve questions before they commission a study. Question 11 is: “What do you refuse to count?” PEARL is one answer to the reverse question: what happens when you refuse to stop counting just because the census stopped?
