
Two five-letter frameworks for the bit before the dashboard

On CLEAR (for the pathway) and VALID (for the indicator) — two reviewer's checklists to apply before the logframe gets signed.

There are two moments where most measurement work gets quietly worse than it needed to be. The first is when the boxes-and-arrows of a Theory of Change are already on the wall and nobody wants to ask whether the logic actually holds. The second is when somebody has proposed an indicator and the room nods because nobody has the heart to say it does not measure the thing it claims to measure.

This is a note on two reviewer’s checklists for those two moments. Both are now live on the Canvas — free, printable, one page each.

CLEAR — for the pathway

CLEAR is for the moment when a Theory of Change or causal pathway is being reviewed. Five questions. Each letter is doing a separable job:

  • Causal logic. Does the pathway clearly show how one step leads to the next? Where does the logic jump? Read the if-then between every step. If a plausible reader could mutter “and then a miracle happens here”, you have found the jump. Either fill it with an intermediate step or admit the leap is an assumption and move it to the assumptions row.
  • Level clarity. Are activities, outputs, intermediate outcomes, and outcomes placed at the right level? The most common error is dressing an output up as an outcome. Trainings delivered is an output. Practice changed is an outcome. If a row could be ticked off by the implementer alone, it is probably not an outcome — it is the work that produces one.
  • Essential missing links. Have key stakeholders and systems been considered while designing the pathway? A pathway that depends on a frontline worker, a panchayat, a school principal, or a district officer should name them somewhere. If the actors and structures the pathway runs through are invisible on the diagram, the pathway is borrowing their effort without crediting it.
  • Assumptions and risks. What assumptions need to be made explicit? Where could they break down? Every arrow is an assumption in disguise. Pull the strongest three out of the diagram and write them as full sentences. Then ask: under what conditions does this stop being true?
  • Repetition / overlap. What is repeated in another pathway? Should it be merged, moved, or removed? Organisations running multiple parallel pathways tend to double-count the same outcome from different angles. A clean pathway is honest about what it alone is producing.
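CLEAR is a prose checklist, but two of its questions — unstated assumptions on arrows, and steps the logic never connects — are mechanical enough to lint. A toy sketch of that idea, with all step names and the data structure invented for illustration:

```python
# Toy sketch: a pathway as steps and arrows, with a lint that flags
# arrows missing an explicit assumption ("every arrow is an assumption
# in disguise") and steps that no arrow touches (orphaned boxes).
# All names here are invented for illustration.

steps = ["training delivered", "practice changed", "learning improved"]
arrows = [
    # (from_step, to_step, stated_assumption or None)
    ("training delivered", "practice changed",
     "teachers have time to apply new methods"),
    ("practice changed", "learning improved", None),  # not yet written down
]

def lint_pathway(steps, arrows):
    findings = []
    for frm, to, assumption in arrows:
        if assumption is None:
            findings.append(f"arrow '{frm}' -> '{to}': no explicit assumption")
    connected = {frm for frm, _, _ in arrows} | {to for _, to, _ in arrows}
    for step in steps:
        if step not in connected:
            findings.append(f"step '{step}': not linked to any arrow")
    return findings

for finding in lint_pathway(steps, arrows):
    print(finding)
```

The point is not the code but the discipline it encodes: every arrow carries a written assumption, and every box is reached by the logic rather than floating beside it.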

The full one-pager is at /canvas/causal-pathway.

VALID — for the indicator

VALID is for the indicator that lands in the logframe. Five questions, again separable:

  • Valid. Does the indicator measure the construct named in the ToC, not a convenient proxy? Write the construct in plain language (“do mothers feel supported during the perinatal period”) and the indicator next to it (“count of calls received”). If a stranger reading the two would not believe the indicator captures the construct, you have a proxy problem. The proxy may still be useful, but call it what it is.
  • Actionable. Does a result on this indicator tell the implementer what to do next? If the answer is “we report it” rather than “we do X”, the indicator is for the dashboard, not for the work.
  • Linked. Does the indicator correspond to a specific node in the causal pathway, not the project as a whole? If the indicator floats above the whole programme — “lives improved”, “outcomes achieved” — it is measuring vibes. Anchor it to one step in the chain.
  • Independent of gaming. How hard is the indicator to manipulate without producing the underlying change? Goodhart’s law in operating clothes. If a frontline worker under pressure can move the number without doing the work, the indicator will be gamed long before the programme is evaluated.
  • Disaggregable. Can the indicator be broken down by the equity dimensions that matter for this programme? Average improvement is the place inequities go to hide. Decide up front which cuts the indicator must support — caste, class, gender, geography, age, disability — and check that the data system can actually produce them.
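The Disaggregable question can also be checked before any data is collected: list the cuts the indicator has promised and confirm the data system carries those fields. A minimal sketch, with invented column names and records, of that up-front check — and of how a group-level view surfaces what the overall average hides:

```python
# Toy sketch: confirm a dataset can support the disaggregation cuts an
# indicator has promised, then show the group means the average hides.
# Column names, cut names, and values are invented for illustration.

required_cuts = ["gender", "district"]

records = [
    {"gender": "F", "district": "north", "score": 42},
    {"gender": "M", "district": "north", "score": 58},
    {"gender": "F", "district": "south", "score": 40},
    {"gender": "M", "district": "south", "score": 60},
]

def missing_cuts(records, required_cuts):
    # Which promised cuts does the data not actually carry?
    available = set().union(*(r.keys() for r in records))
    return [cut for cut in required_cuts if cut not in available]

def mean_by(records, cut):
    # Average score within each group of the given cut.
    groups = {}
    for r in records:
        groups.setdefault(r[cut], []).append(r["score"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(missing_cuts(records, required_cuts))  # [] — all promised cuts exist
print(mean_by(records, "gender"))  # the overall mean of 50 hides an 18-point gap
```

Here the overall mean is 50, while the gender cut shows 41 against 59 — exactly the inequity an undisaggregated indicator would bury.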

The full one-pager is at /canvas/indicator-test.

How they fit together

CLEAR is for the boxes and arrows. VALID is for the metric inside any one box. Run CLEAR on the pathway first, because there is no point asking whether an indicator is valid for a construct that is itself misplaced or unsupported. Once the pathway holds, run VALID on every indicator that has been proposed for any of its boxes.

Used together, they catch the two failure modes that the Measurement Trap book is trying to make legible: a pathway that does not say what it depends on, and an indicator that does not measure what it claims to. Most of the trouble at the evaluation stage was already baked into one or both of these moments.

The frameworks are free to use. The page sources are linked above. If you apply them on a real pathway and find a question missing — or one that is doing the wrong work — write in.
