A funder asks: what is the impact of this art programme on children in government schools?
The evaluator reaches for the standard toolkit. Pre-post test. Control group. Attendance records. Maybe a socio-emotional learning scale — something built on the CASEL framework, or the SDQ (Strengths and Difficulties Questionnaire). These are validated instruments. They measure emotional symptoms, conduct problems, peer problems, prosocial behaviour.
They do not measure joy.
The problem with measuring art
Art is non-linear. A child who spends an hour painting does not produce a measurable output in the way that a child who spends an hour doing maths drills does. The maths drill yields a score. The painting yields an experience — and the experience might change the child’s relationship to the school, to self-expression, to risk-taking, to the feeling of being allowed to make something that has no correct answer.
These are real outcomes. They matter for the child’s development. They are also extremely hard to capture with instruments designed for linear, cognitive outcomes.
The standard response is to measure what you can and ignore what you cannot. Report attendance. Report the number of art sessions delivered. Maybe add a teacher satisfaction survey. The funder gets a number. The evaluator gets paid. The child’s experience of joy goes unrecorded.
We decided to try something different.
What we built
The evaluation design for a Mumbai schools art programme used a mixed-methods approach that tried to see the whole picture rather than the measurable fraction of it.
Classroom observation. Trained observers sat in art sessions and coded specific behaviours: spontaneous laughter, voluntary participation (raising a hand without being asked), helping a peer, trying something new after a failed first attempt, staying engaged past the session’s formal end. These are proxies for joy and engagement that a pre-post test cannot capture.
Child self-report. Simple questions, asked one-to-one, about how the session made them feel. Not Likert scales — open-ended prompts designed for children who may not be comfortable with formal assessment. “What was the best part?” “Did you want to keep going?” “Would you come back tomorrow if you could?”
Teacher perception. Teachers were asked about changes they observed in specific children — not aggregated scores, but named observations. “Riya used to sit in the back. Now she draws first and talks about it.” These micro-narratives are not generalisable, but they capture something a scale cannot.
Triangulation. The three data streams were cross-referenced. When a child showed up in the observation data as spontaneously engaged, in the self-report as wanting to continue, and in the teacher narrative as changed — that convergence was treated as evidence of impact, even though no single data point would survive a journal referee on its own.
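The triangulation logic can be sketched in a few lines of code. This is a minimal illustration of the convergence rule described above, not the evaluation's actual pipeline; the child IDs, field names, and the all-three-streams-agree threshold are assumptions made for the example.

```python
# Sketch of cross-referencing three data streams per child.
# A child "converges" when observation, self-report, and teacher
# narrative all independently signal engagement or change.

def triangulate(observation, self_report, teacher_narrative):
    """Flag children for whom all three data streams converge.

    Each argument maps a child ID to a boolean:
      observation       -> observer coded spontaneous engagement
      self_report       -> child said they wanted to continue
      teacher_narrative -> teacher described a specific change
    """
    children = set(observation) | set(self_report) | set(teacher_narrative)
    return {
        child: (observation.get(child, False)
                and self_report.get(child, False)
                and teacher_narrative.get(child, False))
        for child in children
    }

# Example with made-up IDs:
converged = triangulate(
    observation={"c01": True, "c02": True},
    self_report={"c01": True, "c02": False},
    teacher_narrative={"c01": True},
)
# c01 shows up in all three streams; c02 does not.
```

The point of the sketch is the design choice, not the code: no single stream is trusted on its own, and a claim of impact requires independent agreement across methods.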
What we found
The programme changed the atmosphere of the classrooms it operated in. Children in art sessions were more willing to take risks — not just artistic risks, but social ones. They spoke up more. They helped each other more. They laughed more. These changes were visible to observers and reported by teachers independently.
The methodology showed that non-linear, arts-based outcomes can be evaluated honestly — but only if the evaluation is willing to use methods that match the intervention’s shape. A randomised controlled trial would have required a control group of children denied art. The observation + self-report + teacher narrative approach was messier, but it was ethical and it captured what mattered.
The measurement lesson
Joy is a legitimate outcome. The fact that we lack standardised instruments for it is a failure of measurement science, not a failure of the outcome. When a funder asks “what is the impact?” and the evaluator says “we cannot measure joy,” the honest response is: then build a better instrument.
This is the argument at the heart of the measurement work we keep coming back to. The indicators we have shape what we see. When the indicator cannot see joy, the programme gets evaluated on attendance — and the thing that actually changed the children’s lives disappears from the record.
The Measurement Checklist asks: “Is the indicator easy to count, or worth knowing?” Joy is worth knowing. It is not easy to count. The gap between those two facts is where the interesting work happens.