The Indian edtech sector has a measurement problem, and it is a surprisingly simple one: nobody agrees on what “impact” means.
Product A measures learning gains using a pre-post test aligned to its own curriculum. Product B tracks “engagement” (time on app, sessions per week). Product C reports teacher satisfaction scores. Product D says its impact is “reach” — the number of schools using the product.
A funder sitting across from all four cannot compare them. A school principal choosing between them has no shared yardstick. A government education department trying to integrate edtech into its system has no framework for deciding which products are worth the bandwidth they consume.
This is the problem the EdTech Tulna framework was built to solve.
What Tulna does
Tulna means comparison in Hindi. The framework standardises what counts as an outcome across edtech products, so that a funder, a school, or a government department can compare apples to apples.
The framework works at three levels:
Level 1: Learning outcomes. Does the product change what a child knows or can do? Measured using curriculum-aligned assessments (ASER-style, NAS-aligned, or product-independent competency tests). This is the hardest level to meet and the one most products avoid.
Level 2: Pedagogical process. Does the product change what happens in the classroom? Teacher behaviour, student participation, time-on-task, question quality. Measured through classroom observation protocols.
Level 3: Adoption and usage. Is the product being used as intended? Download rates, active users, session frequency, teacher uptake. The easiest level to claim and the one most commonly reported as “impact.”
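To make the apples-to-apples point concrete, here is a toy sketch of how a funder might line products up against these levels. It is purely illustrative and not part of Tulna itself; the product names and level assignments are hypothetical, and Level 1 is treated as the strongest claim.

```python
# Toy illustration only: rank products by the highest level of evidence
# they have actually verified, so that "reach" numbers and learning-gain
# studies are not read as the same kind of claim.
from dataclasses import dataclass

# The three levels described above; 1 is the strongest claim, 3 the weakest.
LEVELS = {1: "learning outcomes", 2: "pedagogical process", 3: "adoption and usage"}

@dataclass
class Product:
    name: str
    highest_verified_level: int  # hypothetical self-reported evidence level

# Hypothetical portfolio; the assignments are illustrative, not real data.
portfolio = [
    Product("Product W", 3),  # reports downloads and active users only
    Product("Product X", 1),  # has an independent pre-post learning study
    Product("Product Y", 2),  # has classroom observation evidence
    Product("Product Z", 3),  # reports reach only
]

# Strongest evidence first.
for p in sorted(portfolio, key=lambda p: p.highest_verified_level):
    print(f"{p.name}: Level {p.highest_verified_level} ({LEVELS[p.highest_verified_level]})")
```

The point is not the code but the ordering: once every product is scored on the same scale, the weakest claims can no longer be presented as equivalent to the strongest.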
The politics of the framework
Asking edtech companies to adopt a common evaluation framework is like asking restaurants to agree on a shared food-safety inspection standard. Everyone says they support it. Nobody wants to go first. The companies with the weakest evidence prefer the status quo, where each defines its own success metrics. The companies with strong evidence prefer the framework, because comparison favours them.
The Tulna framework navigated this by making Level 3 (adoption) the entry point — easy to comply with, low threat — and then ratcheting expectations upward over funding cycles. A funder using Tulna could say: “For the first year, show us Level 3. By year two, we expect Level 2. By year three, Level 1 or explain why.”
This graduated approach worked because it gave companies time to build evaluation capacity rather than demanding it overnight.
What I learned
The interesting lesson from Tulna is about the politics of measurement, which is also the subject of the book I am writing. When you standardise what "impact" means, you change who wins. The companies that were reporting "reach" as their primary metric were the ones with the least evidence of learning gains. The framework made that visible.
Measurement is power. The choice of indicator determines the story. When every product gets to tell its own story using its own metrics, nobody can tell whether the sector as a whole is making children learn more. Tulna created a shared language. The resistance to that language told you everything about who was confident in their product and who was not.
The parallel to development more broadly is exact: when a country defines its own poverty line and measures progress against it, the definition shapes the outcome. Change the line, and the number of poor people changes overnight — without a single person eating a better meal.
The measurement is the politics. That is the thread through all of this work.