
When design meets the field: Notes from three experiments in Indian classrooms

  • 3 days ago
  • 6 min read

By Fatema Ranpura (Head of Design)

In collaboration with Anamika Mukherjee (Sr. Manager, Program Implementation) and Bhoomi Shah (Design team) 


Most design conversations in the social sector stay pretty upstream. Theory of change, user journeys, wireframes. Very rarely do they follow the work all the way to a government school in Pune, or a BMC classroom in Mumbai. To the moment a parent actually picks up the phone to call another parent. To when a student gets handed a badge and walks back into class feeling, maybe for the first time, like they own something.


That is the part we wanted to see for ourselves. 




The Apprentice Project (TAP) works with underserved students across India, using an AI-powered chatbot called TAP Buddy to deliver personalized, 21st-century learning. On paper, we reach over 100,000 students across 3,000+ schools in 34 districts. That is real. But it is also incomplete.


Because reach and completion are two very different animals.


Source: TAP, Program Implementation team on ground activity

For months, our data kept pointing at the same quiet, frustrating pattern. Students were opening activities on TAP Buddy. They just were not finishing them. And you can tweak a UI all you want, but if the gap is not actually about the interface, no amount of design polish is going to close it.


What we were looking at was a behavioural problem. And behavioural problems need a different kind of thinking.


So our Design team and our Program Implementation team (the folks who are actually on the ground, in schools, every single week) sat down together and ran three field experiments. Three different levers, three different environments, one uncomfortable central question: what actually gets a student to finish what they started?




Experiment 1: The gamification one

Can competitive visibility move the needle on engagement?

Our first idea was, honestly, the obvious one. Make it competitive. Make it visible.


We ran this across 24 PMC secondary schools in Pune, with 1,442 students. Leaderboards for each school. Weekly progress posters going up on walls. Recognition for top schools, for teachers driving participation, for high-engagement students. The assumption was that visibility plus competition plus a little public recognition would do the heavy lifting.


Source: TAP, Poster used for the experiment


Here is what actually happened.


Access shot up. Tripled, in fact. Students were opening activities, schools were checking where they stood, and on the surface, it looked like the experiment was working.


But submissions barely moved: under 2% across five weeks.


That is the kind of number that makes you stop and actually look at what you built. And when we did, the pattern was pretty clear. In schools where a teacher was actively following up on the leaderboard, chasing students, making it matter, we saw real conversion. In schools where the leaderboard just went up and stayed there, it became wallpaper. 


Gamification, it turns out, is an amplifier. Not a trigger. Without some form of accountability underneath it, you get visibility. You do not get behavior change. 

 



Experiment 2: The parents one

What happens when you move accountability into the home? 

Okay so, if teachers drive completion in school, we thought, what drives it at home? That is a whole different ecosystem. So we tried something smaller and much more specific.


One school. Pune PCMC. 110 parents in a room. 


Our PI team walked them through what TAP is, what their kids were actually working on, and what completion looked like in practice. From that group, we picked four parent influencers, and they started a small WhatsApp-based peer network. Their job was simple: nudge other parents in the cohort to encourage weekly activity completion at home. 


Source: Sourav Debnath, Unsplash

The numbers here told a very different story from Experiment 1. 


Access moved by just 5 percentage points. Not huge. But submissions jumped by 16 percentage points. And that is a meaningful shift. 


What changed was the household. Parents who understood what TAP was trying to do became a soft accountability layer around the student. The kid did not suddenly get more motivated. The environment around the kid got more supportive. And in most cases, that is what actually tipped completion over the edge. 




Experiment 3: The student influencers one

What happens when students influence each other? 

The third experiment was the one I was most nervous about, honestly. Adult-led accountability is relatively well-understood. Peer-led accountability among kids aged 10 to 13? A lot less predictable. 


Source: TAP, Program Implementation team on ground activity

We ran it at one BMC school in Mumbai, 900 students across grades 5 to 8. We picked 10 student influencers from across classrooms, briefed them, gave them real data (their classroom's registration numbers, engagement stats), and put a TAP Monitor Guidebook in their hands so they knew what to do. 


But the thing that surprised us was not the data we gave them. It was the identity. 


Badges. Stickers. Pouches. Visible, physical markers that said, loud and clear, you are responsible for this. 


Registrations jumped 43% in 20 days. 


Students influence students in ways institutions never really will. A classmate telling you to sign up just lands differently than a WhatsApp nudge from a program team. 


We knew this in theory. Seeing it move numbers that fast was something else. 


That said, the experiment also showed us its own soft spot. We had to coordinate every week. Check-ins, school visits, active monitoring. The moment that reinforcement thinned out, so did the momentum. Identity gave these kids ownership. But ownership on its own, without a support structure, is really hard to hold. 


Enthusiasm is not sustainability. That was the biggest lesson from this one.



So what did we actually learn? 


Each experiment moved a different part of the funnel, and that mattered. Gamification pulled people in. Parents pushed them across the finish line. Peer influence got them to sign up in the first place. These are not the same lever. They cannot be swapped around. A lot of strategies fail because they treat them as interchangeable when they are actually doing very different jobs. 


But the thing that stayed constant across all three? Teachers. 


Every single time, teacher ownership was the most consistent variable. Where a point-of-contact (POC) teacher was engaged, outcomes moved. Where they were not, even our best interventions plateaued. No gamification model, no parent network, no student ambassador strategy was ever going to substitute for that. It is the foundation everything else sits on, and we should probably stop pretending otherwise.


Source: TAP, Artwork created by students as part of TAP Buddy projects

The other takeaway, which was more humbling: there is no low-touch shortcut to behaviour change. Not really. Every model we tested needed sustained human ownership to hold. Gamification without accountability becomes noise. Peer influence without structure fizzles. Parent activation without follow-up just evaporates. 


So what we are working toward now is not a single winning model. It is a blended one.

Teacher recognition anchoring accountability. Student influencers creating peer momentum. Parents reinforcing completion at home. Each piece doing what it is actually good at. Nothing doing everything. 



About scaling (because we get asked this a lot)


All three of these experiments have real constraints in their current form. They are high-touch. They need physical materials, active monitoring, people showing up consistently. That does not travel easily across hundreds of schools. 


The insights scale. The current execution does not. And being honest about the difference is, I think, what separates responsible scaling from the kind that just looks good in a report. 


Our next step is to iterate on the execution based on these learnings and arrive at a model for these experiments that can actually scale.



The role of evidence-based design thinking at TAP


A lot of design work in the social sector runs on intuition and precedent. Things get replicated because they worked somewhere else, or because they looked good in a funder's deck. Experiments, when they happen at all, happen informally. And results tend to get filtered through what people want to hear. 


We are trying to work differently. The TAP Design team is built around the idea that design decisions should be grounded in evidence, not comfort. That means actual field experiments, measurable outcomes, honest reporting on what did not work, and a willingness to learn before we scale. 


It is not easy in a resource-constrained environment. These experiments cost the PI team a lot of coordination work. But I genuinely believe this is how good design in this sector has to function. Testable. Measurable. Willing to be wrong. 


We are still in the middle of figuring this out. We do not have the full answer yet. But the questions we are asking now are much sharper than the ones we started with, and the data behind them is better too. 


That's what design as a practice looks like when it's embedded in real-world systems, working shoulder-to-shoulder with implementation teams who know the field. 


Signing off for now. We will continue learning from these experiments and sharing our findings!






 
 