At PRF we believe that rigorous evidence can be a powerful tool for social change. We also know that real-world evaluation, especially using experimental methods, is not for the faint-hearted.
That’s why we created a grant round specifically to support seven non-profit organisations embarking on experimental evaluations of their social impact programs. Our aim is to help them generate strong evidence of impact while also learning, together, what it takes to make these methods work in the complexity of everyday service delivery.
Earlier this week, the grant recipients met online for our second group check-in. The session reaffirmed a hunch we’ve had since the beginning: the challenges of running experimental evaluations are often shared, and so is the value of peer support.
Here are five lessons from the group that may be useful to others thinking about applying experimental methods to evaluate their programs.
1. Protocols build clarity and commitment
Several grant recipients reflected on how the process of writing trial protocols (or pre-analysis plans) helped sharpen their evaluation questions and clarify what success would look like. As one participant put it, “You want to collect all the data — but the protocol forces you to ask what you really need to know.”
The group also warned: publishing a protocol means you’ll need to explain and justify any changes later. That’s accountability, and that’s the point.
2. Ethics approvals aren’t just a hurdle: They’re a design tool
While ethics processes can be time-consuming, many teams described how they used them to improve study design. In some cases, discussions with ethics committees led to more ethical and practical recruitment or randomisation strategies. In others, the process prompted changes to protect participant wellbeing, particularly for young children or highly vulnerable groups.
3. Methodological trade-offs are inevitable
From video-coding parent-child interactions to designing surveys that won’t exhaust school-aged kids, teams are navigating the tension between rigour and feasibility. One team moved from observational assessments to self-report tools because of logistical and resource constraints, while another debated how many participants they would need to detect meaningful impact across multiple outcomes. These are hard trade-offs, but surfacing them early gives teams the best chance to balance scientific validity with real-world pragmatism.
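To give a flavour of what that sample-size debate looks like in practice, here is a minimal sketch of a power calculation for a two-arm trial with several primary outcomes. The effect size, power target, and number of outcomes below are illustrative assumptions, not figures from any grantee’s evaluation.

```python
# Illustrative sample-size sketch (not any grantee's actual calculation).
# Assumes a two-arm trial, equal allocation, and a simple Bonferroni
# adjustment across a hypothetical number of primary outcomes.
import math
from statsmodels.stats.power import TTestIndPower

n_outcomes = 3                 # hypothetical number of primary outcomes
alpha = 0.05 / n_outcomes      # Bonferroni-adjusted significance level
effect_size = 0.3              # assumed standardised effect (Cohen's d)
power = 0.8                    # conventional 80% power

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=alpha,
    power=power,
    ratio=1.0,                 # equal numbers in treatment and control
)
print(f"Participants needed per group: {math.ceil(n_per_group)}")
```

Even a rough calculation like this shows how quickly the required sample grows as the expected effect shrinks or the number of outcomes rises, which is exactly the trade-off teams are weighing.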
4. Timelines slip, but learning happens anyway
Several teams reported delays: ethics processes took longer than expected, implementation partners needed more time to onboard, and recruitment proved tricky. But these delays also opened up space for deeper stakeholder engagement and iterative refinement of plans. As one team noted, “We built fat into the timeline, and thank goodness we did.”
5. This work is better (and more joyful) together
Perhaps the most powerful moment of the session came before the evaluation updates even began. Each participant shared a recent moment of joy, from sunrise swims and horse camp drop-offs to warm loaves of sourdough and kids on comedy festival stages. It reminded us that evaluation isn’t just technical work; it’s human work.
And while each team’s project is different, the emotional terrain (excitement, doubt, tenacity, curiosity) is something we all recognise. The relationships being built across this cohort are becoming as valuable as the technical assistance we offer. That’s something we’ll be nurturing over the coming months.
Thinking about an experimental evaluation?
Our grantees recommend:
- Start small and build in time for iteration.
- Be clear from the start about what success looks like and how big a difference you expect your program to make.
- Talk to others doing similar work and don’t be afraid to ask for help.
- Expect delays but use them well.
- And most importantly, stay anchored in your purpose: understanding whether your program is making the difference it’s meant to.
We’re grateful to the grant recipients for sharing their lessons and learning in real time, and we look forward to sharing more insights as the evaluations unfold.