Experimental evaluation designs, such as Randomised Controlled Trials (RCTs), which are widely used in medical trials, are underused in Australia’s social impact sector, according to PRF Measurement, Evaluation, Research and Learning Associate Virginia Poggio. The hesitation, she says, often stems from misconceptions about their complexity, cost, and ethical implications.
Virginia is part of the team facilitating PRF’s Experimental Evaluation open grant round, which aims to build understanding and experience of these evaluation techniques in Australia so organisations can better measure and create social impact.
“With the right support and approach, these methods can profoundly enhance our understanding and effectiveness in addressing social issues,” she says.
The ultimate goal of the grants, says Virginia, is to motivate the sector to embrace experimental evaluations as a powerful tool within standard evaluation practice.
“The potential of experimental evaluations lies in their ability to provide rigorous evidence of what worked, for whom, and under what conditions. By isolating the effects of specific interventions, they can reveal causality rather than mere correlation. This depth of insight is not only valuable for organisations working to create meaningful change but can also contribute to a broader evidence base that can inform policy and practice across the sector.”
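To make that distinction concrete, here is a minimal illustrative sketch in Python, using entirely hypothetical numbers, of why random assignment lets a simple difference in group means be read as a program’s causal effect rather than a mere correlation:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical setup: 200 eligible people, an outcome score, and a program
# that (by construction in this simulation) raises scores by 5 points.
n = 200
baseline = rng.normal(60, 10, size=n)   # what each person would score anyway
true_effect = 5.0

# Random assignment is the isolating step: treatment and control groups
# differ only by chance, not by motivation, need, or any other factor.
treated = rng.permutation(n) < n // 2
outcome = baseline + true_effect * treated

# With randomisation, the difference in group means estimates the effect.
estimated_effect = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated program effect: {estimated_effect:.1f} points")
```

Because assignment is random, any systematic difference between the groups can be attributed to the program itself; without randomisation, the same comparison would mix the program’s effect with whatever drove people into it.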
Ahead of grant EOIs closing on 23 July, we asked Virginia to bust four common myths about experimental evaluations.
MYTH ONE: They’re expensive
One prevalent myth is that experimental designs are prohibitively expensive and resource intensive. While it's true that they require careful planning and execution, the costs can be manageable, especially when weighed against the benefits of robust, actionable data. Inspired by similar successful initiatives (such as Arnold Ventures in the USA), our open grant round aims to demonstrate that meaningful experimental evaluations can be conducted with grants of up to $300,000.
MYTH TWO: They’re unethical
Another common misconception is that experimental designs are inherently unethical, particularly in social impact contexts involving vulnerable populations. However, when designed and implemented with ethical guidelines at the forefront, experimental designs can respect participants and minimise harm. For instance, randomly allocating places in a program from a waitlist does not change the number of people who ultimately receive the program; it simply provides a fair way to select participants when resources are scarce. PRF is committed to supporting ethical evaluations, including partnering with First Nations researchers to ensure cultural appropriateness and sensitivity.
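A waitlist lottery of this kind amounts to nothing more than a fair random draw. The sketch below is a hypothetical Python example (the names and numbers are invented, not drawn from any PRF program):

```python
import random

# Hypothetical scenario: a program has 30 places this intake and 75 people
# on the waitlist. Everyone gets the same chance; the lottery decides order.
waitlist = [f"applicant_{i}" for i in range(75)]
places = 30

draw = random.Random(2024)                 # fixed seed keeps the draw auditable
ordering = draw.sample(waitlist, k=len(waitlist))

participants = ordering[:places]           # receive the program this intake
comparison_group = ordering[places:]       # stay on the waitlist for now
```

The same number of people receive the program as under any other selection rule; randomisation simply replaces an arbitrary or potentially biased choice with a transparent one, and creates a fair comparison group as a by-product.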
MYTH THREE: They’re complex
One widespread myth is that experimental evaluations, particularly RCTs, are inherently complex and difficult to implement. This misconception often deters organisations from using these powerful methods, for fear they lack the expertise or resources to manage the design and execution. However, with proper guidance and support, the complexity can be significantly reduced. By breaking the process into manageable steps and providing access to expert evaluators, these grants aim to show that experimental evaluations can be straightforward and accessible, enabling organisations to harness their full potential for social impact measurement.
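One of those manageable steps is working out, before any data is collected, how many participants an evaluation needs. As an illustrative sketch only, the standard power calculation below uses the statsmodels Python library with hypothetical planning values (a Cohen's d of 0.3, the conventional 5% significance level, and 80% power):

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: detect a small-to-moderate effect
# (Cohen's d = 0.3) at the 5% significance level, with an 80% chance
# of finding the effect if it is really there.
n_per_arm = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_arm:.0f}")  # roughly 175
```

A calculation like this, done early, turns an intimidating design question into a concrete recruitment target.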
MYTH FOUR: You can only choose one
A final myth is that organisations must choose between experimental methods and other evaluation designs, such as quasi- or non-experimental approaches. In reality, these approaches can and should be used together to provide a more comprehensive understanding of an intervention's impact. Experimental methods such as RCTs offer robust evidence of effectiveness by isolating an intervention's impact from other variables, but they are greatly complemented by contextual insights into the experiences and perceptions of participants. By combining these methods, organisations can gain a fuller, more nuanced picture of how and why an intervention works, addressing not just the "what" but also the "how" and "why" behind the outcomes. PRF encourages this integrative approach to ensure that evaluations are both rigorous and deeply informative, capturing the complexity of social impact interventions.
Learn more about the Experimental Evaluation open grant round.