It’s no secret that solving society’s myriad challenges is hard. Really hard. On one hand, millions have left poverty in the last few decades and the world has seen significant progress toward the Millennium Development Goals. On the other hand, the history of the social sector is littered with examples of failed projects and billions of wasted dollars in everything from foreign aid to developing countries to school reform in the United States. Even so-called silver bullets like microfinance and charter schools eventually yield mixed results and unmet expectations, with new innovations following old, ineffective patterns.
Upon close examination, many of these efforts fall prey to similar pitfalls. In our experience with hundreds of organizations working in dozens of sectors around the globe, too many funders, investors, and practitioners take it as an article of faith that their efforts for good will work. But good intentions are not enough; passionate efforts don’t always equal impact. The problem is that this pivotal fact usually goes unnoticed.
In contrast, those who generate significant positive impact, whether through traditional or innovative means, typically take a more “scientific” approach to social impact. Rather than assuming X program will lead to Y outcome, the science of social impact views strategies, programs, and activities more as theories that need to be tested and refined over time. In our experience, this involves a simple formula:
Rigorous Theory of Change + Hypothesis-Driven Measurement
+ Continuous Improvement = Positive Impact
Of course, the elements of this formula are not new. The challenge of implementing it is twofold. First, repetition in setting the record straight is warranted: well-intended do-gooders continue to simply assume impact. And second, even leading organizations separate the three components of the formula, fail to infuse the theory or the improvement with data, or both. The consequence is a lot of time spent crafting an impact strategy or conducting a robust evaluation without proving (or disproving) the underlying theory or improving the outcome.
Here are a few practical tips to be a bit more scientific in funding, investing, or service delivery in the social sector.
Question Your Theory
Anyone can scribble out a plausible theory of change (aka ‘impact strategy’ or ‘logic model’), even one that includes the three essential components: (1) a definition of the target population, (2) a description of the near- and long-term outcomes, and (3) the programs, services, or experiences to be provided. Good social sector scientists go beyond what’s plausible to grapple with what is and isn’t reasonable based on available data. The next time you’re developing a theory of change, take the time to:
- Feign Ignorance – Even if you really have been there and done that, use all the evidence you can to inform and test your strategy. Whether you simply interview a few potential beneficiaries or do a meta-analysis of the research literature, gather whatever evidence you can to guide your thinking.
- Acknowledge Your Assumptions – Look back through your strategy while asking questions like: “Am I sure X will lead to Y?”, “What if this piece of the puzzle isn’t right?”, and “What else might be necessary for success?” Making assumptions explicit will expose design flaws and encourage intentional learning over time.
Measure to Prove and Improve
Once you have a data-driven theory of change on paper, treat it like any other theory—as a hypothesis to be tested. This is where measurement comes in. But resist your evaluators’ instincts to jump straight to methodologies and metrics. First, decide what you need to know to (a) prove (or disprove) that each of the elements works as expected and (b) improve how effectively your initial outcomes result in ultimate success for all participants. For example, what data will you need to answer ‘prove’ and ‘improve’ questions like the following?
- Are we serving the ‘right’ population? What else can we learn about who they are, what they value, and the challenges they face? How can we more effectively attract and/or select the ‘right’ people?
- Did we achieve the near-term, intermediate, and ultimate outcomes we hoped for? What explains the variance in outcomes among participants?
- Did we deliver the program elements we wanted in a consistent, high-quality manner? Which program elements correlate most with positive outcomes?
Once you’ve connected measurement with a disprovable theory, identifying appropriate evaluation indicators is straightforward—just identify one or more metrics for each piece of your strategy. You’ll still have to prioritize which measurement questions you’ll start with, and then decide what methods you’ll use to collect, review, and analyze the data. This takes skill, time, and resources. But the work is clearer and the benefits greater when measurement is driven by a desire to prove and improve a detailed impact strategy.
Implement Like an Engineer
Lofty theories and intentional measurement are important, but what really matters is performance. Or at least performance and, over time, performance improvement. Meaningful improvement happens naturally when funding or program activities are viewed primarily as experiments meant to test a theory, and when you have data that addresses the various components of that theory. You’re already asking the “Did it work?” and “How can we improve?” questions. Though change is never easy, good leadership and organizational awareness regarding the answers to these questions will make it easier for people throughout the organization to innovate and improve.
For decades, most work in the social sector has been dominated by matters of the heart—what people believe and care about. That’s one of the sector’s greatest attributes and shouldn’t be ignored or erased. But to solve social ills, belief and passion need to be combined with headier approaches that question everything, use data to drive success, and constantly look for better solutions.
After decades of experimentation and expansion with microfinance, for example, the movement involving billions of dollars and hundreds of millions of people has resulted in mixed, negative, controversial, and even tragic outcomes.

A tell-tale sign of this mentality is equating activity with impact. If your evidence of impact is akin to the number of microloans repaid or students taught or wells dug, think again. Those are indicators of activity, not social change.