Philanthropreneurship Forum

Evidence is all the rage in development today. Rigorous evidence generation in the form of Randomized Controlled Trials (RCTs) is at its peak, and the sector has become enamored with it. But are we really designing for impact in the first place, and doing that “rigorously” too?

In my prior role as a consultant on many development projects, I realized to my horror that a typical 5-year project runway leaves only weeks for design and adaptation at the very beginning, and little budgetary or contractual room for “pivots” along the way. Imagine a business raising venture capital on a static plan that never course-corrects through market feedback!

So even when we measure impact rigorously and find sub-par results, do we really know whether the intervention itself was at fault? Is it not possible that the intervention was not adapted to the context, that its mechanism of impact was not properly vetted, or that it was not personalized to the diverse needs of an obviously diverse population – prematurely throwing an otherwise promising intervention into the waste bin?

Last month, an evaluation of a large and expensive health project using social franchising and telemedicine in Bihar (India) reported disappointing results. However, as the paper reports (and as I can concur from brief first-hand exposure to the project), the organization had little opportunity to thoroughly experiment and learn about its market before scaling up the intervention, and the evaluation was essentially imposed on it as a condition of the generous funding, before the team even felt confident the intervention worked. I would go a step further and hypothesize that there was a poor fit between the target impact outcomes and the choice of intervention in the first place.

Ironically, this also compromised the quality of the evaluation itself, as the implementers inevitably realized after the research commenced that they had to make fixes here and tweaks there. In the end, the study could not be completed as the RCT that was originally envisioned.

I think the solution is a combination of three things. First, we must move away from a silver-bullet mindset (which, sadly, RCTs seem to propagate further) towards a problem mindset. Second, we must learn to rigorously articulate, measure and optimize our theory of change. And third, financiers must give implementers the flexibility to tweak and pivot their designs – holding the end goal of impact constant – and impose rigorous evaluations only when there is a sufficient degree of confidence based on a validated theory of change.

At Jeeon, we have been using Human Centered Design as a philosophy to help us remain in the problem space longer than usual. Instead of jumping into solutions for primary healthcare in rural areas, we started by deeply understanding the problem and visualizing current patient behaviors. Once we did, we formulated a theory of change (we call it the “impact map”) that outlines how we aim to achieve impact, and defined what impact success would look like. Three years into our existence, we are still continuously updating our TOC, testing and measuring different aspects of it through rapid prototypes and micro-experiments (a.k.a. A/B tests), and slowly arriving at the solution that could ultimately solve this problem. Very few development donors would allow three years to find a solution to a problem as complex and multidimensional as this, but without that time, success would be purely accidental.
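To make the micro-experiment idea concrete: an A/B test of, say, two different reminder messages boils down to comparing conversion rates between two randomized groups. The sketch below is purely illustrative – the numbers are made up, not Jeeon’s actual data – and uses a standard two-proportion z-test from first principles.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test for a simple A/B micro-experiment.

    Returns the two observed rates and the z-statistic; |z| > 1.96
    corresponds roughly to significance at the 5% level.
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled rate under the null hypothesis that both arms are equal.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical example: variant B converts 60/400 vs. A's 40/400.
p_a, p_b, z = two_proportion_z(40, 400, 60, 400)
```

With these made-up numbers the z-statistic comes out just above 2, i.e. a difference one would tentatively act on – the point being that even very small field experiments can be read with basic, transparent statistics rather than a full-blown RCT.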

Another corollary of remaining in the problem space is understanding that the problem might have different constituent parts in different contexts, so the same solution might not apply. At Jeeon, we recognize that success in Bangladesh does not translate to success elsewhere. We would need to undergo a similar (albeit perhaps shorter) exploration and experimentation phase in each country we operate in, to adapt to local idiosyncrasies. (NB: This sounds like a truism, but as a brief foray into development will tell you, the lesson is often disregarded when “silver bullet” programs are replicated without regard to contextual variance, as in the case of deworming.)

As a largely digitized service, Jeeon also generates vast amounts of data from its operations, which can help us do much more than just measure our theory of change. We can mine the data for “bright spots” (the combinations of factors that generate the best results), measure and improve the performance of our providers, and personalize the experience for our patients to optimize satisfaction and impact – for example, SMS-based lifestyle reminders for chronic patients. If Amazon and Netflix can recommend things to buy and movies to watch based on your history, why can’t we personalize health services based on a patient’s history of interactions with us, and design courses of action based on our past experience with similar patients?
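One simple way to operationalize “design courses of action based on our past experience with similar patients” is a nearest-neighbour lookup: represent each patient as a feature vector and find the most similar prior patient. This is only a sketch with invented feature vectors (visit frequency, adherence, symptom trend – all hypothetical), not a description of Jeeon’s actual system.

```python
import math

# Hypothetical patient feature vectors: [visit frequency, adherence,
# symptom trend]. Illustrative numbers only.
patients = {
    "p1": [0.9, 0.8, 0.1],
    "p2": [0.2, 0.3, 0.9],
    "p3": [0.85, 0.75, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(target, patients):
    """Return the id of the patient most similar to `target` (excluding itself)."""
    target_vec = patients[target]
    return max(
        (pid for pid in patients if pid != target),
        key=lambda pid: cosine(target_vec, patients[pid]),
    )
```

A care pathway that worked for the nearest neighbour then becomes a candidate recommendation for the new patient – exactly the Amazon/Netflix logic, applied to service design rather than shopping.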

It is understandably nerve-wracking for a donor or investor to open their wallets and sit through this long and uncertain period of experimentation and fine-tuning. But if an organization makes its impact map and the associated experiments transparent (as we try to do), investors can rest easier knowing that the problem is being approached systematically, using real evidence even at the design stage.

This approach is admittedly not relevant to every intervention and project out there. In humanitarian relief, for example, time is of the essence, and the intervention is usually a simple transfer of resources. For other simple interventions where the mechanism of action is widely understood and does not vary with context, such as vaccinations, this approach is also less necessary (although I would argue that ensuring vaccination schedule compliance could look very different in different contexts, because humans vary by culture and context). For a wide range of other interventions, however, optimizing for impact before worrying about measurement is absolutely critical.

With the era of Big Data looming over us – and indeed already here in the private sector – how will we as social workers and scientists adapt our approaches to take advantage of these new technologies and plan our impact more deliberately? Or will we keep shooting arrows in the dark, hoping some of them just might hit the bull’s eye?