2011-12-05

Co.Exist

Global Giving Turns Evaluating Aid Into A Game

Using analytics to figure out whether development programs are working is an important new quest. The online giving organization is paying locals to collect simple stories about the impact of aid, to see if the dollars are adding up to anything.

Working in communications for an international NGO, I wonder about our reputation. Are people talking about us in the communities around the world where our work has impact? And if they are, do they have good things to say about our work, even when we're not around?

Even given positive, verified impact on income, access to water, health and education, political empowerment, or other areas, if target communities still feel like international or even local NGOs add no value to their communities, or, worse, take value away, it makes me rethink the meaning of impact.

With the 4th High Level Forum on Aid Effectiveness having just concluded, measuring impact has never been more under the microscope. The accepted gold standard for measuring impact, randomized controlled trials (RCTs), championed by the likes of Esther Duflo, Abhijit Banerjee, and Dean Karlan, still depends on the evaluator being an outsider with a hypothesis to test, rather than hinging on a target community member with a human experience and human desires to live up to.

The community or end-user perspective has been making waves in project design for global development and social enterprise, helping to improve impact in many ways. But when it comes to measuring impact, community perspective has been hard to pin down.

Make no mistake, RCTs are vital for determining if projects are doing what they’ve promised to do, but the lack of community perspective in measuring impact strikes a dissonant chord with the rest of global development’s talk of more local ownership of development projects. If only there was a reliable, replicable, rapid way to find out how target community members talk about development and the projects and organizations in that space, whether international or local in origin.

The GlobalGiving Foundation is working on a method to do that. Already famous for its crowdsourcing of global development project funding, starting in 2009 GlobalGiving piloted the Storytelling Project as an experiment to crowdsource impact evaluation to target community members, seeking what they say or would say about the work of development organizations, international and local.

“The challenge is three-fold,” says Marc Maxson, GlobalGiving’s lead consultant on the Storytelling Project: “Trying to capture those discussions quickly and reliably; gleaning valuable insight from those discussions that can then inform and improve the work of organizations in the community; and lastly, making the whole process desirable for organizations that don’t have time or money to do traditional evaluations.”

Maxson adds, “There are some 4 million small organizations that do most of the charity work in the world. Big funding agencies probably support 4 percent of these. Here is a method for the rest.”

The story collection process starts with an open-ended question: “Tell us about a time when a person or organization tried to change something in your community.”

“Evaluations are a game,” says Maxson, “The sooner you recognize that you--as an evaluator--are playing a game, the sooner you can redesign this game to be fun for the participants, and incentivize people to reveal honest truths about their community.”

In GlobalGiving’s "game," scribes are told to collect at least two stories about two different events or NGOs that tried to help someone or change something in the community. Scribes are paid 10 to 15 cents per story and can collect 10 to 100 in a month. GlobalGiving then analyzes sets of stories in a dozen different ways to see who is performing the task and who is just sending back junk.
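The article doesn't publish GlobalGiving's actual dozen analyses, but the kind of junk screen it describes could look like the following sketch. Everything here, the thresholds, the field names, and the specific checks, is an illustrative assumption, not GlobalGiving's method:

```python
from collections import Counter

def screen_stories(stories, min_words=15):
    """Flag likely-junk story submissions with simple heuristics.

    `stories` is a list of dicts with 'scribe' and 'text' keys.
    The checks and thresholds are illustrative guesses only.
    """
    flagged = []
    seen_texts = set()
    # How many stories each scribe submitted this batch.
    per_scribe = Counter(s["scribe"] for s in stories)
    for s in stories:
        text = s["text"].strip().lower()
        reasons = []
        if len(text.split()) < min_words:
            reasons.append("too short")
        if text in seen_texts:
            reasons.append("duplicate text")
        seen_texts.add(text)
        # Article: scribes collect 10 to 100 stories in a month.
        if per_scribe[s["scribe"]] > 100:
            reasons.append("implausible volume")
        if reasons:
            flagged.append((s["scribe"], reasons))
    return flagged

stories = [
    {"scribe": "A", "text": "An NGO built a well near the school and now "
                            "children drink clean water every day before class begins here"},
    {"scribe": "B", "text": "good project"},
    {"scribe": "B", "text": "good project"},
]
print(screen_stories(stories))
```

Even crude checks like these make it cheap to spot a scribe who is padding a batch with copied or empty stories before the set is analyzed further.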

The stories then get fed into Sensemaker, software licensed from U.K.-based Cognitive Edge, along with Wordle and other semantic tools, to reveal patterns and potential biases across stories in aggregate that provide a snapshot of how people talk about change in their community, and to whom they attribute it.
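Sensemaker and Wordle are third-party tools, so the sketch below is not their API; it only illustrates the general idea of surfacing attribution patterns across stories in aggregate. The actor categories and keywords are hypothetical, stand-ins for the richer coding a real analysis would use:

```python
from collections import Counter

# Hypothetical actor categories and keywords for illustration only.
ACTORS = {
    "government": ["government", "ministry", "council"],
    "local NGO": ["church", "community group", "local group"],
    "international NGO": ["ngo", "charity", "foundation"],
}

def attribution_counts(stories):
    """Count how many stories mention each actor category,
    as a crude proxy for whom people credit with change."""
    counts = Counter()
    for text in stories:
        lowered = text.lower()
        for actor, keywords in ACTORS.items():
            if any(kw in lowered for kw in keywords):
                counts[actor] += 1
    return counts

stories = [
    "The ministry repaired the road last year.",
    "A foreign NGO gave out mosquito nets.",
    "Our church group organized the clean-up.",
]
print(attribution_counts(stories))
```

Run over thousands of stories rather than three, tallies like this begin to show whether communities attribute change to outsiders, to local groups, or to no one at all.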

For nonprofits and potential donors, “this helps you see what you’re doing through the eyes of the beneficiaries,” explained John Hecklinger, chief program officer for GlobalGiving, to the Stanford Social Innovation Review last summer. It also helps because it’s cheap. For that same article, Maxson estimated it cost only 5% of what a typical third-party independent evaluation costs.

So far, GlobalGiving has collected and analyzed over 26,000 stories from around 5,000 community members in Kenya and Uganda. They’re getting over 1,000 new stories a month from 50 towns and cities across the two countries, and they have plans to expand further.

After two years working out kinks and gathering initial rounds of stories, says GlobalGiving Director of Programs Britt Lake, “We’ve reached the point where it’s becoming useful to us. We’ve begun to use it in our due diligence process for approving organizations to participate in our regular Open Challenges, and we’re encountering organizations on the ground who are or will be using Storytelling data to change how they work.”

GlobalGiving has also recently begun to share this feedback more actively with local NGOs in community sessions.

But what about the experts? Don’t their opinions have any weight? It depends. As Nobel Laureate Daniel Kahneman recently said in Time magazine, in some fields “it’s been shown that experts are just not better than a dice-throwing monkey.” Maxson takes experts to task on his own blog, in a post using fantasy football as an example of the perils of relying too heavily on "experts."

“Ultimately, experts are very good at figuring out how to do things in development,” says Maxson, “But when it comes to predicting what communities want to prioritize, so-called experts fail miserably.”

Can the community perspective make development work better and have greater impact? Of course donors and implementers want to have and to show impact from their perspective; moving the needle on education, health, income, political empowerment and other areas is still their bottom line.

But I still say reputation cannot be ignored. No matter how much a project or organization might have helped boost someone’s income, do people really say it’s improved their well-being--even when project staff isn’t around?


3 Comments

  • andrewdewey

    Glad to see this subject written about and the movement towards more accountability of development work. It's needed in any context of development. Too often, charitable work is generalized as being good no matter what and revolves around the donor's decision to give, not the end result.

  • How Matters

    I am, like many others, worried about the implications of international aid donors moving towards making randomized controlled trials yet another conditionality of aid, more food to satisfy their seemingly insatiable appetite for evidence. Randomized controlled trials are especially troubling if behavioral economists operate with an assumption that poor people don’t know what’s good for them, the flip side being that someone else must, which simply continues the "expertise infusion" model of international aid.
    Sixty years of development aid hasn’t reduced poverty using existing evaluation methods, yes. But sixty years of development aid has also squashed local initiatives by not giving due attention to how that aid makes people feel, and that, I believe, is perhaps one of our biggest challenges in making aid more effective. The prevalent, yet not often exposed, negative attitudes, behaviors, and perceptions towards local people and organizations in the aid world are something that has been under-reported, insufficiently documented, and poorly studied.

    Kudos to GlobalGiving for turning this on its head. If you'd like to learn more about Marc Maxson, the Director of GlobalGiving's storytelling project, see: http://www.how-matters.org/201...

  • Aaron Ausland

    This is an interesting idea. It sounds a bit like Rick Davies' Most Significant Change method, but makes use of new technology that can assess large quantities of narrative data. I wonder, however, how useful this will really turn out to be for evaluating projects. I would think that you'd need to collect lots and lots of data for this method to be valid, which implies collecting stories from a broad set of communities that may have little in common in terms of their development or change narrative. I wonder how useful this would be, for example, as a way of evaluating a specific project in a smaller population. It seems to me that this might be a more useful way of baselining a broader area, rather than trying to evaluate a particular project or intervention. But, this is definitely something to watch as a new development in the evaluation field.

    Also, I totally disagree with the statement that evaluation is a game. Participation can be made somewhat fun or at least engaging and meaningful for the community, but it isn't a game and having fun may not be a very helpful objective to link to the evaluation process.