Working in communications for an international NGO, I wonder about our reputation. Are people talking about us in the communities around the world where our work has impact? And if they are, do they have good things to say about our work, even when we’re not around?
Even given positive, verified impact on income, access to water, health and education, political empowerment, or other areas, if target communities still feel that international or even local NGOs add no value to their communities, or worse take value away, it makes me rethink the meaning of impact.
With the 4th High Level Forum on Aid Effectiveness having just concluded, measuring impact has never been more under the microscope. Randomized controlled trials (RCTs), the accepted gold standard of measuring impact, championed by the likes of Esther Duflo, Abhijit Banerjee, and Dean Karlan, still depend on the evaluator being an outsider with a hypothesis to test, rather than hinging on a target community member with a human experience and human desires to live up to.
The community or end-user perspective has been making waves in project design for global development and social enterprise, helping to improve impact in many ways. But when it comes to measuring impact, community perspective has been hard to pin down.
Make no mistake, RCTs are vital for determining whether projects are doing what they’ve promised to do, but the lack of community perspective in measuring impact strikes a dissonant chord with the rest of global development’s talk of more local ownership of development projects. If only there were a reliable, replicable, rapid way to find out how target community members talk about development and the projects and organizations in that space, whether international or local in origin.
The GlobalGiving Foundation is working on a method to do that. Already famous for crowdsourcing the funding of global development projects, GlobalGiving in 2009 piloted the Storytelling Project as an experiment to crowdsource impact evaluation to target community members, asking what they say, or would say, about the work of development organizations, international and local.
“The challenge is three-fold,” says Marc Maxson, GlobalGiving’s lead consultant on the Storytelling Project: “Trying to capture those discussions quickly and reliably; gleaning valuable insight from those discussions that can then inform and improve the work of organizations in the community; and lastly, making the whole process desirable for organizations that don’t have time or money to do traditional evaluations.”
Maxson adds, “There are some 4 million small organizations that do most of the charity work in the world. Big funding agencies probably support 4 percent of these. Here is a method for the rest.”
The story collection process starts with an open-ended question: “Tell us about a time when a person or organization tried to change something in your community.”
“Evaluations are a game,” says Maxson, “The sooner you recognize that you--as an evaluator--are playing a game, the sooner you can redesign this game to be fun for the participants, and incentivize people to reveal honest truths about their community.”
In GlobalGiving’s "game," scribes are told to collect at least two stories about two different events or NGOs that tried to help someone or change something in the community. Scribes are paid 10 to 15 cents per story and can collect 10 to 100 stories in a month. GlobalGiving then analyzes sets of stories in a dozen different ways to see who is performing the task and who is just sending back junk.
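The article doesn’t spell out what those dozen checks are, but two of the simplest screens one could imagine running on a scribe’s batch are a minimum-length check and a near-duplicate check. A hypothetical sketch (the thresholds and function name here are assumptions, not GlobalGiving’s actual method):

```python
from collections import Counter

def screen_scribe(stories, min_words=15, max_dup_ratio=0.5):
    """Flag a scribe whose batch of stories looks like junk.

    Hypothetical heuristics: a batch dominated by very short stories,
    or by near-verbatim repeats, is marked "junk"; otherwise "ok".
    """
    if not stories:
        return "junk"
    # Share of stories that are suspiciously short.
    too_short = sum(1 for s in stories if len(s.split()) < min_words)
    # Count exact repeats after light whitespace/case normalization.
    normalized = [" ".join(s.lower().split()) for s in stories]
    dups = sum(n - 1 for n in Counter(normalized).values() if n > 1)
    if too_short / len(stories) > 0.5 or dups / len(stories) > max_dup_ratio:
        return "junk"
    return "ok"
```

A real pipeline would layer many more signals on top (timing patterns, answer diversity, geographic plausibility), but the shape is the same: score each batch, then follow up with the scribes whose batches fail.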
The stories then get fed into SenseMaker, software licensed from U.K.-based Cognitive Edge, along with Wordle and other semantic tools, to reveal patterns and potential biases across stories in aggregate that provide a snapshot of how people talk about change in their community, and to whom they attribute it.
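The article names the tools but not the analysis itself. A toy version of the Wordle-style aggregate view, which surfaces the terms that dominate how a community talks about change, might look like this (the stopword list and function name are illustrative assumptions):

```python
import re
from collections import Counter

# A handful of common stopwords; a real analysis would use a fuller list.
STOPWORDS = {"the", "a", "an", "and", "or", "in", "to", "of", "for", "was", "is"}

def word_snapshot(stories, top_n=5):
    """Aggregate word frequencies across a set of stories, returning
    the top_n most common non-stopword terms."""
    counts = Counter()
    for story in stories:
        for word in re.findall(r"[a-z']+", story.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(top_n)
```

Run over thousands of stories from one town, a snapshot like this hints at which issues, and which organizations, residents actually talk about.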
For nonprofits and potential donors, “this helps you see what you’re doing through the eyes of the beneficiaries,” explained John Hecklinger, chief program officer for GlobalGiving, to the Stanford Social Innovation Review last summer. It also helps because it’s cheap. For that same article, Maxson estimated it cost only 5 percent of what a typical third-party independent evaluation costs.
So far, GlobalGiving has collected and analyzed over 26,000 stories from around 5,000 community members in Kenya and Uganda. They’re getting over 1,000 new stories a month from 50 towns and cities across the two countries, and they have plans to expand further.
After two years working out kinks and gathering initial rounds of stories, says GlobalGiving Director of Programs Britt Lake, “We’ve reached the point where it’s becoming useful to us. We’ve begun to use it in our due diligence process for approving organizations to participate in our regular Open Challenges, and we’re encountering organizations on the ground who are or will be using Storytelling data to change how they work.”
GlobalGiving has also recently begun to share this feedback more actively with local NGOs in community sessions.
But what about the experts? Don’t their opinions have any weight? It depends. As Nobel Laureate Daniel Kahneman recently said in Time magazine, in some fields “it’s been shown that experts are just not better than a dice-throwing monkey.” Maxson takes experts to task on his own blog, in a post using fantasy football as an example of the perils of relying too heavily on "experts."
“Ultimately, experts are very good at figuring out how to do things in development,” says Maxson, “But when it comes to predicting what communities want to prioritize, so-called experts fail miserably.”
Can the community perspective make development work better and have greater impact? Of course donors and implementers want to have and to show impact from their perspective; moving the needle on education, health, income, political empowerment and other areas is still their bottom line.
But I still say reputation cannot be ignored. No matter how much a project or organization might have helped boost someone’s income, do people really say it’s improved their well-being--even when project staff isn’t around?