I attended two morning sessions at GPF on Day 3 before I had to leave to catch my flight home. The first was a presentation by Omidyar Network on how they combine grants, programme-related investing and mission-related investing; this was followed by a very large session on measurement and metrics. Here are some thoughts on the two sessions.
Given that it was held at 8am, Omidyar Network’s session was surprisingly well attended, which I take as an encouraging sign that more funders are beginning to take the idea of flexible funding (the ability to advance one’s mission via grants, equity investments or loans, including using the corpus for mission-related investments) seriously. Less encouraging was Omidyar Network’s description of the organizational structure they have evolved to make flexible funding work. In short, it involves a foundation with no staff; the foundation’s work is outsourced to an LLC (a for-profit corporate structure). This allows the same ‘programme officers’ to make grants, programme-related equity investments or straight for-profit investments, according to what is most appropriate for the situation. The folks from Omidyar also noted that making the system work requires a very close working relationship between programme officers and legal and finance staff. That’s discouraging because it’s hard to see many foundations actually reorganizing themselves this deeply – and that would be a real barrier to success with flexible funding.
One of the more interesting comments made by the Omidyar group was that they fund ‘open-source for-profit social entrepreneurs’ (my phrase, not theirs), the idea being that they are willing to invest in entrepreneurs who are working to prove a model that can then be shared with others. The example given was a for-profit private school in the notorious Nairobi slum of Kibera. If the venture succeeds in providing affordable, quality education in Kibera, it can presumably work (with some tweaking, of course) in any urban slum in the developing world. When I tweeted this perspective, Paul Hudnut, a founder of EnviroFit (one of the first ‘social entrepreneurs’) and a blogger/tweeter who goes by the handle BOPreneur, asked me (via Twitter) to follow up on whether Omidyar planned to share failures as well as successes – Paul’s point being that open source requires publishing everything you do so that the community can decide what to use or not use. When I posed the question, it was clear that this was something the folks at Omidyar had not yet considered. They noted that they hadn’t been investing long enough to have failures (or true successes), so the issue had not yet come up. I hope they’ll take Paul’s excellent question seriously and develop a plan to publish their failures. That would be a truly remarkable break with traditional philanthropic practice – far more so than using flexible funding.
Even 36 hours later, I’m still not quite sure what to make of the monster metrics and measurement session. There was an impressive and wide-ranging panel (I counted 14 members), a lot of interesting things were said, and useful examples were shared. Mark Kramer of FSG summarized their recent report on shared measurement strategies – Kramer called the system that has every grantee reporting separately to every funder a ‘major barrier’ to bringing about large change. On the encouraging side, the Greater Kansas City Community Foundation shared some positive experiences and successes launching a new community-wide charity evaluation system called DonorEdge, and Pew Charitable Trusts discussed the success of the Cultural Data Project – another shared measurement system, this one focused on arts organizations and encompassing 12 funders and 150 arts institutions across eight states. There was broad agreement that measurement needs to be seen as a tool to improve performance now, not as a way to prove past effectiveness.
But in the end I think I was more discouraged than encouraged, because it seemed that everyone needed to genuflect to the idea that randomized controlled trials (RCTs), the gold standard of measurement, were too expensive, too narrow, too hard or just useless. I’ve written elsewhere that I believe this kicking of RCTs is ultimately driven by fear that they will show that only small change is ever possible. But as the global health presentations showed the day before, metrics are all too easy to manipulate when they are not based on soundly designed evaluations. And if the sector continues to cast aspersions on the best measurement tool we have, I despair that metrics will ever be used to make wise decisions rather than to justify what we already want to do.
Tim Ogden is editor-in-chief of Philanthropy Action and an executive partner of Sona Partners. Email timothy.ogden@philanthropyaction.com
To read Tim’s blog go to http://www.philanthropyaction.com