Impact calls are the future of transparency

Transparency is a building-block buzzword of the social sector. While there seems to be general consensus that transparency is important, the proprietary actions of social sector actors run contrary to that idealized vision.

Given this imbalanced rhetoric-to-reality ratio, I was especially intrigued by Guidestar’s new approach to sharing its progress with the public. Guidestar is experimenting with what it calls “impact calls”, quarterly webinars where the nonprofit’s leadership discusses its finances, impact, and strategic road map. The concept of the impact call is modeled on the quarterly earnings calls publicly traded companies hold for shareholders.

On May 12, Guidestar held its second quarterly impact call, the first I have had the opportunity to listen in on. The call provided a solid overview of the organization’s finances and projected revenues, as well as a short- and mid-term strategic road map, although it was quite a bit lighter on actual impact reporting.

During the call, Guidestar CEO Jacob Harold explained that Guidestar’s impact assessment strategy is still evolving, and that the organization is developing an impact measurement dashboard it may present at the next impact call. The implication seems to be that as the impact measurement tool evolves, Guidestar will be better positioned to report its outcomes.

Although I was disappointed not to hear much about Guidestar’s impact on its impact call, I was nonetheless impressed with the concept and even found the sharing of less exhilarating (although more easily enumerated) metrics such as subscribers and web-usage statistics a great step toward real nonprofit transparency.

A criticism of earnings calls is that quarterly reporting encourages companies to focus on short-term gains at the expense of long-term progress. Guidestar CFO James Lum wisely cautioned that while Guidestar is committed to reporting quarterly results, the organization’s focus is on its long-term strategy. I think this is the right sentiment, and I hope Guidestar doesn’t feel pressured over time to start optimizing for short-term gains to score favorable headlines in philanthropy media at the expense of the big picture.

This should be a trend

The impact call is an obnoxiously obvious idea. Everyone should be doing this, although I’m not sure many organizations will. Kudos to Guidestar for taking this step. I would love to see, at the very least, foundations follow suit.

While it would be great for every nonprofit to host quarterly impact calls, I’m not sure many folks would care to tune in. Guidestar is the right organization to pioneer this approach because many of its constituents are nonprofits themselves, and more likely to consume this type of information. Similarly, foundations invest (directly) in nonprofits, and those nonprofits would not only be interested in hearing more about how foundations think, but could benefit from learning about foundations’ thought processes, strategic planning, and overall claims of impact.

Transparency is easy when you’re winning. It will be interesting to see if this type of hyper-transparency holds when findings are less than stellar. The Hewlett Foundation has demonstrated a willingness to embrace this type of transparency in its recent decision to discontinue the Nonprofit Marketplace Initiative, which it announced with the explanation that evaluators found “our grants have not made much of a dent” in the intended outcomes.

Publicizing wins and losses is the future of transparency. Impact calls are a compelling medium to communicate those findings. I look forward to the next Guidestar impact call, especially if the next call has more impact in it.

Using word clouds to select answer options

Selecting the right questions for your survey instruments can be tough. Equally difficult is identifying the right answer options for the questions you ask. When selecting answer options, ideally you would provide enough options to get meaningful feedback and variation in responses, but not so many that you overwhelm survey respondents.

Before launching any survey instrument, it’s preferable to do what is called a survey pretest. In a pretest, you recruit a subsample of people similar to your intended survey audience and ask them for feedback on each of your survey questions and answer options. However, pretesting isn’t always possible.

I’ve been working with a nonprofit called Team Tassy that provides workforce services to families in Menelas, Haiti. Team Tassy wanted to learn more about the employability of families in its targeted communities by conducting a survey at a free medical clinic day the organization sponsored.

One of the questions on the survey asked what work-related skills each respondent possessed. The problem was that we didn’t know whether we were providing the right answer options for the skills question.

Ideally we would have pretested the question to get feedback on what types of skills should be included in the answer options. However, pulling together a focus group abroad would have posed logistical challenges, making pretesting impractical.

Since we were not able to pretest the answer options, Team Tassy took its best guess at what the answer options should be, and provided an option for respondents to fill in any other skills not included as part of the question’s answer options. We planned to use the free-form responses to learn what job skills options should have been included.

Team Tassy collected more than 250 surveys at the medical clinic it sponsored. Given the relatively large number of surveys, reading through each of the free-form answers wasn’t practical. Instead, we built a word cloud of the free-form skills responses to get a visual sense of which skills were mentioned most.

Skills word cloud
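
A word cloud like the one above takes only a few lines of Python to produce. The sketch below is a minimal example of the approach, not the exact script we used; the input file name and the assumption of one transcribed response per line are mine.

```python
# Minimal word cloud sketch using the open source wordcloud package
# (pip install wordcloud matplotlib). The file name is hypothetical and
# assumes one transcribed free-form response per line.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

with open("free_form_skills.txt", encoding="utf-8") as f:
    text = f.read()

# collocations=False keeps repeated two-word phrases from being counted twice
cloud = WordCloud(width=800, height=400, background_color="white",
                  collocations=False).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("skills_word_cloud.png", dpi=150, bbox_inches="tight")
```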

The word cloud revealed that several individuals reported having merchant and dressmaking-related skills, options that were not included among the original answer options. Team Tassy will include these options in future versions of the skills question.

Word clouds are a pretty low-tech approach to data analysis. But they can be really effective, especially for getting quick feedback on what types of answer options you might include on your surveys.

Help yourself to my ideas

I spent too much time and effort at my now defunct company worrying about people stealing my ideas. By the time I was wrapping up Idealistics, I thought about open sourcing the code I had been paying GitHub a monthly fee to keep private, only to realize it would have taken a ton of effort to get anyone to care that I was giving my software away for free.

If I had Idealistics to do again, I would have spent my energy spreading my ideas rather than paying to keep them secret.

I think there is a lot of value in being open regardless of the industry one is in, but openness is especially valuable in the social sector. We’re supposed to be in the business of solving social problems, after all.

Given the value of openness, and the general rallying cry around nonprofit transparency, I can’t help but wonder why I’ve been coming across so many nonprofits intent on lawyering up to “protect” their intellectual property.

I’ve been doing a lot of contracting work recently. Really interesting stuff, and a bunch of insights that could probably help out a whole slew of social interventions. And I can’t tell you about any of it. I’m contractually obligated not to.

The social sector exists in this weird space between the public and private sector. We run private entities intended for public benefit. In the process we develop proprietary solutions to public problems.

That last sentence makes my head hurt.

We need to make a decision as to what we are, and what we stand for. You can’t have proprietary collective impact. The patent system is designed to allow companies to lock in competitive advantages to singularly reap the benefits of their investments over a period of time. Our investments are supposed to be public. So why the hell are startup social enterprises seeking patents on technological solutions to connect low-income families to social programs? Who benefits from those patents? Certainly not the public. Definitely not the poor.

I’m grateful to be working on an exciting range of contracts. While I’m under contract not to say anything about that work, going forward I’ll certainly be more open about my own ideas, even if I intend to monetize them.

If my ideas are any good, and can actually create real social value, be my guest and help yourself to my ideas.

Hire a Chief Data Officer

I’ve been doing a lot of consulting recently, which has resulted in a several-month hiatus from writing on this site. Happily, my greater volume of consulting engagements has given me more opportunities to give people bad (and hopefully some good) advice, which means more content for Full Contact Philanthropy.

Recently I have been reflecting on some particularly bad advice I gave to one of my customers. Over the summer I was hired by a large provider of housing and homeless services to improve the operational speed at which chronically homeless clients were being placed into housing. The project went well, and by the end of it the executive team was seeing the value data can bring to their organization on an ongoing basis.

The executive director asked me to draft a memo outlining what the organization should look for in an internal analytics hire. Ideally, the executive said, the hire would be able to work on social outcomes data as well as help the development team improve its use of donor data. I advised the executive against hiring one person to oversee all of the organization’s data needs, as I felt there was value in having specific domain experience (such as a background in homeless services or fundraising) before jumping into an issue-specific data set.

I was wrong.

By telling the executive he should hire two different analysts, I scared him off of bringing in more data talent entirely, taking what already looked like a budget stretch (one new hire with a non-trivial skill set) and turning it into something completely out of reach.

Furthermore, while domain experience is important, the organization already had sufficient internal domain expertise. The development folks know development really well. And the program team is top notch. What they didn’t have was internal capacity to sift through the volumes of data the organization collected each day.

Instead of arguing for analysts who intimately know the organization’s core mission, I should have advised a management structure where the development and program teams make data requests to a data team, allowing development and program staff to identify the right questions, and letting the data team (or a single analyst to start) focus on answering those questions with the data available.

As I’ve thought about this issue further, and gotten a closer look at the data needs of both the program and development sides of nonprofits, the more I am convinced that having a Chief Data Officer (someone whose sole responsibility is focusing on the data needs of the entire organization) makes a lot of sense.

The idea of a Chief Data Officer has been growing in popularity in the for-profit world, and some nonprofits have had success employing chief data officers as well. However, the idea has not permeated the social sector. So far, the nonprofits that have employed heads of data tend to have more obviously quantifiable interventions, generally organizations like DoSomething that focus on measuring online engagement.

However, there is an exciting, and much broader, opportunity for all types of organizations to bring in Chief Data Officers. Regardless of what your organization does, every organization (business, nonprofit, foundation, whatever) traffics in some sort of information. Given the importance of data, not just now but historically as well, a Chief Data Officer is as logical, and essential, a hire as a good director of programs, a director of development, or a Chief Financial Officer.

I’ve complained before that the rhetoric around data in the social sector is too hollow, and the thinking too shallow. Part of the block in moving from concept to action in realizing the value of data is that organizations have not invested sufficiently in figuring out how data works within their managerial structures.

Mario Morino rightly encouraged the sector to think more intelligently about how to manage to outcomes. Managing to outcomes is not just about outcomes reporting software, but investing in people and process. I couldn’t be more excited about the fact that my work is giving me the opportunity to help organizations think more seriously about how to build data cultures. It’s a theme I’m passionate about and plan to expand on more in subsequent posts.

Tying charitable deductions to outcomes

While the jury is out on the effectiveness of social impact bonds (SIBs), the fundamental idea of rewarding investment in effective social interventions makes a lot of sense. That core tenet of social impact bonds is so compelling that I’m surprised such thinking has not spilled into the charitable deduction debate.

Ideally, the charitable deduction allows donors to write off investments in the betterment of society. But with 1.5 million nonprofits in the United States, our definition of public benefit is broad, a point underscored by the clear divide in the types of charities middle-income individuals donate to versus the wealthy.

Borrowing from social impact bonds, I started thinking about a charitable deduction schedule that would allow donors to write off outcomes, rather than our current approach, which limits donors to writing off inputs.

Under this tax deduction scheme, charitable organizations’ deduction rates would be tiered based on the marginal benefit of each additional dollar donated. The marginal benefit component allows the deduction rate to be tempered not just by societal outcomes (for example, Carnegie Mellon University has a good argument that its students create a lot of economic value, present company excluded), but by the effect each additional dollar has on social outcomes. This caveat is similar to GiveWell’s consideration of not just an organization’s effectiveness, but also its room for additional funding.
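
To make the tiering concrete, here is a purely illustrative sketch of how such a schedule might map an evaluator-assigned marginal-benefit score to a deduction rate. Every band and rate below is a hypothetical of my own invention; nothing like this exists in the tax code.

```python
# Purely illustrative tiered deduction schedule. The score bands and
# rates are hypothetical assumptions, not an actual policy proposal.

def deduction_rate(marginal_benefit_score):
    """Map a marginal-benefit score (0-100) to the share of a donation
    that would be tax deductible under a tiered scheme."""
    if marginal_benefit_score >= 75:
        return 1.00  # full write-off where each extra dollar does the most good
    if marginal_benefit_score >= 50:
        return 0.75
    if marginal_benefit_score >= 25:
        return 0.50
    return 0.25      # weak marginal benefit still earns a partial deduction

# A $1,000 gift to an organization scoring 60 would yield a $750 deduction.
print(1000 * deduction_rate(60))  # 750.0
```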

Tying charitable deductions to outcomes would open up much of the promise of SIBs to all nonprofits, allowing high-functioning nonprofits to market higher deduction rates to potential donors. Obviously such an approach would be fraught with evaluative difficulties, although no more so than SIBs.

I have lamented in the past how our current funding environment rewards nonprofits for investing in marketing over outcomes. A tier system that assigns deduction rates based on outcomes would better align organizations around maximizing social value. Wasn’t that the point of the charitable deduction in the first place?

Cash Transfer Equivalency Calculator

Closing my company has given me the time to pursue a number of small projects. One of those projects is a concept I wrote about last month called the Cash Transfer Equivalency (CTE). The CTE is a simple investment standard that a program officer or social investor can use to assess whether a social program might deliver more value than simply giving equal amounts of cash away.

To make the CTE easier to use, I wrote a web-based CTE calculator that allows users to enter a program’s cost, the number of people the program intends to serve, and the estimated value of that service to each of the intended beneficiaries. Based on those inputs, the CTE calculator estimates whether the proposed social intervention will provide more value than simply giving money away.

The CTE calculator is an easy-to-use initial assessment tool for grantmaking institutions evaluating new grant opportunities. Importantly, because the CTE translates social value into monetary terms, one could use the CTE calculator to compare two or more unlike funding opportunities.

Example

There isn’t much to the CTE calculator, so if you are so inclined you can skip this quick tutorial and give it a try now. But for clarity’s sake, let’s run through the following example.

Let’s say we are approached to fund a youth-focused musical enrichment event. The potential grantee is requesting $7,500 to hold a one-day concert for low-income kids. Our first step in the CTE calculator is to enter the program cost, in this case $7,500.

Step 1

The youth concert expects 200 kids to attend. In step two, we enter the expected number of people affected by the program as 200.

Step 2

You’ll notice in step two we don’t just enter the number of people; we also need to fill in the “average value” column. The “average value” is our best guess as to how much each kid would have been willing to pay to attend the concert, were the program not being provided free of charge. In this case, we put in an estimate of $35 per person.

With those three simple answers, the calculator computes the CTE and suggests whether the program is worth investing in.

Step 3

With our youth concert example, the system calculates a CTE of 0.93. Because the CTE is below 1 (the point of indifference between doing the program and giving away equal amounts of cash), the calculator determines that the program is not worth investing in.

More simply, the basic mechanics of the CTE come down to average cost versus expected average value. At a program cost of $7,500 with 200 concert attendees, the average cost per person is $37.50. However, the expected average value we entered was just $35 per youth. Therefore, the program costs more per person than the value we expect each youth to receive.

This is a pretty straightforward example. Where the CTE calculator gets more interesting is when a program targets more than one recipient group. Using the youth concert example, you could imagine not just calculating the return to the kids, but perhaps their parents as well. The calculator allows you to enter any number of target groups, calculating the CTE for each group as well as a weighted average CTE across groups.
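
For the curious, the arithmetic behind the calculator is simple enough to sketch in a few lines. The snippet below is my own reconstruction, assuming the overall CTE is total expected value divided by total program cost (which, across multiple groups, amounts to a size-weighted average); the calculator itself may differ in its details.

```python
# A sketch of the CTE arithmetic, assuming CTE = total expected value
# divided by total program cost. This is a reconstruction, not the
# calculator's actual source code.

def cte(program_cost, groups):
    """groups: list of (people_served, avg_value_per_person) pairs.
    Returns the weighted-average CTE across all groups."""
    total_value = sum(people * value for people, value in groups)
    return total_value / program_cost

# The youth concert example: $7,500 to serve 200 kids who value the
# concert at $35 each.
score = cte(7500, [(200, 35)])
print(round(score, 2))  # 0.93 -> below 1.0, so direct cash transfers win
```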

Using the CTE calculator

I wish I had had the CTE calculator when I was working at a financial intermediary making grants to community development corporations in Pittsburgh. The calculator would have allowed me to more quickly weed out bad investments, and more importantly would have provided a much needed standard method for preliminarily assessing the high volume of incoming grant requests.

If I were still working as a grantmaker, I would make the CTE a part of our initial grant application assessment. Each application would be assigned a CTE score by an individual program officer. The grants with the highest CTE scores would then go to committee for further consideration.

Because the CTE score hinges on the assumed monetary value per program recipient, the investment committee would likely debate the value assumption in the model. This is a good thing, and it illustrates the CTE method’s strength. Because the CTE score is driven as much by our best guess of the monetary value to beneficiaries as it is by cost, the CTE forces investment committees to have frank discussions about the value they believe their grantmaking will create.

You can check out the CTE calculator here and use it as you’d like.

Nonprofit consultants, beware of window shoppers

I’m no fan of nonprofit consultants, despite being one myself. But nonprofit consultants are people too, although we’re not always treated as such by the organizations we serve.

As knowledge workers, what we know is what we sell. Yet the courting process for securing work (multiple meetings, requests for proposals, etc.) requires that we disclose methodologies to potential customers.

I get that outlining approaches to potential customers is a necessary part of the process. It allows both parties to determine whether the consultant is a good fit. But every consultant has stories of laying out a methodology and entertaining a number of questions from excited-sounding staff and board members, only to have those same ideas implemented by another vendor or the organization’s staff.

No hire, no attribution, nothing.

This is a pretty messed up approach, and if you are a nonprofit consulting looky-loo (you know who you are), please stop.

I’m not perfect at avoiding nonprofit consulting window-shoppers, but with some experience under my belt I’ve certainly gotten better at avoiding these organizations. Here are a few tips to avoid being a victim of thought theft.

  1. Qualify customers - Before filling out a request for proposal (RFP) or agreeing to meetings, look up an organization’s 990 on Guidestar and check out their annual revenue. If revenue is tight and the proposed scope looks to be outside their budget, you might have a window shopper on your hands.
  2. Be wary of unsolicited requests for proposals - Organizations are typically required to get more than one bid for a project, even if they have a preferred vendor in mind. I’ve certainly had some luck with organizations sending me RFPs out of the blue, but I’m generally wary of these “opportunities”, as they tend to be box-checking exercises for pre-selected vendors.
  3. Be judicious with your time - Window shoppers have a nasty habit of setting up multiple meetings, wasting your time while sucking you dry of your hard-won good ideas. Value your time. If you don’t, they won’t. And if a nonprofit is asking for too much face time without any commitment, it might be time to walk.
  4. Ask around - Ask other consultants about nonprofits you are thinking of working with. I’ve avoided some bad contracts by tapping my network.

My tendency, like that of other (good) nonprofit consultants, is to be helpful. I love geeking out on all things social sector. While the nonprofit sector is accustomed to receiving pro bono help, manipulating nonprofit consultants looking for work into offering up their ideas for nothing is contrary to the principles of our do-gooding industry.

How social proof subjugates program evaluation

About a year and a half ago, The Verge wrote an incredible exposé on the seedy underworld of get-rich-quick fake business gurus, who prey on hapless victims down on their luck and in need of cash.

The basic scam is to sell a wide range of “products” to people aspiring to start up one-person businesses. Each of these products is basically a PDF document full of shallow advice that recommends further products in the series to achieve success.

To the discerning eye, it’s not terribly difficult to spot business self-help nonsense; the websites all basically look like some derivation of this.

At the heart of the online marketing underworld is the concept of “social proof”. These business guru scammers collude to make it appear as though they are experts in business, and insanely wealthy. They do so by linking to each other’s websites to manipulate search engine rankings, and by quoting one another on their respective websites, giving the illusion that each of these individuals is endorsed by other experts.

I’ve been sitting on this topic for quite some time, thinking back to the concept of social proof whenever a new social sector “breakthrough” initiative is touted loudly in the media without a shred of evidence that the intervention actually works.

Who needs evaluation when you have publicity?

Good stories trump good data in the media, and questionable ideas that sound plausible are shrouded in social proof and promoted as though they were ideas worth spreading.

For those of us in the social sector, it’s (generally) easy to spot initiatives with exaggerated claims of success. But for the casual donor and the untrained eye (the origin of enormous amounts of support for philanthropic causes), the difference between real outcomes and social proof can be elusive.

I’m not sure how one might go about tackling this problem. There are plenty of nonprofits that try to be honest about their results for internal improvement, and to a lesser extent are transparent with their donors about their findings.

But the incentive is always there to manufacture positive publicity by promoting misleading claims of impact. And more importantly, to get other nonprofits, coalitions, businesses, politicians, and media outlets to repeat those claims, thus creating truth.

The social proof versus program evaluation conundrum is a non-trivial puzzle. Donor education programs are more likely to appeal to savvy donors in the first place, so donor education is at least a difficult path, if not a non-starter.

For nonprofits, favoring actual proof over social proof is a poison pill, as high flying headlines and endorsements by public figures in major publications will always trump more down-to-earth claims of impact.

It’s an interesting question without a clear answer. The cost of not figuring it out is donor capital flowing to compelling sounding claims, rather than actual results.

Philanthropic taking – who should decide what’s good for us?

The social sector is rife with strange power dynamics. On the one hand, the social sector is about giving, both the act of giving and the perceived selflessness of being rich enough to have money to spare.

So obsessed are we with giving that it seems a quarter of social sector organizations have some form of “give” in their name.

The corollary to giving, of course, is taking.

A focus on “philanthropic giving” sounds noble, and sexy. So sexy and admirable is philanthropic giving that the Knight Foundation recently released a playbook for organizing giving days, and countless technology companies are springing up to make everyday donors feel like heroes, while taking 3% donation processing fees.

Noble indeed.

But heroes need victims to save. In the social sector, the counterpart to philanthropic giving is philanthropic taking. Folks like me are hired by nonprofits and foundations to help them model their theories of change, a fancy way of defining one group’s (the “philanthropic giver”) vision for another (the “philanthropic taker”).

Nonprofits like the Family Independence Initiative (FII) have called out this traditional take on philanthropy, opting instead for their as-yet unproven strategy of having low-income families organize themselves out of poverty. The core argument here is that the top-down model of philanthropic providers setting objectives for target populations is not only troublingly paternalistic, but also ineffective.

The fundamental premise of one person setting objectives for another is that the latter person doesn’t know what he or she needs. Proponents of this approach in anti-poverty interventions point to what they perceive to be, effectively, the economic irrationality of the poor.

Yet there is evidence to suggest the poor, even the extreme poor, are quite economically rational, and that the poor maximize their happiness as any other economic actor does, as argued in the excellent book Poor Economics.

While I might make a different decision than you, that doesn’t necessarily make either of us irrational. We simply assign different values to different outcomes. The same is true of the poor.

The trouble is that foundations and nonprofits are in the business of assigning their values to other people’s problems. And why shouldn’t they? It’s their time, and their money, shouldn’t it be spent to maximize their social ambitions?

Sure, why not. But what if the best way to see a given change in the world is to listen for solutions rather than preach them?

That’s the simple premise behind the budding beneficiary feedback movement, which argues we need to listen more to the voices of so-called program recipients (and perhaps less so from folks like myself).

This line of thinking is what led me to ultimately consider the Cash Transfer Equivalency (CTE) metric, which I introduced in a post last week. While the CTE is admittedly an imperfect evaluative metric, as a planning tool it is a simple way to quantify the value a program participant expects to receive from a social intervention.

If philanthropic giving is about more than self-aggrandizement, we would be wise to reconceive program participants as more than philanthropic takers.

Technology’s role in the social sector

I’m no fan of Silicon Valley’s offerings for the social sector, but Causes’ relaunch yesterday sparked an interesting conversation in my Twitter feed on the role of technology in our line of work.

Causes is repositioning itself as a platform for civic engagement. Like Change.org before it, Causes works with organizations from varying political ideologies, a politically neutral business practice that has drawn the ire of activists in the past.

In the following exchange, Twitter user @mcbyrne argued with Causes CEO Matthew Mahan that by supporting non-liberal initiatives like the NRA, Causes could not justly claim to be a platform for social good:

Matthew responded by suggesting that Causes is simply a platform for social actions, and that technology is politically neutral:

Which led to the following counter-argument:

Well, that’s just not true at all. Democracy is letting people voice their opinions. Indeed, what is more democratic than empowering idiots to speak their mind?

The fundamental issue in this debate is what the proper role of technology in the social sector ought to be.

I chose a career in the social sector because I have strongly held beliefs on what change I want to see in the world. Working in the social sector allows me to spend my time working on issues that I care about.

But in my work, like anyone else’s work, I use a lot of tools. And those tools don’t have political agendas. People do, I do, but not my tools. If Causes is indeed a tool, an amplification vessel for political action, then why should the platform take a stand on what people promote on the network?

Technology has done wonders for the world, and even the social sector. But the commercial technology that has aided the social sector has not had stated social agendas. Obviously computers and office productivity software make much of the work we do in the social sector possible. But we don’t think of it as “social sector software”.

In so far as Causes is just a platform for social actions, simply a tool, I find nothing offensive about the company working with organizations from varying ideologies.

Technology’s role in the social sector is no different than technology’s role anywhere. Technology should be useful. It should make it easier to accomplish what we want to see happen in the world. I’m not sure the new Causes is going to set the social sector on fire, but I have no desire to burn it down either.