Digging into the Foundation Center’s Glasspockets grants data

As you may know, the Foundation Center has been collecting detailed grants data from some of the largest U.S. foundations for the last few years through its Glasspockets initiative. What you may not know is that Glasspockets has developed a simple programmatic way to access its data through an open API (a way for programmers to easily access information).

For those technically inclined, I developed and published an R wrapper for accessing and loading Glasspockets queries on GitHub. For those less technical, R is a popular open source statistical software package that I use for data analysis.
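For the curious, the basic pattern for pulling data from a JSON API into R looks roughly like the sketch below. The endpoint URL, query parameters, and field names are placeholders rather than the actual Glasspockets specification (the wrapper handles that plumbing); the point is just the request, status check, and parse steps.

```r
# Sketch of querying a JSON grants API from R.
# The URL and parameters below are illustrative placeholders,
# not the real Glasspockets endpoints.
library(httr)
library(jsonlite)

get_grants <- function(funder, year) {
  resp <- GET(
    "https://example.org/glasspockets/grants",  # placeholder endpoint
    query = list(funder = funder, year = year)
  )
  stop_for_status(resp)  # fail loudly on HTTP errors
  # Parse the JSON response body into a data frame of grants
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}

# e.g. grants <- get_grants("Bill & Melinda Gates Foundation", 2014)
```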

The Glasspockets API plus the R library allows me to easily search Glasspockets grants, which I’m planning on mining for future blog posts. For my initial pass with the data I started looking at the Gates Foundation’s giving, specifically looking at the seasonality in their grant making.

Gates giving by month

I’ve long lamented how our tax policy drives seasonal giving, creating peaks and valleys in donations. Next week’s Giving Tuesday is a clever way to try to capitalize on this unfortunate reality.

While the typical donor might not think about giving until year end, I would think that large foundations would be less seasonal in their giving. In the case of the Gates Foundation, I would also be wrong.

Using data from Glasspockets, I constructed the following chart which shows the sum of giving by month from 2011–2014 (up to November 2014 that is). A quick glance shows that November, the 11th month, dwarfs grants made in any other month. Indeed, the Gates Foundation made grants totalling $3,204,224,816 in the Novembers from 2011–2014.

[Chart: Gates Foundation grants by month, 2011–2014]

I’m not necessarily arguing that it’s a bad thing that Gates giving isn’t more spread out. I did, however, assume that it would not so closely match the regular donor public’s patterns of giving.

I’m interested to explore whether this same seasonality, especially the dominance of November giving, holds true for other foundations. More importantly, I’m looking forward to digging deeper into the Glasspockets data. If you are a fellow R user, feel free to grab the library and jump in as well.
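To give a sense of what jumping in might look like, here is a rough sketch of the aggregation behind the chart above. It assumes a data frame of grants with a grant date and a dollar amount; the column names are illustrative, not actual Glasspockets field names.

```r
# Sketch of summing grant dollars by calendar month, assuming a data
# frame `grants` with `date` and `amount` columns (names are illustrative).
library(dplyr)
library(ggplot2)
library(lubridate)

by_month <- grants %>%
  filter(year(date) >= 2011, year(date) <= 2014) %>%
  mutate(month = month(date, label = TRUE)) %>%
  group_by(month) %>%
  summarise(total = sum(amount, na.rm = TRUE))

ggplot(by_month, aes(month, total)) +
  geom_col() +
  labs(x = NULL, y = "Total grants (USD)",
       title = "Gates Foundation giving by month, 2011-2014")
```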

How founder culture threatens the social sector

Job titles can be incredibly misleading, yet are frustratingly persuasive. Most of my career I’ve been the “founder”, “CEO”, and “principal” of a company I created. During my eight years running my company I was always amazed by the instant credibility these lofty titles commanded. Never mind that as a founder you are not only the “CEO” and “founder”, but also the “office coordinator”, “janitor”, and just about everything else.

Nevertheless, people invariably assumed my decision to create a company meant that I was both a leader and that I must have been incredibly successful. Like most founders, in truth I was neither a leader nor terribly successful.

Yet the title of “founder” afforded me instant credibility, which occasionally translated into invitations to speak at and join so-called social sector leadership groups.

Now that I have transitioned from founder to employee, I am by all measures more capable of generating social value. I am more experienced, far more technically competent, and have never had a better sense of what does and does not work.

I took a job because I suck

I was recently nominated by a contact of mine to join a social sector trade group focused on improving how the sector uses data to create impact, a topic I write about regularly.

I have joined various similar efforts in the past when I had less to say on the topic than I do now. Despite being less qualified in the past, folks were always happy to have a “founder” and “leader” join their groups.

I was pretty surprised when my nomination was turned down, literally based on where I rank on the FII website (I’m the 9th person listed on the management team page, obviously ordered from best to worst employee). To be clear, there are lots of good reasons to turn me down for just about anything, but this didn’t strike me as one of them.

When I ran Idealistics, it was assumed that I was a leader. Now that I work for FII, I am apparently a follower.

Why Millennials don’t want your nonprofit job

At various times I’ve heard mid and late career social sector professionals complain that Millennials are more inclined to start new organizations than to join existing ones. The logic invariably goes that we can create more social impact by joining together than by fragmenting our efforts into a sea of undistinguished startups.

Yet the professional environment we have collectively created is one that fetishizes social entrepreneurs as “visionaries” and “leaders”, while employees are largely dismissed as 9-to-5 building blocks.

It’s no wonder Millennials don’t want your nonprofit job. No one wants to be considered a building block. I sure as hell don’t.

Ultimately I closed Idealistics, and joined FII because I lost faith in the former’s ability to create social impact and grew to believe the most value I could create was by joining the latter. I never imagined I was trading recognition for results. But that’s exactly what ended up happening.

Founder culture

The social sector is set apart from the rest of the economy by the ideal that we go into this line of work for something bigger than ourselves.

Yet our obsession with founder culture masks the core value that makes the social sector worth existing in the first place. It values individualism in a way that encourages people to optimize their careers around how they are perceived, at the expense of the social impact they create.

There are legitimate reasons to create new organizations. There are also legitimate reasons to join existing ones. Both of those decisions should be driven by which opportunity puts one in position to create the most social value. Obsession with founder culture disrupts this calculus, threatening the core values that make this sector worth existing in the first place.

Leveling with donors while keeping hope alive

Over the last year I’ve started to pay more attention to the fundraising side of the nonprofit equation. While those of us who live and breathe the social sector hope donors will become less persuaded by financial overhead ratios and more focused on data driven giving, the evidence suggests donors aren’t especially interested in deeply researching their charitable gifts.

A recent report on donor behaviors adds more fuel to the fire, as researchers found evidence suggesting that donors don’t merely fail to respond to data driven giving; it might actually turn them off completely. Paul Slovic, a psychologist at the University of Oregon, conducted an experiment where he “told volunteers about a young girl suffering from starvation and then measured how much the volunteers were willing to donate to help her. He presented another group of volunteers with the same story of the starving little girl — but this time, also told them about the millions of others suffering from starvation.”

The volunteers who were told only the story about the young girl suffering from starvation were more inclined to give than those who heard the same story alongside the statistic about the millions more suffering.

On its face, this finding seems to suggest that fundraising teams should stick to stories and keep the data to themselves. I think the finding is more nuanced than that, and doesn’t necessarily suggest donors don’t care about the numbers. Instead, Slovic tells NPR:

“It’s really about the sense of efficacy,” Slovic says. “If our brain … creates an illusion of non-efficacy, people could be demotivated by thinking, ‘Well, this is such a big problem. Is my donation going to be effective in any way?’”

Slovic’s research suggests that the way to combat this hopelessness is to give people a sense that their intervention can, in fact, make a difference.

The big harm

Our impulse as story tellers is to tell a big story, with a “big harm”. We want to prove that we’re tackling a big problem, and we end up weaving a grandiose narrative for donors to try to communicate the enormity of the issues we care so deeply about. The intent of course is to convince other folks they should care about these issues as much as we do.

The evidence suggests this approach doesn’t work.

It doesn’t work in part because we tell too big of a story, especially relative to the donor ask. Donors want to convert their money into happiness (utility). Once a donor figures out that the marginal value of a dollar given to [insert your cause here] is zero, the rational thing to do is to not give money away, as donors derive zero utility from zero impact.

Outcomes ownership

About a year ago I started playing with a concept I call outcomes ownership. The idea is to calculate a donor’s percentage “ownership” of an organization’s outcomes. Calculating outcomes ownership is rather trivial: a donor simply “owns” the organization’s total outcomes (over a period of time) multiplied by the donor’s contribution divided by total revenue.

For example, if an organization counts people placed into full-time jobs as an outcome, and the organization placed 10 people into jobs on $1,000 of revenue, then a donor who gave $200 would own 200/1,000 = 20% of revenue, and therefore 10 × 20% = 2 job placements.
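The arithmetic is simple enough to write down as a tiny R helper; the function below just restates the calculation from the example above.

```r
# A donor's "outcomes ownership": their share of revenue applied to
# the organization's total outcomes over the same period.
outcomes_owned <- function(donation, total_revenue, total_outcomes) {
  (donation / total_revenue) * total_outcomes
}

outcomes_owned(donation = 200, total_revenue = 1000, total_outcomes = 10)
#> [1] 2   # the donor "owns" 2 of the 10 job placements
```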

What I like about this approach is that it reframes what one gets from giving in familiar terms. People who invest are generally comfortable with the idea of owning infinitesimally small percentages of publicly traded companies, and therefore claiming an equally small portion of profits.

Selling people on a big harm then asking for a small donation masks the fact that a small donation is part of a larger pot. Explaining to donors what their $10 buys is a fool’s errand because the fact is that $10 doesn’t buy much. Period.

An investment analogy might help balance leveling with donors by acknowledging the donation is a small part of a larger whole, while keeping hope alive that the accumulation of many small investments pools into a significant sum capable of creating impact.

Telling average stories

I’ve never been terribly comfortable with the social sector’s obsession with story telling. It’s not that I don’t understand that stories can be powerful. People connect with stories in a way that they can’t with numbers. Indeed, evidence suggests that while story telling can help drive donors to give, quantitative data can actually risk turning donors off.

My problem with story telling is not the story telling itself per se, but that stories can be misleading. Perhaps more important, because story selection is driven by nonprofit fundraising and public relations people rather than those focused on data integrity, the stories told are invariably positive outliers.

I’m not the only one concerned about how stories can be misleading. GiveDirectly, a nonprofit that provides unconditional cash transfers to those living in extreme poverty, wrote an important post on how to best balance donor demand for stories with the organization’s core tenet of presenting its findings in unbiased ways.

In a blogpost last week, GiveDirectly outlined a set of standards it will hold itself to when sharing stories, and more importantly deciding which stories to share. The rules are worth reading, and are included in full below.

To keep ourselves honest when doing so, we’ve decided to stick to three rules:

  • Share everything, as in this blog post on interesting spending choices;
  • Select recipients randomly so that every recipient’s story has an equal chance of being shared, as we do weekly on Facebook. Or, explicitly state if the recipient was not chosen randomly and why, as in this post on a recipient who experienced an adverse event; and/or
  • Provide contextualizing data so the reader can determine how representative of the average the story is. For example, if we relay a case of a woman who used her transfer to pay for a surgery, we’ll also share any data we have on average spending on medical expenses.

Finding the average story

GiveDirectly’s strategy to select stories at random is compelling. A randomly selected story holds some probability of being positive, with the complementary probability that the story is negative. When was the last time you saw a nonprofit share a negative story?

But more interesting than sharing randomly selected stories is to systematically tell average stories. Finding average stories is no simple task, especially since one can be average on one metric (income, for example) but far from average on another (like health).

One possible approach to identifying average stories is to use a machine learning clustering algorithm, such as k-means. Roughly, the k-means algorithm takes a dataset of individuals with various data points and places each individual into a group with others who possess similar attributes. This type of clustering is regularly used for things like customer segmentation, but can work equally well for grouping targets of program interventions.

Improving on GiveDirectly’s approach, instead of telling random stories from the entire population, you could pull stories from within clusters, providing the average demographics and outcomes from each group as context for the stories told.
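As a rough sketch of what this might look like in R, the snippet below clusters participants with base R’s kmeans() and then, within each cluster, flags the person closest to the cluster’s center as a candidate “average” story. The data frame and column names are hypothetical stand-ins for whatever attributes an organization actually tracks.

```r
# Cluster participants on a few standardized attributes, then pick the
# participant nearest each cluster center as that group's "average" story.
# `participants` and its columns are illustrative, not a real dataset.
set.seed(42)

attrs <- scale(participants[, c("income", "health_score", "hours_worked")])
fit   <- kmeans(attrs, centers = 4)

nearest <- sapply(seq_len(nrow(fit$centers)), function(k) {
  members <- which(fit$cluster == k)
  # squared distance from each cluster member to the cluster center
  d <- rowSums(sweep(attrs[members, , drop = FALSE], 2, fit$centers[k, ])^2)
  members[which.min(d)]
})

participants[nearest, ]  # one representative "average" story per cluster
```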

Story telling versus truth telling

I’m not against stories, defined as a qualitative accounting of an individual’s lived experience. There is always more richness in a narrative than in a quantitative dataset. However, I am opposed to story telling when it’s really just a euphemism for bullshit.

Good story telling does not just elicit a reaction from donors, it communicates the truth in a way that quantitative data never can. Even if sharing quantitative data isn’t part of an organization’s strategy for engaging donors, data should help guide which stories are shared.

The Red Cross’ obvious disaster

At the end of October ProPublica and NPR released a joint investigation titled “The Red Cross’ Secret Disaster”, looking into the gulf between the American Red Cross’s fundraising prowess in the aftermath of Hurricane Sandy and the reality of its numerous stumbles in providing the relief the organization so publicly raised funds to deliver. Indeed, to many the Red Cross seemed far better prepared to raise funds in the wake of Sandy than to deploy them effectively toward disaster relief.

The most shocking thing to me in all of these allegations against the Red Cross is that the general donor public is actually surprised that the Red Cross (or pretty much any nonprofit for that matter) prioritizes how it’s perceived over all else. A central driving tenet of every entity, be it a nonprofit, for-profit, or bunny rabbit, is to do what you can today to survive until tomorrow.

Although ProPublica and NPR have positioned their piece as an exposé on the American Red Cross’ failures during Hurricane Sandy, I read the piece more as a statement on how the market realities of running a nonprofit create adverse incentives, driving organizations to raise funds at the expense of their stated core missions.

Funders of change

Most for-profit organizations create a product that they sell directly to consumers. In the nonprofit sector, the funders of a program’s interventions are typically not the recipients of those services. Since the recipients of aid are not the funders, they can’t logically be the focal points of self sustaining organizations.

ProPublica and NPR rake the American Red Cross over the coals for diverting disaster equipment toward a photo-op with model Heidi Klum, an example used in the article to demonstrate executives’ backwards priorities. While those suffering in a disaster probably have no interest in Heidi Klum slowly strolling down a street handing out bottled water as cameras roll, the reality is that those images help the Red Cross raise money.

And however bad some might argue the Red Cross is at providing disaster relief, it’s obviously damn good at raising funds. Like any well run self serving organization (and what organization isn’t at least somewhat self serving?), the Red Cross finely tunes its fundraising strategy. If the monetization opportunity was in providing top notch disaster relief, I can assure you Sandy outcomes would have been different.

But the reality is that most nonprofits’ monetization strategies have very little to do with their programs’ missions. Instead, nonprofits raise funds by and large on their abilities to get donors to exchange their money for the warm-glow of giving.

Smarter giving

For the donors who are outraged at the Red Cross’ alleged ineptitude and emphasis on media exposure over outcomes, I’m hopeful they become aware of their unwitting complicity in this so-called secret disaster.

For all the arguments about how we need more money in the social sector, I’m more persuaded by those who call for smarter giving. To me, smart giving is giving that is driven by a donor’s best guess of the value created by an organization, optimally influenced by evidence instead of celebrity.

Organizations like GiveWell have carved out narrow niches to better inform donors with specific preferences, although I suspect donors seeking advice from the likes of GiveWell are likely to be unimpressed by Heidi Klum in a disaster response vehicle in the first place.

The big money, and the big challenge, is in the general donor public. The fact is that the Red Cross knows the donor public very, very well.

So long as donors show a preference for media hype over results, shrewdly optimized organizations like the Red Cross will deliver the product their (paying) customers demand.

Why I joined the Family Independence Initiative

On September 1st, 2014, I joined the Family Independence Initiative (FII) as the Director of Analytics. When I decided to shut down Idealistics in the summer of 2013 I figured the next step for me would be joining a team focused on domestic and/or international poverty. In the time between shutting down Idealistics and joining FII I consulted with a range of nonprofits, foundations, and businesses, none of which felt quite like something I wanted to dedicate several years of my life to.

I decided to close Idealistics in part because I had lost faith that the company’s technologies were really helping the nonprofits I worked with to create social impact. Indeed, I had lost faith that anti-poverty interventions had much effect at all.

I had come to realize that so much of my work, like the social sector itself, had been based on a misguided paradigm of a nonprofit sector providing solutions to distressed “clients” in “need” of answers. This paradigm doesn’t only oversimplify the poor, it plainly gets them wrong.

As I was losing faith in the efficacy of the program driven nonprofit model, my interest in cash transfers was growing considerably. In my personal giving, I support GiveDirectly, a nonprofit that experiments with giving unconditional cash transfers to those living in extreme poverty in the developing world. While the evidence around conditional cash transfers is pretty compelling, and the evidence base for unconditional cash transfers is growing, what I find most compelling about the unconditional cash transfer model has less to do with the transfer of money and more to do with the underlying notion of trusting people living in poverty.

Positive deviance

Positive deviance is a phenomenon whereby certain individuals given the same circumstance and access to raw materials are able to achieve better outcomes than their peers. The term was first coined by nutritionists and applied by those studying how certain families in rural Vietnam in the 1990s were able to provide better nutrition for their children than most families in the same areas.

I had not heard the term positive deviance until joining FII, but it instantly resolved much of what had made me uncomfortable about the social sector for so long, and explained why I was attracted to models like GiveDirectly and FII.

I have never lived in any type of poverty, from extreme poverty in the developing world to domestic poverty as defined by the U.S. federal poverty line. It is asinine to believe I should see a pathway out of poverty, given my complete ignorance of any of its realities. Yet asinine I’ve been for the last decade of my career.

In FII I found an organization that is less interested in solving the problems of the poor, and instead more interested in learning about how the poor improve themselves, their families, and their own communities.

Data driven nonprofit

There’s a lot of talk in the social sector about data driven nonprofits. I spent eight years at Idealistics, and an additional year as an independent consultant, working with nonprofits to try to help them improve their data infrastructures, with little success.

I’ve written before about the pitfalls of poor data literacy in the social sector, but data literacy is something that can be overcome and hired into an organization. What cannot be acquired through new hires is a data culture. A data culture requires the organization, from top to bottom, to commit not only to investing in the ability to mine data for feedback, but also to turning that feedback into organizational change.

Given the nonprofit model of developing a theory of change, then fundraising around that model, it’s not terribly surprising that nonprofits struggle to be data driven. A data driven nonprofit must not only be willing to accept that its theory of change might be wrong, it must expect that it most likely is.

Indeed, in graduate school my econometrics professor taught me that “all models are wrong, but some are useful”. An analyst’s role is to develop models that are knowingly wrong but hopefully get less wrong over time, as more data is acquired. This approach of iterative improvement and willingness to shift key assumptions is antithetical to how nonprofits are largely financed. Too often, a funder invests in a nonprofit on the assumption that the nonprofit’s theory of change is correct, leaving the nonprofit to use data to justify the funder’s investment rather than to identify where the organization is wrong, and how to improve.

Investing in people, not nonprofits

I did not get into the social sector because I love nonprofits. I got into the social sector because I love people. Somewhere along the way, my career became about serving nonprofits, not serving people, not serving communities.

The word “service” is a popular way to describe our line of work in the nonprofit sector. Of course, when I receive “service” I expect to get what I want. I seek out services to help me achieve a goal, my goal, not a goal someone else has determined for me.

At FII, my job is to use data to learn from families how they improve themselves and their communities. I’m not tasked with proving a particular model, instead I’m learning about how families define success on their own terms, and how we (collectively) can invest in the incredible initiatives already underway by people that we (in the social sector) for too long have considered objects of change instead of agents of change.

I couldn’t be more thrilled.

Impact calls are the future of transparency

Transparency is a building block buzzword of the social sector. While there seems to be general consensus that transparency is important, the proprietary actions of social sector actors run contrary to that idealized vision.

Given this imbalanced rhetoric to reality ratio, I was especially intrigued by Guidestar’s new approach to sharing its progress with the public. Guidestar is experimenting with what it calls “impact calls”, quarterly webinars where the nonprofit’s leadership discusses its finances, impact, and strategic road map. The concept of the impact call is modeled on the quarterly earnings calls publicly traded companies hold for shareholders.

On May 12 Guidestar held its second quarterly impact call, the first I have had the opportunity to listen in on. The impact call provided a solid overview of the organization’s finances and projected revenues, as well as a short and mid-term strategic road map, although the call was quite a bit lighter on actual impact reporting.

During the call, Guidestar CEO Jacob Harold explained that Guidestar’s impact assessment strategy is still evolving, and that the organization is developing an impact measurement dashboard it may present at the next impact call. The implication seems to be that as the impact measurement tool evolves, Guidestar will be better positioned to report its outcomes.

Although I was disappointed not to hear much about Guidestar’s impact on its impact call, I was nonetheless impressed with the concept and even found the sharing of less exhilarating (although more easily enumerated) metrics such as subscribers and web-usage statistics a great step toward real nonprofit transparency.

A criticism of earnings calls is that quarterly reporting encourages companies to focus on short-term gains at the expense of long-term progress. Guidestar CFO James Lum wisely cautioned that while Guidestar is committed to reporting quarterly results, the organization’s focus is on its long-term strategy. I think this is the right sentiment to have, and hope Guidestar doesn’t feel pressured over time to start optimizing for short-term gains to score favorable headlines in philanthropy media at the expense of the big picture.

This should be a trend

The impact call is an obnoxiously obvious idea. Everyone should be doing this, although I’m not sure many organizations will. Kudos to Guidestar for taking this step; I would love to see, at the very least, foundations follow suit.

While it would be great for every nonprofit to host quarterly impact calls, I’m not sure many folks would care to tune in. Guidestar is the right organization to pioneer this approach because many of its constituents are nonprofits themselves, and more likely to consume this type of information. Similarly, foundations invest (directly) in nonprofits, and their grantees would not only be interested in hearing more about how foundations think, but could benefit from learning about foundations’ thought processes, strategic planning, and overall claims of impact.

Transparency is easy when you’re winning. It will be interesting to see if this type of hyper-transparency holds when findings are less than stellar. The Hewlett Foundation has demonstrated a willingness to embrace this type of transparency in its recent decision to discontinue the Nonprofit Marketplace Initiative, which it announced with the explanation that evaluators found “our grants have not made much of a dent” in the intended outcomes.

Publicizing wins and losses is the future of transparency. Impact calls are a compelling medium to communicate those findings. I look forward to the next Guidestar impact call, especially if the next call has more impact in it.

Using word clouds to select answer options

Selecting the right questions for your survey instruments can be tough. Equally difficult is identifying the right answer options for the questions you ask people. When selecting answer options, ideally you would provide enough options to get meaningful feedback and variation in responses, but not so many as to overwhelm survey respondents.

Before launching any survey instrument it’s preferable to do what is called a survey pretest. A pretest is where you get a subsample of people who are like your intended survey audience, and you ask them for feedback on each of your survey questions and answer options. However, pretesting isn’t always possible.

I’ve been working with a nonprofit called Team Tassy that provides workforce services to families in Menelas, Haiti. Team Tassy wanted to learn more about the employability of families in its targeted communities by conducting a survey at a free medical clinic day the organization sponsored.

One of the questions on the survey asked what work related skills each of the respondents possessed. The problem was that we didn’t know whether we were providing the right answer options to the skills question.

Ideally we would have pretested the question to get feedback on what types of skills should be included in the answer options. However, pulling together a focus group abroad would have posed logistical challenges, making pretesting less viable.

Since we were not able to pretest the answer options, Team Tassy took its best guess at what the answer options should be, and provided an option for respondents to fill in any other skills not included in the question’s answer options. We planned to use the free-form responses to better learn what job skills options should have been included.

Team Tassy collected more than 250 surveys at the medical clinic it sponsored. Given the relatively large number of surveys, reading through each of the free-form answers wasn’t completely practical. Instead, we built a word cloud of the free-form skills options to get a visual idea of what types of skills were most mentioned.

[Word cloud of free-form skills responses]

The word cloud revealed that several individuals reported having merchant and dress making related skills, options that were not included among the original answer options. Going forward, Team Tassy will now include these options on future skills questions.

Word clouds are a pretty low-tech approach to data analysis. But they can be really effective, especially for getting quick feedback on what types of answer options you might include on your surveys.
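For anyone who wants to try this on their own survey data, a minimal sketch in R using the tm and wordcloud packages follows. It assumes a character vector of free-form responses (the object name is made up), and the cleaning steps, such as which language’s stop words to strip, would need to match the language the surveys were collected in.

```r
# Build a word cloud from free-form survey responses.
# `skills_text` is an illustrative character vector of responses.
library(tm)
library(wordcloud)

corpus <- VCorpus(VectorSource(skills_text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

tdm   <- TermDocumentMatrix(corpus)
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

# Words mentioned at least 3 times, sized by frequency
wordcloud(names(freqs), freqs, min.freq = 3)
```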

Help yourself to my ideas

I spent too much time and effort at my now defunct company worrying about people stealing my ideas. By the time I was wrapping up Idealistics, I thought about open sourcing the code I paid GitHub a monthly fee to keep private, only to realize it would have taken a ton of effort to get anyone to care that I was giving my software away for free.

If I had Idealistics to do again, I would have spent my energy spreading my ideas rather than paying to keep them secret.

I think there is a lot of value in being open regardless of the industry one is in, but there is particular value in openness in the social sector. We’re supposed to be in the business of solving social problems after all.

Given the value of openness, and the general rallying cry around nonprofit transparency, I can’t help but wonder why I’ve been coming across so many nonprofits intent on lawyering up to “protect” their intellectual property.

I’ve been doing a lot of contracting work recently. Really interesting stuff, and a bunch of insights that could probably help out a whole slew of social interventions. And I can’t tell you about any of it. I’m contractually obligated not to.

The social sector exists in this weird space between the public and private sector. We run private entities intended for public benefit. In the process we develop proprietary solutions to public problems.

That last sentence makes my head hurt.

We need to make a decision as to what we are, and what we stand for. You can’t have proprietary collective impact. The patent system is designed to allow companies to lock in competitive advantages to singularly reap the benefits of their investments over a period of time. Our investments are supposed to be public. So why the hell are startup social enterprises seeking patents on technological solutions to connect low-income families to social programs? Who benefits from those patents? Certainly not the public. Definitely not the poor.

I’m grateful to be working on an exciting range of contracts. While I’m under contract not to say anything about that work, going forward I’ll certainly be more open about my own ideas, even if I intend to monetize them.

If my ideas are any good, and can actually create real social value, be my guest and help yourself to my ideas.

Hire a Chief Data Officer

I’ve been doing a lot of consulting recently, which has resulted in a several-month hiatus from writing on this site. Happily, my greater volume of consulting engagements has given me more opportunities to give people bad (and hopefully some good) advice, which means more content for Full Contact Philanthropy.

Recently I have been reflecting on some particularly bad advice I gave to one of my customers. Over the summer I was hired by a large provider of housing and homeless services to improve the operational speed at which chronically homeless clients were being placed into housing. The project went well, and by the end of the engagement the executive team was seeing the value data can bring to their organization on an ongoing basis.

The executive director asked me to draft a memo outlining what the organization should look for in an internal analytics hire. Ideally, the executive said, the hire would be able to work both on social outcomes data as well as helping the development team improve its use of donor data. I advised the executive against hiring one person to oversee all of the organization’s data needs, as I felt there was value in having specific domain experience (such as a background in homeless services or fundraising) before jumping into an issue specific data set.

I was wrong.

By telling the executive he should hire two different analysts, I scared him off of bringing in more data talent entirely, turning what already looked like a budget stretch (one new hire with a non-trivial skill set) into something completely out of reach.

Furthermore, while domain experience is important, the organization already had sufficient internal domain expertise. The development folks know development really well. And the program team is top notch. What they didn’t have was internal capacity to sift through the volumes of data the organization collected each day.

Instead of arguing for analysts who intimately know the organization’s core mission, I should have advised a management structure where the development and program teams make data requests to a data team, allowing development and program staff to identify the right questions, and letting the data team (or a single hire to start) focus on answering those questions with the data available.

The more I have thought about this issue, and the closer a look I have gotten at the data needs of both the program and development sides of nonprofits, the more I am convinced that having a Chief Data Officer (someone whose sole responsibility is the data needs of the entire organization) makes a lot of sense.

The idea of a Chief Data Officer has been growing in popularity in the for-profit world. There are some nonprofits that have had success employing chief data officers as well. However, the idea of the Chief Data Officer has not permeated throughout the social sector. Instead, the nonprofits that have employed heads of data have more obviously quantifiable interventions, generally nonprofits that focus on measuring online engagement like DoSomething.

However, there is an exciting, and much broader, opportunity for various types of organizations to bring in Chief Data Officers. Indeed, regardless of what your organization does, every organization (business, nonprofit, foundation, whatever) traffics in some sort of information. Given the importance of data, not just now but historically as well, a Chief Data Officer is as logical, and essential, a hire as a good director of programs, a director of development, or a Chief Financial Officer.

I’ve complained before that the rhetoric around data in the social sector is too hollow, and the thinking too shallow. Part of the block in moving from concept to action in realizing the value of data is that organizations have not invested sufficiently in figuring out how data works in their managerial structures.

Mario Morino rightly encouraged the sector to think more intelligently about how to manage to outcomes. Managing to outcomes is not just about outcomes reporting software, but investing in people and process. I couldn’t be more excited about the fact that my work is giving me the opportunity to help organizations think more seriously about how to build data cultures. It’s a theme I’m passionate about and plan to expand on more in subsequent posts.