
Thursday, March 31, 2011

Accountability for NPO Dialogue?

Charity Navigator currently bases its accountability ratings on information provided on the 990 form and on organizations' websites. In addition to this type of disclosure, however, I would like to see nonprofit organizations held accountable for "Dialogue." How are they engaging their stakeholders? What actions, if any, are they taking based on this dialogue?

I found an interesting article by Saxton and Guo (2011) in which they propose that web-based accountability involves not only disclosure but also dialogue, which they define as consisting of "solicitation of stakeholder input" and "interactive engagement." At the library I can document how many patrons use the library's internet access each month and demonstrate a need for that access, but that does not mean the library is asking whether other services would serve patrons better, or whether additional services are needed, such as classes on how to use the computers.

Given the problems with feedback within the human services sub-sector identified by Dr. Campbell in "Is constituent feedback living up to its promise?", such as lack of capacity, questions about the value of the data collected, and disagreement over the purpose of feedback itself, holding NPOs accountable for feedback solicitation does have its problems. But I would propose that the accountability goal not be a glowing success in feedback solicitation, but simply evidence of any efforts, successful or not, made by the organization to understand what the community actually needs from it. One goal of mine for the Tioga County Historical Society is to facilitate a few focus groups to determine the needs of area educators, so that we can provide appropriate programs based on our exhibits.

If it were decided to make dialogue a component of accountability, whether by the government or by organizations like Charity Navigator or the Better Business Bureau, what might be the best way to do so? Given the contingency framework by Ebrahim and Rangan, is it possible for all organizations to collect feedback? Would you support Saxton and Guo's dialogue accountability? While they frame it in a web-based context, I would support it in any form, at least for organizations large enough to be included on Charity Navigator. For the most part, these organizations would seem to have the capacity to make at least some effort at dialogue accountability, whether through a website or through a simple print-based survey.

Tuesday, March 29, 2011

a "right" navigator

This week's reading makes me think about the objectivity of the various evaluation resources. Different institutions assign different weight to different factors in an organization's outputs or outcomes. So how can we choose which evaluator to trust when we want to use it as an indicator of an organization's effectiveness? On Charity Navigator's website, we can see that it treats revenue as the most important indicator. However, is that suitable for every case?
In my opinion, revenue is a strong indicator of an organization's effectiveness. However, it is definitely not the only indicator; we should also look at other factors. In some cases, an organization's mission statement can be just as important when we are trying to pick an organization to donate to. For example, when we want to help people in poverty, we will never be able to make the right judgment by merely looking at an organization's revenue figures. Instead, we should first find out which organizations are actually working to help the poor.
Therefore, we need to pick the right navigator for each purpose. We should also understand that these navigators are a source of information, not the sole basis for our decisions.

Who Will Save Your Soul?

In Ken's Commentary: The Battle for the Soul of the Nonprofit Sector, Mr. Berger examines the strategies Charity Navigator has put in place to evaluate specific organizations. He elaborates on two of the most important questions faced by nonprofits: (1) how to define the work being done, and (2) how to measure the value of the work being done. He acknowledges that these questions are difficult to answer, but are they even possible to answer? There are so many different factors to consider when trying to determine how effective an organization is, and there is no way to examine every aspect of an organization in an evaluation. I understand that Charity Navigator is trying to do the best it can to satisfy as many areas of interest as possible in terms of an organization's functionality. In the last statement of the text, Berger states, "The question we must answer is whether we measure, manage and deliver true, verifiable and meaningful results, or simply continue 'the work' with no reliable idea of where our efforts are leading and whether they are truly helping." This is a very loaded question and requires much thought, but will we ever be able to come up with one solution?

Should Foundations Measure Impact, Too (If They Don’t Already)?

While reading the working paper by Ebrahim & Rangan (2010) for this week, I found myself intrigued by the authors' statement that “funders such as foundations, governmental departments, and international aid agencies, are far better positioned than most nonprofits to measure impacts. A foundation that supports efforts in health care, for example, is uniquely situated to see how the work of all its grantees might link together – to connect the dots among a series of outputs and outcomes, to analyze how they lead to impacts” (p. 30).

As I read this statement, I wondered how many foundations actually measure the impact they have as funders of their selected groups of nonprofit organizations. I don't know enough about foundations to answer this question. From what I do know, it is clear that foundations often require their grantees to measure their performance. However, it is unclear to me whether foundations measure their own performance, in terms of the collective impact of the portfolio of nonprofits they fund.

My opinion on the subject is that foundations should measure impact, if it is feasible for them to do so. The authors explain that it sometimes does not make sense for organizations to measure impact, depending on their operational strategies and theories of change. Therefore, it seems as though it would not always make sense for foundations to measure their impact, if they mostly fund organizations whose operational strategies and theories of change make it difficult or illogical to do so.

Do any of you know if many foundations measure impact? How important do you think it is for foundations to measure impact? Do you agree with the authors that foundations “are far better positioned than most nonprofits to measure impacts”?

Thoughts and Reflections on Ebrahim and Rangan

Thus far, I consider the Ebrahim and Kasturi Rangan article (2010) to be the most useful framework for effectiveness, impact, and measurement we have read.

Throughout our various conversations in class I have struggled to objectively define outputs, outcomes, and impact. I really wish we had read this article earlier, but maybe the struggle with these terms was a part of our learning process. I find that the table Rebecca put together in her blog from the reading is a clear example of a useful framework or typology for assessing the performance of various nonprofits. I have struggled this semester with the "one-size-fits-all" notion that nonprofits can be evaluated in the same ways, especially considering the extreme variability of nonprofits in terms of mission and theories of change. After our previous nonprofit course, I admit I was attracted to the idea of evaluating organizations based solely on financials, but I am now conscious of the multitude of ways to measure effectiveness and impact.

In the reading I was drawn to the problem of causality for nonprofits when they attempt to express their effectiveness in terms of outputs, outcomes, and impact. Proving causality seems to be the biggest challenge for nonprofits that wish to link their own activities to the effective fulfillment of their mission or cause. I am certainly glad that I learned about causality, and the significant difficulty of proving it, last semester. This will no doubt be important if I wish to contribute to a nonprofit in an administrative capacity in the future.

As Ebrahim and Kasturi Rangan show, nonprofit managers of the future are going to have to contend with increasing scrutiny of accountability and effectiveness from both funders and the public. Social media will more than likely accelerate this trend, as we can see with both our social media and Charity Navigator projects. Thus, it falls on us to develop a concrete understanding of evaluation theory and practice if we wish to be more effective as future nonprofit professionals.

Who Should Evaluate?

We have talked a lot about the importance of evaluation, methods of evaluation, challenges of evaluation, and what should be evaluated. During the last class we also raised the question of the competence of the individuals and organizations who conduct evaluations.

I remember once we had an independent evaluation of a project funded by a Spanish agency. The task of the expert from Spain was to evaluate a four-year project in four rural communities of Armenia by interviewing beneficiaries and the project team within three days. Her evaluation concluded that the project was not effective. When the report arrived, it was clear why she reached that conclusion. The expert was very good at evaluation methods, but she knew nothing about the cultural specifics of the region or about the governance system, and the time was not sufficient to discover these aspects, analyze them, and then prepare the report. The interpreter she worked with used different terms while interviewing the beneficiaries, and the beneficiaries answered "no" to very obvious questions. One of them was, "Did the project team organize training and other activities to encourage community mobilization?" The interviewees answered no because they had never heard the term "mobilization"; the project team had used a different translation of the word while working with the community. This is a small example of how an independent evaluation can destroy the collaboration between a donor agency and an implementing agency. Luckily, the donor agency was willing to hear the implementing agency's feedback and justification regarding each point of the evaluation. Another four-year project was approved after that evaluation; its evaluation was also independent, but it was done through a professional agency located in Armenia.

So who can or should evaluate? Organizations themselves? Audit firms or other independent agencies that specialize in evaluation? Beneficiaries? Volunteers and community members? Evaluation by each of these groups has its advantages and disadvantages. Any ideas?

Raising Malawi and Other Celebrity Charities

This weekend I read some news about Madonna's charity, Raising Malawi, which made me think more about Berger, Penna, and Goldberg's article, "The Battle for the Soul of the Nonprofit Sector." Madonna co-founded the organization in 2006 and was planning to open a school for girls in Malawi. Now, however, those plans have been scrapped after allegations of financial mismanagement by the board of directors. Here are two articles that detail the situation further. Both refer to Madonna's good intentions, but also to her lack of expertise and knowledge about what it actually takes to make a nonprofit work.

The Calgary Herald article states that "Madonna's attempt at opening a girls' school in Malawi was well-intentioned, but misguided. She made the common mistake of attempting to start a school from scratch rather than partnering with a credible organization that already exists. ...It is a sad, but all-too-common occurrence that threatens to scare off donors from reputable efforts." Yes, Madonna, Oprah, Wyclef Jean, and all of the other celebrities with failed charities do have good intentions when they try to use their connections and money to change the world. But they are likely to apply "promising ideas to social problems without the necessary follow-up and confirmation," one of Berger et al.'s observations about trends in the nonprofit sector. Celebrities who wish to help should use their influence and money to make a difference, but the way to do this is not to create programs that will not be followed up on and are not designed to produce results. That only wastes the opportunity these high-profile people have to actually create change (and wastes money: Time magazine reports that $3.8 million has already been spent on plans for Madonna's school, with nothing to show for it). They should instead partner with existing organizations that are demonstrating effectiveness and creating real impact.

Charity Navigator Limitation

These days I have been focusing on the part of the Charity Navigator evaluation called Financials. Based on my background, I think there is a big limitation in this part, because it uses only absolute numbers to evaluate an organization, without considering its mission, development, and future capacity. I need to evaluate two organizations for the Charity Navigator project. One is apparently much bigger and more successful than the other, based on what they report on their 990 forms. One reason is the difference in their missions and fields of work; the bigger one no doubt has more social influence and stronger networks. But based on Forces for Good and the other materials we have read this semester, I think we should add something beyond what the evaluation already covers. We should pay some attention to an organization's future development. Looking at the Financials section again, the smaller organization shows good prospects for growth if you compare its revenue at the beginning of the year to its revenue at the end of the year. Comprehensively evaluating future development matters in the other sections too, but the online evaluation form does not provide space to do that. I think I can only do it in my own report.
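
To make the comparison concrete, here is a minimal sketch in Python, using invented revenue figures (not the real numbers from either organization's 990), of the kind of year-over-year growth comparison I have in mind:

    # Hypothetical revenue figures standing in for two organizations' 990 data.
    # The numbers below are invented for illustration only.
    revenues = {
        "Larger org": {"start_of_year": 12_000_000, "end_of_year": 12_300_000},
        "Smaller org": {"start_of_year": 400_000, "end_of_year": 520_000},
    }

    def growth_rate(start, end):
        """Relative growth over the year (0.30 means 30% growth)."""
        return (end - start) / start

    for name, rev in revenues.items():
        rate = growth_rate(rev["start_of_year"], rev["end_of_year"])
        print(f"{name}: {rate:.1%} year-over-year growth")
    # Larger org: 2.5% year-over-year growth
    # Smaller org: 30.0% year-over-year growth

On absolute revenue the larger organization dominates, but on growth the smaller one looks much stronger, and that is the dimension I feel gets lost when only absolute numbers are compared.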

Do you think so?

Nothing Really New (but comment, please do!)

This week’s readings continue to outline the difficulty of assessing nonprofit effectiveness. I think I have read about this familiar topic before, though it is genuinely worth additional contemplation. After taking a short respite from my ongoing academic reflections (also known as the first spring break), I resolve that we must interpret effectiveness as an immeasurable entity.

First, effectiveness is a construct, assembled from understandings of other ideas and real world phenomena to which we ascribe meaning based on our subjective knowledge and experiences. No one understands effectiveness in the same manner as another, let alone in the context of different nonprofit organizations. Next, social issues and causes, which numerous nonprofits toil to solve and support, are also constructs with far-reaching and historical implications. We attempt to define social issues based on supposed “root causes,” using the analogy of plant anatomy to tacitly defer greater understanding. Perhaps, social issues are so deeply interwoven among each other that we are doing a great disservice to ourselves in attempting to define individual social issues as distinct and possessing distinct causes. To create an equally constructive metaphor, social issues are the threads in an untieable knot, connecting our shoes during a footrace we must win. Clearly, the solution is Velcro (or those cool Nike Airs Marty McFly used in Back to the Future II, http://www.youtube.com/watch?v=28Wa5L-fkkM), which involves adopting a different manner of approach altogether. As obvious as the idea is, consider all social issues as stemming from the human condition (i.e. society, which is social, oddly enough), whereas “the” root is found in our curious, and often arbitrary social institutions, norms, and traditions which we follow because they are all we know.

Though there is ever-present room for new theoretical foundations to describe these esoteric ideas, new theories will just add to the pile, specifically the pile of books and papers assigned to students. I conclude that we should appreciate the complexity of these sorts of concepts and continue to engage in dialogue. We are all students participating in the exercise of learning that some ideas are beyond all of our comprehension. Naturally, we should stick to the multi-millennial mission (of the human organization, established sometime before clocks were invented) of "one slow step forward at a time," though hopefully in greater harmony and prosperity as our world advances. After all, the world may very well subsist without our species and our accompanying loads of garbage. So let us make the best of it.

And let us proceed with the Charity Navigator exercises. I am as ready as I can be.

Monday, March 28, 2011

Three Bridges from Charity Navigator to What We Have Learned

The article "The Battle for the Soul of the Nonprofit Sector" by Berger et al. is not long, but from this article, which reads as an argument that Charity Navigator (CN) is genuinely meaningful, I found several linkages to what we have learned in this course so far.

  • Results-Oriented Rating. In the book Forces for Good, Crutchfield and Grant write that people in nonprofit organizations are extremely results-oriented. While this surprised me a lot, I found it reflected clearly in CN's statements. In the first of the three observations in the article, Berger et al. note that much of the effort aimed at addressing social problems over the last few decades has failed to produce "results." In fact, the process by which CN evaluates a nonprofit organization is mostly a process of evaluating the organization's results. Through several standards, raters are able to rate an organization's effectiveness, fiscal soundness, and accountability. From my point of view, if Crutchfield and Grant are right that nonprofits are results-oriented, then the method CN uses to measure results should not be doubted.

  • Transparency and Accessibility. One of the two criteria CN uses to judge whether an organization is worth donating to and will provide the most impact is "for the information regarding performance to be made not only available but readily accessible to the public." In the book The Networked Nonprofit, Kanter and Fine give the example of the many nonprofits that use social media to promote themselves but don't provide a platform for supporters to leave comments and give suggestions, for fear of getting negative feedback that could harm the organization. Kanter and Fine explain that leaving a place for comments will actually do more good than harm. CN provides a transparent and interactive platform for the public to evaluate organizations, even when those organizations don't offer a place for comments on their own websites.


  • Crowdsourcing. There is no doubt that CN has done an excellent job of crowdsourcing. By providing information to people who have relevant background knowledge and are interested in the evaluation, CN not only advances its mission through crowdsourcing but also raises its own visibility. At the same time, raters gain valuable experience through the process. It is a win-win situation for both the crowd and the organization.

Advisory from Charity Navigator and Governor Cuomo

I also came across this "ADVISORY" while looking for one of the organizations on the Charity Navigator website. This is clearly something that potential donors should be aware of but may have overlooked in the news. (**The link provided to the information from Governor Cuomo is also very interesting, and will only take a minute to read**)

Consistent Ratings?

Last week, I posted some information I retrieved from an article called "The Rating Game." In it, the authors compared the ratings several organizations received on various websites. Since we have been talking consistently about the organizations in Forces for Good, I thought I would do the same. All of the websites I used have different rating systems (like Charity Navigator's star system), so they cannot be compared easily. Regardless, I thought it was interesting to see the comparison, particularly since Crutchfield and Grant thought these organizations had the greatest IMPACT. Impact, of course, is something many of these websites fail to consider, a weakness we have discussed.
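
For what it's worth, one rough way to lay ratings from differently scaled sites side by side is to rescale each onto a common 0-1 range. The sketch below uses invented site names, scale ranges, and scores purely to illustrate the arithmetic; none of them are actual ratings.

    # Hypothetical ratings for one organization on three differently scaled sites.
    # Site names, scale ranges, and scores are invented for illustration only.
    ratings = {
        "Four-star site": {"score": 3, "low": 0, "high": 4},
        "Percent site": {"score": 85, "low": 0, "high": 100},
        "Ten-point site": {"score": 7, "low": 1, "high": 10},
    }

    def rescale(score, low, high):
        """Map a score onto a common 0-1 scale."""
        return (score - low) / (high - low)

    for site, r in ratings.items():
        print(f"{site}: {rescale(r['score'], r['low'], r['high']):.2f}")
    # Four-star site: 0.75
    # Percent site: 0.85
    # Ten-point site: 0.67

Even then the comparison is only cosmetic, because the sites are measuring different things in the first place, which is the deeper problem.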

So, here it is.



Here is the description provided by GuideStar of the requirements to become a GuideStar Exchange Partner.

The GuideStar Exchange is an initiative designed to connect nonprofits with current and potential supporters. With millions of people coming to GuideStar to learn more about nonprofit organizations, the GuideStar Exchange allows nonprofits to share a wealth of up-to-date information with GuideStar's vast on-line audience of grantmakers and individual donors.

Exchange members are nonprofits that have updated their nonprofit reports to the fullest—sharing information, documentation, photos, and video with GuideStar's visitors.

Becoming a GuideStar Exchange member is free of charge. To join, an organization needs to update its report page, completing all required fields for membership. Here are some of the required fields:

• An independent audit of financial statements (organizations with total revenues greater than $1 million)
• A GuideStar Basic Financial Statement (organizations with revenues less than $200,000)
• An independent audit of financial statements (community foundations with assets of $5 million or more)
• An independent review of financial statements (community foundations with assets less than $5 million)

** Interestingly, like the other sites, the focus appears to be on financial information.

Rating Grameen Bank: The Impact of a Multi-dimensional Rating System

The Battle for the Soul of the Nonprofit Sector mentions how unique the nonprofit sector is in its ability to address many social problems (p. 5), a fact that all of us have come to learn as MPA students. The article uses Grameen Bank, a nonprofit known as a bank for the poor, as an example of a unique nonprofit. Ironically, over the break I read the blog of an MPA student who interned at Grameen Bank. If you are interested in reading his blog, here is the link: http://gbinternship.blogspot.com/search?updated-max=2010-12-06T18%3A24%3A00-08%3A00&max-results=7


The intern described his experience as one that opened his eyes to the inner workings of the bank. He believed he had seen enough to develop strong criticisms of Grameen Bank's operational practices. For instance, he suspected that center managers were performing their audits as a formality, without following the prescribed methods they had been trained in. While this is just one intern's critique of Grameen's operational practices, I can't help but wonder what impact a multi-dimensional rating system would have on Grameen Bank's current 4-star rating on Charity Navigator. Would it add pressure on Grameen Bank's center managers to reevaluate their audit methods?

Charity Navigator experience

Well, I decided to be brave and begin the evaluation process on Charity Navigator. My two organizations are very interesting, and it appears as if one is going to receive a far better rating than the other. I'm unsure whether this pairing was done by design or not. However, if some of you haven't looked at the tool yet, here are a few things to watch for:

1. I found it difficult to be confident in quite a few of my answers. I think the "confidence percentage" option at each rating question is a very good tool, because I was rarely at 100%.

2. It seems as if quite a few of the answers to the questions can come from the same link. Again, I'm not sure if this is a good or a bad thing, but I found myself pasting the same link in for each question.

3. There seems to be heavy reliance on the 990 form. Simply inputting numbers from the form is actually the easy part of the evaluation.

4. For me, the 5-minute rule came into effect quite often on some of the questions. I still haven't determined whether that is because some of the questions are hard to understand in the first place or because the websites are lacking that information. I think you guys will get the same feeling.

All in all, it has been a nice experience using Charity Navigator. It's very easy to use, and that is very important. Good luck to everyone as you rate your charities in the next week.

Saturday, March 26, 2011

Ode to Charity Evaluation

Oh multitude of non-profit organizations,
services for homeless and centers for education,
They say we should evaluate you
but are telling us your financials just won't do.

There seems to be no effective way
to evaluate the use of money we pay.
Do we count the number of programs you run
or how many kids come to you for their fun?

Should we check your 990 even though it's confusing
but even then, it's mostly financials we are using.
We could look into your annual report
but the numbers there you can also distort.

Maybe we should look at practices or collaboration
or how you compare with others around the nation.
We can look at how you understand diversity
or have you evaluated by a university.

We can make graphs or charts
or look at where your impact starts.
Or maybe whether or not you're "green"
because that is also a popular scene.

There seems to be no best way
to evaluate all of the charities today.
Until we are able to better understand,
it seems financial reports maintain the upper hand.

After looking at the articles for this week, it seems they offer more information and ideas on the same question: how can we effectively evaluate the work of non-profit organizations, and is it possible at all? In searching for different kinds of charity evaluations, I stumbled upon the website of a foundation that uses a variety of approaches to evaluate several charities in the Seattle area. The site allows donors to select an area of interest, such as education, the arts, community development, or basic needs, and investigate organizations that are involved in those activities or services. The foundation does not give the organizations a scored rating like Charity Navigator does, but it does provide a program overview, limited financial statements (perhaps a good thing?), and then an evaluation. In the evaluation section, several facets are listed for each organization, although no organization is evaluated on all of them. These facets include an explanation of how an organization incorporates best practices, collaborates with others, demonstrates proven success, maintains financial health, appreciates diversity and cultural competency, and engages in strong leadership and sustainability practices. I liked this approach because, although organizations are not evaluated on all facets (most will have 3 or 4), there is information about how the organization responds to each element, rather than a single number that defines success. In all of the reading we are doing, it seems there are several approaches to evaluating effectiveness, but no clear way of doing so that works for non-profits of different types and sizes, hence the lovely piece of poetic art above.