The weaknesses they identified included:
• They rely too heavily on simple analyses and ratios derived from poor-quality financial data
• They overemphasize financial efficiency while ignoring the question of program effectiveness
• They do a poor job of conducting analysis in important qualitative areas such as management strength, governance quality, and organizational transparency
The authors also stated that a more effective rating system would include the following elements:
• Improved financial data that is reviewed over 3 to 5 years and put in the context of narrowly defined peer cohorts
• Qualitative evaluation of the organization’s intangibles in areas like brand, management quality, governance, and transparency
• Some review of the organization’s program effectiveness, including both qualitative critique by objective experts in the field and, where appropriate, “customer” feedback from either the donor’s or the aid recipient’s perspective
• An opportunity for comment or response by the organization being rated
Obviously, as we all know and as the authors admitted, incorporating all of these elements would be time-consuming and difficult. Regardless, the authors provide some interesting suggestions on how to go about implementing them (CLICK HERE – Read it, I promise it isn’t that long.)
I also wanted to include the charts from the article, in case you didn’t have a chance to read it. The first is the authors’ analysis of the rating organizations, including Forbes; here there is a great deal of variety.
The second is a comparison of the ratings given to the seven organizations that received the greatest amount of support following the tsunami of 2004.
* If you click on the charts, they are bigger and easier to read. *
Hi, Casondra,
I would definitely agree with you on this point. I have been thinking about the same thing lately. “Watchdogs” such as Charity Navigator are a great idea for evaluating the performance of nonprofits from a third party’s point of view, which can be more objective and fair. However, since the standards for judging the performance of nonprofits vary, it is hard for “watchdogs” to make a sufficiently plausible judgment. While this seems to be an intrinsic problem, I doubt whether the idea of “rating the raters” is a good solution. Each rater has its own standards, which are better in some aspects and weaker in others, so it can be hard to rate the raters: what criteria would be used to judge them? And will there then be “rating those who rate the raters”?
I would prefer to build a transparent platform for raters and the organizations being rated to communicate. Nonprofits that believe they have been misjudged because of the wrong standards could then argue their case and give feedback, and raters, on the other hand, would be able to alter their ratings if they are persuaded.
Thanks for posting this, Casondra. I think it is important to compare these types of websites and see what each one is capable of, because, as we know, none of them is completely ideal. The article helps us see what is missing from the full picture of an organization's effectiveness. I agree that a very important aspect missing from the sites is a qualitative description of the organizations' faults and successes. If the administrators of the websites take this and the other improvements into account, donors will be able to gain a better understanding of the organizations. I think the changes will have to be made incrementally, though, because they really would be a major draw on time and resources.