Friday, March 23, 2012

Ethics and Community Psychology


We are all enthusiastic about the current push to identify the competencies we bring to our work in communities and to learn how best to train students to acquire them. Did you know that one of the ethical principles of psychologists is to be competent? Standard 2 of the ethical principles is “Competence”: a psychologist is to practice only within the boundaries of his or her competence, is obligated to acquire training to become competent, must maintain competence, etc. See http://www.apa.org/ethics/code/index.aspx.

One of the proposed competencies is evaluation. The importance of this competency was brought home to me recently when I reviewed an evaluation of a civic organization conducted by a company that claims competence in conducting evaluations. In brief, like so many nonprofits struggling to survive these days, the civic organization’s Board of Directors was considering eliminating a ten-year-old program (call it “S”) because it was not financially self-sustaining. Prior large financial donations from corporations and foundations had faded away, so the parent nonprofit organization was subsidizing program S to keep it going. As overall resources tightened, the Board decided to rethink its subsidy of program S and, therefore, to question the value of S’s brand. In this weak economy, I presume that many organizations are similarly scrutinizing their programs, divisions, etc., to excise the weaker units.

The Board of Directors contracted with a local company (a full-service management company, in existence 15 years, with contracts ranging from the federal government down to small community-based organizations) to conduct an evaluation of program S. On paper, the company appeared competent. It defined evaluation as: “… a process that critically examines a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions. Evaluation is essentially the systematic investigation of the merit, worth, or significance of any object, activity, or program.” So far, so good. The materials go on to assert that a great evaluation should employ “rigorous methodology” and should be “inclusive,” “complete,” take in “diverse viewpoints,” etc.

And yet … I noted that the company’s content-filled website does not list the number of its employees, nor does it reveal a single employee’s name, expertise, or background.
The sum total of the “data” for the completed “evaluation” came from one 90-minute focus group involving seven participants in the program (out of a pool of over 200). The final report was presented as a PowerPoint (only) and was wholly nonanalytic. Much time went into the company learning about program S and into recording and transcribing the focus group proceedings. The company claimed to have used qualitative analysis software and to have “developed codes” (the codes being “strengths,” “challenges,” and “suggestions”). Yet despite all the accoutrements of a “rigorous methodology,” the body of the evaluation consisted merely of somewhat random quotes from the focus group participants that dealt with trivial or person-specific issues or were simply trite. That suggests to me that the questions posed were not sufficiently incisive and that the personnel conducting the focus group were not sufficiently skilled at guiding the discussion to probe more deeply. In any case, this evaluation can be characterized by the Gertrude Stein quote: “there is no there there.”

Further, several of the negative quotes were so specific that the nonprofit’s staff could easily identify the person making the comment. (The staff had recruited the focus group participants.) For example, one person is quoted as saying that the staff had never taken him/her up on his/her volunteer offer to do x. I learned that the focus group participants had not been informed that anything they said could be quoted verbatim, although the quotes were not attributed by name.

The company’s final recommendations were out of touch with the organization’s reality; e.g., one recommendation was to hire more staff to organize volunteers, even though the organization is operating in financial crisis mode now and for the foreseeable future. The “next step” was to use the focus group results to “rebrand” program S with enhancements, even though those results were inadequate to inform any substantive or feasible change. Needless to say, the organization (which had invested scarce resources in this effort) was unimpressed. The evaluation did not assist the Board of Directors in exercising its responsibility. Another program evaluation thrown in the trash.

We can (and must) do better in terms of the competence we bring to our work.

Gloria Levin

1 comment:

  1. Great post, Gloria! I had the same experience with two of my clients. Both had used the same evaluator before I came on board. To get me up to speed on what had been done, they shared some prior reports. I was shocked at the inappropriate graphs, the volume of information that served no purpose, text that did not match the graphs (what was the real finding?), and other quality problems. This person is a university professor who sits on national boards and holds multiple contracts.

    Similarly, I saw a study done for a nearby city that had an N of 24 and made sweeping generalizations about the findings based on percentages. From these large differences, the authors made numerous recommendations for community action. But were these the right actions?

    Which leaves the question: when we find these things, how do we diplomatically point them out without appearing to have an agenda other than ethics?

    Susan Wolfe
