Saturday, March 31, 2012



Arts-Based Methods and their Value to Community Psychology

 Hello again! As you may remember from our last post, we (Katherine and Kyrah) are writing a series of posts on arts-based research (ABR) approaches to community-based participatory research (CBPR). In this post we highlight some arts-based approaches to CBPR, as well as the potential of these methods.

        ABR approaches incorporate music, dance, photography, visual art, and theatre into the research process through data collection, analysis, or dissemination (Leavy, 2009). Incorporating such methods into a CBPR framework creates a unique research agenda. Such an agenda reflects feminist values (using art to understand a person’s perspective or viewpoint), promotes a non-hierarchical structure to the research process (co-creating art to be used as data that will directly influence social structures or policies), challenges the status quo of conventional research paradigms, and allows for a unique approach to understanding community context through culturally relevant channels of communication (Haraway, 2001; Harding, 1987; Leavy, 2009; Rappaport, 2005; Wallerstein & Duran, 2003). As is the case with a great deal of community psychology work, there is also an explicit focus on social justice (Rappaport, 2005).

       Photovoice is one arts-based participatory research method. It was introduced by Wang and Burris (1994) to understand women’s health issues in rural China. Photovoice projects are almost entirely participant driven, from developing the research questions and analyzing the data to disseminating what was learned. Data collection begins by presenting participants with simple, concise questions, called framing questions, which reflect the project theme. Participants then go through multiple rounds of taking photographs and writing narratives for each framing question, and these are discussed during in-person meetings. Participants also help to qualitatively analyze the data before putting together a public outreach tool (Wang, 1999), which tends to take the shape of a photography exhibit or digital story presenting the photos, narratives, and themes that emerged throughout the project. This culmination is particularly emancipatory: the audience, which is decided on by the participants, tends to include community leaders who can learn from the participants’ voices and implement policies and regulations that reflect perspectives that often go unheard.

       Ethnodrama, ethnotheater, and performance ethnography are other approaches to arts-based participatory research (Leavy, 2009). These approaches are sometimes viewed as distinct, but in general they refer to the process of analyzing, translating, and disseminating research through dramatic performance (Leavy, 2009). Data may have been collected previously through interviews, focus groups, field notes, or other conventional methods, and writing a script based on the findings also allows the researcher to create composite sketches of specific themes found throughout the data (Leavy, 2009). Performance ethnography also creates a unique opportunity for the researcher to recognize and assert their role. As many feminist scholars have suggested, deciding the researcher’s role in the performance, and how that role interacts with the script or other performers, establishes reflexivity as an integral part of the research process (Leavy, 2009).
        Overall, arts-based scholars have explored methods related to narrative analysis, poetry, music, performance, dance/movement, and visual arts (Leavy, 2009). These approaches may be combined (for example, using music and performance together) or supplemented by conventional research methods.

       We believe that there is a special place for ABR in community psychology. We want to note that arts-based research is not new; it is a set of practices that has been used for years to capture the experiences of individuals and communities, and it has been gaining popularity as a form of qualitative research among psychologists. Because arts-based methods are action oriented and participatory in nature, many psychologists have found them useful for engaging individuals and groups in the process of addressing the needs and strengths of a community. ABR creates an opportunity for researchers and practitioners to use diverse, creative strategies for conducting and evaluating research. There is no doubt that community psychologists understand the importance of social context, and arts-based practices can be a powerful tool for identifying and analyzing the contexts in which people live. It is a tool that can be crafted by the community and used as a sustainable catalyst for change.

         While there are challenges to using arts-based approaches (e.g., securing community buy-in, the time they require), the benefits of such work can be far-reaching. It is common for researchers to use arts-based methods (e.g., Photovoice) to supplement an overarching research agenda. Consider a project addressing gang violence among adolescents. Conventional methods (e.g., surveys, neighborhood data) are important for assessing the context and needs of a community. Now imagine that teens are also actively involved in photographing and writing narratives about their community, and that they present their findings to key stakeholders along with suggestions for what to do about gang violence. The point is that arts-based methods can be a viable tool for citizen participation, for using and building on community strengths, and for empowerment and sustainable change. In essence, community psychology research and practice can benefit from arts-based approaches.

**This post was written by Katherine Cloutier from Michigan State University, and Kyrah Brown from Wichita State University.

References
Haraway, D. (2001). Situated knowledges: The science question in feminism and the privilege of partial perspective. In M. Lederman & I. Bartsch (Eds.), The gender and science reader (pp. 169-188). London: Routledge.
Harding, S. (1987). Introduction: Is there a feminist method? In S. Harding (Ed.), Feminism and methodology: Social science issues (pp. 1-14). Bloomington: Indiana University Press.
Leavy, P. (2009). Method meets art: Arts-based research practice. New York: The Guilford Press.
Rappaport, J. (2005). Community psychology is (thank God) more than science. American Journal of Community Psychology, 35(3/4), 231-238. doi: 10.1007/s10464-005-3402-6
Wallerstein, N., & Duran, B. (2003). The conceptual, historical, and practice roots of community based participatory research and related participatory traditions. In M. Minkler & N. Wallerstein (Eds.), Community-based participatory research for health (pp. 27-52). San Francisco: Jossey-Bass.
Wang, C. C. (1999). Photovoice: A participatory action research strategy applied to women's health. Journal of Women's Health, 8(2), 185-192.
Wang, C., & Burris, M. A. (1994). Empowerment through photo novella: Portraits of participation. Health Education & Behavior, 21(2), 171-186. doi: 10.1177/109019819402100204


Friday, March 23, 2012

Ethics and Community Psychology


We are all enthusiastic about the current push to identify the competencies we bring to our work in communities and to learn how best to train students to acquire them. Did you know that one of the ethical requirements of psychologists is to be competent? Standard 2 of the APA ethics code is “Competence”: a psychologist must practice only within the boundaries of his or her competence and is obligated to acquire the training needed to become, and remain, competent. See http://www.apa.org/ethics/code/index.aspx.

One of the proposed competencies is evaluation. The importance of this competency was brought home to me recently when I reviewed an evaluation of a civic organization conducted by a company that claims competence in conducting evaluations. In brief, like so many nonprofits struggling to survive these days, the civic organization’s Board of Directors was considering eliminating a ten-year-old program (call it “S”) because it was not financially self-sustaining. Large financial donations from corporations and foundations had faded away, so the parent nonprofit organization was subsidizing program S to keep it going. As overall resources tightened, the Board decided to rethink its subsidy of program S and to question the value of S’s brand. In this weak economy, I presume that many organizations are similarly scrutinizing their programs, divisions, etc., to excise the weaker units.

The Board of Directors contracted with a local company (a full-service management company, in existence 15 years, with contracts ranging from the federal government down to small community-based organizations) to conduct an evaluation of program S. On paper, the company appeared competent. It defined evaluation as: “… a process that critically examines a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions. Evaluation is essentially the systematic investigation of the merit, worth, or significance of any object, activity, or program.” So far, so good. The materials go on to assert that a great evaluation should employ “rigorous methodology” and should be “inclusive,” “complete,” take in “diverse viewpoints,” etc.

And yet … I noted that the company’s content-filled website does not list the number of its employees, nor does it reveal a single employee’s name, expertise, or background.
The sum total of the “data” for the completed “evaluation” came from one 90-minute focus group involving seven participants in the program (out of a pool of over 200). The final report was presented as a PowerPoint (only) and was wholly nonanalytic. Much of the company’s time went into learning about program S and into recording and transcribing the focus group proceedings. The company claimed to have used qualitative analysis software and to have “developed codes” (the codes being “strengths,” “challenges,” and “suggestions”). And despite all the accoutrements of a “rigorous methodology,” the body of the evaluation consisted merely of somewhat random quotes from the focus group participants that dealt with trivial or person-specific issues or that were simply trite. That suggests to me that the questions posed were not sufficiently incisive and that the personnel conducting the focus group were not sufficiently skilled at guiding the discussion to probe more deeply. In any case, this evaluation can be characterized by the Gertrude Stein quote: “there is no there there.”

Further, several of the negative quotes were so specific that the nonprofit staff could easily identify the person making the comment. (The staff had recruited the focus group participants.) For example, one person is quoted as saying that the staff had never taken him/her up on his/her volunteer offer to do x. I learned that the focus group participants were never informed that anything they said could be quoted verbatim, even though the quotes were not attributed by name.

The company’s final recommendations were out of touch with the organization’s reality; for example, one recommendation was to hire more staff to organize volunteers, even though the organization is operating in financial crisis mode now and for the foreseeable future. The proposed “next step” was to use the focus group results to “rebrand” program S with enhancements, even though those results were inadequate to inform any substantive or feasible change. Needless to say, the organization (which had invested scarce resources in this effort) was unimpressed. The evaluation did not assist the Board of Directors in exercising its responsibility. Another program evaluation thrown in the trash.

We can (and must) do better in terms of the competence we bring to our work.

Gloria Levin

Thursday, March 15, 2012

What do Community Psychologists do?

So I missed this meme by about a month, but while it was going strong my colleague in the Community & Cultural Psychology program here at UH Manoa created this tribute to what the world thinks a "community psychologist" does.

I think she nailed it. What do you think? How do you explain what you do, and does anyone ever really get it right?



Gina Cardazone, courtesy of Ashley Anglin 
University of Hawaiʻi at Mānoa

Thursday, March 8, 2012

Psychology of Climate Change: Communicating How We Think


Community psychologists are very interested in understanding cultural ways of knowing. The concept of ‘ways of knowing’ is meant to indicate that our outlook on life is greatly impacted by how we are raised in our culture—the things we think about, how we think about these things, different ways we define a problem, different solutions we might come up with, etc. That is not to say that everyone within a culture thinks exactly the same.

A quick exercise:
  1. If you are American, think for a minute and define “American culture.” Do you fit this definition?
  2. Now pretend you are not American, and define “American culture.” How well do you fit this definition?
[If you are not American, substitute your own country or culture of your choice.]

As an American, I can think of a zillion ways in which I am not your average American. But when I place myself in the shoes of an outsider looking in, oh yeah, I can see a zillion ways in which I’m a pretty typical American. These are different ways of knowing. In studying cultural ways of knowing, patterns emerge. Yes, huge variations also exist, but cultural ways of knowing can help us to understand both the patterns and the variations. It’s all a matter of the outlook you have, and something psychologists like to call “the problem definition.”
One of my favorite examples of cultural divergences in the problem definition comes from a study done in the late 1960s. Greenfield (1997) summarized it this way:

“Cole, Gay, Glick, and Sharp (1971) took an object-sorting task to Liberia, where they presented it to their Kpelle participants. There were 20 objects that divided evenly into the linguistic categories of food, implements, food containers, and clothing. Instead of doing the taxonomic sorts expected by the researchers, participants persistently made functional pairings (Glick, 1968). For example, rather than sorting objects into groups of tools and foods, participants would put a potato and a knife together because “you take the knife and cut the potato” (Cole et al., 1971, p. 79). According to Glick, participants often justified their pairings by stating “that a wise man could only do such and such” (Glick, 1968, p. 13). In total exasperation, the researchers “finally said, ‘How would a fool do it?’ The result was a set of nice linguistically ordered categories—four of them with five items each” (Glick, 1968, p. 13). In short, the researchers’ criterion for intelligent behavior was the participants’ criterion for foolish; the participants’ criterion for wise behavior was the researchers’ criterion for stupid.”

In this example, researchers and participants were all clearly able to understand and demonstrate all possible solutions to the task. But because of their different cultural ways of knowing, they tended to define the problem and the solution in different ways. Their initial assumptions were different, and so they tended to draw different conclusions about what they should do.

Another quick exercise:
  1. Google ‘climate change’ and read any article or watch any video clip that pops up.
  2. Scroll down to the comments and read them until you feel your brain turn to pudding.

You may have noticed that people think about climate change very differently. We have different ways of defining the problem (e.g., global threat of environmental catastrophe, liberal conspiracy to rule the earth, real thing that’s happening but no biggie, etc.). With each of these problem definitions comes a separate set of solutions (e.g., large-scale mitigation and adaptation efforts, decentralize power and let the free market reign, do nothing because it’ll sort itself out, etc.).

There is actually a scientific consensus on climate change that has been consistent for multiple decades. Plenty of uncertainties remain, including how bad the problem is going to be depending on what people decide to do. Emit more carbon? Big problem. Lower emissions? Less big problem. But there is so much misinformation out there, and so much poorly understood information, that climate scientists are beginning to struggle with questions about communicating their research and findings to the general public. How best to translate science jargon into human speak? Just as important, scientists are realizing that, as scientists, they make a series of assumptions about knowledge, research, and data that non-scientists don’t necessarily make. Their ways of knowing are different. Much like the researchers trying to get the Kpelle to categorize objects in a specific way, climate scientists would like non-scientists to understand their data in a certain way so as to come to the same conclusions.

Because of our history of work with culture and ways of knowing, community psychologists can contribute to climate change communication. A few examples have recently emerged in this corner of the psychology of climate change. The American Climate & Environmental Values Survey found a number of ways of knowing that influence Americans’ understanding of and response to climate change. For example, “We’re not ready to abandon the American Dream” (p. 16): communicating climate change research within a framework of sacrifice, without also focusing on the potential for positive development, turns Americans off. Also, Americans often respond to one of two types of morality: ‘we should address climate change because it’s the right thing to do,’ or ‘we should address climate change before disasters harm our environment, and thus harm us.’
Another recent report, How to Talk About Climate Change and Oceans: A FrameWorks Message Brief, specifically discusses the ways Americans are out of step with the scientific community on climate change. For example, “They remain woefully ignorant of how exactly global warming works” (p. 1, italics in original) even though they have plenty of examples of it happening at home and abroad. Many Americans need a foundational tutorial on how the climate normally operates in order to understand how and why it might be changing. But take care in communicating this, because “When scientific data is presented in ways that seem exaggerated or overstated, Americans become more skeptical of claims about the origins of the problem and more likely to believe that the problem could be natural and not anthropogenic” (p. 2, italics in original).

Climate change is a broad and enormously complex field. Community psychologists can contribute positively to it by exploring people’s ways of knowing when it comes to the climate sciences. As mentioned before, huge variations will always exist within a culture, even within our cultural patterns. But if we work toward an understanding of the climate change problem definition, we can begin to agree on action steps leading to solutions.

Greenfield, P. M. (1997). You can’t take it with you: Why ability assessments don’t cross cultures. American Psychologist, 52(10), 1115-1124.
Citing:
Cole, M., Gay, J., Glick, J., & Sharp, D. W. (1971). The cultural context of learning and thinking. New York: Basic Books.
Glick, J. (1968, February). Cognitive style among the Kpelle of Liberia. Paper presented at the meeting on Cross-Cultural Cognitive Studies, American Educational Research Association, Chicago.

Kati Corlew, M.A.
University of Hawaiʻi at Mānoa