
Programs to Reduce Teen Dating Violence and Sexual Assault: Perspectives on What Works

by Arlene N. Weisz and Beverly M. Black
  • ISBN13:

    9780231134538

  • ISBN10:

    0231134533

  • Format: Paperback
  • Copyright: 2009-05-01
  • Publisher: Columbia University Press

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.



Summary

Arlene Weisz and Beverly Black interview practitioners from more than fifty dating violence and sexual assault programs across the United States to provide a unique resource for effective teen dating violence prevention. Enhancing existing research with the shared wisdom of the nation's prevention community, Weisz and Black describe program goals and content, recruitment strategies, membership, structure, and community involvement in practitioners' own words. Their comprehensive approach reveals the core techniques that should be a part of any successful prevention program, including theoretical consistency, which contributes to sound content development, and peer education and youth leadership, which empower participants and keep programs relevant.

Weisz and Black show that multisession programs are most useful in preventing violence and assault, because they enable participants to learn new behaviors and change entrenched attitudes. Combining single- and mixed-gender sessions, as well as steering discussions away from the assignment of blame, also yields positive results. The authors demonstrate that productive education remains sensitive to differences in culture and sexual orientation and includes experiential exercises and role-playing. Manuals help in guiding educators and improving evaluation, but they should also allow adolescents to direct the discussion. Good programs regularly address teachers and parents. Ultimately, though, Weisz and Black find that the ideal program retains prevention educators long after the apprentice stage, encouraging self-evaluation and new interventions based on the wisdom that experience brings.

Author Biography

Arlene N. Weisz joined the faculty of the School of Social Work at Wayne State University in 1995, after a social work career that included practice with adolescents. She researches adult and adolescent partner violence and sexual assault prevention and has co-coordinated and evaluated dating violence and sexual assault prevention programs for middle-school youth and university students. Beverly M. Black is a professor in the School of Social Work at the University of Texas at Arlington. For several years she co-coordinated and evaluated dating violence and sexual assault prevention programs for adolescents and university students in Detroit, Michigan. She conducts research and publishes on issues related to domestic violence, sexual assault, dating violence, and prevention programming.

Table of Contents

Acknowledgments, p. ix
Introduction: Goals for the Book, p. 1
Project Design and Methodology, p. 10
Theoretical Considerations, p. 16
Program Goals, p. 25
Recruitment Issues for Prevention Programs, p. 39
Membership, p. 61
Structure, p. 74
Program Content, p. 96
Diversity Issues in Prevention Programs, p. 126
Curricula Development, p. 139
Peer Leadership Programs, p. 150
Parental, School Staff, and Community Involvement, p. 176
Evaluation of Prevention Programs, p. 195
Qualities of Ideal Prevention Educators, p. 213
Rewarding, Troubling, and Challenging Aspects, p. 233
Wish Lists, Strengths, and Trends, p. 261
Programs Interviewed, p. 277
Interview Guide, p. 281
References, p. 285
Index, p. 303
Table of Contents provided by Ingram. All Rights Reserved.

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts


INTRODUCTION

Goals for the Book

This book describes the successful programs that experienced practitioners have implemented to prevent dating violence and sexual assault among adolescents. The authors, experienced researchers and prevention-program coordinators themselves, have observed that prevention practitioners seldom publish their experience or research and often are too busy to attend conferences to share their knowledge. We therefore undertook to write this book so our readers can learn from experienced presenters of teen dating violence and sexual assault prevention programs across the United States. As the book describes in detail, we interviewed experienced practitioners across the country, with the goal of learning as much as possible about prevention programming. We also conducted an extensive review of the published literature relevant to these programs, so that we could present both practice and published wisdom.

Our own work in youth dating violence and sexual assault prevention began more than ten years ago. We coordinated a prevention program for middle school youth in Detroit over a five-year period. We offered this program primarily in a charter school setting, with a brief change of venue to a youth program in a large church. Beverly Black went on to coordinate a prevention program in public middle schools in Detroit for another four years. We both have also worked with adolescents in other settings.

Working with prevention programs taught us a great deal but also raised many questions that were not answered in the literature or at conferences. Although prevention programs across the United States were doing excellent work with adolescents, only a few published their experiences or presented them at meetings. As a result, we developed a plan to interview prevention practitioners across the country. It was a privilege to talk with these hard-working, thoughtful people, and we are excited to share their thoughts and experiences with our readers.

In this chapter we discuss why practice wisdom, along with research studies, can make an important contribution to continuing and improving prevention programs. We also address the strengths and limitations of both empirical evaluations of prevention programs and the literature about practitioners' views on research.

Prevalence of Dating Violence and Sexual Assault among Adolescents

Before our first prevention program, we talked with staff members of public schools in our city and learned that the teachers and counselors who worked closely with youth every day were well aware that many adolescents experience dating violence and sexual assault. People who work with adolescents often do not need research to tell them that this is a common and serious problem among their clients or students. At the same time, research findings support practitioners' concerns that dating violence and sexual assault are serious societal problems.

Studies in the U.S. have found that between 11% and 59% of high school students have experienced dating violence (Bergman 1992; Centers for Disease Control and Prevention [CDC] 2003; Foshee et al. 1996; Malik, Sorenson, and Aneshensel 1997; Molidor and Tolman 1998; Silverman et al. 2001) and that dating violence impacts youths' physical and psychological well-being (Callahan, Tolman, and Saunders 2003; Silverman et al. 2001). Data also suggest that adolescent victims of abusive relationships may carry these abusive patterns into future relationships (Smith, White, and Holland 2003). Although definitions of sexual assault in published studies vary, these studies report that between 9% and 18% of female adolescents have been sexually victimized by a dating partner (Foshee et al. 1996; Molidor and Tolman 1998; O'Keefe and Treister 1998). The CDC's Youth Risk Behavior Surveillance System (YRBSS) (CDC 2008), which assesses risk behaviors among adolescents across the United States, revealed that over 7.8% of high school students reported that they had been physically forced to have sex when they did not want to.

Some literature on dating violence reports equal rates of perpetration by girls and boys (Halpern et al. 2001; Molidor and Tolman 1998). Several studies suggest, however, that dating violence is more frightening, and often more injurious, for girls than for boys (Bennett and Fineran 1998; Molidor and Tolman 1998). Foshee et al. (1996) found that 70% of girls and 52% of boys who were abused by a dating partner reported an injury from an abusive relationship. Literature on sexual assault continues to report that males are more likely than females to be the perpetrators (Foshee et al. 1996; Jezl, Molidor, and Wright 1996; Loh et al. 2005).

Importance of Primary Prevention

During the last fifteen to twenty years, youth dating violence and sexual assault prevention programs have been spreading across the United States. Currently the CDC's Injury Research Agenda and Healthy People 2010 place a high priority on the evaluation of programs that intervene before violence occurs, specifically mentioning the reduction of intimate partner and sexual violence. Other federal agencies, such as the National Institute of Justice, and private organizations, such as the Liz Claiborne Foundation and the Robert Wood Johnson Foundation, have focused on adolescent dating violence and sexual assault. Prevention programs that target youths aim to engage in "primary prevention," reaching the target audience before any violence or sexual assault occurs (Wolfe and Jaffe 2003). This contrasts with "secondary prevention programs" that focus on selected high-risk groups, and "tertiary prevention programs" that attempt to minimize the deleterious effects of violence that has already occurred (Wolfe and Jaffe 2003). Focusing on youth for primary prevention efforts seems appropriate, as younger people are less likely to have been dating or to have already been victims of physical or sexual assaults associated with dating and male-female social interaction.

Strengths and Limitations of Empirical Program Evaluation

Empirical evaluation of youth dating violence and sexual assault prevention programming is essential, for, without such evaluations, there is a risk of spending a great deal of energy and money on programming that may well prove ineffective. Relying solely on practitioners' views about the effectiveness of their own work may, of course, lead to inaccurate assessments, as there are no "checks and balances" and "overgeneralization" may result (Padgett 2004). Because practitioners view certain techniques as effective, they may interpret all responses to those techniques as positive and overlook negative results. Nevertheless, quantitative evaluation approaches have limitations, as we summarize below, and so practitioners' narratives and views can contribute substantially to the continuing development of prevention programs.

Advantages of Disseminating Practice Wisdom

Writers vary somewhat in their definitions of "practice wisdom," but most agree that it encompasses reflection and learning based on accumulated experience in practice (Dybicz 2004). Practice wisdom grasps the richness and variety of human situations that practitioners encounter, whereas quantitative research is sometimes limited in its capacity to illuminate the many complex variables that an ecological view (Bronfenbrenner 1977) of human interactions should take into account:

"One cannot both measure some social event clearly and yet grasp its dynamic complexity. It has been the tendency of empiricists to focus closely on specific client attributes without providing equivalent attention to the client and situation in totality." (Klein and Bloom 1995, 800)

According to Weick (1999), many practitioners feel that research does not capture the "messiness" of actual practice.

Practice wisdom can provide knowledge that goes beyond pure guesswork and bridges the gap between science and practice (Dybicz 2004; Klein and Bloom 1995). Klein and Bloom (1995) discuss how workers formulate and test tentative hypotheses based on their practice experience and the available scientific knowledge. Their work, therefore, is not haphazard, and their judgment and creativity include the coordination of positive unexpected events and enable them to react to crucial negative events that are difficult to measure. Patton (1990) stresses that case studies can provide meaningful depth and detail, including information about the context of a program. "For example," in the words of Patton (1990, 54), "a great deal can often be learned about how to improve a program by studying select dropouts, failure, or successes." Practice wisdom, therefore, can come from experienced practitioners regardless of whether their programs are exemplary.

Some investigators argue that the interactions between service workers and their clients may be valuable and meaningful in ways that are impossible to quantify (Imre 1982). For example, the personal warmth, charisma, or messages of some prevention program presenters may have an unquantifiable influence on young people. The non-measurable aspects of programs may have an impact long after researchers might have been able to measure relatively long-term changes. Moreover, quantitative research may be unable, thus far, to fully specify practitioners' interactions with unique human beings (Klein and Bloom 1995). Padgett (2004, 9) notes that qualitative methods are attractive, because they might grasp the "ever-changing, messy world of practice." Weick (1999, 330) reminds us that even a so-called hard science like physics, for example, now recognizes the importance of "complexity and indeterminacy." Patton (1990) asserts that researchers sometimes lose specificity and detail in quantitative data, because these data group respondents together. Qualitative researchers, on the other hand, can gain rich details from interviews.

Surveys of social work students indicate that "learning derived from significant others" (Fook 2001, 126) was their most significant experience in learning how to practice. Similarly, social workers report that they prefer consultation with expert colleagues to learning from published research (Mullen and Bacon 2004). Thus practice wisdom in the literature potentially allows practitioners to learn from experienced individuals. By presenting wisdom from a variety of practitioners, we offer our readers an array of ideas that might increase the effectiveness of their practice. We are not claiming that one practitioner's approach is better than another, and so this book does not represent "authority-based claims," which have been criticized by some evidence-based practitioners (Gambrill 1999).

Klein and Bloom (1995, 804) describe how practice wisdom "is most fully developed" when practitioners' experiential learning can be "articulated in open communication with other professionals," and they believe that communicating to others is essential for knowledge to grow. Thus, some of those who participated in interviews for this book were helped in the process to articulate their practice wisdom. Most of the interviewees were enthusiastic and described the questions as "thought-provoking and thorough." Another benefit of interviewing is that it is a holistic approach (Patton 1990); although we divided interviewees' comments into sections in order to write about them, the answers reflect the interviewees' awareness of their holistic, interconnected experiences in prevention work.

Consistent with the idea that one does not have to choose between empirical evidence and practice experience, this book includes both. At the same time that some practitioners use findings from quantitative research to improve their practice, they also respect the practice wisdom that their programs have accumulated.

Limitations of Prevention Programming Research in General

Although the literature suggests that prevention programs for adolescents can be effective (Durlak 1997; Nation et al. 2003), it is not easy to demonstrate these successes through empirical research. Some investigators note that few rigorously evaluated prevention projects have shown large effect sizes (making a noticeable difference regardless of statistical significance) (Tebes, Kaufman, and Connell 2003). There are multiple reasons for this difficulty, including the lack of consistent standards for determining effectiveness (Nation et al. 2003), as well as the difficulty of measuring behavioral changes.

A criticism of positivistic evaluation is that it is not always clear that statistically significant changes are truly meaningful (Edleson 1996). Researchers disagree as to which evidence proves that a program is worth adopting. Experts suggest that pre-post changes for a single sample are not necessarily good indicators and tend to "overestimate the effects of interventions" (Biglan et al. 2003, 435). The most rigorous prevention experts believe that programs should not be adopted without several randomized or time-series studies. In a randomized study, adolescents would be randomly assigned to intervention and non-intervention groups to avoid potential bias. Researchers also warn that publications about programs that were shown to be effective in one setting should include the caveat that they may not work in different settings or might even be harmful (Biglan et al. 2003). Evaluations should be done with numerous populations, various settings, and differing time frames. Moreover, it might require years to determine the parts of programs that would be effective under varying conditions.

Researchers consider randomized trials to be the gold standard of evaluation research (Biglan et al. 2003), but controlled scientific evaluations are difficult to replicate for several reasons. They are expensive (Nation et al. 2003), and the evaluations place a high priority on the consistent, measurable specification of concepts and processes that practitioners often find difficult to duplicate.

In recommending adoption of an empirically demonstrated prevention procedure, program developers often "fail to adequately take into account the local conditions of a given experiment" (Tebes, Kaufman, and Connell 2003, 45). Even when research shows that a program is effective, the published report may not give enough information to determine whether the program would work well with other populations. A program with empirically validated effectiveness "may have no beneficial effect or even a harmful effect when it is applied in a new setting, provided to a different population, or provided by a new type of provider" (Biglan et al. 2003, 6). Some investigators consider this to be a serious limitation in prevention programs, where "few propositions hold across all situations. To say that a program has been found to be 'effective' is to say very little unless one specifies what the program consisted of, for whom it made a difference, under what conditions" (Reid 1994, 470). Silverman (2003) asserts that prevention interventions addressing behavioral dysfunctions must include awareness that these behaviors are in a constant state of evolution as preventionists try to respond to changing transactional and ecological elements. So although replication is valuable for both practitioners and researchers, it becomes difficult to replicate and evaluate programs that respond to many changing elements.

Published outcome-focused empirical evaluations rarely describe the content of the programs in detail and almost never identify the most effective aspects of these programs. This drawback makes it hard for practitioners to learn from such evaluations. Rones and Hoagwood (2000, 238) note that studies leave many "unanswered questions about the active ingredients that lead to successful program implementation and dissemination."

Nation et al. (2003) suggest that practitioners are looking for practical information about what works, whereas funders are looking for evidence-based information. Although research-based programs are usually much too expensive for local programs to replicate, the general-effectiveness principles gained from research might help local programs distill and implement those elements that are cost-effective.

Strengths and Limitations of Evaluations of Adolescent Dating Violence and Sexual Assault Prevention Programs

Some well-executed evaluations of prevention programs have been published (Avery-Leaf et al. 1997; Foshee et al. 2004). However, reviewers have noted limitations in even the most rigorous studies (Cornelius and Resseguie 2007; Meyer and Stein 2004), which in itself indicates how difficult it is to design and empirically evaluate a good intervention. Acosta et al.'s (2001) review of the literature about youth violence, covering articles from 1980 to 1999, found that prevention articles were less common than articles on assessment and treatment. In addition, only 5 of 154 articles on prevention were about preventing dating violence.

The literature contains only a few convincing empirical evaluations of dating violence and sexual assault prevention programs for adolescents. Most published evaluations of sexual assault prevention programs concern college students. Evaluated prevention programs for younger children and adolescents often cover sexual assault in the context of general violence prevention (Shapiro 1999). Because of developmental differences between college students and middle or high school students, much of the research on college prevention programs only suggests the approaches that might work with younger populations.

Although "most domestic violence programs in the United States have set prevention of domestic violence as a part of their missions" (Edleson 2000), these efforts are usually under-funded. Meyer and Stein (2004) reviewed school-based prevention programs across the United States and found that they "were not very effective at preventing relationship violence in the short term, and less effective in the long term" (198). Knowledge gains about dating violence were the most common improvements resulting from programs, but Meyer and Stein questioned "how knowledge about relation­ship violence translates into actual violent behavior and the likelihood that one will engage in such behavior" (ibid., 201).

Begun (2003, 643) also asserts, "Strong and convincing evidence does not currently exist to suggest that any particular strategies work as primary prevention of intimate partner violence." She reports that some studies have shown effects on knowledge and attitudes but have rarely demonstrated the persistence of these effects. Furthermore, studies have not addressed differences between girls and boys or between high school and middle school youths, and have also not shown that the programs change actual dating behaviors.

Another limitation of empirical evaluations of dating violence and sexual assault prevention programs is that programs usually attempt to change attitudes and knowledge rather than behaviors (Wolfe and Jaffe 2003). Controversy exists over whether changes in attitudes or knowledge lead to behavioral changes, which are clearly important but difficult to measure (Schewe and Bennett 2002). Of course, prevention programs have the additional problem of measuring the extent to which the target group avoided dangerous behaviors as a result of the intervention. A study would need to employ carefully matched comparison or control groups in order to estimate this outcome. A related and important problem is that programs often address prevention messages only to young women, but potential victims have no control over potential perpetrators. Thus changes in potential victims' attitudes, knowledge, or behaviors may have little or no effect on rates of victimization. This makes it difficult to evaluate programs that are directed to both potential victims and potential perpetrators.

Although O'Brien (2001) asserts that the powerful short-term impact of school-based dating violence prevention programs is promising, few empirical evaluations have been able to use follow-up measures (Cornelius and Resseguie 2007). It is difficult, therefore, to determine whether improvements attributed to empirically validated prevention programs were sustained.

Few research projects have examined which program components contribute to effectiveness in youth dating violence prevention (Avery-Leaf and Cascardi 2002; Schewe 2003b; Whitaker et al. 2006). Schewe (2002) notes that journals rarely publish studies with negative outcomes. This is unfortunate, since even unsuccessful programs may reveal important issues. Some programs may be considered successful based on a very short questionnaire or may only be successful with females, whereas a program reporting a more comprehensive evaluation might appear unsuccessful.

Practitioners' Views on Research

The literature suggests that some human-services practitioners are reluctant to use published research. The few publications that have documented practitioners' views on research (Fook 2001; Mouradian, Mechanic, and Williams 2001) note that practitioners believe that research is sometimes not "user friendly" and that researchers may fail to address practitioners' questions and concerns. The National Violence Against Women Prevention Research Center (Mouradian, Mechanic, and Williams 2001) conducted focus groups with 130 practitioners and concluded that "most evident was a strong emphasis on the need for research that will determine 'what works' to prevent and combat violence against women" (4). The practitioners also emphasized the need for research that is presented in a format that is "easy to read and understand; 'user-friendly' (a term used often in different focus groups); timely; concise; [and] easy to access" (6). Rehr et al. (1998) assert that practitioners often resist participating in evaluative research, because it seems to highlight their practice deficits. We note practitioners' concerns about research here to show how practice wisdom can add to the literature. However, as practitioners/researchers ourselves, we hope this book helps both researchers and practitioners to understand each other's perspectives and to further joint efforts to improve the effectiveness of prevention programming.

In later chapters we describe how we gathered practice wisdom largely by interviewing prevention practitioners. We then share that wisdom, together with findings and ideas from the literature. In each chapter we summarize practitioners' thoughts and solutions rather than attempting to evaluate their responses. Each chapter, therefore, presents various approaches to program implementation and responses to dilemmas in prevention practice. The book concludes with a discussion of the current state of prevention programming, the tensions in the field, and how programs might develop in the future.

***

COPYRIGHT NOTICE: Published by Columbia University Press and copyrighted © 2009 Columbia University Press. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World Wide Web. Users are not permitted to mount this file on any network servers. For more information, please e-mail or visit the permissions page on our Web site.
