Chapters 10-15

 

CHAPTER 10
MEASURES OF COMMUNITY COLLABORATION

Many individuals and organizations involved in addressing violence against women have heard about collaborative, community-wide responses to such violence, and in many communities across the country efforts are under way to establish such systems. But what does a coordinated community response really look like, and how do you know it when you see it? Furthermore, what is coordinated community response meant to achieve, and how does one measure this?

To most observers, the term "community-wide collaboration" conveys the fairly obvious, if somewhat vague, notion of different members of a community coming together to address a common problem. Clearly, there is more to community collaboration—or at least successful collaboration—than this. Christiansen et al. (1997) characterize collaboration as involving interdependence and ongoing give and take; solutions emerging through participants' dealing constructively with differences; partners working beyond stereotypes to rethink their views about each other; and stakeholders assuming joint ownership of decisions and collective responsibility for future decisions. They believe that collaboration is an ongoing process, not something done once and finished. Partners in collaboration are always looking at how things are going, and talking through how to keep improving them.

It is important to keep in mind that there is no single ideal of community collaboration. What makes sense for one community may not work in another. There are, however, a variety of scenarios with common elements that appear to be effective. These elements or factors can provide us with models against which we can measure efforts in most communities.

The remaining sections of this chapter identify (1) the basic elements needed to start the process of community collaboration, (2) intermediate and system level outcomes associated with successfully establishing community collaboration, and (3) the ultimate outcomes of such collaborations. As noted above, collaboration is a never-ending process. Each success builds on the last one. This means that the process is circular—commitment (the first essential ingredient for a collaborative community-wide response) leads to system changes that lead to renewed commitment, etc. Similarly, each system change that is accomplished feeds a willingness to increase commitment for the next change. This complexity reflects the dynamic and fluid nature of establishing a community-wide collaborative response to violence against women.

It should also be noted that the greater the number (and longevity) of factors present in a given community, the more likely it is that the collaborative effort is truly entrenched within that community and that the effort will survive the withdrawal of a major source of financial support such as VAWA STOP grants. We have provided three tables (Tables 10.1, 10.2, and 10.3), each focused on one possible goal of a collaborative effort. Each table provides specific objectives under each goal, measures of these objectives, and recommended data collection procedures. Please keep in mind that this is not an exhaustive list. Those who are closest to the communities and are actually assessing the collaborations may have other measures in mind and/or have a better idea of what is realistic in terms of data collection.

Elements of Community-Level Collaboration

 

Commitment

The foundation of any type of community collaboration is commitment. This includes an understanding that (1) violence against women is a community problem that must be addressed by the community; (2) community-wide approaches are necessary to provide all victims with a full range of options and services free from unnecessary hassles and difficulties, to hold perpetrators accountable for their actions, and to create public confidence and trust in community institutions; and (3) changes are (almost certainly) necessary in all service units separately, and in the way they interact with each other, to promote excellent care and continuity for victims and to hold perpetrators accountable.

Organizing Structure

Coordinating bodies are a critical part of a community's efforts, providing a forum for identifying problems and developing solutions on an ongoing basis. When faced with the challenge of developing an integrated community system, it is important that the organizing body have some legitimized power or authority to make recommendations and/or decisions. This means that the committee or group is, at least, the recognized structure for interunit communication, for feedback about one's own unit's performance, for establishing priorities among the community-wide tasks to be done, and for public outreach and dissemination about its goals and activities.

Organizing structures can take many forms, but some of the most common are: a committee or task force; a nonprofit, nongovernmental body that assumes coordinating responsibility; or a small, informal network of two or more agencies that plan and implement activities together. As Table 10.1 illustrates, in addition to noting the structure of organizations, it is also useful to gather information such as the length of and reasons for their existence.

Committees. A coordinating committee or task force may employ a coordinator—full- or part-time—whose primary responsibilities are to organize all efforts of the committee, serve as the central contact for its members, and facilitate communication among members. If funding is not available for a full-time position, a suitable member from the community may serve as coordinator on a part-time or voluntary basis. In addition to aiding the work of the task force, the presence of a coordinator may help to ensure continuity in a politically changing environment. Committees (and their subcommittees) vary greatly in their frequency of meeting, generally ranging from once a month to one or two times a year. Meetings provide a forum for updating members on progress toward set goals and the development of new issues.

Nonprofit Organizations. Another possible structure is a nonprofit organization that takes responsibility for some or all of the community's collaboration efforts. Possible scenarios include an organization that was created for the sole purpose of coordinating the community response or an already-existing organization that takes control of one or two specific projects.

Others. In communities where there is no formal coordinating structure, a number of agencies representing different areas (e.g., nongovernmental victim services and law enforcement agencies) may choose to work together to develop collaborative projects. In others, there may be more than one coordinating mechanism.

Formation of Additional Structures. In some cases, new VAW coordinating structures appear. They are formed for various reasons, which may include the realization that certain needs are not being met by the initial coordinating body, disagreement over objectives or means of achieving them, or simply the need to extend the reach of existing structures.

Composition of Committee/Group

Diverse Membership. Diversity of membership is important because it is either difficult or impossible for one person or agency to solve a problem as multi-faceted as violence against women. The challenge of collaboration is to blend the backgrounds of people from several agencies into workable relationships that are mutually beneficial. As noted in Table 10.1, the types of participants in a collaborative effort may vary depending on the specific goals of the community response, but should consist of representatives from different sectors, possibly including: battered women's and sexual assault services, law enforcement, prosecution, civil and criminal courts, health care agencies, child welfare agencies, batterer intervention programs, the corporate/business sector, clergy and religious institutions, and other opinion leaders.

It may also be important to have a combination of experienced/founding members and new members, since older members may be inclined to move on to other projects. The presence of new members will also indicate that there is continued community support and interest in the committee's efforts.

Continuity of Personnel. One of the keys to a successful collaboration is having some or all of the same individuals involved throughout the entire effort. Collaborations improve over time when participants become familiar with each other's strengths and weaknesses and learn to work as a team. Personnel changes occur for a variety of reasons, including work reassignment, termination, promotion, retirement, and, in some cases, a decision not to participate. Any disruption can have adverse effects. At a minimum, changes delay planned activities, and in the worst case the collaboration may dissolve.

Especially vulnerable are collaborations in which a dynamic leader is no longer available—what might be termed the "vanishing advocate" problem. Some collaborations are formed because an individual has taken a problem on and has forged a coalition to seek solutions. Without that leader, the effort folds.

Level of Participating Staff. Some successful groups comprise primarily high-level staff, while others achieve their goals with broad-based community and grassroots support. Whatever the level of participating staff, it is important to consider whether they are in a position within their agency or community that enables them to make changes and influence policy. They should also have the backing or commitment of their own agency to cooperate with the community system. Another factor that should be noted is whether or not those who are in power in the relevant agencies are supportive of ongoing collaborative efforts. (Suggestions for possible ways to collect these data are mentioned in Table 10.1.) For those found to be critical or ambivalent toward the group's efforts, an important initial goal should be to get them on board.

Activity Level/Involvement of Participants. In some groups, the bulk of the work is done by one or two powerful members, while in others it is shared by many. Measures used to gauge the level of member activity might include the number of tasks initiated and completed by each member, the proportion of the member's overall time that he/she dedicates to collaborative efforts, and how many meetings he/she attends.

Another issue is the degree to which participants are allowed to become involved. For example, one might ask who in the group is leading activities and making important decisions and whether or not all members have the opportunity to participate equally.
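Where minutes and attendance records are kept electronically, these activity measures are straightforward to tabulate. The following is a minimal sketch only; the member names, counts, and record layout are hypothetical, not part of the guidebook's recommended procedures.

```python
# A minimal sketch of tabulating member-activity measures from meeting
# minutes. Member names, counts, and the record layout are hypothetical.

# (member, meetings attended, tasks initiated, tasks completed)
minutes_data = [
    ("Officer A", 11, 4, 3),
    ("Advocate B", 12, 7, 6),
    ("Nurse C", 5, 1, 1),
]
TOTAL_MEETINGS = 12

for member, attended, initiated, completed in minutes_data:
    attendance_rate = attended / TOTAL_MEETINGS
    completion_rate = completed / initiated if initiated else 0.0
    print(f"{member}: attended {attendance_rate:.0%} of meetings, "
          f"initiated {initiated} tasks, completed {completion_rate:.0%} of them")
```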

Levels/Sources of Support

Some collaborative VAW efforts enjoy support from all levels of government, while others are forced to work from the ground up, creating grassroots networks to inform elected and appointed officials and lobby for change. Some initiatives, for example, are formed when states pass laws affecting violence against women or governors or mayors make this issue a priority in their administrations. In other situations, the occurrence of a particularly tragic event or overwhelming frustration with the system prompts service agencies and/or victims to organize a community response from the grassroots level.

Both levels of support are equally important—albeit for different reasons—and should be measured. A high-level elected official such as a governor can help make a collaboration strong by directing state funds and legislative support toward the collaboration's activities. This also happens at a local level when a mayor or other city official uses a portion of the city's funds for this purpose. But due to the changeable political climate, collaborations should not become solely dependent on any one source of support. If they enjoy widespread public support, their activities will continue even after the loss of a strong political ally. For instance, even though grassroots organizations generally are unable to provide the steady funding streams that the government can, they may offer other benefits to the collaboration, such as media attention and lobbying efforts.

Community Collaboration: System-Level Outcomes

 

Communicating Effectively

Among Participants. Having any communication among agencies where there was none, more communication where some already existed, or improved or more effective communication that leads to greater understanding and trust are all indicators of an established community response. Communication may be measured based on frequency, type, level, basis, and nature of contact, as mentioned in Table 10.2.

While regular meetings are one way of keeping participants informed, it is also important to create and maintain connections among individuals. Informal dialogues, memoranda of understanding, reports, and e-mail transmissions can also contribute to effective communications. Ultimately, better communication among agencies and sectors may evolve into arrangements (both formal and informal) for different types of agencies or agencies in different geographic areas to share administrative or management information system databases.
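If agencies do adopt a shared contact log (an approach discussed further in Table 10.2), a simple structured record makes the dimensions of contact listed above easy to summarize later. The sketch below is one illustration in Python; the field names and category labels are assumptions, not a prescribed format.

```python
# A minimal sketch of a structured inter-agency contact log. Field names
# and category labels are illustrative assumptions, not a required format.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Contact:
    from_agency: str
    to_agency: str
    contact_type: str  # e.g., referral, telephone, meeting, training session
    level: str         # e.g., director/senior manager, trainer, line worker
    basis: str         # e.g., individual initiative, contract, MOU
    nature: str        # e.g., hostile, reluctant, indifferent, cordial

log = [
    Contact("Police", "Victim Services", "referral",
            "line worker", "MOU", "cordial"),
    Contact("Prosecution", "Police", "meeting",
            "director/senior manager", "individual initiative", "cordial"),
    Contact("Police", "Victim Services", "telephone",
            "line worker", "MOU", "cordial"),
]

# Frequency of contact between each pair of agencies over the logged period.
print(Counter((c.from_agency, c.to_agency) for c in log))
```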

Informal Networks. In addition to communication among formal committee/group members, participants should create their own informal communication networks. For example, even if the coordinating body is composed primarily of criminal justice members, there should be connections established with other community groups. Although a little more difficult to measure than formal networks, these connections are important because even if the coordinating body disbands or loses its coordinator, many of the connections will remain. One example would be to hold a monthly lunch with a brief program presented by a different agency each time.

Feedback from Community. Groups may or may not incorporate community feedback in their efforts. Two possible areas to measure noted in Table 10.2 are (1) whether meetings and materials are available to the general community and if so, how often, and (2) the frequency with which community feedback is incorporated into decisions and/or projects.

Developing a Shared Vision

The myriad agencies/organizations involved in addressing VAW may have different views on one or more of the following issues: fundamental principles of intervention, roles of each component, merits of collaboration, necessity of public accountability, and goals of reform. Despite differences of opinion, they should be able to develop a common vision and mission. It should be acceptable for parties to have different aims, as long as they are not conflicting and fit under an overarching common purpose. In short, everyone needs to have the same understanding of the overall goals of the effort, and all collaborative activities should be working toward those goals.

Developing a common vision should result in fewer misperceptions about each member's roles/responsibilities. Each person should play a role in the collaborative effort and understand what that role is. Individual committees and task forces should have clearly defined goals and purposes. Other indicators of a shared vision include a strategic plan, a mission statement, prioritized tasks, joint planning, and joint implementation (see Table 10.2 for ways to measure shared vision).

In addition, agencies should produce compatible protocols or guidelines for practice in each component. Services provided by different agencies should be as seamless as possible. For example, police and victim advocates may share a jointly developed protocol that specifies (1) that both will go on all calls; (2) the respective responsibilities of the law enforcement officer and the victim advocate once on the scene; and (3) each party's responsibilities for follow-up and for keeping the victim informed. Another example might be a protocol developed jointly by law enforcement and prosecution, describing the types of evidence police could collect that are of most value to prosecutors and the kind of feedback from prosecutors that would be most valuable to police.

Establishing Systems of Conflict Resolution and Evaluation

Beyond communication, there may be systems or mechanisms in place for the following: identifying problems and developing solutions, evaluating practice efficacy and incorporating community input (public accountability) and other forms of feedback, and monitoring adherence to adopted standards/practices. These systems/mechanisms should be known and be seen as legitimate, reliable, and timely. Table 10.2 suggests various ways to measure these systems.

Developing Trust and Mutual Respect

An important aspect of a successful collaboration is trust and mutual respect among participants. Some collaborations may "click" from the start, especially when the groups have worked together successfully in the past. But this is rare; most take time to build. Collaborative relationships usually move through several early steps: expression of interests, verification of each other, trust building, joint decision making, and small successes. One indication that mutual respect and trust are present is that the group has formed a flexible working arrangement, eliminating the hierarchy that normally characterizes relationships among participating agencies.

Gaining the trust and respect of other group members may itself be a reward for individual participants. When their expertise has been recognized, their ideas have been considered— perhaps even accepted—and their influence has led to solutions to a problem, they may gain a sense of personal satisfaction. The collaboration itself may then be rewarded by improved performance by the individual.

Engaging in Joint Activities and Co-location

As mentioned in the Shared Vision section, a healthy sign of open coordination among different agencies is when they decide to embark on joint activities such as joint training of staff or designing new joint policies or protocols. In some cases, coordinating agencies may decide to co-locate services (e.g., victim service units within police departments and prosecuting attorney's offices). Alternatively, they may decide to respond together, as when a victim advocate accompanies police officers on 911 calls. Similar arrangements can be made in other settings such as hospital emergency rooms and courts.

Reporting

Agencies involved in coordination should have a way to report performance, activities, and accomplishments. Ideally, this would be a formal report published on a regular basis. Public information would promote continued support for the initiative from the general public, the private sector, and the government, and may inspire other agencies to undertake similar efforts. As mentioned in Table 10.2, both direct and indirect results of publishing reports should be noted.

Funding

Often community-wide collaboration efforts can lead to changes in various funding mechanisms. In some states or localities, collaborative organizations have succeeded in establishing new sources of funding (e.g., through new marriage license taxes or line items in city budgets). In other cases, collaborative organizations may be given the authority and responsibility for distributing funds within the community, which increases their opportunities to support and foster cross-agency or cross-jurisdictional collaborative projects and other efforts.

Community Collaboration: Ultimate Outcomes

 

Creating Permanent Policy and Practice Changes

One strong indication that a community's collaborative efforts have become entrenched is the existence of permanent policy changes. In some communities, permanent changes in the law and agency policies or procedures may occur as a result of collaboration efforts and will remain even if the original committee disbands or loses steam.

Treating Victims and Perpetrators Consistently

Coordinated community response should lead to more consistent treatment of both victims and perpetrators. The same services and options should be available to victims of all racial, linguistic, geographic, or other characteristics, irrespective of the point at which they enter the system (e.g., a 911 call to the police, an emergency room, child protection, etc.). Similarly, perpetrators should be held accountable in a consistent manner by everyone they come into contact with, such as law enforcement officers, prosecutors, judges, treatment providers, and probation and parole officers. Specific examples of how perpetrators might be treated consistently include: not reducing criminal charges; including participation in batterer and chemical dependency treatment programs and compliance with protection orders in sentences; rigorous enforcement of probation and parole conditions; having systems in place that allow one to track compliance with treatment programs and probation and parole terms; and coordinating terms and conditions with other legal cases (both civil and criminal) involving the perpetrator, victim, or other family members.

Creating More Options and Resources for Victims in the Justice and Human Service Systems

Effective coordinated community response should produce more options and less duplication of services for victims in the justice and human service systems. Because sexual assault and domestic violence victims often face a variety of difficulties and unique circumstances, there should be flexible solutions. For example, domestic violence victims may be offered a choice between staying at a temporary group shelter or receiving financial assistance in order to obtain a more permanent residence. A community may make various types of counseling, including group and individual therapy, available to victims of sexual assault and allow them up to two years to enroll in such services. Standards may be imposed to improve the quality and appropriateness of batterer intervention programs or sex offender treatment programs, and higher quality may lead to higher utilization and more resources. New options should also include more culturally sensitive responses to VAW on the basis of race, language, religion, culture, class, kinship networks, perspectives on the efficacy of the legal process, physical and/or mental disabilities, and urban/rural status.

Changing Public Knowledge of VAW and Reducing Unmet Need

Finally, effective community response should result in an increased level of awareness and understanding of violence against women by the public at large. A variety of public education strategies tailored to differing segments of the population (teens, immigrants, etc.) can be employed and the effects measured by individual surveys on levels of understanding and tolerance about violence against women. Chapter 11 discusses community attitudes toward violence against women further and offers specific tools for measuring these attitudes.

The general public should also be aware of the services available to victims and their families. Increased knowledge and use of services by those who need them would indicate that the collaborative efforts had reached community members, who may in turn become interested in joining the groups' efforts.

Table 10.1
Community Collaboration, Goal 1: Establish an Effective, Stable, and Continuing Community Response to VAW

Each entry below lists the objective, specific measures, data collection procedures, and (where noted) caveats.

Objective: Commitment (willing to collaborate)
Specific measures: Are agencies and their representatives dedicated to community collaboration?
—Do they believe that violence against women is a community-wide problem that needs to be solved by the community?
—Are they willing to make changes in their own agencies if necessary in order to further community-wide goals?
Data collection: Interviews/surveys with key agency staff regarding actual and planned participation in collaborative efforts and willingness/ability to change policies/procedures as needed.

Objective: Presence of established formal/informal collaborative structures (ability to collaborate)
Specific measures: What type(s) of organizational structures have existed in the community (formal task force, interagency lunch group)? How long has each been in place? How often does it meet? What is its focus? How does the structure support collaboration?
Data collection: Interviews/surveys with actual (and potential) collaborating participants, as well as outside sources. Task force/committee/group agendas, minutes, memos, and other documents.

Objective: Presence of new committees or nonprofit organizations (ability to collaborate)
Specific measures: Types, numbers, and purposes of secondary organizational structures. Reasons for the formation of new groups (e.g., none other existed, focus of original group too narrow, membership of original group restricted). Activities of organizations and areas they cover (e.g., health services, law enforcement).
Data collection: Interviews with original committee members. Interviews with new organizations.

Objective: Achieve diverse membership in collaborative structure (engage diverse players in other collaborative efforts)
Specific measures: Appropriate types and numbers of members/players with respect to issue (DV and SA), systems (law enforcement, prosecution, corrections, advocates), and community (reflecting demographics of community). Each subcommittee or task force reflects the diversity of the whole collaborative structure. Length of time various members have been involved.
Data collection: Committee membership records. Self-reported member/participant surveys.

Objective: Level/positions of participating collaborators
Specific measures: Are collaborators in positions within their respective agencies to make changes and influence policy? To implement changes in service practice (i.e., line workers)? To understand barriers to success and how to overcome them? Are those who are in power in the relevant agencies supportive of VAW efforts? Have they (a) taken an active role in, (b) been ambivalent toward, or (c) opposed VAW efforts? Why have/haven't certain people joined the efforts? Track record of participants (e.g., have they delivered what they've promised?).
Data collection: Observation: determine which agencies/political players have/have not shown an interest in or taken an active role in VAW efforts. Agency heads or public relations staff can be contacted to determine reasons for supporting/not supporting efforts.
Caveats: It may be difficult to get government officials to admit on the record why they don't support efforts. Informal contacts with agencies and political players may be useful to determine reasons for refusal of support.

Objective: Engage active participants
Specific measures: Share of work time devoted to collaboration. Number of tasks each participant initiates, plans, and completes. Number of meetings attended by participant.
Data collection: Self-reported participant surveys and interview with committee coordinator. Minutes of committee meetings.

Objective: Opportunities for involvement
Specific measures: Nature and level of involvement of various members (set agenda, participate fully, observe only). Who makes major decisions? Who leads activities? If there is one leader, does he/she delegate authority to other members? Are all participants involved in the decision-making process? Number of policy changes that participants try to install in their own agencies, and the success of such attempts.
Data collection: Observation of meetings. Surveys of participants. Examination of activities: determine who was responsible for prioritizing and organizing the task and completing the work.

Objective: Level/source of support
Specific measures: What is the highest level of support for VAW efforts? State? Region? Locality? Do the efforts also have support at the grassroots level? How has the collaboration benefited from this support (e.g., funding, media attention)?
Data collection: Observation; interviews with participants. Funding records of collaborative partners; press clippings.

 


Table 10.2
Community Collaboration, Goal 2: Achieve System Level Outcomes

Each entry below lists the objective, specific measures, data collection procedures, and (where noted) caveats.

Objective: Achieve frequent, positive communication among members
Specific measures: Frequency of interagency contact: one time, as needed, regularly scheduled. Type of contact: referrals only, telephone, meetings, conferences, training sessions. Level of contact: directors/senior managers, trainers, line workers. Basis of contact: individual initiative, contractual, formal agreements, memoranda of understanding. Nature of contact: hostile, reluctant, indifferent, cordial, amicable. Are participants informed about actions of the group/committee? Do participants provide updates on their activities for other members?
Data collection: Labor-intensive approach: committee members and the coordinator keep a general log of the people they contact, how often, the nature of the contact, etc. Members listed in logs may also be interviewed in order to double-check this information and account for possible differences in perception. Much less labor-intensive approach: periodic interviews (e.g., every quarter) with committee members and others involved in the collaborative effort, using questions with quantitative answer categories such as "How often are you in contact with X: daily, weekly, monthly, etc.?" The communication logs can provide information on both member and non-member contacts, but in this case only members would be interviewed.
Caveats: It is not likely that people will comply with the log approach, as it takes a lot of time and will contain a good deal of extraneous information. A quarterly interview is probably "good enough" to track changes in interaction patterns.

Objective: Create informal communication networks (non-member contacts)
Specific measures: Number of non-member agencies or organizations contacted. Frequency of non-member contacts. Length of relationship.
Data collection: This information can be obtained from the general communication logs. Non-members with whom contacts were made may be interviewed in order to compare information.

Objective: Provide opportunities for feedback from the general community
Specific measures: Number of meetings open (and published materials made available) to the general public. Frequency with which community feedback is incorporated into decisions and/or projects.
Data collection: View committee records. Survey community members to determine whether they are aware of the committee's activities, have had opportunities to give feedback and have done so, and whether that feedback was incorporated into committee decisions and/or projects.

Objective: Develop a shared vision
Specific measures: Do committee members have a clear and consistent perception of their roles and the roles of others? Have they been able to put differing opinions aside and agree upon a shared vision or primary goal?
Data collection: Interviews with members and coordinator. Observation of meetings; review of records.

Objective: Create a written mission statement and/or strategic plan
Specific measures: Does the mission statement or strategic plan exist? Does it represent the opinions of the majority of the members?
Data collection: Examination of the mission statement. Interviews/surveys of members and committee coordinator.

Objective: Establish a mechanism for conflict resolution and feedback among members
Specific measures: Number/share of committee conflicts that are resolved. Number of opportunities available for members to voice objections and/or give feedback. Frequency with which member feedback is incorporated in the committee's activities.
Data collection: Review of minutes; observation of meetings. Interviews/surveys of collaboration participants.

Objective: Establish a system for evaluation of programs/achievements
Specific measures: Does a formal program evaluation system exist? If so, how many programs (and/or what percentage) are evaluated? How often are they evaluated? Who is the evaluator? Does the group monitor general achievements?
Data collection: Interviews/surveys of collaboration participants. Examination of committee records. Review of evaluation results (if available).

Objective: Develop trust and mutual respect
Specific measures: How do group members perceive each other? Is there a general feeling of trust?
Data collection: Surveys implemented initially and again after the collaboration has been in existence for a substantial period.

Objective: Engage in joint planning, prioritizing, and implementation of tasks; co-location
Specific measures: Number/types of members and/or agencies involved in these activities. Number of these activities.
Data collection: Observation of meetings. Examination of joint activities, documenting who is involved. Interviews/surveys of participants.

Objective: Create formal reporting procedures
Specific measures: Is there a formal written report produced by the committee? If so, how often, in what format, who receives the report, the total number of reports distributed, and the specific and general effects of publishing the report.
Data collection: View the written document. Interview those receiving the document and record their opinions regarding the usefulness and validity of the report. Record direct and indirect results of publishing the report (e.g., change in funding, attention from media, community interest).
Caveats: It may be difficult to document results of publishing the report, but this may be achieved by interviewing recipients of the report (e.g., funders, media) and seeking their opinions.

Objective: Increase funding
Specific measures: Are any new funding sources available as a result of community collaboration? Are you able to take better advantage of existing funding sources because of collaboration? Do collaborative structures have any responsibility for distributing/allocating funds within the community?
Data collection: Review major sources of funds for VAW; identify roles and responsibilities of collaborative structures and collaborating agencies in terms of use and allocation of funds.

 


Table 10.3
Community Collaboration, Goal 3: Achieve Ultimate Outcomes

Each entry below lists the objective, specific measures, and data collection procedures.

Objective: Permanent policy changes
Specific measures: Have the collaboration efforts resulted in permanent changes in agency procedures and/or laws (e.g., guidelines for arrest of abusers, new training curricula)?
Data collection: Check legislative history of the state/city/locality. Interview collaborating participants.

Objective: Consistent treatment of victims and perpetrators
Specific measures: Are the same options available to victims at all points of entry to the system? Are perpetrators consistently held accountable by all parts of the system?
Data collection: Interview/survey personnel of various agencies (e.g., victim services, law enforcement, courts, child welfare). Survey victims using services. Interview law enforcement agencies and court personnel. Examine court records and/or newspapers to determine sentencing patterns for convicted rapists/abusers.

Objective: More options and resources for victims
Specific measures: More options: what types of services are available to victims and perpetrators in the justice and human service systems? Increase in number of referrals. Better options: standards written, standards meet criteria of model programs, standards enforced on programs throughout the community. Duplication: where does duplication of services occur?
Data collection: Check state, city, and agency records. Review hotline records, comparing need stated/requests made to information given. Review STOP subgrant award reports.

Objective: Increased knowledge in the general community re VAW; increased use of services
Specific measures: How many people are aware of:
—the problem of violence against women;
—services that are available;
—laws pertaining to victims and abusers.
How many people and/or what percentage of the population in need are actually using services and/or reporting violence?
Data collection: Survey residents. Records of service agencies and law enforcement offices. Survey victims and their families/friends. See Chapter 11 for some survey resources.
 

CHAPTER 11
MEASURING CHANGE IN COMMUNITY ATTITUDES,
KNOWLEDGE OF SERVICES, AND LEVEL OF VIOLENCE

Many STOP grants have as either intermediate or ultimate outcomes the goal of making communities safer and more supportive of women victims of violence. Steps toward this goal can take many forms. Many projects try, through outreach, thoughtful location of services, education of professionals and the general public, and public service announcements, to increase women's knowledge of the services that are available to help them deal with domestic violence and sexual assault. These projects want women to know that help is available if they need it, and where to find it. Other projects try to reduce the level of support for violence against women by reducing belief in myths and attitudes that blame women for their own victimization, minimize the harm done by violence, discourage women from trying to improve their situation, and provide excuses for male perpetrators of violence. In efforts to change community behavior, projects may work to enlist support from community members other than the "usual suspects" from the justice systems and victim advocacy. These other community allies and partners can do many things to make your community safer for women. Finally, many projects want to know what the true level of violence against women is in their community, and most projects hope that the ultimate effect of all their efforts will be to reduce the overall level of this violence. This chapter provides a brief look at some ways that these goals might be measured. The chapter is far from exhaustive, but it does offer some resources for measuring the achievement of these goals.

Knowledge of Service Availability

It would be hard to imagine a program offering services and supports for women victims of violence that was not interested in whether the community knew about its existence. Therefore, any service might profit from doing some investigation into whether its name rings any bells among members of the larger community. The issue of community knowledge is especially important for new services, and for services that are trying to reach new populations. Since one of the major VAWA objectives is to extend services to previously underserved populations, many projects have been funded for this purpose. All of these projects should be making an effort to learn whether members of their target community have heard of them.

What to Ask About

Basically, you want to know about recall and about recognition. "Recall" means that when you ask someone "Do you know anywhere you could go to get help if you were raped?" the person says, "Yes, I would go to the ABC Crisis Service." This answer indicates they recall the name and purpose of your agency, and do so without prompting or help from you. Even if people cannot recall the name of your agency, they may still recognize it when asked a question such as "Do you know what ABC Crisis Service does?" Or, you could ask "Which of these agencies would you go to if you had just been raped and wanted help?" and then give the person a list of agencies that includes ABC Crisis Service and see whether she picks out ABC Crisis Service as a place to go for this type of help.
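A minimal sketch of how recall and recognition rates might be tallied from such survey answers follows. The agency name "ABC Crisis Service" comes from the example above; the response records and field names are illustrative assumptions.

```python
# A minimal sketch of tallying unprompted recall versus prompted recognition
# of "ABC Crisis Service". The response records are hypothetical.
responses = [
    {"recalled_abc": True,  "recognized_abc": True},
    {"recalled_abc": False, "recognized_abc": True},
    {"recalled_abc": False, "recognized_abc": False},
    {"recalled_abc": False, "recognized_abc": True},
]

n = len(responses)
recall_rate = sum(r["recalled_abc"] for r in responses) / n
recognition_rate = sum(r["recognized_abc"] for r in responses) / n
print(f"Unprompted recall: {recall_rate:.0%}; recognition: {recognition_rate:.0%}")
```

As the example suggests, recognition rates will normally be at least as high as recall rates, since anyone who can name the agency unprompted will also recognize it from a list.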

With people who already know about your program, there are several questions you can ask to get important feedback. You can ask whether they know your location, your hours, whether you speak their language, and whether they could get to your agency using the transportation available to them. You could then ask them how they learned about your program, whether they have ever used your services, or whether they know anyone who has. Follow-up questions could ask for feedback about the program and its services, such as whether they have heard that you treat women with dignity and sympathy.

Sample questions include the following (answered "yes" or "no"): Have you heard of ABC Crisis Service? Do you know where ABC Crisis Service is located? Do you know what hours it is open? Could you get there using the transportation available to you? Have you, or has anyone you know, ever used its services?

How, When, and Whom to Ask

There are many different places where you could ask these questions. Each place will let you learn about perceptions of your program from a different group of women. Different places may require you to use different formats, some of which will be very formal while others will be very informal, such as group discussions. Here are a few of the places and formats you might be able to use to get information about how well your program is known in the community:

Attitudes toward Violence Against Women

The causes of rape, other sexual assaults, and domestic violence are many and complex. One causal factor may be community support for or tolerance of violence against women. Attitudes such as acceptance of sex-role stereotypes, belief in rape myths, and beliefs that some circumstances justify battering help to create a climate that gives abusers and rapists a justification for their actions, and is hostile to victims of domestic violence and rape. A wide variety of Americans, including typical citizens, police officers, and judges, have been shown to hold beliefs that can be used to justify rape (Burt, 1980; Feild, 1978; Mahoney et al., 1986) and domestic violence (Broverman et al., 1970; Pagelow, 1981).

In addition to supporting violence against women, attitudes may also have the effect of creating an environment that is hostile to victims. Rape-supportive attitudes are partly responsible for low levels of rape reporting (Russell, 1982) and for a blaming-the-victim attitude that makes it difficult for victims to seek help and recover from their assaults (Ehrhart & Sandler, 1985). Belief in rape myths leads to a strict definition of rape and denies the reality of many actual rapes (Burt & Albin, 1981), which makes it difficult to prosecute rapists and support victims. Rape victims are often victimized twice—once from the actual assault and a second time when they encounter negative, judgmental attitudes from the police, courts, and family and friends (e.g., Deming & Eppy, 1981; Weis & Borges, 1975; Williams, 1984).

Myths and stereotypes about victims of domestic violence often make it difficult for these victims to seek help and to improve or sever relationships with their abusers. As mentioned earlier, police are less likely to arrest offenders if they believe that domestic violence is a "family issue" or that victims will refuse to press charges (Pagelow, 1981). And negative attitudes—such as apathy or hostility—that abused women often experience when they reach out for help may actually help perpetuate abuse by stigmatizing women and making it harder for them to leave abusive relationships (Stark, Flitcraft, & Frazier, 1979). Because such attitudes contribute to the perpetuation of violence against women, recognizing and measuring them can play an important part in evaluating the effectiveness of some STOP-funded projects.

Measuring Community Attitudes

A number of quantitative measures can be used to examine attitudes toward violence against women. Some, such as the Attitudes Toward Women Scale (Spence & Helmreich, 1972), measure general attitudes about the roles and rights of women, while others, including the Rape Myth Acceptance Scale (Burt, 1980) and the Inventory of Beliefs about Wife Beating (Saunders et al., 1987), assess attitudes specifically related to rape and domestic violence. These measures can be used in a variety of ways, depending on the preference of the grantees and the intended target of their programs.

The most practical and least expensive way to implement the measures is not to survey the broad community, but to focus on those who could be considered a "captive audience," such as students, potential jurors, or members of civic or religious organizations. For example, if a prevention program has given presentations to students in a particular high school, grantees may wish to survey students in that high school and in a school without the prevention program and then compare the results.
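If scale scores are collected from both schools, the comparison can be as simple as an independent-samples test of mean scores. The sketch below illustrates this with made-up scores; it assumes the scipy library is available, and it is only one of several reasonable analysis choices.

```python
# A minimal sketch comparing mean attitude-scale scores in a school that
# received the prevention program against a comparison school. The scores
# are made up for illustration.
from scipy import stats

program_school = [2.1, 2.8, 1.9, 3.0, 2.4, 2.2]      # mean scale scores
comparison_school = [3.4, 2.9, 3.8, 3.1, 2.7, 3.5]

t_stat, p_value = stats.ttest_ind(program_school, comparison_school)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Lower scores in the program school would be consistent with, though not
# proof of, a program effect; pre/post measurement strengthens the inference.
```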

Some communities might have the capacity to conduct surveys with a broader range of community members. If you are trying to change attitudes in the whole community, it is much more convincing to conduct a community-wide survey than to use "captive audiences," many of whom will surely have been exposed to your message. One way to do this, if your state has a periodic statewide poll, is to connect with the organization that conducts it and add a few questions about community attitudes toward violence against women. Another way is to find a sponsor who will support a special-focus survey of your particular community.


Rape Myths

Researchers in the field of rape have argued that widely accepted myths—defined as prejudicial, stereotyped, or false beliefs about rape, rape victims, and rapists—support and promote rape. Rape myths are part of the general culture and people learn them in the same way they acquire other beliefs and attitudes—from parents, friends, newspapers, books, and television (Burt, 1991). Some of these myths include (1) women enjoy sexual violence; (2) sex is the primary motivation for rape; (3) women are responsible for rape prevention; (4) only bad women are raped; (5) women falsely report rape; and (6) rape may be justified (Lottes, 1988). Belief in rape myths is significant because it "provides men with a structural position from which to justify sexually aggressive behavior" (Marolla & Scully, 1982). For example, Scully and Marolla (1984, 1985a, 1985b) found that the myths that women both enjoy and are responsible for their rape were used by convicted rapists to excuse and justify their crimes.

RAPE MYTH ACCEPTANCE SCALE

Citation Burt, M. R. (1980). Cultural myths and supports for rape. Journal of Personality and Social Psychology, 38, 217-230. Copyright © 1980 by the American Psychological Association. Used with permission.
Description The Rape Myth Acceptance Scale is a 19-item measure of the acceptance or rejection of myths about rape. It was originally developed for a study of community attitudes, but has become a popular scale for use on a variety of issues, including the narrowness of people's definition of a "real" rape, people's use of different types of information in making rape judgments, jurors' likelihood of convicting, and college and other non-incarcerated men's likelihood of raping, as well as in sex offender treatment. Some of its items may seem a bit dated, but it still works. It is intended for use with adults (18 and older), has been used extensively with college students, and has occasionally been used with younger adolescents.
Sample items: Response categories:

1=disagree strongly
2= disagree somewhat
3=disagree slightly
4=neutral
5=agree slightly
6=agree somewhat
7=agree strongly

3. Any healthy woman can successfully resist a rapist if she really wants to.
5. When women go around braless or wearing short skirts or tight tops, they are just asking for trouble.
10. In the majority of rapes, the victim was promiscuous or had a bad reputation.

Reference Burt, M. (1991). Rape myths and acquaintance rape. In A. Parrot & L. Bechhofer (Eds.), Acquaintance Rape: The Hidden Crime. New York: Wiley & Sons.

Burt, M., & Albin, R. (1981). Rape myths, rape definitions, and probability of conviction. Journal of Applied Social Psychology, 11, 212-230.
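For evaluators tabulating results, a respondent's overall score on a Likert-type scale of this kind is typically summarized across the items. The sketch below shows one generic way to do this; whether any RMAS items are reverse-keyed should be verified against Burt (1980), so the empty reverse-key set here is an assumption for illustration.

```python
# A generic scorer for a Likert-type scale such as the 19-item RMAS
# (items rated 1-7). Whether any items are reverse-keyed must be checked
# against Burt (1980); the empty default here is an assumption.
def score_likert(responses, scale_max=7, reverse_keyed=()):
    """Return the mean item score, reverse-scoring the designated items."""
    adjusted = [
        (scale_max + 1 - r) if item_no in reverse_keyed else r
        for item_no, r in enumerate(responses, start=1)
    ]
    return sum(adjusted) / len(adjusted)

# One hypothetical respondent's ratings for all 19 items.
one_respondent = [2, 1, 4, 2, 5, 1, 3, 2, 2, 1, 6, 2, 3, 1, 2, 4, 2, 1, 3]
print(f"Mean rape myth acceptance: {score_likert(one_respondent):.2f}")
```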

Another possible measure:

BUMBY COGNITIVE DISTORTIONS SCALE

Citation Carich, M.S. & Adkerson, D.L. (1995). Adult Sex Offender Assessment Packet. Brandon, VT: Safer Society Press. Copyright © 1995 by Safer Society Press. Used with permission.
Description The Bumby Cognitive Distortions Scales include both a Rape Scale (36 items) and a Molest Scale (38 items). These scales were developed by Dr. Kurt Bumby and take 5-10 minutes each to complete. They are intended for use by adolescents and adults. Dr. Bumby can be contacted for more information at Fulton State Hospital, Mail Stop 300, 600 East 5th Street, Fulton, Missouri 65251-1798. Many of the items on this rape scale are almost identical to those on Burt's Rape Myth Acceptance Scale, but there are almost twice as many items so other myths are also covered.
Sample items: Response categories:

1=disagree strongly
2= disagree somewhat
3=agree somewhat
4=agree strongly

Rape items:

3. Women usually want sex no matter how they can get it.
6. Women often falsely accuse men of rape.
13. If a man has had sex with a woman before, then he should be able to have sex with her any time he wants.


Domestic Violence Myths and Stereotypes

Popular myths and stereotypes abound on the subject of domestic violence victims and batterers. The public and many professional groups often hold negative attitudes toward battered women (Dobash & Dobash, 1979; Gelles, 1976; Straus, 1976). Victims are often thought to be (1) masochistic; (2) weak; (3) "seeking out" the batterers; or (4) somehow at fault. Batterers are excused because they are "sick" or their force is perceived as justified because of the wife's behavior (Greenblat, 1985; Pagelow, 1981). One way in which domestic violence myths are harmful is that they are often believed by the people responsible for aiding domestic violence victims: social workers, judges, health professionals, and law enforcement officers. For example, a common perception among police officers is that domestic violence victims are too weak-willed to press charges. This is particularly damaging in cases where officers base their decision to arrest an abuser on whether or not they believe the case will ever reach court (Bates & Oldenberg, 1980).

Several scales have been developed to measure attitudes toward battering, including the Attitudes to Wife Abuse Scale (Briere, 1987) and the Inventory of Beliefs about Wife Beating by Saunders and his colleagues (1987).

ATTITUDES TO WIFE ABUSE SCALE

Citation Briere, J. (1987). Predicting self-reported likelihood of battering: Attitudes and childhood experiences. Journal of Research in Personality, 21, 61-69. Copyright © 1987 by Academic Press. Used with permission.
Description The Attitudes to Wife Abuse Scale (AWA) is an 8-item measure of attitudes toward women and the abuse of women. The AWA was developed by Dr. John Briere, and is intended for use with adolescents and adults.
Sample items: Response categories:

1=disagree strongly
2= disagree somewhat
3=disagree slightly
4=neutral
5=agree slightly
6=agree somewhat
7=agree strongly

3. A husband should have the right to discipline his wife when it is necessary.
5. A man should be arrested if he hits his wife.
8. Some women seem to ask for beatings from their husbands.

INVENTORY OF BELIEFS ABOUT WIFE BEATING

Citation Saunders, D.; Lynch, A.; Grayson, M.; & Linz, D. (1987). The inventory of beliefs about wife beating: The construction and initial validation of a measure of beliefs and attitudes. Violence and Victims, 2, 39-57. Copyright © 1987 by Springer Publishing Company, Inc. Used with permission.
Description The Inventory of Beliefs about Wife Beating (IBWB) is a 31-item scale developed by Dr. Daniel Saunders to measure attitudes and beliefs about wife beating. It covers many more issues than the Briere scale, including attitudes toward appropriate intervention (none, arrest the husband, social agencies do more to help), attributions of responsibility for battering (e.g., the husband, because.....; the wife, because....), and the value of wife-beating (improves a marriage). Five reliable subscales emerge: (1) WJ=wife beating is justified, (2) WG=wives gain from beatings, (3) HG=help should be given, (4) OR=offender is responsible, and (5) OP=offender should be punished. The scales proved to have high construct validity with other measures and to differentiate well among known groups, especially distinguishing abusers from college students and advocates, and male from female students.
Sample items: Response categories:

1=strongly agree
2= agree
3=slightly agree
4=neutral
5=slightly disagree
6=disagree
7=strongly disagree

3. Wives try to get beaten by their husbands in order to get sympathy from others.
7. Even when women lie to their husbands they do not deserve to get a beating.
18. If a wife is beaten by her husband she should divorce him immediately.
27. Occasional violence by a husband toward his wife can help maintain the marriage.
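Because the IBWB yields five subscales, scoring involves averaging each respondent's ratings within each subscale's items. The sketch below illustrates the mechanics only; the item-to-subscale assignments shown are placeholders, and the actual assignments (and any reverse-keying) must be taken from Saunders et al. (1987).

```python
# A sketch of IBWB subscale scoring. The item-to-subscale assignments below
# are placeholders for illustration; take the actual assignments from
# Saunders et al. (1987).
SUBSCALE_ITEMS = {  # hypothetical assignments
    "WJ (wife beating is justified)": [3, 27],
    "WG (wives gain from beatings)": [15, 22],
    "HG (help should be given)": [8, 12],
}

def subscale_means(responses):
    """responses maps item number (1-31) to a 1-7 rating."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALE_ITEMS.items()
    }

answers = {i: 4 for i in range(1, 32)}  # a uniformly neutral respondent
print(subscale_means(answers))
```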


Adversarial Sexual Beliefs and Hostility Toward Women

Adversarial sexual beliefs and general hostility toward women are attitudes that contribute to violence against women. Adversarial sexual beliefs refer to the feeling that men and women are adversaries in their sexual relationships with one another—that relationships are exploitative and each party is manipulative and not to be trusted (Burt, 1980). Hostility toward women is related to adversarial sexual beliefs, but goes beyond sexual interactions to assess a more general feeling of animosity and distrust of women in all aspects of relationships and interactions. Check (1985) expanded Burt's Adversarial Sexual Beliefs Scale (Burt, 1980) to create a measure of general hostility toward women and found that men who scored high on the hostility scale tended to (1) have traditional sex-role beliefs; (2) believe in rape myths; (3) admit to using force in their attempts to get women to have sex with them, and say they would be likely to do so again; and (4) use high levels of physical punishment when rejected by women and given the opportunity to retaliate.

ADVERSARIAL SEXUAL BELIEFS SCALE

Citation Burt, M. R. (1980). Cultural myths and supports for rape. Journal of Personality and Social Psychology, 38, 217-230. Copyright © 1980 by the American Psychological Association. Used with permission.
Description This scale was developed as part of Burt's research on community attitudes supportive of rape, done in the late 1970s. It is strongly associated with belief in rape myths, but is not the same as rape myth acceptance. It is also related to sex-role stereotyping but, where sex-role stereotyping usually focuses on adult economic and family roles, Adversarial Sexual Beliefs focuses on interaction patterns related specifically to sexual behavior. It has been used frequently in research since first being published in 1980.
Sample items: Response categories:

1=disagree strongly
2= disagree somewhat
3=disagree slightly
4=neutral
5=agree slightly
6=agree somewhat
7=agree strongly

3. A man's got to show the woman who's boss right from the start or he'll end up henpecked.
5. Women are usually sweet until they've caught a man, but then they let their true self show.
10. A lot of women seem to get pleasure in putting men down.

HOSTILITY TOWARD WOMEN SCALE

Citation Check, J. V. P. (1985). The Hostility Towards Women Scale (Doctoral dissertation, University of Manitoba, 1984). Dissertation Abstracts International, 45 (12). Used with permission of author.
Description The Hostility Toward Women Scale (HTW) is a measure of anger and resentment toward women. Developed by Dr. James Check as part of his doctoral research, the HTW consists of 30 items. It began from the tone of Burt's Adversarial Sexual Beliefs Scale but developed a set of more general items reflecting hostile attitudes toward women, covering a variety of domains (not just sexual interactions). It is intended for use with adults.
Sample items: Response categories (true/false):

1. I feel that many times women flirt with men just to tease them or hurt them.
11. I don't seem to get what's coming to me in my relationships with women.
13. Women irritate me a great deal more than they are aware of.
17. It is safer not to trust women.

Reference Check, J.; Malamuth, N.; Elias, B.; & Barton, S. (1985). On hostile ground. Psychology Today, 56-61.

Sex-Role Stereotypes

Traditional sex-role socialization or sex-role stereotyping, which views women as having a lower social status and lesser rights than men, appears to play an important role in violence against women. People who hold these beliefs—for example, feeling that a woman should get married and raise a family, be a virgin when she marries, and never contradict her husband in public—are more likely to be tolerant of and/or engage in domestic violence (e.g., Eisikovits, Edleson, Guttmann & Sela-Amit, 1991; Kincaid, 1982; Koss et al., 1985, in Parrot). In one study by Finn (1986), traditional sex-role attitudes were found to be the most powerful predictor of attitudes supporting marital violence. Sex-role stereotypes also play a role in rape and sexual assault. In their analysis of sexually aggressive and non-aggressive men, Koss and Dinero (1988) found that sex-role stereotyping was causally related to sexual assault in that "the more sexually aggressive a man has been, the more likely he was...to accept sex-role stereotypes."

ATTITUDES TOWARD WOMEN SCALE

Citation Spence, J.T., & Helmreich, R. L. (1972). The Attitudes Toward Women Scale: An objective instrument to measure attitudes toward the rights and roles of women in contemporary society. Psychological Documents, 2, 153. Used with permission of the author.
Description The Attitudes Toward Women Scale is a 55-item measure that was originally designed to assess opinions about the rights and roles of women. It focuses mostly on family and economic roles, and on norms and obligations of one gender to the other. It has been widely used in the 25+ years since it was first published, in its long form and also in a shorter 15-item version (see Spence & Hahn, 1997, below). This scale is intended for adults.
Response categories:

1 = agree strongly
2 = agree mildly
3 = disagree mildly
4 = disagree strongly

Sample items:

a. Husbands and wives should be equal partners in planning the family budget.
b. Intoxication among women is worse than intoxication among men.
c. There should be a strict merit system in job appointment and promotion without regard to sex.
d. Women should worry less about their rights and more about becoming good wives and mothers.

Reference Spence, J.T. & Hahn, E.D. (1997). The Attitudes Toward Women Scale and attitude change in college students. Psychology of Women Quarterly, 21, 17-34.
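One scoring wrinkle worth noting: items such as (a) and (c) above are worded in an egalitarian direction while (b) and (d) are worded traditionally, so some items must be reverse-coded before they are combined into a total score. The minimal sketch below shows the mechanics; which items are reversed here is hypothetical, and the actual key comes with the instrument.

    # Minimal sketch: reverse-coding mixed-direction items on a 1-4
    # response scale before averaging. The set of reversed items is
    # hypothetical; consult the scoring key supplied with the scale.
    SCALE_MAX = 4

    def score_with_reversals(responses, reversed_items):
        """Average item scores, flipping reversed items (1<->4, 2<->3)."""
        scored = [(SCALE_MAX + 1 - r) if i in reversed_items else r
                  for i, r in enumerate(responses) if r is not None]
        return sum(scored) / len(scored) if scored else None

    # Suppose items 0 and 2 carry the egalitarian wording.
    print(score_with_reversals([1, 2, 1, 4], reversed_items={0, 2}))  # 3.5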

Masculinity Ideology

Also playing a role in support of violence against women is masculinity ideology. Masculinity ideology, which refers to beliefs about the importance of men adhering to culturally defined standards for male behavior, may be responsible for some problem behaviors—particularly in adolescent men. In their 1993 study, Pleck, Sonenstein, and Ku found that a higher score on the Masculinity Ideology Index was associated with, among other things, twice the odds of being sexually active and of ever forcing someone to have sex. And in a study of 175 college males, Mosher and Anderson (1984) found that a measure of macho personality with three components—callous sex attitudes towards women, a conception of violence as manly, and a view of danger as exciting—was significantly correlated with a history of self-reported sexual aggression against women.

MASCULINITY IDEOLOGY

Citation Thompson, E. H., & Pleck, J. H. (1986). The structure of male role norms. American Behavioral Scientist, 29, 531-543. Copyright © 1986 by Sage Publications. Used with permission.
Description The Masculinity Ideology Scale is an 8-item measure that assesses a male's endorsement and internalization of cultural belief systems about masculinity and the male gender role. It is adapted from Thompson and Pleck's Male Role Norms Scale (MRNS), a 26-item abbreviated version of the Brannon Masculinity Scale, Short Form. It is intended for use with adolescents and adults.
Response categories:

1 = agree a lot
2 = agree a little
3 = disagree a little
4 = disagree a lot

Sample items:

a. It is essential for a guy to get respect from others.
b. I admire a guy who is totally sure of himself.
c. A young man should be physically tough, even if he's not big.

Reference Brannon, R. (1985). A scale for measuring attitudes about masculinity. In A.G. Sargent (Ed.), Beyond Sex Roles (pp. 110-116). St. Paul, MN: West.

"Beyond the Usual Suspects"

Most STOP projects will focus their efforts on the civil and criminal justice system agencies, and on providing services and advocacy to women victims of violence. These are "the usual suspects." But some STOP projects are trying to bring more players into the effort to end violence against women. The previous section on public attitudes covered one approach to involving others, namely, trying to reduce beliefs and attitudes in the community that minimize perceptions of harm, blame women, and justify acts of violence. This section addresses efforts to involve particular local actors—preferably people with influence who are opinion leaders—as allies and partners in ending violence. Possibilities include the following (all of which have occurred in at least some communities):

For any project with objectives that include the development of community partners and allies, an evaluation will need to document the goals and the project's success in reaching them. Process analysis techniques will be the ones to use in this effort. Goals should be specified in terms of who (which allies are targeted), how many, how (what approaches will be used, and when), and for what (what do you want them to do). Goals can be documented by examining written project materials and by conducting interviews with key staff. Success in reaching the goals can be documented by interviewing key project staff, but that is only the beginning. It is essential that you also do the first bullet below, and that you consider the remaining bulleted suggestions:

Measuring Community Levels of Violence Against Women

Ultimately, everyone involved in STOP wants to see reductions in levels of violence against women. However, documenting success on this goal usually founders on inadequate measurement. Police statistics are known to underrepresent levels of violence and are also biased: they miss particular types of women victims, in addition to missing many incidents altogether. The direction of bias may vary from community to community depending on local attitudes and practices, but some bias will virtually always be present. Records from victim service agencies and hotlines also undercount and carry biases, although these are often quite different from the biases of police data. National surveys such as the National Crime Victimization Survey undercount, are biased in various ways, and cannot reveal what is happening in particular communities because their sample sizes are adequate only for national estimation.

There are no easy solutions to this dilemma, but that does not mean there are no solutions. We recently came across a state-level effort that seems reasonably priced and reasonably successful in getting respondents. Further, and most important, it obtained estimates of lifetime and recent victimization that seem in line with what one might expect (rather than being grossly low). We thought it worth sharing this resource with users of this Guidebook, because some of you may want to replicate this study in your own state or community.

In 1996, the Michigan Department of Community Health, using funding from the Centers for Disease Control and Prevention, developed and conducted a telephone survey of Michigan women to determine the lifetime and recent prevalence of violence in their lives. 1 The violence examined included physical or sexual violence or threats of violence from strangers, boyfriends or dates, other acquaintances, current intimate partners including husbands and live-in partners, and ex-partners. Reports were limited to experiences since age 16. The structure and portions of the questionnaire content were based on an instrument that Statistics Canada used in its first national survey of violence against women in 1993. The telephone survey itself was conducted by the Gallup Organization, using random digit dialing.

Results represent the general population of women ages 18-69; 1,848 women completed the interview. As might be expected with a telephone survey, the respondents are somewhat biased toward currently married women, employed women, and higher-income women. The final results were therefore weighted, using age, race, and educational status as weighting factors, to adjust for these sample biases and obtain representative data.
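To illustrate the kind of adjustment involved, the sketch below computes simple post-stratification weights: each respondent's weight is her group's share of the population divided by that group's share of the sample, so overrepresented groups count for less and underrepresented groups for more. This is our own minimal illustration with invented numbers, not the Michigan study's actual weighting procedure.

    # Minimal sketch of post-stratification weighting (invented data).
    # Population shares would come from census figures for the cells
    # used in weighting (e.g., age, race, and educational status).
    population_share = {"group_a": 0.30, "group_b": 0.70}
    sample = [("group_a", 1), ("group_a", 0), ("group_a", 1),
              ("group_b", 0), ("group_b", 1)]  # (cell, victimized?)

    n = len(sample)
    sample_share = {g: sum(1 for cell, _ in sample if cell == g) / n
                    for g in population_share}
    weight = {g: population_share[g] / sample_share[g]
              for g in population_share}

    # Weighted prevalence: sum of weights on victimized respondents
    # divided by the sum of all weights.
    num = sum(weight[cell] for cell, v in sample if v == 1)
    den = sum(weight[cell] for cell, _ in sample)
    print(num / den)  # 0.55, versus 0.6 unweighted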

To give you a flavor of the results and their reasonableness, we quote (with permission) from information distributed at a presentation of this study in Indianapolis, at the 1997 Annual Meeting of the American Public Health Association. The study found:

These results, and others available from the study team (see footnote 2), will suggest to anyone who has tried to do this type of survey that this team did something right. Anyone interested in conducting a similar survey would be wise to discuss methodology with the study team. In particular, the important things to discover are (1) how they conducted telephone screening that identified women respondents and established a safe and secure time for the interview, and (2) how they introduced the study as a whole, and each section of the questions, to elicit the level of self-disclosure they achieved. At the time this Guidebook went to press, no final report was available for this study, but you should be able to get it soon from the study team, and it will undoubtedly make interesting reading.

Addendum: Additional Reading

Bates, V., & Oldenberg, D. (1980). Domestic violence and the law. Paper presented at the annual meeting of the Pacific Sociological Association, San Francisco.

Broverman, I., Broverman, D., Clarkson, F., Rosenkrantz, P., & Vogel, S. (1970). Sex-role stereotypes and clinical judgments of mental health. Journal of Consulting and Clinical Psychology, 34, 1-7.

Burt, M. (1980). Cultural myths and supports for rape. Journal of Personality and Social Psychology, 38, 217-230.

Burt, M. (1991). Rape myths and acquaintance rape. In A. Parrot & L. Bechhofer (Eds.), Acquaintance Rape: The Hidden Crime. New York: Wiley & Sons.

Burt, M., & Albin, R. (1981). Rape myths, rape definitions, and probability of conviction. Journal of Applied Social Psychology, 11, 212-230.

Check, J. (1984). The Hostility Toward Women Scale. Unpublished doctoral dissertation, University of Manitoba.

Deming, M., & Eppy, A. (1981). The sociology of rape. Sociology and Social Research, 65, 357-380.

Dobash, R. E., & Dobash, R. (1979). Violence against wives: A case against patriarchy. New York: Free Press.

Ehrhart, J., & Sandler, B. (1985). Myths and realities about rape. Washington, DC: Project on the Status and Education of Women.

Eisikovits, Z. C., Edleson, J. L., Guttmann, E., & Sela-Amit, M. (1991). Cognitive styles and socialized attitudes of men who batter: Where should we intervene? Family Relations, 40, 72-77.

Field, H. (1978). Attitudes toward rape: A comparative analysis of police, rapists, crisis counselors, and citizens. Journal of Personality and Social Psychology, 36, 156-179.

Finn, J. (1986). The relationship between sex role attitudes and attitudes supporting marital violence. Sex Roles, 14 (5/6), 235-244.

Gelles, R. (1976). Abused wives: Why do they stay? Journal of Marriage and the Family, 38, 659-668.

Kincaid, P. J. (1982). The omitted reality: Husband-wife violence in Ontario and policy implications for education. Concord, Ontario: Belsten.

Koss, M. & Dinero, T. (1988). Discriminant analysis of risk factors for sexual victimization among a national sample of college women. Journal of Consulting and Clinical Psychology, 57, 242-250.

Koss, M., Leonard, K., Oros, C., & Beezley, D. (1985). Nonstranger sexual aggression: A discriminant analysis of the psychological characteristics of undetected offenders. Sex Roles, 12, 981-992.

Lottes, I. (1988). Sexual socialization and attitudes toward rape. In A. Burgess (Ed.), Rape and Sexual Assault II. New York: Garland Publishing.

Mahoney, E., Shively, M., & Traw, M. (1986). Sexual coercion and assault: Male socialization and female risk. Sexual Coercion and Assault, 1, 2-8.

Marolla, J., & Scully, D. (1982). Attitudes toward women, violence, and rape: A comparison of convicted rapists and other felons. Rockville, Md.: National Institute of Mental Health.

Mosher, D., & Anderson, R. (1984). Macho personality, sexual aggression, and reactions to realistic guided imagery of rape. Typescript.

Pleck, J., Sonenstein, F., & Ku, L. (1993). Masculinity ideology and its correlates. In S. Oskamp & M. Costanzo (Eds.), Gender Issues in Contemporary Society. Newbury Park, CA: Sage.

Russell, D. (1982). The prevalence and incidence of forcible rape and attempted rape of females. Victimology, 7, 81-93.

Scully, D., & Marolla, J. (1984). Convicted rapists' vocabulary of motive: Excuses and justifications. Social Problems, 31, 530-544.

Scully, D., & Marolla, J. (1985a). Rape and vocabularies of motive: Alternative perspectives. In A. W. Burgess (Ed.), Rape and Sexual Assault: A Research Handbook. New York: Garland Publishing.

Scully, D., & Marolla, J. (1985b). "Riding the bull at Gilley's": Convicted rapists describe the rewards of rape. Social Problems, 32, 252-263.

Stark, E., Flitcraft, A., & Frazier, W. (1979). Medicine and patriarchal violence: The social construction of a 'private' event. International Journal of Health Services, 9, 461-493.

Straus, M. (1976). Sexual inequality, cultural norms, and wifebeating. Victimology: An International Journal, 1, 54-76.

Weis, K., & Borges, S. (1975). Victimology and rape: The case of the legitimate victim. In L.G. Schultz (Ed.) Rape Victimology. Springfield, Ill: Charles C. Thomas.

Williams, J. (1984). Secondary victimization: Confronting public attitudes about rape. Victimology, 9, 66-81.


CHAPTER 12
MEASURING PERCEPTIONS OF JUSTICE

Many STOP projects are trying to change the ways that justice system agencies treat women victims of violence. These changes range from interpersonal behavior, such as respectful listening, to significant changes in the skills and procedures applied to cases involving violence against women. Examples of the latter are better evidence collection, better communication with victims about the status of their case, better prosecution strategies, better monitoring and supervision of offenders, more attention to victims' wishes and circumstances in case disposition and sentencing, and so on.

Most, if not all, of the changes just described should affect how victims feel about participating in the justice system, and whether they feel that "justice has been done" in their case. Therefore, you may want to include measures of perceptions of justice in your evaluation, to let you know whether system changes have produced a better feeling about the system among victims. Your evaluation can measure perceptions of justice by the victims or parties in a legal action. You can also develop a rating of the extent to which the justice intervention is consistent with the rights of victims.

Perceptions of Justice

The first step in measuring perceptions about justice is to decide whether you want to measure (1) distributive justice, (2) procedural justice, or (3) both.

Distributive Justice

Distributive justice refers to perceptions of the outcome of a police, court, or other justice procedure (regardless of how that outcome was arrived at). In rating the justice of an outcome, research indicates that people base their perceptions on how they answer two questions:

Procedural Justice

Procedural justice refers to how the system acted, how it processed a case regardless of the outcome that resulted. It involves several components, including the consistency, fairness, and appropriateness of rules and procedures, and how those in authority treated the participants.

Somewhat surprisingly, ratings of procedural and distributive justice appear to be independent of each other. Participants may rate the outcome as just, but not the procedure—or the procedure fair, but not the outcome. When individuals are forced to choose between the two, some research indicates that the higher ranking is given to achieving a just outcome (Wagstaff & Kelhar, 1993).

The following items and scales draw on the work of Tyler (1989), but include some new wording and answer categories. The scales ask separately about police and courts and can be extended to other justice agencies such as prosecutors' offices.

Procedural fairness items use 5-point scales and include the following:

Perceptions of fair personal treatment (by police/courts/others) use "yes" and "no" as answer categories; items include the following:

General fairness of treatment by (police/courts/other) items, rated on 5-point scales, include the following:

Distributive justice (the fairness of the outcome) items, also on 5-point scales, include the following:

Protection of Victim Rights

You can develop a rating of justice by asking women victims of violence to tell you the extent to which they have been treated according to the principles contained in the Victims' Bill of Rights that is the law in many states. This Bill of Rights lists certain legal remedies and protections that should be extended to victims. Not all states have this legislation, but your state may. You may also want to include rights accorded victims in other states. Rights mentioned in these bills include, among others:

This list can serve as a starting point for a scale of the justice victims receive when a crime (sexual assault or domestic violence) occurs, by asking victims whether their rights were observed in these ways. The list can be tailored to the program you are evaluating, and you can further clarify or define these items and/or add other rights to the list. Answer categories can be yes/no or satisfaction ratings on five-point scales similar to those used above. However, all items in a single scale should use the same number of answer categories (two or five).
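As a concrete illustration of scoring a yes/no version of such a checklist (the item names below are hypothetical, not drawn from any particular state's statute), one can report the proportion of applicable rights the victim says were observed:

    # Minimal sketch: a yes/no victim-rights checklist scored as the
    # proportion of applicable rights observed. Item names are made up.
    def rights_observed_score(answers):
        """answers maps item -> True/False/None (None = not applicable)."""
        applicable = {k: v for k, v in answers.items() if v is not None}
        if not applicable:
            return None
        return sum(applicable.values()) / len(applicable)

    one_victim = {
        "notified_of_hearings": True,
        "consulted_on_plea": False,
        "informed_of_release": True,
        "restitution_explained": None,  # did not apply in this case
    }
    print(rights_observed_score(one_victim))  # 2 of 3 applicable, about 0.67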

Addendum: Readings on Perceptions of Justice

Casper, J.D., Tyler, T.R., & Fisher, B. (1988). Procedural justice in felony cases. Law & Society Review, 22, 403-507.

Feld, B.C. (1990). The punitive juvenile court and the quality of procedural justice: Disjunctions between rhetoric and reality. Crime & Delinquency, 36, 443-466.

Folger, R., Cropanzano, R., Timmerman, T.A., Howes, J.C., & Mitchell, D. (1996). Elaborating procedural fairness: Justice becomes both simpler and more complex. Personality and Social Psychology Bulletin, 22, 435-441.

Gilliland, S. W. (1994). Effects of procedural and distributive justice on reactions to a selection system. Journal of Applied Psychology, 79, 691-701.

Lind, E.A., & Tyler, T.R. (1988). The Social Psychology of Procedural Justice. New York: Plenum Press.

Miller, J.L., Rossi, P.H., & Simpson, J.E. (1991). Felony punishments: A factorial survey of perceived justice in criminal sentencing. The Journal of Criminal Law & Criminology, 82, 396-422.

Shapiro, D.L., & Brett, J.M. (1993). Comparing three processes underlying judgments of procedural justice: A field study of mediation and arbitration. Journal of Personality & Social Psychology, 65, 1167-1177.

Stroessner, S.J., & Heuer, L.B. (1996). Cognitive bias in procedural justice: Formation and implications of illusory correlations in perceived intergroup fairness. Journal of Personality & Social Psychology, 71, 717-728.

Tyler, T.R. (1994). Psychological models of the justice motive: Antecedents of distributive and procedural justice. Journal of Personality & Social Psychology, 67, 850-863.

Tyler, T.R. (1989). The psychology of procedural justice: A test of the group-value model. Journal of Personality & Social Psychology, 57, 830-838.

Vingilis, E., & Blefgen, H. (1990). The adjudication of alcohol-related criminal driving cases in Ontario: A survey of crown attorneys. Canadian Journal of Criminology, 639-649.

Wagstaff, G.F., & Kelhar, S. (1993). On the roles of control and outcomes in procedural justice. Psychological Reports, 73, 121-122.


CHAPTER 13
MEASURING THE IMPACT OF TRAINING

By Heike P. Gramckow, Ph.D., Jane Nady Sigmon, Ph.D.,
Mario T. Gaboury, J.D., Ph.D. 1, and Martha R. Burt, Ph.D.

It is hard to imagine any training in the area of violence against women that does not seek, at a minimum, to increase knowledge, heighten sensitivity toward victims, and enhance skills that will help trainees contribute to efforts to protect victims. It does not matter whether those attending training are police officers, prosecutors, judges, victim advocates, hospital personnel, or probation officers; these basic goals will be the same even when the content of the knowledge and skills to be imparted differs. Thus every STOP-funded training project will need to review this chapter, because every trainer needs feedback indicating whether the time and effort invested in training pays off in learning and in changed attitudes and behavior.

Logic Model of a Training Project

Underlying all impact evaluation is a common concept described in more detail in Chapter 6. Determining impact requires comparing the conditions of individuals who have experienced an intervention (e.g., a training program) with those of individuals who have experienced something else. We identify the impact of training by comparing data on participants and non-participants, by measuring participants before and after an intervention (and possibly while the intervention is in progress), or by other methods of comparison.

This chapter addresses issues in evaluating a particular type of project (rather than offering outcome measures relevant to a variety of projects), so it is a good place to practice developing a logic model. Exhibit 13.1 shows a schematic logic model, without a lot of detail. Column A contains background factors, which in this case will be a variety of characteristics of the trainees (which characteristics matter will depend on the content of training). Column B shows characteristics of the training itself, which should be documented with the techniques of process evaluation. This is an essential step; process evaluation has inherent value in illuminating whether the training operated as intended, and it is necessary for understanding the results of an outcome evaluation.

[Exhibit 13.1. Schematic logic model of a training project]

Column C identifies some common external factors that can enhance or undermine the impact of training. As with background factors, which ones are important in your evaluation will depend on the content of the training, the origins of the trainees, and the circumstances that prevail in the trainees' home agencies and home community. Column D identifies the goals or outcomes of training, which are described in greater detail throughout this chapter. Note that the data source anticipated for Columns B and C is process analysis.

Selecting Outcomes to Evaluate

The ultimate goal of VAWA-supported training is improved outcomes for women victims of violence. However, few training evaluations will be able to take this long-term view. The more immediate and intermediate outcomes such an evaluation can measure are those that facilitate or promote the ultimate goal. Immediate outcomes include new knowledge, changed attitudes, and new skills. Intermediate outcomes may be changes in the behaviors of those who received training, putting their new learning and attitudes into practice. More long-term, but still not ultimate, outcomes may include changes in police or court procedures, the adoption of new policies that affect the treatment of victims by criminal justice or allied professionals, or the development of a coordinating council that connects all community services for victims. The contents, and sometimes the structure, of training will be shaped by which of these goals the training aims to achieve. Likewise, any evaluation must structure its measures to be able to assess the different goals and activities of the training program.

Avoiding Inappropriate Outcomes

Any evaluation of training should cover only outcomes that the training itself can be expected to accomplish. When the training's main goal is to increase the knowledge of its participants, then the evaluation should concentrate on assessing knowledge acquisition. While a number of other outcomes may result from the training, they cannot be the focus of an evaluation if the training itself did not posit that its goals included these other outcomes. It would be an odd training program that wanted only to convey knowledge and did not also hope or expect that participants would use the new knowledge in some way (i.e., that they would change some behaviors as a consequence of what they have learned). But suppose for the moment that this is the case. Then an evaluation would stick to assessing knowledge. However, when the training program's logic model includes the expectation that new knowledge will lead to different behaviors, and the program's training curriculum is set up to encourage changed behaviors and give people time to practice them, then the evaluator should ask whether the training participants use the new behaviors once they are back on the job.

Evaluators also need to be careful not to expect training programs to achieve ambitious goals that clearly require more than just training. When police officers participate in training that teaches them how to work closely with victim advocates in order to offer better protection to women who have been assaulted, we would look for increased and improved coordination between police officers and appropriate victim advocates. But can we also expect to see that female victims of domestic assaults are actually better protected? The latter may be a long-term goal of the training effort that very likely requires the availability of additional resources (e.g., shelter, protection orders, electronic warning systems). While the training may actually teach police officers how to access these resources, it cannot make them available in each community. When training alone cannot, realistically, achieve a stated goal, then the hoped-for impact cannot be expected to occur and should not figure centrally in an evaluation of training.

Picking the Right Outcomes

Developing the right measures for goal achievement becomes more involved the more comprehensive the goals of training are. Usually the results of training that aims only to provide knowledge are relatively easy to measure. For example, when the training curriculum includes a session on new legislation that changes the situations in which protective orders may be issued or a session on new techniques for gathering evidence in sexual assault cases, participants should be assessed on their ability to describe the situations or techniques presented in training. When the training also has the goal of providing the skills to apply the new law or the new techniques and includes sessions for mastering these skills, then the evaluation should include an assessment of how much the skills are used in practice.

Specifying and collecting the right evaluation measures for assessing skill acquisition is more difficult than for knowledge acquisition, because one must collect evidence of changed behavior. This may come through self-report from a follow-up questionnaire, from the testimony of others (e.g., interviewing co-workers and supervisors about the trainee's use of the new skills), from actual observation, or from analysis of case records (assuming that the records contain adequate information). Obviously, it is more expensive and time-consuming to gather these types of information to document behavior change (or lack of it) than it is to give trainees a paper-and-pencil knowledge test on the last day of training. However, the importance of what you learn from your increased investment usually more than makes up for the additional trouble.

If training is to achieve its goals, the substantive information actually provided in the training session must be linked to those goals. Suppose that the goal of a training program is to increase the number of protection orders that officers obtain and the training provides law enforcement officers with knowledge of the legal requirements for obtaining protection orders, but does not teach them how and where to obtain the protection order. Then, considering the content of the training sessions, the desired result is not likely to occur, or if this result occurs, it is not likely to be a direct result of the training.

Also, the training must be implemented well enough that its critical elements have been delivered, and to the right people. If, for example, one section of a training program is not taught, you cannot expect to see the same results as if trainees had received the entire program. The missing component has to be identified and accounted for, since only a completely implemented effort can be expected to have the full predicted impact; indeed, a missing component may explain why assessment results indicate that the program did not have that impact.

Levels of Evaluation for Training Programs

You will need to gather several types of information before you can see whether a training is successful. First, you need information from the trainers about the goals and the content of training, to identify what was intended and what was delivered. Second, you need baseline information from the trainees about their level of knowledge, attitudes, and skills before the training. Finally, you need information from the trainees after the training to identify changes as a result of the training. In addition, training sessions related to violence against women will often include victims, survivors, and advocates as participants or presenters. These participants bring a unique perspective to a training session. They should be asked to give their perceptions of how well the other participants "got the message," especially on the issues of attitudes and sensitivity toward victims.

A very widely accepted approach to evaluating training programs has been developed by Kirkpatrick (1996), who outlines four levels of measurement to address the range of potential training effects. Although many training evaluations focus on only one or two of these levels (reaction assessment and learning), he argues that to truly assess the outcome of training, one should gather data to measure all of the following:

This model can be applied to any training regardless of the setting or topic. Each level of assessment can provide valuable information for developing, evaluating, and revising training curricula. However, assessment strategies such as the type of information to be gathered, timing of the assessment, and the method of gathering information are different at each level.

Table 13.1 shows Kirkpatrick's four levels of evaluation (reaction, learning, behavior change, and problem impact, which in our case means impact on violence against women) and how they apply to the common training goals of knowledge and attitude change, skill acquisition, behavioral intentions and behavior change, organizational change, and impact on victims and on violence. It also shows the most relevant timing for evaluation at each level, and the importance of taking direct and indirect external influences into account. Immediate assessments are those done during a training session or just before everyone goes home. Short-term follow-up assessments are done after trainees are back on the job for a while, but usually within the first two or three months, possibly up to six months, after training. Long-term follow-up assessments can be done months or even years after the training. Timing will depend on how long you think it will take for a particular impact of training to occur. Changes in trainees' own behavior might be expected to occur relatively quickly (within the first few months), whereas changes in colleagues' behavior might take up to a year to catch on, and institutional change could take several years (although there would be signs along the way, such as resource commitment, planning committees, protocol development committees, etc.). We discuss each level of evaluation shown in Table 13.1, including the measures that are most commonly used in relation to each training goal at each level.

Table 13.1
Options for Evaluating Training Activities

Evaluation Level: Reaction
Training Goals: Perceived knowledge gained; perceived attitude change; perceived skills acquired; new behavioral intentions
Timing: Immediate

Evaluation Level: Learning
Training Goals: Knowledge gained; attitude change; skills learned; plan in place for carrying out behavioral intentions
Timing: Before; immediate; short term; long term

Evaluation Level: Behavior Change
Training Goals: Skill use on the job; own behavior change beyond use of skills; changes in behaviors of others; changes in "behaviors" of organizations (see Chapters 8, 9, 10)
Timing: Short term; long term

Evaluation Level: Impact on VAW
Training Goals: Improved victim outcomes (see Chapter 7); reduced levels of VAW in the community (see Chapter 11)
Timing: Long term

Level 1: Reaction Assessments

The most basic level of impact evaluation focuses on capturing immediate, on-the-spot reactions to different aspects of the training, providing a measure of "customer satisfaction." However basic, it is an important method of obtaining information that will help training organizers improve training programs for future presentations.

The information gathered usually asks for the trainee's judgment about the content of each session, the materials used, the instructor's ability to present the material, the relevance of the material to the trainee's job, and the training facility. Examples of "reaction assessment" questions, to be answered on a scale from 1 = strongly disagree to 5 = strongly agree, follow (a short tallying sketch appears after the list):

  1. The content of the session was relevant to my professional needs.
  2. Overall, the information provided in the session was practical.
  3. The time allocated to the subject was adequate.
  4. The session enhanced my knowledge of the subject.
  5. The instructor was very good at getting his/her points across.
  6. The room where the session was held was adequate (big enough, warm/cool enough, appropriate arrangement of tables and chairs, easy to see the board/screen/flip-chart, etc.).
  7. I learned a lot that I needed to know.
  8. The session changed the way I will think about (_________) in the future.
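Tallying these questionnaires is simple; the minimal sketch below (with invented ratings and shortened item names) computes the mean rating per item across respondents, which is usually enough to spot the weak sessions.

    # Minimal sketch: mean reaction rating (1-5) per item across
    # respondents. Data and item names are hypothetical.
    responses = [
        {"relevant": 5, "practical": 4, "time_adequate": 2},
        {"relevant": 4, "practical": 4, "time_adequate": 3},
        {"relevant": 5, "practical": 3, "time_adequate": 2},
    ]

    for item in responses[0]:
        ratings = [r[item] for r in responses]
        print(f"{item}: mean = {sum(ratings) / len(ratings):.2f}")
    # A low mean (time_adequate = 2.33 here) flags a session to rework.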

All evaluation at the level of reaction assessments is immediate. One can assess reactions to the presentation of materials, regardless of what the materials were trying to do (that is, whether they were trying to convey knowledge, change attitudes, change behavior, etc.). One can assess trainees' beliefs that they learned something new, their perceptions that efforts to change their attitudes or feelings about a topic succeeded or failed, their beliefs that they have acquired new skills, and their intentions to change their behavior once they get back on the job.

Even with reaction assessment, background and external factors (Columns A and C of your logic model) can play a role. To understand how these factors might affect reactions to training, you would also gather information about the ones you thought might make a difference and analyze your results against these factors. Some background and external factors will need to be collected through questionnaires completed by trainees (including their application for training), while others will be obvious to anyone who attended the training. For reaction assessments, the major background and external influences one would want to guard against are:

Since a reaction assessment gathers immediate perceptions and feelings, it is usually conducted during the training and immediately prior to the trainees' departure by distributing and collecting training evaluation questionnaires to be completed by all participants. These questionnaires can be done for each session, for each day, and for the training program as a whole, depending on the level of detailed feedback you want to receive. A session questionnaire should assess immediate reactions to content and presentation of each individual session, including the adequacy of opportunities to practice new learning if that is one of the goals of training. A combination of forced-choice (e.g., yes/no, agree/disagree) and open-ended questions that give people space to write in comments will facilitate participant feedback. A daily questionnaire should assess reactions to the balance of types of sessions (plenary, didactic, practicum, coalition-building, etc.), order of sessions, quality of presenters, and so on. When the training extends beyond one day, an overall training evaluation questionnaire should be used to gather participant assessments of the training program as a whole, the quality of written materials, format, schedule and organization of the course, trainers, and facilities. If the session is one day or less, this information should be requested on the daily questionnaire, which will serve also as the overall questionnaire. In an effort to encourage full and honest participation, such evaluations are typically submitted without names or other identifying information.

While one usually gets many completed questionnaires from a reaction assessment, it provides only limited information on the effect of the training. Its findings generally do not provide information about goal achievement, because a reaction assessment cannot capture even the most basic impacts such as the amount of learning or attitude change that occurred. Nor can it register how much of the newly acquired knowledge actually stays with the participant over time or how well it can be and has been applied in the field.

Reaction assessments are very much like the little card one is often asked to complete in a restaurant to indicate how much one liked the service and food. You can indicate being pleased or displeased with the facility, the waiter, and the food, but there is no opportunity to add comments when the food did not agree with you several hours later. Some feedback of this immediate type is important, but it cannot give the whole picture, nor does it capture the most important potential impacts of training. Immediately following training, participants may feel that they learned a great deal and may indicate this in a questionnaire administered on site. This result reflects the participants' perceptions of their learning, but the reaction assessment does not actually test that learning, nor does it assess whether the learning is applied on the job. Thus, a reaction assessment cannot adequately evaluate whether a training program accomplished any of its goals (beyond merely having gotten through the training day).

Level 2: Learning Assessments

An essential component of a training evaluation is an assessment of what participants learned. New knowledge and new attitudes are always primary training goals, and are also the basis on which trainers expect that trainees may change their behavior in the future.

Learning can be measured by assessing changes in participants' knowledge, attitudes, or skill levels from immediately before to some time after a training program. Such pre- and post-test measures of how well participants understood the material covered in the training program provide an objective indication of whether learning took place. Ideally, pre- and post-test assessments would be conducted with a group of training participants and with a comparison group who did not participate in training. By examining the differences between the two groups, evaluators can assess on the post-test whether participation in the training program improved participants' performance. However, few training programs have the funds and staff to use comparison groups. As a result, learning assessments are generally limited to those who participate in the training.
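Where a comparison group is available, the usual computation is a difference-in-differences: the trained group's average pre-to-post gain minus the comparison group's gain over the same period. A minimal sketch with invented test scores:

    # Minimal sketch: difference-in-differences on knowledge test
    # scores (invented data). A positive result suggests the trained
    # group gained more than the comparison group.
    def mean(xs):
        return sum(xs) / len(xs)

    trained_pre, trained_post = [62, 55, 70, 58], [81, 74, 88, 79]
    compare_pre, compare_post = [60, 57, 68, 61], [63, 59, 70, 64]

    effect = ((mean(trained_post) - mean(trained_pre))
              - (mean(compare_post) - mean(compare_pre)))
    print(effect)  # 16.75 points attributable (tentatively) to training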

As mentioned earlier in this chapter, for training related to violence against women in particular, attitude change and increased sensitivity to victims must be an essential goal. There are some general points of sensitivity that should be included in any STOP-funded training evaluation, including the following:

Questions assessing attitudes can be constructed to fit the specific work situations of the trainees. For example, in a police training focused on new laws specifying mandatory arrest of batterers, important attitudes to address in both training and evaluation are those that have been shown to reduce the likelihood that police will make an arrest (Ford, 1987). These include ideas about the seriousness and criminality of battering, whether some women are "unworthy" of police protection, whether women have acted on their own behalf, and whether battering is a "private matter." One can assess these attitudes on 5-point scales (e.g., 1=agree strongly, 2=agree somewhat, 3=neither agree nor disagree, 4=disagree somewhat, 5=disagree strongly; or, for a factor that may be present in the situation, 1=strongly favors arrest, 2=somewhat favors arrest, 3=irrelevant to arrest, 4=somewhat against arrest, 5=strongly against arrest). The first group of items shown below uses the agree-disagree scale; the second group uses the favors-against arrest scale:

The same types of attitudes are also relevant to prosecutors' decisions not to prosecute, judges' decisions to reduce sentences or waive punishment, and other actors' decisions about whether or not to support a battered woman. Parallel attitudes affect the decisions of police, prosecutors, judges, juries, and potential support people in cases of sexual assault. Attitude scales related to both sexual assault and battering are described in Chapter 11, where you can see other examples of attitudes that are important to include in training and to measure because they make a difference for people's behavior toward women victims of violence.

To assess learning, the content of a pre- and post-test evaluation instrument must be completely consistent with the training course content. The questions should match the most important elements of the information the training organizers were trying to convey. The most typical format is a combination of true/false and multiple-choice questions based on material covered in the course. It is best to include both fact-based questions and questions that require applying information to a problem situation. If attitude change was one of the training goals, questions should cover current beliefs related to the topics about which trainers hoped to change attitudes. If skill acquisition is a training goal, paper-and-pencil measures may not be enough to measure training success. In addition to participants' self-reports that they learned new skills, evaluators may wish to set up situations in which they can actually observe trainees to see whether they are using the skills. This can be done in practice sessions during training, and also in the participants' regular job situation at some time after training. Finally, paper-and-pencil assessments toward the end of a training program can ask participants about their behavioral intentions: what they intend to do when they get back to their jobs, how it differs from their usual behavior, and how they plan to introduce these new behaviors into the workplace.

Timing is a critical dimension of learning assessments. You will want to collect data before training to assess "pre" levels of knowledge, attitudes, and skills. You will also certainly want to collect data at the end of training, to provide the easiest form of "post" evaluation data. However, bear in mind that "after" training covers a lot of time. It is very important for a training evaluation to try to include some assessment "down the road," as the effects of knowledge and attitude change are known to wear off quite rapidly if they are not quickly reinforced with behavior change. So a good evaluation of training should try to assess learning retention two or three months after training, and should also track whether by that time participants have changed their behavior in any way that fulfills the behavioral intentions they stated at the end of training.
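A minimal sketch of such a retention check, again with invented scores, compares the gain measured at the end of training with the gain still present at a follow-up some months later:

    # Minimal sketch: how much of the immediate learning gain is
    # retained at a ~3-month follow-up (invented cohort scores).
    def mean(xs):
        return sum(xs) / len(xs)

    pre = [58, 61, 55, 64]
    post = [82, 85, 78, 88]        # end of training
    followup = [74, 80, 70, 79]    # about three months later

    gain = mean(post) - mean(pre)
    retained = mean(followup) - mean(pre)
    print(f"immediate gain {gain:.1f}; retained at follow-up {retained:.1f}")
    # If 'retained' is much smaller than 'gain', learning is wearing off.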

Since the measurement of learning is linked to the topic presented in a course, few measures can be applied to every training evaluation. We can, however, give an example in which judges were taught the contents and implications of a state's victim rights legislation. Measures of increasing levels of learning and how they can be established for this subject matter cover the following (Bloom, 1956):

If the purpose of a training program is to increase the skills of participants in a particular area, a performance-based pre- and post-test evaluation may be utilized on site to collect immediate information about the effectiveness of the training and to provide immediate feedback to the trainees. Evaluators using this approach can assess learning, attitude change, and behavior change as a result of training. For example, if crisis intervention hotline volunteers are being trained to answer hotline calls, one method of evaluating their skills acquisition and proficiency in using them would be to have each trainee at the beginning of training handle a number of mock phone calls that present a variety of crises, and do this again at the completion of training. A trainer could observe and rate the responses of the trainee. In this instance, the actual performance of handling a representative range of phone call scenarios would provide trainers with valuable information about the individual trainee's learning and level of skill development, as well as strengths and weaknesses in the training program.

Pre- and post-test learning assessments generally are not anonymous, because they often are used to provide specific feedback to individual training participants. If you do not want or need to give this type of feedback, anonymity can be preserved by assigning an identification number to each trainee and attaching the number to pre- and post-test learning assessments and any other follow-up assessments.
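The bookkeeping for this is straightforward; in the minimal sketch below (hypothetical IDs and scores), pre- and post-test records carry only the assigned ID, and the evaluator joins them on that ID to compute each trainee's change without ever handling names.

    # Minimal sketch: linking anonymous pre- and post-test records by
    # an assigned trainee ID (IDs and scores are invented).
    pre = {"T01": 55, "T02": 70, "T03": 62}    # id -> pre-test score
    post = {"T01": 78, "T03": 80, "T04": 66}   # T02 missed the post-test

    for trainee_id in sorted(pre.keys() & post.keys()):
        print(f"{trainee_id}: {post[trainee_id] - pre[trainee_id]:+d}")
    # T01: +23, T03: +18; unmatched records drop out of the comparison.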

Background and external factors (Columns A and C of your logic model) may affect learning from a training program. Background factors—factors the trainees bring with them—are the only ones likely to affect immediate assessments. You may want to collect data from trainees through registration materials or a background sheet completed at the beginning of training to help you assess whether background, pre-training knowledge, or existing skills influenced the impact of training. Both background factors and external factors can influence learning assessments taken at some time after the training ends. The persistence of a supervisor's or work group's mistaken knowledge or negative attitudes may overwhelm any attempts of trainees to put new learning into place, just as lack of resources in the community may hinder trainees from establishing new cooperative or referral behaviors. An evaluation should try to learn about as many relevant external influences as possible. You can often use trainees' perceptions to identify some of these, especially when it seems likely that there will be little long-lasting effect of training and you are asking them "why not?" In addition, you could try to get "outside" views of "why not?" by asking other informants in the workplace, department, or community.

Level 3: Behavior Change Assessments

Assessments of behavior change try to identify whether the knowledge gained during training is actually applied later, so these changes must be measured some time after the training takes place. It is also important to point out that evaluations of behavior change should build on previous assessment levels. If the learning assessment, for example, indicates that trainees did not learn very much, it is unlikely that behavior will change significantly; if such a change occurs nevertheless, it is likely to be the result of some external factor. Conversely, if the learning assessment indicates that a high degree of learning occurred but the behavior assessment shows little impact, the evaluator should look for external factors that prevent trainees from applying on the job what they learned in training.

Changes in behavior can be captured through pre- and post-training inquiries directed to the training participant, or to his or her peers, supervisor, or clients (e.g., victims). Table 13.2 gives an example of questions posed to police officers who received training in responding more effectively to domestic dispute calls. The officers answered on a scale of 1 = never to 5 = almost always, and the pre- and post-training results were later marked on the same questionnaire to create a visual display of areas of improvement.

Such an instrument is also helpful if behavior changes are to be observed, either in a mock situation or in the field. An observation guide with similar indicators for trainee behavior is provided to the observer (who could be a supervisor or researcher), who notes the rating before the training and after. The results are then compared to indicate changes.

 

Table 13.2
Comparing Responses Before and After Training
(Response scale: 1 = never, 2 = rarely, 3 = sometimes, 4 = often, 5 = almost always)

How often do you:                                                Pre-training   Post-training
...encourage the victim to seek medical attention?                    4              5
...explain to the victim and the assailant that the
   arrest decision is required by law?                                3              5
...discuss a safety plan with the victim?                             2              4
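The same before-and-after comparison can be tabulated automatically. This minimal sketch mirrors the illustrative ratings in Table 13.2 and prints the change on each indicator:

    # Minimal sketch: change on each pre/post indicator (1 = never ...
    # 5 = almost always). Ratings mirror Table 13.2 and are illustrative.
    indicators = {
        "encourage medical attention": (4, 5),
        "explain arrest is required by law": (3, 5),
        "discuss a safety plan": (2, 4),
    }

    for name, (before, after) in indicators.items():
        print(f"{name}: {before} -> {after} ({after - before:+d})")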

Another avenue for identifying the pervasiveness of behavior change following training is to look at institutional documents. Some of these may reveal whether the lessons of training have penetrated the daily life of trainees. For example, many police departments have formal field training programs for new officers (that is, after they finish with the academy, officers receive further training in their own department in conjunction with assuming their role as police officers). This field training often is guided by manuals that specify certain skills that police must be able to demonstrate. An evaluator of a training program who wanted to see whether behavior had changed in the field could examine these field training manuals to see whether new training being given at police academies has permeated through to field training.

Level 4: Problem Impact Assessments

Indicators of a training program's impact on the ultimate issues of violence against women, such as increased victim safety and reduced overall levels of violence against women, are the same as those described in other chapters of this Guidebook, especially in Chapters 7, 8, 9, 10 and 11, and are not repeated here. However, since the chain of reasoning that links training activities to these ultimate outcomes is very long and complex (as the logic model you develop for your training program should make clear), your main challenge as an evaluator will lie in establishing the connection between training provided and changes in a problem situation as revealed by the indicators you select. Your logic model will help you decide where you have to look for impact, and what steps and external influences you will have to document along the way. In general, it is not likely that you will be able to attribute any changes in these ultimate impacts to training, or to training alone. The resources it would take even to attempt this would probably be better spent on improving your ability to measure outcomes that are a bit more connected to the training itself.

Assessing External Influences

If, despite what we just said, you still wish to include measurement of ultimate impacts in your training evaluation design, you will have to do an excellent job of measuring external influences. Therefore, we have saved the greatest part of our discussion of these influences for presentation after considering Level 4 evaluations, although they also apply to the long-term assessments of behavior change in Level 3, especially if you want to assess changes in the behaviors of others or of institutions.

External influences might include direct factors such as the characteristics of the people who received the training. Adult learning theory recognizes the many types of learners and great diversity in education levels, life experiences, goals, abilities and disabilities among learners entering adult training (Thermer, 1997). The different levels of knowledge that participants bring to training have an impact on how well they will be able to apply what is taught when they are back on the job, and as a result, influence to what extent a training program has the desired effect.

Other external influences include indirect factors such as other training efforts, resources to implement what is learned, resistance or cooperation from others to implementing what is learned, financial and legal barriers to implementing what is learned, and so on. Our earlier example of a training program whose long-term goal was to increase victim safety through better cooperation between police and victim advocates is a case in point. Even if cooperation actually improves dramatically, the ultimate goal of increased victim safety may not be reached because other resources to accomplish this goal were not available or were reduced.

It is also important to document external influences that operate simultaneously with training to produce the ultimate outcome. Again using our example, the training to increase cooperation may not have been very effective, but shortly after the training took place the police department and victim service agency issued guidelines requiring officers and victim advocates to work closely together, and supervisors in both agencies made every effort to implement this new guideline. Evaluation would most likely show that the goal of training was achieved, but it would be wrong to attribute to training more than a facilitative role in providing the skills and attitudes that helped individual officers and advocates make this transition more smoothly.

The following list, while not exhaustive, shows many frequently observed external influences that would be considered to be indirect effects in evaluations of training impact:

Other Issues

 

Timing of Training Impact Evaluations

Depending on the type of training program, its impact can be evaluated continually or periodically. An important consideration in deciding when a training evaluation needs to occur is how soon the impact can realistically be expected to appear. You need to consider that some goals can be achieved only over time. If we again look at the example of a training program that promotes learning designed to increase law enforcement's cooperation with other agencies so that women victims will receive better protection, it is obvious that the first goal, learning, is the one you can assess closest to the time of training. Increased and improved cooperation is more likely to be achieved a bit later, but still sooner than the goal of greater protection for female victims.

To identify whether impact lasts, you have to conduct your assessment a while after the training occurred. Usually such post-tests are scheduled within six months to one year after the training. The later the assessment is conducted, the higher the likelihood that the impact of the training is "contaminated" by too many other influences (e.g., on-the-job experience) and the results can no longer be directly related to the training. It also becomes more difficult to track the participants, especially in law enforcement settings where assignments are rotated frequently or in settings with high turnover. In one study assessing the impact of a training program to assist law enforcement and prosecution in developing an asset forfeiture program, almost 40 percent of the law enforcement staff who participated in the training had moved into other assignments by the time of the one-year follow-up assessment.

When you should conduct a post-training assessment also depends on the type of training provided and the goals to be achieved. Long-term goals can only be assessed on a long-term basis. Among the training programs funded under the VAWA STOP grants, this caveat about timing applies especially to efforts to increase cooperation among different agencies, a longer-term goal that requires sufficient time to materialize.

Methods to Evaluate Training

We have already made a number of suggestions about methods for assessing training impact at each level of evaluation. Selecting the appropriate method of evaluating VAWA-related training involves technical issues such as how best to measure changes in attitudes, knowledge, behaviors, and actual impact on a specific problem. As with any program evaluation, training impact can be measured through surveys, observations, focus groups, and analysis of agency records. Method selection will also depend on practical issues such as budget constraints and timelines.

Traditionally, how training participants react to training, what they learned, how they changed attitudes or behavior, and to what extent a specific problem was affected by a training session is measured through a written or oral survey. This survey can elicit information from the individual being trained, and also from others with whom the training participant interacts such as a supervisor, a co-worker, or a woman victim of violence (a client).

Another way to evaluate training impact is to conduct process evaluation. This would include observing the participant in real-life situations, where you can see how well tasks that were addressed in the training are actually handled. This observation can be made by an independent researcher or expert, or by a peer or supervisor. Another mechanism is a focus group with supervisors, co-workers, or clients to identify whether the training had the desired impact. These mechanisms provide qualitative information.

Other assessment approaches can focus on collecting quantitative data from agencies involved in violence against women issues (law enforcement, prosecution, victim services, courts, and others). Such data can include meeting records (if more coordination was a goal of the training), agency case records (if an increased number of protection orders was a goal of the training), and other similar "hard" data. Both qualitative and quantitative data collection have their benefits and drawbacks. Generally, a combination of both provides the most complete picture.

Using Training Evaluation Results to Improve Services

Chapters 4 and 5 discussed in general some of the ways that programs can use evaluation data to improve their own agency functioning. In this chapter we want to focus on specific applications of training evaluations to improve programs and services for women victims of violence, including monitoring training progress, adjusting the subject matter or presentation style to the target audience, and justifying further use of resources for training. To ensure that a training program is up-to-date and continues to deliver the information required to the right audiences, a number of steps should be implemented, including (1) revising training goals and evaluation measures, (2) developing a feedback loop, and (3) implementing a systemic change mechanism (Levin, 1974).

For a training program to deliver the needed services, training providers continually need to assess whether the goals, techniques, and materials of the training program are still adequate to the needs of the target population. If the goals, techniques, or materials change, the measures used to identify impact need to be revised accordingly. To ensure that the training continues to be relevant, it is helpful to establish an ongoing mechanism to receive feedback not only from those who are currently participating in training, but also from those who have participated in the past and those who may participate in the future. It might also be informative to include those who benefit from the training indirectly, such as co-workers and supervisors, and the clients who actually receive the services that the training is trying to improve.

All of the information collected to identify training impact, to revise the training program, and to receive continuous feedback on training needs and the adequacy of the training provided is helpful to ensure that the training delivered is the training needed. There is, however, a larger issue that needs to be addressed to ensure that training resources are spent efficiently and effectively—the issue of ensuring that the knowledge and skills learned in a training course can actually be applied. Training providers need to identify factors that prevent trainees from using what they learned and, where possible, incorporate into the training ways that trainees can overcome these obstacles. Evaluation can help you identify these external influences and may point the way to how they can be handled.

Addendum: Readings on Training Evaluations

Other fields have developed an extensive body of literature regarding the evaluation of training programs that may be helpful to criminal justice professionals. These include business and industry (Burkhart, 1996; Ezzeddine & Holand, 1996; Holcomb, 1993; Jackson, 1989; Rothwell, 1994); adult education (Galbraith, 1997; Lea & Leibowitz, 1992; Moran, 1997); vocational programs (Stecher & Hanser, 1992, 1993, 1995); and literacy (Leef & Riddle, 1996). The following references provide some resources for those interested in further reading on training evaluations (see also the addendum to Chapter 6 for readings on general evaluation issues).

Basarab, D. Sr., & Root, D. (1992). The Training Evaluation Process. Norwell, MA: Kluwer.

Bloom, B.S. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook I: Cognitive Domain. New York: McKay.

Brinkerhoff, R. (1987). Achieving Results from Training. San Francisco: Jossey-Bass.

Burkhart, J. (1996). Evaluating Workplace Effectiveness. Denver: Colorado State Department of Education, State Library and Adult Education Office.

Dugan, M. (1996). Participatory and empowerment evaluation: Lessons learned from training and technical assistance. In Fetterman, D.M., Kaftarian, S.J. and Wandersman, A. (Eds.), Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability. Thousand Oaks, CA: Sage.

Ezzeddine, A., & Holand, K. (1996). How we developed our professional development programs: Two models and why they work. Small Business Forum, 14(2), 40-53.

Ford, D. (1987). The impact of police officers' attitudes toward victims on the disinclination to arrest wife batterers. Paper presented at the Third National Conference for Family Violence Researchers, University of New Hampshire, July 1987. Indianapolis: Indiana University at Indianapolis.

Galbraith, M. (1997). Administering Successful Programs for Adults: Promoting Excellence in Adult, Community, and Continuing Education. Professional Practices in Adult Education and Human Resources Development Series. Malabar, FL: Krieger Publishing Co.

Holcomb, J. (1993). Making Training Worth Every Penny. Playa del Rey, CA: On Target Training.

Jackson, T. (1989). Evaluation: Relating Training to Business Performance. San Diego, CA: University Associates.

Kirkpatrick, D.L. (1986). More Evaluating Training Programs. Alexandria, VA: American Society for Training and Development.

Kirkpatrick, D.L. (1993). How to Train and Develop Supervisors. New York: AMACOM.

Kirkpatrick, D.L. (1996). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.

Lea, H.D., & Leibowitz, Z.B. (1992). Adult Career Development: Concepts, Issues, and Practices. Alexandria, VA: National Career Development Association.

Leef, C., & Riddle, T. (1996). Work Together, Learn Together: A Model That Incorporates Community Partnerships. Peterborough, Ontario, Canada: Literacy Ontario Central South.

Levin, H.M. (1974). A conceptual framework for accountability in education. School Review, 82(3), 363-391.

Moran, J. (1997). Assessing Adult Learning: A Guide for Practitioners. Professional Practices in Adult Education and Human Resources Development Series. Malabar, FL: Krieger Publishing Co.

Phillips, J. (1991). Handbook of Training Evaluation and Measurement Methods, 2nd Edition. Houston: Gulf.

Rothwell, W. (1994). Beyond Training and Development: State-of-the-Art Strategies for Enhancing Human Performance. New York: AMACOM.

Stecher, B.M., & Hanser, L.M. (1992). Local Accountability in Vocational Education: A Theoretical Model and its Limitations in Practice. Santa Monica, CA: RAND.

Stecher, B.M., & Hanser, L.M. (1993). Beyond Vocational Education Standards and Measures: Strengthening Local Accountability Systems for Program Improvement. Santa Monica, CA: RAND.

Stecher, B.M., & Hanser, L.M. (1995). Accountability in Workforce Training. Santa Monica, CA: RAND.

Thermer, C.E. (1997). Authentic assessment for performance-based police training. Police Forum, 7(3), 1-5.


CHAPTER 14
DATA SYSTEM DEVELOPMENT

by Susan Keilitz and Neal B. Kauder 1

This chapter provides guidance on how to evaluate data systems developed or revised to improve system responses to domestic violence and sexual assault. The information presented in this chapter is drawn primarily from the experience of designing, implementing, and evaluating criminal justice system databases. Lessons learned in this context are closely related and readily applicable to the evaluation of data systems developed under VAWA STOP grants. However, these lessons are somewhat generic and therefore will not address every issue that is likely to arise in the evaluation of data systems funded by STOP grants. VAWA STOP grantees therefore should keep in mind the following considerations as they adapt the information presented here for use in their evaluations:

All VAWA STOP evaluations of data systems, regardless of the system type and purpose, should document information on five factors that might affect the development or revision of a data system. These factors follow:

Evaluation Methodology


Chapter 6 of this Guidebook addresses methodological issues related to conducting evaluations of STOP grant projects. One of these issues is the value of having information from a time period before the project began against which to compare information gathered in the evaluation. Evaluations of data systems may be more likely than others to suffer from the absence of pre-project information, because in many instances no data system existed to which the new system could be compared. New automated systems can be compared to the manual systems they replaced, however. Likewise, the effects of a new system can be assessed by comparing whether and how particular functions were performed before and after the system was implemented.

One way to gather information to use in an evaluation of a data system without incurring the expense of collecting pre-project data is to conduct a users' needs assessment before a system is designed or restructured. The results of a needs assessment can serve as an objective baseline for measuring how a system is meeting the needs of its users. Among other things, needs assessments typically identify who intends to use the data system, what pieces of information are required, and how the information will be used. Whether or not specific groups of people have received the information they need is a critical measure for assessing the utility of the system.

The lack of pre-project comparative information should not discourage evaluation of data system projects. This chapter suggests ways that VAWA STOP grantees can conduct an evaluation that provides valid and useful information about how well a data system is functioning and how it might be improved. To accomplish this goal, evaluations of data systems should involve at least three basic components, discussed below:

Potential sources of evaluation information and methods for gathering the information are discussed in Chapter 6. Many of the sources described in that chapter should be tapped in an evaluation of a data system. These include reviews of existing system documentation and reports generated about or by the system; interviews with and surveys of system developers, participants, and users; and audits of the system that compare data from the original source with the form and content of the data in the system for accuracy and completeness.

Logic Model for a Data System Project

As with Chapter 13 (training), this chapter addresses issues in evaluating a particular type of project, and also lends itself to developing a logic model. Exhibit 14.1 shows a schematic logic model for a data system project, without a lot of detail. Column A contains background factors which, in this case, are mostly characteristics of systems, although some individual characteristics may also be relevant. Column B is also a bit different from other logic models in this Guidebook, as it shows the steps in developing the data system, as well as the operating characteristics of the system itself. As with the training evaluation, the steps and operating characteristics should be documented with the techniques of process evaluation.

Column C identifies some common external factors that can enhance or undermine the development of a data system and its ultimate usefulness. Column D identifies the obvious primary goal of having the intended users actually use the system. It also identifies a variety of system-level impacts that might, in the long run, be influenced by the availability and use of a good data system.

The remainder of this chapter discusses the wide variety of variables that might be used for different parts of this logic model, and gives examples of the topics one might cover and the questions one might ask.


[Exhibit 14.1: Logic model for a data system project]

Documenting the Purposes and Operations of the Data System

This component of an evaluation lays the foundation for the other components of the evaluation. It should provide you with a thorough understanding of the technical and operational features of the data system. This understanding should guide the evaluator in identifying issues to explore further and what information sources will be needed. For example, if a purpose of the system is to produce a monthly report for the prosecutor and the court, you will know to inquire of prosecutors and court staff whether these reports are useful and produced in a timely manner. If a users’ needs assessment was completed before the system was implemented, you should be able to conduct this component of the evaluation in less time and with fewer resources.

The first task in this evaluation component is to gain a basic understanding of the system by reviewing documentation of the system’s technical requirements, data dictionaries, and any background reports. You should be able to get this information from the individuals responsible for the design of the data system and from those responsible for managing it. The ease or difficulty of obtaining system documentation is itself a measure of the quality and utility of the data system.

The next step is to gather information from the people who participated in developing and operating the system and those who use it. You will need to interview the system designers and developers, staff who enter and delete information from the system, system managers, and system users. If several individuals or groups use information in the system, mail or telephone surveys may be an efficient way to gather information from the universe of users. It is important that you include all users of the system in this part of the evaluation.

Finally, functional diagrams of the data system should be constructed. These system or functional diagrams serve as a visual guide to the system and to the evaluation. They can help you understand the flow of information through the system as well as focus on and record how the system functions at each juncture. These diagrams also will facilitate the documentation of system changes in the second and third components of the evaluation (see below).

The system review and interviews should address the following issues:

What are the intended purposes of the system?
Who was involved in the design and implementation of the system?
What resources supported system development and implementation?
What are the technical requirements of the system (e.g., hardware and software used and compatibility with other systems, field lengths, data transmission capabilities)?
What information is contained in the system? Can the information be accessed as individual case records, or can only aggregated summaries be produced? Can individual variables within the system be accessed and analyzed?
Who enters information into the system? When and how is information entered? How is information updated? How and when is information audited?
Who uses the information in the system? How is the information used (e.g., to produce periodic or occasional reports; to track activity daily, monthly, yearly; to exchange with others; for evaluation)?
What individuals or groups can access the information, and how is this accomplished?
What measures are in place to ensure confidentiality of the information in the system?
Which, if any, systems are linked, and how?

Measuring the Accuracy, Reliability, and Utility of the Data System

The second component of an evaluation of a data system builds on the information gathered while documenting the purposes and operation of the data system. As in the first component, this component entails interviews or surveys of system users. It also involves reviews of original sources of information contained in the system to determine the degree to which this information matches the information in the system. You may be able to obtain the information on user perceptions of the system while gathering information in the first evaluation component.

Assessing User Perceptions of Accuracy, Reliability, and Utility

User perceptions of data system quality and utility typically are measured through surveys and interviews to ensure broad representation of users at various agency or occupational levels. Comparing the results of an initial needs assessment with the results of follow-up satisfaction interviews allows you to gauge the degree of change related to the actual data system implementation. Regardless of whether an initial needs assessment was completed, questions can be posed about former and current use of the data system to identify the benefits and problems associated with system changes. The following items typically are addressed in an assessment of user satisfaction with the data system:

Assessing Data Quality

The process for tracking the accuracy and reliability of a data system will vary according to the level of specificity and detail needed or desired from the outcome measures. For example, it may be useful to know how often agencies fail to record dispositions for domestic assault arrests in the data system, but it may be even more helpful to determine which types of domestic assault cases are most likely to have missing data of this type. A jurisdiction may find that information fields are filled out more often in felony cases that are handled in the court of general jurisdiction than they are when the cases are misdemeanors heard in limited jurisdiction courts. It may be as important to describe the nature or accuracy of an arrest or disposition entry as it is to determine whether it has been entered.
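
To make this concrete, the sketch below (in Python) computes missing-disposition rates by case type and court level. It is a minimal illustration only; the field names and case records are hypothetical, not drawn from any actual system.

    # A minimal sketch of one data quality measure: the rate of missing
    # dispositions, broken out by subgroup. All records are hypothetical.
    from collections import defaultdict

    cases = [
        {"case_type": "felony", "court": "general", "disposition": "guilty plea"},
        {"case_type": "misdemeanor", "court": "limited", "disposition": None},
        {"case_type": "misdemeanor", "court": "limited", "disposition": "dismissed"},
        {"case_type": "felony", "court": "general", "disposition": None},
    ]

    totals = defaultdict(int)   # cases per (case type, court level) subgroup
    missing = defaultdict(int)  # cases in that subgroup with no disposition

    for case in cases:
        key = (case["case_type"], case["court"])
        totals[key] += 1
        if case["disposition"] is None:
            missing[key] += 1

    for key in sorted(totals):
        rate = missing[key] / totals[key]
        print(f"{key}: {missing[key]} of {totals[key]} dispositions missing ({rate:.0%})")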

The need to evaluate the quality of the data, as opposed to its mere existence, is becoming more important as domestic violence protocols and risk assessment instruments are relying more heavily on data to guide their development. When evaluating a specific data system it is also important to consider data quality across other systems, including law enforcement (arrest), court (disposition), and corrections (treatment/incarceration) data systems. The data system under evaluation may be functioning well but other data systems to which it relates or upon which people also rely may be flawed. This consideration becomes more critical as government officials and communities attempt to efficiently and reliably share or merge data across various databases and organizations (see below).

Considerations for Assessing Data System Quality. Many jurisdictions track measures of data quality as part of an audit or data quality assessment program. For example, managers of criminal history repositories report rates of missing or inaccurate data as part of their ongoing effort to improve criminal history records. Lessons from these experiences provide a few considerations for establishing measures for assessing the quality of other data systems:

Audits are time and labor intensive, however, because they require a manual examination of the source documents. This process can be facilitated by accessing transaction logs (maintained by some jurisdictions) to provide an audit trail of all inquiries, responses, and record updates or modifications. It may be necessary to perform random sample audits, depending upon a number of factors, including the level of resources available to conduct an audit and the extent to which system access crosses agency or jurisdictional boundaries. Pulling a sample properly is important to ensure that the entire universe of cases (all records within a data system) have an equal chance of being selected for audit. In some cases, you may want to select more cases (oversample) from a subgroup of cases or cases with particular characteristics that are more important or that occur with less frequency, such as violent domestic assaults and homicides. (See Chapter 6 for discussion of a number of other considerations concerning sample selection.)
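
As a rough illustration of these sampling ideas, the sketch below draws an audit sample that oversamples a low-frequency, high-priority subgroup. The record identifiers, the "violent" flag, and the sampling fractions are all hypothetical choices, not recommendations.

    # A minimal sketch of drawing a random audit sample with oversampling.
    import random

    random.seed(42)  # fixed seed so the audit sample can be reproduced

    # Hypothetical universe: 1,000 records, about 5 percent flagged violent.
    records = [{"id": i, "violent": (i % 20 == 0)} for i in range(1, 1001)]

    violent = [r for r in records if r["violent"]]
    other = [r for r in records if not r["violent"]]

    # Oversample: audit 50 percent of violent cases, 5 percent of the rest.
    sample = (random.sample(violent, k=len(violent) // 2)
              + random.sample(other, k=len(other) // 20))

    print(f"Auditing {len(sample)} of {len(records)} records "
          f"({sum(r['violent'] for r in sample)} violent)")

Remember that if subgroups are audited at different rates, error rates found in the sample must be weighted back to the full caseload before an overall figure is reported.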

Measuring Effects of the Data System on
Behavior, Performance, and Policy Change

In most scenarios, a new or redesigned data system can be considered effective if there are positive changes in the behavior or performance of those who directly or indirectly access the system. Benefits may be quite obvious, as is the case with quicker responses to information requests or queries, or more subtle, as in the way information is formatted or presented to users.

One method for determining whether there have been changes or improvements in the behavior or performance of users involves constructing detailed diagrams of the various access points of a data system. These diagrams can depict how improvements at one stage of a data system can aid others at lateral or later points, both in the data system and in the responses of the justice system and service providers. Identifying whether information gathered or improved at one segment of the system has benefited those involved in other segments is critical. The degree to which information reporting at the front end of a data system affects those at the back end should be communicated clearly to all system users, because those at earlier stages can rarely see the connection between their job performance and overall justice system improvements.

The diagrams can guide the selection of individuals across the system(s) to interview or survey. These system users may include data entry personnel, law enforcement officers, prosecutors, judges, court staff, probation officers, treatment providers, shelter staff, legal services providers, and other victim services providers. Information from these interviews or surveys can be combined with official justice system statistics (e.g., arrests, charges, dispositions, protection orders issued, enforcement actions, sentences) to more accurately describe the impact of the data system on behavior and policy change.

There are a number of indicators that data system changes have affected user behavior, system responses, and policy improvements. A few examples are described here.

Changes in User Behavior, Job Performance, and System Responses

Those who access an information system as part of doing their jobs are acutely aware of changes in data accessibility, interface formats, and the quality of information stored. For example, as a result of improvements in a criminal history repository, law enforcement may conduct background checks on suspects at the earliest stages of the arrest or investigation process. They may also indicate a greater degree of confidence in the information they obtain from these data requests. As a result, officers may be more aware of outstanding warrants or civil protection orders, which in turn leads to more accurate and complete charges. Improvements in protection order registries may result in more arrests for protection order violations, because the specific terms of orders are identifiable in the system. Police may report to you that their job performance has improved, while data obtained from police statistics may show a concomitant increase in arrests or an increase in the frequency of multiple charges.

Likewise, prosecutors may find that improved access and data specificity reduce the time they have to spend calling or waiting for authorities from other agencies to respond to their inquiries. These prosecutors may find that their behavior on the job has changed because they have more time to investigate the facts of a case or to work with victims. An examination of conviction rates before and after implementation of the system may indicate that more cases are successfully prosecuted.
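
A simple before/after comparison of this kind might look like the sketch below. The counts are hypothetical, and, as discussed later in this chapter, an observed change should not be attributed to the data system without ruling out other explanations.

    # A minimal sketch of a before/after comparison of conviction rates.
    # The counts are hypothetical placeholders.
    before = {"convictions": 88, "cases": 240}   # year before the new system
    after = {"convictions": 132, "cases": 250}   # year after the new system

    rate_before = before["convictions"] / before["cases"]
    rate_after = after["convictions"] / after["cases"]

    print(f"Conviction rate before: {rate_before:.1%}")
    print(f"Conviction rate after:  {rate_after:.1%}")
    print(f"Change: {rate_after - rate_before:+.1%}")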

Services Affecting Victims, Offenders, and Other Clients

The full range of benefits derived from improving a data system may not be apparent for several years. For example, using STOP grant funds to integrate data across agencies or organizations may have the most impact after a database on offenders (or victims) has been solidly established. In this instance managers and researchers can begin to more accurately profile those who come into contact with the justice system. Information obtained from this type of analysis can be used for a number of purposes, including enhancing victim services, determining long-term treatment effectiveness, describing related factors and tracking changing patterns in domestic violence and sexual assault incidents, objectively assessing offender risks to public safety at the various criminal justice stages, and assessing the effectiveness of new laws or policies aimed at deterring or preventing domestic violence.

Use of System Data by Government Officials and Others in Policy-Making Positions

Government officials and policy-making bodies are well-established consumers of data analysis and research. The extent to which they access and utilize data from the new or revised system can be assessed in a number of ways, including the following:

Expanding the Scope or Policy Application of Studies

Another indication that a data system is becoming more effective in aiding program development relates to the scope of the studies that are being conducted. For example, risk assessment instruments or interview protocols based on empirical information can be developed only with high-quality, detailed data that describe all aspects of a domestic violence event.

Increased Variety of Organizations Requesting Information and Data

Knowledge of the availability of high-quality data from a justice system often spreads across a variety of organizations. This will be more likely to occur if system data are not only well maintained and analyzed, but also are presented in formats that are readily understood and policy relevant. Legislative staff, judges, and executive branch advisors are the types of officials most likely to be aware of the benefits of an accessible and reliable data system. You may find it useful to survey these groups to test their knowledge of a system and to determine in what ways they use the data. Representatives of these groups can also recommend ways for improving the types of analyses that are conducted and the formats used for presenting results.

Evaluating the relationship between data system modifications and changes in justice and service system responses can be a complex undertaking for a number of reasons. Many data system improvements come at a time when organizations are redefining their roles or changing their structures completely. For this reason, isolating the effects of a data system on behavior and policy change may not be easily achieved. Although statistics and survey information gathered before and after the implementation of a data system may indicate a positive change, the data system alone may not be the cause of the change. For this reason, it is important to control for other intervening factors, and at a minimum, to search for the existence of alternative explanations. Policy makers will benefit most from evaluations that explain both how a data system improved job performance and system responses and how observed successes and failures occurred within the context of broader system changes.

New Approaches to Data Access and Integration

As states and localities move toward improving the quality and utility of their data systems, the need is arising to break down walls among the various departments and agencies that store common pieces of information. To this end, system designers are beginning to approach the problem of data access not from a data "sharing" viewpoint, but from a more seamless data "integration" perspective. In this sense, data from different systems are merged continuously through a network or Intranet environment.

The data integration approach allows individual agencies to maintain basic data structures and formats that have evolved for decades with significant commitments of resources. The recognition is growing that creating a single unified database, centrally housed and requiring each agency to report data in exactly the same format, is an insurmountable task. Instead, the technology exists (through "middleware" software products) to take data from a variety of data systems and merge or store the information in a common format without having to rebuild entire information systems. This new environment still requires data integrity and completeness at the various junctures of the system but offers a technological solution to the existing problems in the transfer of data across systems or agencies.
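
The sketch below illustrates the underlying idea in miniature: records from two agency systems, each with its own field names and date formats, are mapped into one common format without rebuilding either source system. All field names and records are hypothetical, and a real middleware product would perform this kind of mapping at much larger scale.

    # A minimal sketch of mapping records from two differently structured
    # systems into a common format. Everything here is hypothetical.
    from datetime import datetime

    police_record = {"CASE_NO": "99-0412", "ARREST_DT": "04/12/1999", "CHG": "assault"}
    court_record = {"docket": "99-0412", "disp_date": "1999-06-30", "disp": "guilty plea"}

    def from_police(rec):
        return {"case_id": rec["CASE_NO"],
                "event": "arrest",
                "date": datetime.strptime(rec["ARREST_DT"], "%m/%d/%Y").date(),
                "detail": rec["CHG"]}

    def from_court(rec):
        return {"case_id": rec["docket"],
                "event": "disposition",
                "date": datetime.strptime(rec["disp_date"], "%Y-%m-%d").date(),
                "detail": rec["disp"]}

    # Merge into a single chronological case history.
    merged = sorted([from_police(police_record), from_court(court_record)],
                    key=lambda r: (r["case_id"], r["date"]))
    for rec in merged:
        print(rec)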

Because the data integration approach appears to be the direction in which data systems are heading, current STOP grant project evaluations may benefit from an assessment of the possibilities of becoming part of an integrated data system environment. One of the first tasks will be to ensure that those involved in violence reduction initiatives have a voice during deliberations relating to the development of comprehensive integrated information systems. These individuals and the groups they represent are not only in the best position to assess information needs related to victims and offenders, but they are also well situated to know how linking to alternative data sources can help them to perform more effectively. They may also recognize how the information they hold can in turn benefit others.

A thorough discussion of the issues related to integrating data systems is beyond the scope of this guidebook, but VAWA STOP grantees should be aware that the environment is changing. A few questions that might be considered initially are presented here:

Should access to our data be granted to other officials in the judicial process, and will this in turn facilitate access to data held by those other entities?
Are there ways we can design or structure our system to better prepare for future data integration across systems?
Has our system attempted to integrate information across multiple law enforcement, judicial, and correctional platforms?
Do any local or wide area networks or Intranets exist to allow seamless access and sharing of information?
Can we download information for further research and policy analysis?


CHAPTER 15
SPECIAL ISSUES FOR EVALUATING PROJECTS
ON INDIAN TRIBAL LANDS

by Eileen Luna, J.D., M.P.A. 1

The VAWA mandates a set-aside of 4 percent of annual funding to support STOP projects on Indian tribal lands. In addition, providing services to women in Indian tribes is one of the seven purpose areas for which states may use their STOP allocation. Projects run specifically by and/or for women on tribal lands are relatively new activities in the area of violence against women. Therefore it is important that we use evaluation to learn as much as possible about the success of these projects. All of the specific suggestions for conducting evaluations that we have made earlier in this Guidebook are relevant to evaluations of projects on Indian tribal lands. In addition, Indian tribal lands may be subject to some legal and jurisdictional complications that do not apply elsewhere. That, combined with the fact that tribes are sovereign nations with a wide variety of traditions and experiences, means that evaluators would profit from some background before jumping into an evaluation.

This chapter was written to provide some relevant background. Evaluators who wish to work with Tribal governments need to understand something of the history, culture, traditions, and protocol surrounding Indian tribes. They need to know that jurisdiction can be uncertain, varying from place to place, tribe to tribe, and over time. And they need to be prepared to work with Tribal governments that have little or no experience collecting systematic data on crimes, services, or outcomes, or complying with grant conditions that are routine for state and local governments in the United States.

The Indian Tribal Grants Program

Grants to Indian tribes are subject to the same requirements as other STOP grants. The VAWA mandates that STOP grantees devote 25 percent of their grant monies to each of three priority areas: law enforcement, prosecution, and victim services. The remaining 25 percent can be spent at the grantees' discretion. A number of Tribal STOP grantees have elected to expend this discretionary percentage on the development and implementation of elements of tribal court systems.

Within each priority area, the grantees have undertaken to develop training programs as well as appropriate policies, protocols, and procedures; to collect data; and to facilitate communications within and among programs and governmental agencies.

Given the recent development and implementation of most tribal police departments (amounting to a 49 percent increase in two years), 2 the development of appropriate training programs and policies is of critical importance. However, it is unwise, particularly in Indian Country, to take rules and policies directly from other jurisdictions or to train law enforcement and other program personnel without regard to compatibility with the particular tribal culture.

The evaluation of Tribal programs should take into consideration the legal and jurisdictional issues unique to Indian Country. Evaluation should also consider the fact that many Tribal programs, particularly in the fields of administration of justice and victim services, have been created very recently. There may be little history and few models to assist with programmatic development. It is essential that the evaluation team commit to helping the Tribes develop competent and effective programs, while conducting the necessary critique of what has been developed and implemented.

The Administration of Justice in Indian Country

The difficulties inherent in working with, and within, Indian Country, particularly in the field of program evaluation, are often hard to foresee. Mistakes can be made that hinder the development of relationships and trust—factors that are essential components of thorough evaluation. These mistakes can be avoided if one considers the legal framework of Tribal sovereignty, the jurisdictional issues that criss-cross Indian Country, and advisable protocols. The following discussion should help to clarify the legal and jurisdictional issues faced by Tribal programs, explain the protocols that facilitate working in Indian Country, and thus make the evaluation of these programs more comprehensive.

Tribal Sovereignty

Tribes are sovereign and self-governing. Except for express limitations imposed by treaties, by statutes passed by Congress acting under its constitutionally delegated authority in Indian affairs, and by restraints implicit in the protectorate relationship itself, Indian tribes remain "independent and self-governing political communities." A recognition and understanding of the "government-to-government" relationship, which has existed historically and has recently been emphasized by President Clinton in his 1994 Executive Order 3 and subsequently by Attorney General Janet Reno, 4 is necessary if an evaluation is to be fairly and impartially done.

Tribes have the power to pass and enforce laws to prevent violence against Indian women on their reservations. The programs should reflect the cultural circumstances of the particular tribe and be consistent with its history and traditions. They must also address the needs of Tribal members, while supporting the development of Tribal governments and the concept of sovereignty.

In most situations tribes retain a broad degree of civil jurisdictional authority over members and non-members on the reservation, particularly where the conduct threatens or has some direct effect on the political integrity, economic security, or the health or welfare of the tribe. But the right of Tribes to exercise criminal jurisdiction over non-Indians is not so clear. The Supreme Court ruled in Oliphant v. Suquamish Indian Tribe 5 that Tribal sovereignty does not extend to the exercise of criminal jurisdiction over non-Indians. However, when the Supreme Court extended Oliphant in Duro v. Reina 6 to preclude criminal jurisdiction over non-member Indians, that decision was specifically overruled by Congressional action. This action by Congress, called the "Duro-fix," gives Tribal governments rights of criminal action against non-member Indians. 7

Tribal Jurisdiction

Jurisdiction is the power of a government to make and enforce its own laws. Tribal jurisdiction presents the most complex set of issues in the field of federal Indian law and policy.

Today, when Indian people and their leaders speak of tribal sovereignty, what they are usually talking about centers on questions of tribal jurisdiction—questions of whether or not a tribal government has the political power and legal authority to act, legislate, and enforce its laws with respect to certain persons and subject matter. A great deal depends on the answers to such questions, because when Congress or the courts are called on to resolve the jurisdictional disputes that can arise among tribes, states, and the federal government, what is ultimately being determined is who governs the land, the resources, and the people in Indian Country.

In the evaluation of tribal programs and the problem of preventing violence against women in Indian communities, crucial issues to be analyzed include:

Source and Scope of Tribal Jurisdictional Authority

Indian tribes existed as independent, self-governing societies long before the arrival of European colonial powers on the North American continent. This status was recognized by the Supreme Court in one of its earliest and most important decisions, wherein Indian tribes were held to be "distinct, independent political communities." 9 This history and status as sovereign, self-governing societies distinguishes Indian tribes from other ethnic groups under the Constitution and laws of the United States and must be taken into consideration when evaluating tribal programs.

Within their reservations Indian tribes have inherent sovereign powers of civil and criminal jurisdiction over their own members. Tribes control their own membership, maintain their own judicial and criminal justice systems, and regulate the reservation environment and its resources. Tribes also have absolute jurisdiction over internal matters such as tribal membership and domestic relations. By virtue of their sovereign, self-governing status, tribes have the power to pass and enforce laws. Although they are not empowered to exercise criminal jurisdiction over non-Indians, they do retain a broad degree of civil jurisdictional authority over non-members on the reservation.

Indian Country

The term "Indian Country" is the starting point for analysis of jurisdictional questions involving Indian tribal governments and whether or not P.L. 280 applies. Federal law defines the geographic area in which tribal laws (and applicable federal laws) normally apply and state laws do not as follows:

...[T]he term "Indian country," as used in this chapter, means (a) all land within the limits of any Indian reservation under the jurisdiction of the United States government, notwithstanding the issuance of any patent, and including rights-of-way running through the reservation, (b) all dependent Indian communities within the borders of the United States whether within the original or subsequently acquired territory thereof, and whether within or without the limits of a state, and (c) all Indian allotments, the Indian titles to which have not been extinguished, including rights-of-way running through the same.

This definition, which appears in 18 U.S.C., Sec. 1151, a criminal statute, is also used for purposes of determining the geographic scope of tribal civil jurisdiction. Individual allotted parcels of land not located within the reservation may still constitute Indian Country if the Indian title to the land has not been extinguished. 10

Criminal Law in Indian Country

The imposition of federal and state criminal laws in Indian country has caused great confusion. The Major Crimes Act 11 gave federal courts jurisdiction over thirteen violent felonies. 12 The Assimilative Crimes Act 13 and the Organized Crime Control Act have specifically been held to apply to Indian Country.

In states other than those where P.L. 280 applies (discussed below), subject matter jurisdiction of federal, tribal, or state courts is usually determined on the basis of two issues: (1) whether the parties involved in the incident are Indians and (2) whether the incidents giving rise to the complaint took place in Indian Country. For the purpose of this particular analysis, an Indian is defined as a person of Indian blood who is recognized as a member of a federally recognized or terminated tribe. Indian Country includes (1) all land within the limits of any Federal Indian reservation, (2) all dependent Indian communities, and (3) all Indian allotments.

This erosion of tribal authority has caused tribes to become uncertain about the extent and scope of their civil and criminal jurisdiction. Many crimes go unprosecuted for two reasons: jurisdictional vacuums are created by the uncertainties of tribal jurisdiction, and federal and state prosecutors normally do not give high priority to reservation crimes. The sense of helplessness and frustration engendered by the jurisdictional confusion that exists in present-day Indian Country results in many crimes going unreported. Tribal police and courts are underfunded, and training is inadequate to deal with the complexities created by limitations on tribal jurisdiction. Tribes often find it difficult to secure the cooperation of neighboring state and local law enforcement authorities. The same often applies to federal law enforcement agencies in addressing reservation crime.

State Versus Tribal Jurisdiction

Implications of P.L. 280. Under general principles of federal Indian law, states do not have direct jurisdiction over reservation Indians. However, Congress has the power to delegate federal authority to the states, which it did with the 1953 passage of P.L. 83-280. Further, since 1968, when Congress amended the act, states with P.L. 280 jurisdiction have been allowed to retrocede jurisdiction to individual tribes within their state, upon petition by the tribe and pursuant to the approval of the federal government. Should retrocession be granted, tribal jurisdiction over reservation Indians is determined according to general principles of federal Indian law.

Six states were delegated criminal jurisdiction over reservation Indians and civil jurisdiction over cases arising against Indians in Indian Country under P.L. 280. Other states were permitted to assume such jurisdiction through the passage of appropriate state legislation and/or state constitutional amendments. Tribal consent was not a requirement for the assertion of state authority for either the mandatory or option states under the original legislation.

After this legislation passed, ten states accepted such jurisdiction. Then, in 1968, Congress amended P.L. 280 to include a tribal consent requirement, requiring a tribal referendum before states could assume jurisdiction. Since that date, no tribe has so consented.

In those states that have assumed jurisdiction (see Table 15.1), P.L. 280 established state jurisdiction without abolishing tribal jurisdiction. Thus the powers are concurrent. The problem, however, is that after passage of the act in 1953, many tribes in P.L. 280 states did not develop their own court systems, operating instead under the misunderstanding that the legislation had deprived tribes of adjudicatory powers. However, the existence of a tribal court system is necessary before a tribe can assert concurrent jurisdiction.

Table 15.1
State by State Overview of Public Law 280

Mandatory States Indian Country Affected
Alaska All Indian Country in the state except the Annette Islands with regard to Metlakatla Indians
California All Indian Country within the state
Minnesota All Indian Country within the state except Red Lake Reservation (Retrocession accepted for Nett Lake Reservation)
Nebraska All Indian Country within the state (Retrocession accepted for Omaha Reservation)
Oregon All Indian Country within the state, except the Warm Springs Reservation (Retrocession accepted for Umatilla Reservation)
Wisconsin All Indian Country within the state (Retrocession accepted for Menominee Reservation and Winnebago Indian Reservation)
Option States Indian Country Affected
Arizona Air and water pollution
Florida All Indian Country within state
Idaho Seven areas of subject matter jurisdiction; full state jurisdiction if tribes consent: compulsory school attendance, juvenile delinquency and youth rehabilitation, dependent, neglected and abused children, insanity and mental illness, public assistance, domestic relations, motor vehicle operation.
Iowa Civil jurisdiction over Sac and Fox Reservation.
Montana Criminal jurisdiction over Flathead Reservation: full state jurisdiction where tribes request, counties consent, and governor proclaims (Retrocession accepted for Salish and Kootenai Tribes)
Nevada Full state jurisdiction, but counties may opt out; later amendment required tribal consent (Retrocession accepted for all covered reservations)
North Dakota Civil state jurisdiction only, subject to tribal consent.
South Dakota Criminal and civil matters arising on highways: full state jurisdiction if United States reimburses cost of enforcement.
Utah Full state jurisdiction if tribes consent.
Washington Eight subject areas of jurisdiction on Indian trust land; full state jurisdiction as to non-Indians and Indians on non-trust land, although the state has allowed full retrocession fairly liberally (Retrocession accepted for Confederated Tribes of the Chehalis Reservation, Quileute Reservation, Swinomish Tribal Community, Colville Indian Reservation, Port Madison Reservation, and Quinault Reservation)

With regard to criminal jurisdiction, the legal situation in P.L. 280 states is as follows:

With regard to civil jurisdiction, the legal situation is as follows:

State regulatory jurisdiction. State regulatory jurisdiction was not granted under P.L. 280. The states were specifically not allowed to tax the reservations for services, such as law enforcement and access to state courts, rendered pursuant to such jurisdiction, nor were they allowed to infringe on water rights or to interfere with, control, or regulate any rights or privileges related to hunting, fishing, or trapping afforded under federal treaty, agreement, or statute.

In all states tribes may regulate, through taxation, licensing, or other means, the activities of both tribal members and non-members who enter consensual relationships with the tribe or its members, through commercial dealing, contracts, leases, or other arrangements. A tribe may also retain inherent power to exercise its civil authority over the conduct of non-members within its reservation when that conduct threatens or has some direct effect on the political integrity, the economic security, or the health or welfare of the tribe. 14 Further, a tribe may banish from tribal land any non-member against whom charges have been brought.

Law Enforcement Issues

Approximately 170 of the 230 reservations that are federally recognized at present have law enforcement departments. These departments are of five types. The types are not mutually exclusive, so more than one type may operate simultaneously within the boundaries of a given reservation.

The Bureau of Indian Affairs is involved with two of the types of law enforcement: BIA-LES and 638. Where P.L. 280 operates, the states are responsible for law enforcement on the reservation. Throughout the United States, even where P.L. 280 exists, many Indian Nations have their own tribal police that they fund and control. These tribal police departments often operate on reservations covered by other forms of law enforcement, including BIA, 638, and/or self-governance-funded law enforcement programs. All this, of course, results in problems of overlapping jurisdictions and conflicts of law.

Given the attention paid in recent years to the expansion of law enforcement services, this area is one where confusion regarding jurisdiction and conflict of laws may easily arise. For example, a problem arose recently regarding criminal statistics obtained from the Tribal governments that operate 638 and self-governance police departments. Most Tribal governments have not traditionally provided criminal incident statistics to the U.S. Department of Justice as is required of other law enforcement departments in the United States. Now, however, Tribal governments have begun to receive funding from the Omnibus Crime Bill and through the Community Oriented Policing Services (COPS) program for the expansion of tribal law enforcement programs. Since 1995 this funding has included over $22.4 million for new police services to 128 federally recognized Indian nations, over $6 million for programs aimed at reducing violence against women, and almost another $5 million to fund the development of juvenile justice and other community-based programs that emphasize crime reduction. 15 When Tribal governments accept funding and contract with the U.S. Department of Justice to provide specific law enforcement services, there are requirements attached, including statistical reporting, which the Tribe may never have done before. The likelihood also increases that the Tribe will be affected by federal legislation aimed at changing or increasing the regulations under which law enforcement operates.

The burgeoning growth of Tribal police departments has increased the potential for conflict with federal, state and local law enforcement agencies. The issues of who has jurisdiction, whether that jurisdiction is exclusive or mutual, and which is the lead agency during a given incident can have sweeping repercussions. The failure of an agency to recognize or credit the personnel of another can seriously jeopardize the success of a law enforcement action. Further, the failure of one agency or institution to recognize an order by another can considerably hamper the administration of justice and impair the protection of a victim or witness.

Tribal Court Systems

More than 140 tribes now have their own court systems. Of these, approximately twenty-five have BIA-appointed judges and have retained the Code of Federal Regulations. The rest have developed their own tribal courts and have their own laws and codes, many of which may not be in writing.

Some tribes give members a choice of appearing before a more Euro-American-oriented tribal court or the more traditional tribal court. Others have established intertribal courts that are shared among tribes and are also often used for appellate hearings.

Many tribal judges are not attorneys. They may have received training from the National American Indian Court Judges Association or the National Indian Justice Center. Some have become active in state judges' associations.

The role and status of lawyers also varies by tribe. Often tribes do not require a lawyer to be a member of the state bar association. A few tribes have their own bar examinations. Many tribal courts do not provide legal representation for a defendant, but all allow defendants to have representation at their own expense, pursuant to the Indian Civil Rights Act.

The disparity between tribal court systems, and between tribal court systems and the Euro-American system, can engender distrust and even fear among non-Indians. State and local officials can resist granting recognition to tribal court orders, and issues of Full Faith and Credit abound.

Tribal Codes

Prior to colonization, the governmental structures and legal systems of tribes were largely unwritten and informal. However, they were no less effective than other systems. To a great extent they vested authority in Tribal councils, granting them the power to create laws, to negotiate with other tribes and governments, and to deal with issues of tribal land and resources.

Often these laws and powers are not written down, or if they are, they are not explicit. Much is left to the interpretation and discretion of those in authority. Particularly in the realm of domestic abuse, discretion is often left to clans, families, and others with traditional responsibilities.

The definition of violence against women may vary from tribe to tribe, with some codes having specific lists enumerating criminal offenses and others including acts that would be of a more civil nature in a Euro-American code. Other codes include as criminal and/or civil wrongs those acts that have a damaging effect on the spirit of a person 16 or that threaten the health or welfare of their children.

The remedies for incidents of violence against women may also vary from tribe to tribe. Generally, however, victims of domestic violence are accorded protection under tribal codes equal to or greater than that available under state laws. Some tribes handle all charges of violence against women as criminal matters, while others use civil statutes. Still others combine criminal and civil sanctions.

Unfortunately, tribal governments are generally at an early stage in their code development. Often tribal leaders do not fully comprehend the significance of this task and may be working in the dark, without the assistance of professionals who have written these types of laws before. Others may not wish to tie themselves down to specific delegations of power and thus may prefer to keep laws informal and fluid. If codes are promulgated, they may be modeled on Euro-American legal systems or on those of other tribes. These may not be culturally compatible and may require resources for implementation that do not exist in the particular Tribal community.

Protocol

Issues of proper protocol when dealing with Tribal government leaders can make or break the evaluation of Indian programs. Setting up and following a proper protocol when dealing with indigenous peoples is of such concern that some organizations have written specific guidelines for researchers to follow (discussed below). However, a simple guideline for researchers to follow when dealing with Tribal leaders is to consider them as one would the President of the United States and/or members of Congress. Tribal Chairs and members of Tribal Councils are the chosen representatives of sovereign peoples. Many are elected. In some instances they accede to their positions through inheritance or birth, or are chosen by the spiritual or clan leaders of their communities. They carry a heavy mantle of responsibility and should be accorded great respect.

It is also essential that researchers or evaluators realize that they bring to any encounter with indigenous leaders their own preconceptions and biases. These biases should not be denied but instead recognized and compensated for. It is natural and normal to have opinions about how things should be done. It is critical, however, to put these preconceptions aside and to realize that just because Indian Tribal leaders may speak English, dress in Euro-American style, and be highly educated does not mean that they necessarily think and perceive issues in the same manner as might members of the majority community. The lives of Indian people may proceed according to a rhythm different from that of the majority community. It is important for evaluators to realize that ceremonies and rituals often take precedence, even over previously scheduled interviews with evaluators. This rhythm should be appreciated by the evaluator as an indication of a healthy community and not treated impatiently or as avoidance.

The first step prior to any contact with the tribe should be a literature review on the tribal people who are to be evaluated, as would be done if working in a foreign country. The evaluator should be introduced to representatives of the Tribal government, as well as to any others who have approved the grant proposal or have any responsibilities toward the project funded by the grant.

It is essential that the evaluator involve tribal people from the inception of the evaluation. The evaluator should be prepared to explain who they are, why they are there, what they are doing, and how they are proceeding, and to explain this again and again, often to small groups that include persons to whom the explanations were given before. Opportunities to attend social events or activities should be valued because they can result in the development of a relationship that can assist an effective evaluation.

It is also essential to value conversation without insisting on directing it, lecturing, or filling in silences. Conversations among tribal people include natural pauses during which thoughts are formed and weighed for pertinence. It is important that this process unfold at its normal rhythm and not be hurried. Evaluators who conduct themselves as learners, rather than as teachers, will hear and learn more.

Issues of Tribal sovereignty and self-determination underlie many contacts and decisions made in Indian Country. If tribal leaders and people believe that researchers understand and honor the concept of Tribal sovereignty, they will be more cooperative and forthcoming. Such cooperation goes a long way toward facilitating the evaluation of programs.

The Global Coalition for Bio-Cultural Diversity of the International Society of Ethnobiology has developed a model covenant to help guide researchers in proper protocol with indigenous communities. Although this covenant is focused on the protection of intellectual property rights for indigenous peoples, it contains language that may be helpful for effective and impartial evaluations of Indian tribal programs. Pertinent sections read as follows:

Covenant Between a Responsible Corporation, Scientist or Scientific Institution and an Indigenous Group

Prologue
Indigenous peoples are unanimous in identifying their primary concern as being self-determination, which subsumes such basic rights as recognition of and respect for their cultures, societies, and languages, as well as ownership over their own lands and territories, and control over the resources that are associated with those lands and territories... This COVENANT should not be viewed as a finished product defining equitable partnerships, but rather a process of consultation, debate, discussion, and creative thinking from the many peoples and groups concerned...

Spirit of the Covenant

This Covenant is celebrated in order to:
Support indigenous and traditional peoples in their fight against genocide and for their land, territory, and control over their own resources, while strengthening the culture and local community through recognition and support of the groups' own goals, values, and objectives... Thereby establishing a long term relationship built through joint decision-making based upon the principles of equality of relationships and protection of traditional values, knowledge and culture; if these basic elements are not respected, then the Covenant is endangered, and along with it, the spirit of trust and partnership between responsible businesses/scientists/institutions and local communities that is essential for the future well-being of the planet...

Basic Principles to Be Exercised by All Partners
1. Equity of partners, including... joint planning and goal setting, informed consent, and full disclosure in all aspects of the project, including results...
4. Dedication to the promotion of harmony and stability within a group, between groups, and within the region, meaning that activities creating tensions (between indigenous and non-indigenous alike) are contrary to the Spirit of the Covenant.
5. Confidentiality of information and resources, meaning that information imparted by the indigenous group to the Partner cannot be passed on to others without consent of the Giver.
6. Continual dialogue and mutual review, supported by independent monitoring...
8. Development, strengthening, and support of local (indigenous and non-indigenous) educational, health, research, and non-governmental institutions...
10. Establishment of local autonomy and control over all aspects of the projects as early as possible.

Conclusion

The fair, thorough, and unbiased evaluation of programs in Indian Country is essential if tribal governments are to compete successfully for much-needed funds. Although creating and carrying out appropriate programs is the responsibility of the tribes, the assistance of an impartial, and even sympathetic, evaluation can go a long way toward helping tribes develop needed competencies and reinforce their worthy efforts.

The creation of an equitable partnership between evaluator and program should be welcomed. The processes of joint planning and goal setting, as well as the continual discussion and mutual review of the evaluative process, will enrich both partners and make present and future efforts more comprehensive as well as more enjoyable.

The evaluation of a program does not have to be fearsome for those studied. With a close working relationship between evaluators and program staff, it can be a growth experience in which the evaluator is viewed as an ally in the formation and articulation of the program's vision. It is to this challenge that evaluators should dedicate their work.


Notes for this section

Chapter 10
1. Helen Christiansen, Linda Goulet, Caroline Krentz, and Mhairi Maeers (Eds.). (1997). Recreating Relationships: Collaboration and Educational Reform. New York: State University of New York Press.

Chapter 11
1. Contacts at the Michigan Department of Community Health include Patricia K. Smith, Violence Prevention Program Coordinator (513 335-9703), Ann Rafferty, and John C. Thrush. The survey instrument is available from the Michigan DCH and from the STOP TA Project (800 256-5883 or 202 265-0967). Contact Patricia K. Smith for details regarding the conduct of the study, its costs, the time and staff required, methodological lessons learned, and suggestions for revising the questionnaire.

Chapter 13
1. Heike P. Gramckow, Director of Management and Program Development, and Jane Nady Sigmon, Director of Research, both of the American Prosecutors Research Institute, are working with others to complete an implementation manual for coordinating councils on violence against women (funded by the Bureau of Justice Assistance) and to develop a curriculum to train prosecutors to handle domestic violence and sexual assault cases (funded by the Violence Against Women Grants Office). Mario T. Gaboury is the Director of the Center for the Study of Crime Victims' Rights, Remedies, and Resources at the University of New Haven. Jane Nady Sigmon serves as an Advisor to the Urban Institute for the National Evaluation of the STOP Formula Grants Program.

Chapter 14
1. Susan Keilitz, of the National Center for State Courts in Williamsburg, Virginia, and Neal B. Kauder, a Principal at VisualResearch in Richmond, Virginia, are evaluating STOP projects in Purpose Area 4, Data and Communications Systems, under a National Institute of Justice grant.

Chapter 15

1. Eileen Luna, of the University of Arizona American Indian Studies Program (AISP), is evaluating STOP Indian Tribal Grants under an NIJ grant. She was assisted in the development of this chapter by her colleagues in AISP, particularly Jay Stauss and Robert Hershey.

2. A survey being conducted by the University of Arizona AISP and the Kennedy School of Government, funded by the Police Executive Research Forum, indicates that tribal funding of law enforcement has grown considerably in recent years. In Fall 1995, 114 tribes had tribally funded police departments; by June 1997, that number had grown to 170. The remaining 340 tribes have their law enforcement needs met by the BIA's law enforcement system or by the states through Public Law 83-280.

3. Executive Order, April 29, 1994, Government-to-Government Relations with Native American Tribal Governments.

4. Reno, Janet. (1995). U.S. Department of Justice Commitment to American Indian Tribal Justice Systems. Judicature, Nov.-Dec.

5. 435 U.S. 191 (1978).

6. 495 U.S. 676 (1990).

7. P.L. 102-137. This one-sentence amendment to the Indian Civil Rights Act simply maintains Tribal criminal jurisdiction over both member and non-member Indians for crimes committed in significant part on Tribal lands.

8. See David Getches, Charles F. Wilkinson, and Robert A. Williams, Jr. (1993). Federal Indian Law: Cases and Materials, 3rd edition.

9. Worcester v. Georgia, 31 U.S. (6 Pet.) 515, 559 (1832).

10. See Robert N. Clinton. (1976). Criminal Jurisdiction Over Indian Lands: A Journey Through a Jurisdictional Maze, 18 Ariz. L. Rev. 503, 515.

11. 18 U.S.C. Sec. 1153 (1988).

12. The federal courts have jurisdiction over murder, manslaughter, kidnapping, maiming, a felony under chapter 109A, incest, assault with intent to commit murder, assault with a dangerous weapon, assault resulting in serious bodily injury, arson, burglary, robbery, and a felony under section 661 of this title within Indian Country.

13. 18 U.S.C. Sec. 13 (1988).

14. Montana v. United States, 450 U.S. 544 (1981).

15. Nation to Nation, newsletter of the U.S. Department of Justice, Office of Tribal Justice, August 1996.

16. See identification of instances of disharmony in Rules for Domestic Violence Proceedings, Courts of the Navajo Nation Rule 1.5.