Chapter 3: In Search of the Appropriate Evaluation

The program evaluation in the introductory scenario is everyone’s worst evaluation nightmare because it demonstrated nothing other than a lot of good intentions and confused activity. The person running that evaluation clearly did not have an evaluation mentality and did not design an evaluation that was appropriate to the size of the project and to the data that were available.

One of the most critical elements in a successful evaluation (that is, one that actually proves something) is deciding what should be demonstrated. This decision should be based on the type of project you are implementing and the type of data that are collectable (or available). Your goal is to set up an evaluation that is appropriate for your individual circumstances.

What Makes an Evaluation Appropriate?

If the National Highway Traffic Safety Administration (NHTSA) is going to promote a brand new traffic safety countermeasure as an effective tool in reducing traffic deaths and injuries, it is reasonable to assume that NHTSA will have thoroughly evaluated this countermeasure in realistic conditions to make sure it works. This would require conducting several full-scale evaluation research projects that verify the effectiveness of the countermeasure. NHTSA can also call upon large volumes of national and state level crash data with enough records to confirm, with a high degree of confidence, that changes can be attributed to the countermeasure. A full-scale countermeasure effectiveness evaluation project is the only type of evaluation that would be appropriate in these circumstances.

Two years later, after this new countermeasure has been implemented in several communities, a program manager in a city of 75,000 reads about it in an NHTSA publication and decides that it might be just what is needed to solve a troubling traffic safety problem in his community. This program manager has a solid evaluation mentality, so he immediately considers what type of evaluation would be appropriate for his circumstances. He does not need to conduct the same type of evaluation that NHTSA conducted because:

  1. He is not trying to prove to the nation that it works; his boss was convinced by NHTSA’s evaluation results.

  2. His community experiences only a few crashes of the type affected by this countermeasure (but he still would like to reduce that number even further).

  3. Resources are limited.

He needs to determine what an appropriate evaluation would be for these circumstances. There are two types of evaluation questions that are appropriate for most local, and even some State, programs:

  • Did you implement the program as planned?
  • Did you accomplish your objectives?

Did You Implement The Program As Planned?

Some managers might dismiss this type of administrative evaluation as simple “bean counting” that doesn’t demonstrate anything worthwhile. But you might be surprised by what you can learn merely by checking to see whether everything is going as planned.

In one community, a mandatory jail sentencing program for DWI repeat offenders was implemented. The program was evaluated to determine if serving time had any effect on recidivism. The evaluators were never able to determine this effect because of an unexpected finding. Although most repeat offenders were sentenced to jail time, the evaluators discovered that very few of them actually served any time. There was no system in place to follow up with individuals when they left the courthouse. Obviously the program manager had to go back to the drawing board to solve the problem of ensuring that the court sentences were actually carried out.

Another community decided to implement an occupant protection traffic enforcement blitz, complete with highly visible public information and media coverage. The evaluator kept track of the number of police officer hours spent and the number and type of citations issued. The program staff were surprised to find that although lots of safety belt citations were issued during the first week, there were no citations issued for child safety seat violations. The police officers did not seem to fully understand the requirements of the State law. This discovery led to a police roll call training session on the child safety seat law and on the importance of enforcing it. During the second week of the blitz, forty-seven citations and warnings were issued for child safety violations.

 

At the most fundamental level, you can do an evaluation to determine if you implemented the program as planned. This may sound pretty obvious, but in fact many projects take a wrong turn right off the drawing board. This approach, which is called an administrative evaluation, does not require any elaborate data collection efforts or even a research design. All that it requires is an understanding of what is supposed to happen during a program and a systematic approach to tracking what actually happens.

Let’s go back to the bicycle helmet program on page 2. Suppose you decide you’re going to have two safety fairs over the summer and you’re going to give away free helmets, donated by a community sponsor. An administrative evaluation would keep track of the number of helmets you obtained and the number you gave away. It might also document such things as the age, gender and neighborhoods of the children who received the helmets, the number of people who participated in the safety fairs, and the amount of publicity you received about the fairs.
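At its simplest, this kind of record keeping is just a running tally. As a hypothetical sketch (the records, counts, and field names below are invented for illustration):

```python
from collections import Counter

# Hypothetical distribution log: one record per helmet handed out.
distribution_log = [
    {"age": 7, "gender": "F", "neighborhood": "Eastside"},
    {"age": 9, "gender": "M", "neighborhood": "Eastside"},
    {"age": 6, "gender": "M", "neighborhood": "Riverview"},
]

helmets_obtained = 200                      # donated by the community sponsor
helmets_given_away = len(distribution_log)  # helmets actually distributed

# Tally recipients by neighborhood to see which areas the fairs reached.
by_neighborhood = Counter(rec["neighborhood"] for rec in distribution_log)

print(f"Helmets given away: {helmets_given_away} of {helmets_obtained}")
for neighborhood, count in by_neighborhood.most_common():
    print(f"  {neighborhood}: {count}")
```

A spreadsheet serves the same purpose; the point is that an administrative evaluation needs nothing more elaborate than a consistent count of what was planned against what actually happened.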

If you monitor your program from the beginning, you will be able to spot any implementation problems early and determine if the problem can be fixed or if the whole idea should be scratched. There is no sense wasting dollars going through the motions of implementing a program with fatal flaws.

An important element of documenting how the program was implemented is tracking the resources as they are being spent. Every project should have a detailed budget for items such as staffing and supplies. A good evaluation should document whether the project was completed within budget or over budget. The rate at which resources are being spent can sometimes give a good indication of whether the project is being implemented as planned. If local police are not putting in the budgeted amount of overtime, for example, maybe the sobriety checkpoints are not being conducted as frequently as planned.
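A minimal sketch of this kind of spending check, with all budget figures and category names invented for illustration, might compare each line item's actual spending against what the timeline says should have been spent by now:

```python
# Hypothetical line-item budget and spending to date (all figures invented).
budget = {"police overtime": 12000, "supplies": 3000, "publicity": 5000}
spent = {"police overtime": 4000, "supplies": 2800, "publicity": 5100}
weeks_elapsed, weeks_total = 6, 12  # halfway through the project period

flags = {}
for item, planned in budget.items():
    # Pro-rated spending expected at this point, assuming an even spend rate.
    expected = planned * weeks_elapsed / weeks_total
    if spent[item] > planned:
        flags[item] = "over budget"
    elif spent[item] < 0.75 * expected:
        flags[item] = "under plan -- is the activity happening as scheduled?"
    else:
        flags[item] = "on track"
    print(f"{item}: spent ${spent[item]:,} of ${planned:,} ({flags[item]})")
```

Underspending on overtime here is exactly the sobriety-checkpoint warning sign described above: money that is not going out may mean the activity is not going in.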

Did You Accomplish Your Objectives?

Everyone knows that you conduct an evaluation to demonstrate that you accomplished your objectives. You don’t need an evaluation mentality to realize that. But it does help you understand what objectives you should be measuring.

People usually write goals and objectives to impress a funding source. They are frequently written in grandiose terms that sound impressive but lack a clear focus.

  • To reduce traffic deaths (Do you want to promise that in your small town?)

  • To increase support for traffic safety (How will you measure this?)

  • To improve safe driving behaviors (What behaviors do you care about?)

When challenged, the individuals who wrote these objectives were able to revise them to focus on what an individual project was specifically designed to do, not what sounded good on paper.

“To reduce traffic deaths” was changed to “to increase safety belt use,” which was what they were really aiming for. “To increase support for traffic safety” was changed to “to get 1,500 signatures on a petition for passage of a bicycle helmet ordinance,” and “to improve safe driving behaviors” was changed to “to reduce the incidence of red-light running.”


We cannot emphasize enough the importance of carefully defined objectives. They make the difference between a successful evaluation and a frustrating one. You should read Section IV for more detailed suggestions on writing SMART Objectives.

What Might Not Be Appropriate to Demonstrate?

It is very difficult to link a countermeasure program to a reduction in deaths and injuries at the local level (and sometimes even at the State level). There are several reasons for this.

  • Although traffic crashes are a serious national problem, killing more than 40,000 people per year, traffic deaths in any community are relatively rare events. Most communities will experience fewer than a dozen traffic-related fatalities a year resulting from all causes. Furthermore, the number of deaths might fluctuate considerably from year to year, for no apparent reason. Given that the number of deaths might go up or down regardless of what new program you implemented, you might not want to raise expectations that your program will save lives. It would be far better, for example, to demonstrate that your program resulted in an increase in safety belt use.

  • Traffic deaths are influenced by a variety of factors, all of which can influence whether fatalities climb or drop. These factors, called variables, could include:
    • The amount of driving in the community (an increase in gasoline costs could reduce the number of miles traveled, or a new shopping mall on the outskirts of town could increase vehicle travel)

    • The weather conditions (a very bad winter could lead to an increase in fender-bender collisions, but major injuries might go down because people drive less and at slower speeds in bad weather)

    • A change in the driving age (reducing the minimum age could increase crashes caused by inexperienced drivers)

    • A change in the population (a downward trend in population growth could reduce the number of drivers on the road)

    • Previous extremes (a shift back to “normal” levels after an extreme value, either high or low, a pattern known as regression to the mean)

If you are trying to establish a connection between a particular countermeasure and a reduction in deaths and injuries, you have to rule out the possibility that any of these variables, or any others you might think of, contributed to that change.

  • Since the number of fatalities that occurs in most communities is so small, if you were committed to demonstrating a reduction in fatalities, you would need to aggregate your data over several years in order to have enough deaths to show a real decrease.

This approach creates an entirely different problem related to existing data: it is very difficult to compare data that were collected in widely separated time periods, whether you are looking for fatalities or some other measure such as citations issued. Over time, data collection procedures, data definitions, and enforcement thresholds can change significantly. For example, a community may change its policy concerning the collection of blood alcohol content data on traffic fatalities, making it difficult to compare the number of alcohol-related deaths over a five-year period. Or a Traffic Records Department may change its definition of a “reportable” crash from $250 or more in damages to $2,000 or more in damages. This would spuriously decrease the number of reported crashes.

These problems with linking countermeasures directly to bottom-line changes in fatality levels are not insurmountable. However, they do require a significant increase in the complexity and cost of an evaluation. You should undertake this extra effort only when it really is necessary, such as when you are trying a countermeasure that has never been tried anywhere else.

What Works
If your program involves one of the following strategies, you can concentrate your evaluation dollars on documenting that you implemented the countermeasure, not that the countermeasure saved lives.

• Safety belts
• Child safety seats (always in the back seat!)
• Bicycle helmets
• Motorcycle helmets
• DWI enforcement
• Sobriety checkpoints
• Tougher impaired driving laws
• Crossing Guards
• Traffic Calming Devices (e.g., speed bumps)
• Educating judges and prosecutors
 

If your countermeasure has been around for a while, why do you want to spend precious resources to prove what has already been proven? The traffic safety community has demonstrated to almost everyone’s satisfaction that safety belts and strong DWI laws save lives. If you are implementing an occupant protection program, therefore, you don’t need to link your program to a reduction in deaths and injuries. Instead, you can limit your evaluation to demonstrating that you accomplished your objective to increase the rate of safety belt use by a specific percentage.

Similarly, since the effectiveness of sobriety checkpoints has been thoroughly evaluated, you can focus your evaluation dollars on demonstrating that the number of sobriety checkpoints you planned were conducted and that citations for DWI offenses increased. It is not necessary to attempt to link this accomplishment to a reduction in alcohol-related fatalities.

In order to prove that you accomplished your objective of increasing safety belt use or DWI enforcement, you will still have to collect data and document your accomplishments. You will probably need to observe safety belt use before and after you implement your strategy, or collect enforcement data for a comparable period before you instituted your “blitz.” If safety belt usage or DWI enforcement increased, your program was a success.
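The arithmetic behind such a before/after comparison is simple. As a sketch, with hypothetical observation counts invented for illustration:

```python
# Hypothetical roadside observation counts, before and after the program.
before_observed, before_belted = 400, 248  # pre-program: 248 of 400 belted
after_observed, after_belted = 400, 292    # post-program: 292 of 400 belted

before_rate = before_belted / before_observed  # 62%
after_rate = after_belted / after_observed     # 73%
change = (after_rate - before_rate) * 100      # change in percentage points

print(f"Belt use rose from {before_rate:.0%} to {after_rate:.0%} "
      f"({change:+.1f} percentage points)")
```

With counts this small, a change of a point or two could easily be chance; the larger the observed samples and the bigger the change, the more confident you can be that the program, rather than ordinary fluctuation, produced it.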


If it did not increase, then you should look at the strategies you used. Perhaps these techniques were not as effective as other options (e.g., a public information campaign, by itself, will not be as effective at changing behavior as an enforcement campaign coupled with continuing media coverage). Learning that something did not work does not make your evaluation a failure. It simply provides you an opportunity to learn more about your problem and to revise your approach in the future.

Summary

A program evaluation can provide you the following information about your program:

  • Whether you implemented the program as planned;
  • What resources were spent; and
  • Whether your program accomplished its objectives.

That level of detail is appropriate for most local and State level evaluations. In the next section we provide a high-level overview of what will be involved when you take the plunge.