WORK ORDER NUMBER BAT-02-006

 

 

TRAFFIC DATA QUALITY WORKSHOP
PROCEEDINGS AND ACTION PLAN

 

 

Final Report

 

 

 

to

 

Office of Policy

Federal Highway Administration

Washington, D.C.

 

 

 

 

 

Battelle

505 King Avenue

Columbus, Ohio 43201

 

In Association with

 

Cambridge Systematics, Inc.

Texas Transportation Institute

 

 

September 25, 2003

 


 



WORK ORDER NUMBER BAT-02-006

 

 

TRAFFIC DATA QUALITY WORKSHOP
PROCEEDINGS AND ACTION PLAN

 

Final Report

 

 

 

Prepared for

 

Office of Highway Policy Information

Federal Highway Administration

Washington, D.C.

 

 

 

 

 

Principal Authors

 

Dr. Edward Fekpe, PEng.

Mr. Deepak Gopalakrishna

 

 

 

 

 

September 25, 2003



ACKNOWLEDGMENTS

The authors gratefully acknowledge the support and guidance of Mr. Ralph Gillmann of the Federal Highway Administration Office of Policy and Mr. James Pol of the Intelligent Transportation Systems Joint Program Office throughout this project.

 

The authors also acknowledge the support of the Department of Transportation of the states of Ohio and Utah for hosting the regional workshops.  Mr. David Gardner of Ohio DOT and
Ms. Dian Williams of Utah DOT deserve recognition for their roles in organizing these regional workshops.  Ms. Tami Hannahs and Ms. Lynn Price of Battelle also provided valuable logistic assistance in organizing the workshop in Columbus, Ohio.  The authors acknowledge the valuable inputs provided by state and local agency officials during the interview process and all the workshop participants.

 

The authors also acknowledge the valuable inputs provided by the project team, particularly in developing the white papers and conducting the regional workshops.  The project team members are:

 

Dr. Edward Fekpe, Principal Investigator, (Battelle)

Mr. Deepak Gopalakrishna (Battelle)

Ms. Mala Raman (Battelle)

Dr. Rich Margiotta (Cambridge Systematics Inc.)

Dr. Dan Middleton (Texas Transportation Institute)

Mr. Shawn Turner (Texas Transportation Institute).


Table of Contents

ACKNOWLEDGMENTS

List of Acronyms

EXECUTIVE SUMMARY

Introduction

Research Approach

Action Plan

Action Plan Implementation and Work Items

Research Studies

Workshops

Case Studies and Clearinghouse

 

1.0       INTRODUCTION

1.1       Background

1.2       Project Objectives and Scope

1.3       Organization of Report

2.0       RESEARCH APPROACH

2.1       Traffic Data Quality Issues

2.2       Data Collection – Interviews

2.3       Development of White Papers

2.4       Regional Workshops

2.5       Action Plan Development

2.6       Additional Traffic Data Quality Literature

 

3.0       WORKSHOP PROCEEDINGS

3.1       Introduction

3.2       Session 1 – Defining and Measuring Traffic Data Quality

3.2.1    Defining Data Quality

3.2.2    Measuring Data Quality

3.2.3    Discussion Points

3.2.3.1 Discussions – Ohio Workshop

3.2.3.2 Discussions – Utah Workshop

3.3       Session 2 – State of the Practice in Traffic Data Quality

3.3.1    Types and Applications for Traffic Data

3.3.2    Traffic Data Quality:  Characteristics

3.3.3    Quality Issues for Using ITS-Generated Data for Traditional Uses

3.3.4    Recommendations:  Possible Solutions

3.3.5    Discussion Points

3.3.5.1 Discussions – Ohio Workshop

3.3.5.2 Discussions – Utah Workshop


3.4       Session 3 – Advances in Traffic Data Collection and Management

3.4.1    Introduction

3.4.2    Innovative Contracting Methods

3.4.3    Standards

3.4.4    Training for Data Collection

3.4.5    Data Sharing Between Agencies and States

3.4.6    Advanced Traffic Detection Techniques

3.4.7    Discussion Points

3.4.7.1 Discussions – Ohio Workshop

3.4.7.2 Discussions – Utah Workshop

3.5       Action Plan Discussion

3.5.1    Defining and Measuring Traffic Data Quality

3.5.1.1 Ohio Workshop

3.5.1.2 Utah Workshop

3.5.2    State of the Practice

3.5.2.1 Ohio Workshop

3.5.2.2 Utah Workshop

3.5.3    Innovative Approaches

3.5.3.1 Ohio Workshop

3.5.3.2 Utah Workshop

3.5.4    Responsibilities and Timeline

4.0       ACTION PLAN FOR IMPROVING TRAFFIC DATA QUALITY

4.1       Introduction

4.2       Partnerships and Coordination

4.3       Action Items

4.3.1    Guidelines and Standards for Calculating Data Quality Measures

4.3.2    Compilation of Business Rules/Data Validity Checks and Quality Control Procedures

4.3.3    Best Practices for Equipment Installation and Maintenance

4.3.4    Clearinghouse for Vehicle Detector Information

4.3.5    Sensitivity Studies to Demonstrate “Value of Data”

4.3.6    Guidelines for Sharing Resources

4.3.7    Life-cycle Costs of Detection Equipment

4.3.8    Improved Contracting Approaches

4.3.9    Case Study or Pilot Tests

4.3.10  Guidance on Technologies and Applications

4.4       Implementation and Work Items

4.4.1    Research Studies

4.4.2    Workshops

4.4.3    Case Studies and Clearinghouse

5.0       CONCLUDING REMARKS

REFERENCES

 

 

List of Appendices

APPENDIX A:  WHITE PAPERS

APPENDIX B:  INTERVIEWEE CONTACT LIST AND INTERVIEW GUIDE

APPENDIX C:  REGIONAL WORKSHOP ATTENDEES

APPENDIX D:  RELEVANT TRAFFIC DATA QUALITY LITERATURE

 

 

List of Figures

Figure 1.  Traffic Data Quality Research Approach


List of Acronyms

AADT         Average Annual Daily Traffic

AASHTO    American Association of State Highway and Transportation Officials

ADUS         Archived Data User Service

AMATS      Akron Metropolitan Area Transportation Study

ARTIMIS    Advanced Regional Traffic Interactive Management and Information System

ASTM         American Society for Testing and Materials

ATIS           Advanced Traveler Information Systems

ATMS         Advanced Traffic Management Systems

ATR            Automatic Traffic Recorder

COTR         Contracting Officer’s Technical Representative

DOT            Department(s) of Transportation

EDL            Electronic Document Library

ESAL          Equivalent Single Axle Loads

FHWA        Federal Highway Administration

FOT            Field Operational Test

GIS             Geographic Information System

ITS JPO      Intelligent Transportation Systems – Joint Program Office

ITS              Intelligent Transportation Systems

MAG           Maricopa Association of Governments

NOACA     Northeastern Ohio Areawide Coordinating Agency

ODOT         Ohio Department of Transportation

OKI            Ohio-Kentucky-Indiana Regional Council of Governments

ROW          Right-of-Way

RTMS         Remote Traffic Microwave Sensor

TMCs          Traffic Management Centers

TMG           Traffic Monitoring Guide

TRB            Transportation Research Board

TTI              Texas Transportation Institute

UDOT         Utah Department of Transportation

VDC           Vehicle Detector Clearinghouse

VDOT         Virginia Department of Transportation

WIM           Weigh-in-Motion

WSDOT      Washington State DOT


Executive Summary

Introduction

Recent research and analysis have identified several issues regarding the quality of traffic data available from Intelligent Transportation Systems (ITS) for transportation operations, planning, or other functions.  Since Federal agencies use and disseminate traffic data from state and local agencies, the quality of the data becomes even more critical.  The quality of the traffic data and the information produced from the data are critical factors that affect the ability of transportation agencies to ensure the security of transportation and the management of the nation’s transportation resources.  The focus of data quality is on establishing a consistent methodology for ensuring that data are managed so that a measure of reliability is sustained.  The primary objective of this project is to define an action plan to address traffic data quality issues.  Such an action plan should include work items that can be executed through the U.S. Department of Transportation (DOT), stakeholder organizations (e.g., American Association of State Highway and Transportation Officials [AASHTO], ITS America), and state DOTs.

Research Approach

The development of the action plan involved several steps.  First, the issues associated with traffic data quality were reviewed.  Second, three white papers were developed whose themes were based on the issues identified.  The white papers were developed from information gathered from published literature and through interviews with state and local agencies involved with traffic data collection, use, and management.  The white papers are designed to explore the issues and current practices for ensuring data quality.  The scopes of the three white papers and the issues addressed are outlined below.

 

Theme #1:  Defining and Measuring Traffic Data Quality (EDL # 13767).

This white paper defines the measures and methods for quantifying traffic data quality.  Issues considered include definition of traffic data quality for different users and for different applications; data quality metrics or measures; methodology for assessing traffic data quality; and acceptable levels of quality.

 

Theme #2:  State-of-the-Practice in Traffic Data Quality (EDL # 13768).

This white paper documents issues, measures, and approaches for assessing, using, and accommodating traffic data quality in various applications.  Issues considered include types and applications of traffic data being used by the states; how data quality problems are handled in various applications; methods used or studies conducted by states to ensure data quality; and institutional issues, data sharing issues and funding constraints.

 

Theme #3:  Advances in Traffic Data Collection and Management (EDL # 13766).

This white paper identifies innovative approaches for improving data quality, including innovative technologies in traffic data collection, new contracting methods, standards, training for data collection, and data sharing between agencies and states.  The issues addressed in this white paper include loop detectors versus non-intrusive data collection devices; lack of field staff for proper maintenance of monitoring devices; innovative approaches to data collection; effects of contracting approach on data quality; and new contracting methods, more coordination, standards, and training.

 

Following the development of the white papers, two regional workshops on traffic data quality were conducted.  The three white papers were used to stimulate discussions and obtain inputs from the workshop participants to develop an action plan that addresses traffic data quality issues.  The workshops, sponsored by the FHWA Office of Policy, the ITS Joint Program Office (JPO), Ohio Department of Transportation (ODOT), and Utah Department of Transportation (UDOT), were held on March 11, 2003 in Columbus, Ohio and on March 13, 2003 in Salt Lake City, Utah.

 

The workshop attendees included data providers and users as well as those who influence data collection activities in one way or another.  In attendance were private sector travel information providers and representatives from 10 state DOTs:  Ohio, Delaware, Indiana, Kentucky, Pennsylvania, Utah, Idaho, Texas, Washington, and California.  Also in attendance were representatives from Advanced Regional Traffic Interactive Management and Information System (ARTIMIS) in Cincinnati, Ohio; Maricopa Association of Governments (MAG) in Arizona; Northeast Ohio Areawide Coordinating Agency (NOACA); Ohio-Kentucky-Indiana (OKI) Regional Council of Governments; and Akron Metropolitan Area Transportation Study (AMATS).

Action Plan

The action plan builds upon the findings in the white papers and inputs obtained from the regional workshops.  The action plan provides a blueprint for specific actions to address traffic data quality issues.  Implementation of the plan will require collaboration among both public and private partners, with the FHWA and state DOTs playing leading roles.  The plan presents the following 10 priority action items based on those identified at the regional workshops.

 

1.                  Develop guidelines and standards for calculating traffic data quality measures.  The guidelines and standards are expected to contain methods to calculate and report the data quality measures for various applications and levels of aggregation. 

Coordinators:  FHWA or AASHTO

 

2.                  Synthesize validation procedures and rules used by various states and other agencies for traffic monitoring devices.  The synthesis document should include quality control procedures for all types of applications and data management methods for maintaining high quality data.

Coordinators:  FHWA, states

 

3.                  Develop a synthesis of best practices for installation and maintenance of traffic monitoring devices.  This document should include guidance for establishing quality; standard test methods for determining accuracy and other data quality measures; “triggers” for conducting maintenance; and guidance for selecting strategic traffic monitoring device locations.

Coordinators:  FHWA, states

 

4.                  Establish a clearinghouse for vehicle detector information.  Establish an independent testing entity to conduct periodic tests and verify claims of the new and emerging traffic detection devices on the market.  Store results of tests in a clearinghouse that can be accessed by all potential users.

Coordinators:  FHWA, Vehicle Detector Clearinghouse (VDC), states

 

5.                  Conduct sensitivity analyses and document the results to illustrate the implications of data quality on user applications.  Based on the results of the sensitivity analysis, develop data quality “targets” or “benchmarks” for each application.  The results of the sensitivity analysis would be used to provide guidance or procedures for imputing missing data points.

Coordinators:  FHWA, states

 

6.                  Develop guidelines for sharing resources for traffic monitoring activities.  The guidelines should contain information on shared equipment, personnel, funding, and cooperation among different agencies and departments.  The guidelines should also include public-private collaboration approaches and practices that establish trust in private sources of data.

Coordinators:  FHWA, states

 

7.                  Develop a methodology for calculating life-cycle costs.  The methodology would enable states and other agencies to investigate alternative data collection technologies; develop quality levels as a function of investment in installation and maintenance; and coordinate or leverage operations and other activities in more than one location or jurisdiction.

Coordinators:  FHWA, states

 

8.                  Develop guidelines for innovative contracting approaches for traffic data collection.  The guidelines should include information on performance-based contracting and management, task-order-type contracts and cooperative agreements for equipment installation and maintenance, and life-cycle-cost based bidding.

Coordinators:  FHWA, states

 

9.                  Conduct a case study or a pilot test.  The goal is to observe state DOTs and TMCs working to improve data quality and evaluate the return on investment from the improved data quality.

Coordinators:  FHWA, states

 

10.              Provide guidance on technologies and applications.  This action item is in two parts:
(i) provide guidance on the data elements to measure and report since this dictates the type of device procured by the agency, and (ii) provide guidance on the innovative and emerging uses of loops and existing technologies.

Coordinators:  FHWA, states

 

Action Plan Implementation and Work Items

FHWA would play a leading role in the overall implementation of the action plan.  Following are the three potential groups of activities or work items to implement the action plan.

Research Studies

The majority of the action items relate to the development of guidelines, which are best implemented through research studies.  Action items in this category include the following:

 

·        Guidelines and standards for calculating data quality measures (#1)

Workshops

Some of the action items could be implemented through regional workshops.  Action items in this category are those that require sharing of experiences and success stories.  The following are action items in this category:

 

Case Studies and Clearinghouse

Action items in this category require establishing or identifying an independent entity and conducting case studies.  The following are the action items in this category:

 


1.0    INTRODUCTION

1.1       Background

Recent research and analysis have identified several issues regarding the quality of traffic data available from Intelligent Transportation Systems (ITS) for transportation operations, planning, or other functions.  For example, the Advanced Traveler Information Systems (ATIS) Data Gaps Workshop in 2000 identified information accuracy, reliability, and timeliness as critical to ATIS.  The key findings of the workshop, which are included in a document titled “Closing the Data Gap:  Guidelines for Quality Advanced Traveler Information System (ATIS) Data” (U.S.DOT, 2000), are the following:

 

·        Guidelines for quality data go beyond ATIS.

 

A recent report, “Sharing Data for Traveler Information:  Practices and Policies of Public Agencies” (Battelle, 2001), issued in January 2002 examines policies aimed at facilitating data sharing and ultimately improving the quality and quantity of information that reaches travelers.

 

The ITS Archived Data User Service (ADUS) promotes reuse of traffic data collected for real-time operations.  The ATIS and Advanced Traffic Management Systems (ATMS) are generating large amounts of traffic data that could be used in other applications, such as performance monitoring.  However, initial experience with ITS traffic data has identified serious data gaps and data quality deficiencies.  Data can be edited after the fact to remove errors but the problem still remains at the source.  The need for guidelines for sharing traffic data among various agencies and users has been recognized.

 

Section 515 of the Treasury and General Government Appropriations Act for Fiscal Year 2001 (Public Law 106-554; H.R. 5658) directs the Office of Management and Budget to issue government-wide guidelines that provide policy and procedural guidance to Federal agencies for ensuring and maximizing the quality, objectivity, utility, and integrity of information (including statistical information) disseminated by Federal agencies.  Since Federal agencies use and disseminate traffic data from State and local agencies, the quality of the data will become even more critical.

 

It is also recognized that the quality of the traffic data and the information produced from the data are critical factors that affect the abilities of transportation agencies to ensure the security of transportation and the management of the nation’s transportation resources.  Data reliability requires that the INFOstructure consistently produce output that the public sector and the private sector can accept without skepticism or distrust.  Effective data quality methods and tools are critical for ensuring the success of INFOstructure applications.

 

The focus of data quality is on establishing a consistent methodology for ensuring that data are managed so that a measure of reliability is sustained.  Several factors affect data quality, including addressing “data gaps” to rectify coverage deficiencies as well as data compatibility across different software/hardware platforms; ensuring that data elements are efficiently matched with coordinated location and time elements; and resolving conflicts among data formats so that data are manipulated to satisfy information and presentation needs.

1.2       Project Objectives and Scope

The primary objective of this project is to define an action plan with work items that can be executed through the U.S. Department of Transportation (DOT), stakeholder organizations (e.g., American Association of State Highway and Transportation Officials [AASHTO], ITS America), State agencies, and private industry.  It is anticipated that this effort will establish a multi-year program that will reinforce and sustain the value of INFOstructure applications.  Specifically, this project will:

 

(1)         Develop white papers that explore the issues and current practices for ensuring quality, focusing on transportation but also considering how data quality is addressed in other industries

 

(2)         Develop a draft action plan and timeline for U.S. DOT and others to pursue that will develop metrics, tools, and recommended practices to ensure that data quality is effectively attained

 

(3)         Assemble a workshop that includes the co-sponsorship of relevant stakeholder organizations to address the issues and to validate and revise the action plan and timeline

 

(4)         Prepare proceedings and a compendium of the workshop along with an analysis of the validated action plan.

1.3       Organization of Report

The remainder of this report is divided into several chapters: 

 

Chapter 2 presents an overview of the research approach.  It also describes the major issues associated with traffic data quality.

 

Chapter 3 presents the proceedings of the two regional workshops.  This chapter includes summaries of the white papers, workshop discussions, and action items identified at the workshops.

 

Chapter 4 presents the action plan for addressing the traffic data quality issues.  The action plan describes the action items and identifies the responsible agencies for implementing the action items.

Chapter 5 presents the concluding remarks and recommendations.

 

The detailed white papers and list of workshop participants are included as appendices to the report.  Other relevant literature on traffic data quality is also included in the appendices.


2.0    RESEARCH APPROACH

The research approach adopted for the project comprises a number of steps as summarized in Figure 1.  These steps are discussed below.

 

 

Figure 1.  Traffic Data Quality Research Approach

2.1       Traffic Data Quality Issues

As a first step, a kick-off meeting was held at the start of the project with the primary objectives to (i) review the traffic data quality issues, (ii) discuss the themes for the white papers, and (iii) review the strategy for conducting the research.  Several issues associated with traffic data were identified that are common to various applications.  These issues must be addressed to ensure better quality traffic data for ATIS, ATMS, and ITS data archiving and re-use.  These issues can be grouped in different categories, as shown below:

 

Definition and Measurement Issues

·        Defining data quality attributes, including accuracy, consistency, reliability

·        Identifying differences in quality perceived by public and private sector data collectors and users

·        Quality of data as a function of its intended use

·        Measuring and ensuring quality data

·        Quantitative and qualitative metrics/levels

·        Identifying minimum acceptable levels of data quality for different applications

·        Quality control (fixing the problem at the source)

·        Lack of understanding of the full scope of the issue

·        Lack of a consistent approach for ensuring consistent quality

 

 

Equipment Installation and Maintenance Issues

·        Subcontractors install loops carelessly

·        Power and communications disruptions

·        Mix of technology introduces inherent data discrepancies

·        Innovative approaches to data collection

·        Loop detectors versus non-intrusive data collection devices

·        Those who maintain detectors may be different from those who install them

·        Effects of contracting approach on data quality

·        Relationship between data collection device and quality

·        Loops get torn out by third parties

 

Coverage Issues

·        Share traffic data or collect it yourself

·        Better quality with less coverage or lower quality with more coverage

·        Better definition of depth of coverage

·        Coverage of detectors seems to focus on traffic monitoring, but what about forecasting?

 

Resource Issues

·        Budget limitations for traffic data collection

·        Lack of field staff for proper maintenance of monitoring devices

·        Lack of expertise in data management issues

·        The implications of funding levels on quality of data collected


Institutional Issues

·        Institutional issues relating to data collection and sharing

·        Regional or state versus national level interests and perspectives of data quality

 

These issues were used to scope three white paper themes.  Each white paper addresses a set of issues and includes a summary of previous literature, innovative practices, and barriers that exist in transportation operations that prevent data quality metrics, tools, and methodologies from being established.  In order to obtain more current information regarding practices, tools, and methodologies, a few states and other users of traffic data were interviewed.

 

It was also decided at the kick-off meeting that two or more regional workshops be conducted rather than the originally planned single national workshop.  The regional workshops were expected to provide the opportunity to share experiences and gather inputs from a wider range of traffic data users.

2.2       Data Collection – Interviews

In developing the white papers, officials from state DOTs and ITS groups were contacted and interviewed.  Representatives from seven states were interviewed:  Arizona, Minnesota, Ohio, Kentucky, Pennsylvania, Utah, and Virginia.  A structured interview guide was developed and used in conducting the interviews.  The contact list and interview guide are included as Appendix B of this report.  Information gathered from the interviews was incorporated into the white papers.

2.3       Development of White Papers

As noted above, the white papers were developed from literature review and information gathered through the interviews.  The draft white papers were revised based on review comments from the FHWA.  Full versions of the revised white papers are provided in Appendix A to this report.  Chapter 3 of this report presents summaries of each white paper and discussions on the findings of the regional workshops.  The following are the three white papers that were developed by the project team. 

 

White Paper #1:  Defining and Measuring Traffic Data Quality (EDL # 13767)

 

Scope:  This white paper defines measures and methods for quantifying traffic data quality.  Issues considered include:

·        Definition of traffic data quality for different users and for different applications

·        Data quality metrics or measures

·        Methodology for assessing traffic data quality

·        Acceptable levels of quality


White Paper #2:  State of the Practice for Traffic Data Quality (EDL # 13768)

 

Scope:  This white paper documents the issues, measures, and approaches for assessing, using, and accommodating traffic data quality in various applications.  Issues considered include:

·        Types and applications of traffic data being used by the states

·        How data quality problems are handled in various applications

·        Methods used or studies conducted by states to ensure data quality

·        Institutional issues, data sharing issues, and funding constraints

White Paper #3:  Advances in Traffic Data Collection and Management (EDL #13766)

 

Scope:  This white paper identifies innovative approaches for improving data quality.  This includes new contracting methods, business models, standards, training for data collection, and data sharing between agencies and states.  Consideration was also given to public-private partnerships, advanced traffic detection techniques (intrusive versus non-intrusive), and data archiving and use.  The issues addressed in this white paper include:

·        Loop detectors versus non-intrusive data collection devices

·        Lack of field staff for proper maintenance of monitoring devices

·        Innovative approaches to data collection

·        Effects of contracting approach on data quality

·        New contracting methods, more coordination, standards, and training

Note:  Full versions of the revised white papers are also available as stand-alone documents on the ITS Electronic Document Library at http://www.its.dot.gov/itsweb/welcome.htm

2.4       Regional Workshops

Two regional workshops were conducted with the primary objective of obtaining inputs from participants in developing an action plan to address traffic data quality issues.  The goal was to define an action plan with work items that can be executed by the U.S. Department of Transportation (DOT), stakeholder organizations (e.g., American Association of State Highway and Transportation Officials [AASHTO], ITS America), state agencies, and private industry.

 

The regional workshops were sponsored by the FHWA Office of Policy, the ITS Joint Program Office (JPO), Ohio Department of Transportation (ODOT), and Utah Department of Transportation (UDOT).  The workshops were held on March 11, 2003 in Columbus, Ohio and on March 13, 2003 in Salt Lake City, Utah.  The revised white papers were distributed to the attendees about two weeks in advance of the workshops, giving them the opportunity to read and become familiar with the concepts and material to be discussed.  The white papers served as inputs to stimulate discussions at the regional workshops.

 

The workshops were intended for state DOT professionals responsible for collecting and using traffic detector data for any application including representatives from traffic management centers (TMCs), traffic operations, traffic monitoring, and planning divisions.  The workshop attendees included data providers and users as well as those who influence data collection activities.  This group includes officials, administrators, or managers involved in budgeting and funding as well as contractors who provide and install data collection devices.  In attendance were private sector travel information providers and representatives from 10 state DOTs (Ohio, Delaware, Indiana, Kentucky, Pennsylvania, Utah, Idaho, Texas, Washington, and California).  Also in attendance were representatives from Advanced Regional Traffic Interactive Management and Information System (ARTIMIS) in Cincinnati, Ohio; Maricopa Association of Governments (MAG) in Arizona; Northeast Ohio Areawide Coordinating Agency (NOACA); Ohio-Kentucky-Indiana (OKI) Regional Council of Governments; and Akron Metropolitan Area Transportation Study (AMATS).  The list of workshop attendees is provided in Appendix C of this report.

 

The draft proceedings of the two regional workshops were prepared and circulated among the workshop attendees for review and comments.  The workshop proceedings included summaries of the white papers, the discussions, and action items.  The combined proceedings from the two workshops are presented in Chapter 3 of this report.

2.5       Action Plan Development

Several action items were identified and prioritized at the two regional workshops.  The action plan described in Chapter 4 of this report builds upon the findings in the white papers and inputs obtained from the regional workshops and reflects a broad consensus of the workshop participants.

2.6       Additional Traffic Data Quality Literature

Additional relevant information on traffic data quality issues is compiled and presented in Appendix D of this report.  Specifically, the literature pertains to data sharing, institutional issues, vehicle classification, and loop detector failures.  These documents are intended to provide more detail on some of the major issues discussed at the regional workshops and in the white papers.


3.0    WORKSHOP PROCEEDINGS

3.1       Introduction

This chapter presents the combined proceedings of the two regional traffic data quality workshops.  Dr. Edward Fekpe, the principal investigator of the project, opened each workshop by welcoming all participants and providing a concise overview of the traffic data quality project.  He also provided a description of the approach used in developing an action plan to address the various issues relating to traffic data quality.

 

At the regional workshop in Columbus, Ohio (March 11, 2003), Dr. Fekpe reviewed the agenda for the workshop and then introduced the Contracting Officer’s Technical Representative (COTR) for the project, Mr. Ralph Gillmann, to discuss the objectives of the workshop.  Mr. Gillmann outlined the objectives of the project and the expectations for the one-day workshop.  He gave a background of recent efforts, including workshops and studies that addressed issues of ITS-generated data.  The most recent activities that were highlighted include:

 

 

Mr. Gillmann also distinguished between real-time and archived data with respect to their uses and the quality requirements for each type.  Finally, Mr. Gillmann outlined the objectives of the workshop, which included agreeing upon the institutional and technical traffic data quality issues.  The primary goal of the workshop was to define an action plan that includes successful practices, new solutions, and priorities.  Mr. Gillmann also emphasized that data from traffic detectors were the main focus, although other traffic data would not be excluded.

 

At the regional workshop in Salt Lake City, Utah (March 13, 2003), Mr. James Pol presented objectives of the meeting and the expectations from the one-day workshop.  Mr. Pol gave a background of recent efforts including workshops and studies to address issues of ITS-generated data.  He outlined the objectives of the workshop, which included agreeing on technical and institutional traffic data quality issues.  He also mentioned the added importance of traffic data quality with new INFOstructure and integration strategies being proposed for ITS.  As at the Ohio workshop, the primary goal of the Utah workshop was to define an action plan that includes successful practices, new solutions, and priorities.

 

The three white papers were presented at each workshop, followed by a detailed discussion of the issues raised.  The remainder of each workshop was devoted to discussions to obtain inputs and ideas for the development of the action plan.  Various traffic data quality action items were identified and discussed.  The following sub-sections present summaries of the white papers, detailed discussions, and action items.

 

 

3.2       Session 1 – Defining and Measuring Traffic Data Quality

The white paper titled “Defining and Measuring Traffic Data Quality” was written by Mr. Shawn Turner (TTI) for this project.  The complete version of the white paper is provided in Appendix A.  In developing this white paper, current and advanced practices for addressing data quality were reviewed for three types of user communities:  1) real-time traffic data collection and dissemination; 2) historical traffic data collection and monitoring; and 3) other industries such as data warehousing, management information systems, and geospatial data sharing.  The recommendations in this paper follow from this review.

3.2.1   Defining Data Quality

The literature contains two similar definitions for data quality.  Strong, Lee, and Wang (1997) define information quality as “fit for use by an information consumer” and indicate that this is a widely adopted criterion for data quality.  English (1999A) further clarifies this widely adopted definition by suggesting that information quality is “fitness for all purposes in the enterprise processes that require it.” English emphasizes that it is the “phenomenon of fitness for ‘my’ purpose that is the curse of every enterprise-wide data warehouse project and every data conversion project.”  English (1999B) defines information quality as “consistently meeting knowledge worker and end-customer expectations.” It is clear from these definitions that data quality is a relative concept that could have different meanings to different consumers.  For example, data considered to have acceptable quality by one consumer may be of unacceptable quality to another consumer with more stringent use requirements.  Thus it is important to consider and understand all intended uses of data before attempting to measure or prescribe data quality levels.

 

The recommended definition for traffic data quality is as follows:

 

“Data quality is the fitness of data for all purposes that require it.  Measuring data quality requires an understanding of all intended purposes for that data.”

3.2.2   Measuring Data Quality

Based upon the review, the following data quality measures are recommended:

·        Accuracy

·        Completeness

·        Validity

·        Timeliness

·        Coverage

·        Accessibility

There are several other data quality measures that could be appropriate for specific traffic data applications.  The six measures presented above, however, are fundamental measures that should be universally considered for measuring data quality in traffic data applications.

 

At this time, it is recommended that goals or target values for these traffic data quality measures be established at the jurisdictional or program level based on a better and clearer understanding of all intended uses of traffic data.  It is evident that data consumers’ needs and expectations, as well as available resources, vary significantly by implementation program, urban area, and state, and preclude the recommendation of a universal goal or standard for these traffic data quality measures.

 

It is also recommended that if data quality is measured, a data quality report be included in metadata made available with the actual dataset.  The practice of requiring a data quality report using standardized reporting is common in the GIS and other data communities.  In fact, several metadata standards already exist (FGDC-STD-001-1998 and ISO DIS 19115) for standardized reporting of data quality in datasets.  Until a formal traffic data archive metadata standard is approved, the traffic data community should create metadata based upon the core elements (i.e., mandatory metadata items) required in these two geospatial metadata standards.
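For illustration only, the following is a minimal sketch (in Python) of what a metadata record carrying an embedded data quality report might look like; the field names and values are hypothetical and are not drawn from FGDC-STD-001-1998 or ISO DIS 19115:

    # Hypothetical metadata record with an embedded data quality report.
    # Field names are illustrative only; a real archive would follow the
    # core (mandatory) elements of a formal metadata standard.
    metadata = {
        "dataset": "Urban freeway detector archive, 2002",
        "collection_interval_seconds": 30,
        "data_quality_report": {
            "accuracy_percent_error": 5.2,   # error vs. ground-truth counts
            "completeness_percent": 75.0,    # records received / expected
            "validity_percent": 96.4,        # records passing validity checks
            "timeliness_seconds": 120,       # average delay until available
            "coverage_percent": 98.0,        # share of freeway miles instrumented
            "accessibility": "CSV download; raw and 15-minute aggregates",
        },
    }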

3.2.3   Discussion Points

The following points were suggested as discussion items at the end of the presentation:

 

  1. Agreement with the data quality measures?
  2. What are the technical or institutional barriers to measuring traffic data quality and providing data quality information with the data itself?
  3. Is there a need to provide guidelines on calculating data quality measures given typical traffic data?
  4. Is there a need for an official standard on defining or calculating these measures?
  5. What are the minimum acceptable levels of data quality for different applications?
  6. Is there a need for national benchmarks or standards for traffic data quality levels?
  7. Given that different applications and users of traffic data require different quality levels, how do public agencies reconcile these differences in quality requirements?  Particularly in cases where “non-paying” users want higher data quality than the group/agency whose budget maintains traffic data sensors?

3.2.3.1    Discussions – Ohio Workshop

Shawn Turner (Texas Transportation Institute) initiated the discussions by asking the workshop participants about their reactions to the data quality measures.  While there was overall agreement that the data quality measures are adequate, there was discussion about some of the measures.

 

The completeness measure was acknowledged as a good measure.  There was some concern that reporting this measure could be embarrassing for state agencies.  None of the state agencies currently report it.  Rob Bostrom from the Division of Planning, Kentucky Transportation Cabinet, stated that their Automatic Traffic Recorder (ATR) data do not contain data for all 365 days.  He also stated that data completeness is important for applications like k-factor calculations (the 30th highest hour) that are used in highway design and capacity analysis.  He also stated that, with the existing errors in data collection, the use of the 50th highest hour might not be very different from the 30th hour and that this might be a future research need.  Also, some applications, such as calculating Equivalent Single Axle Loads (ESALs) from Weigh-in-Motion (WIM) data, require that all days be represented.
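To make the k-factor discussion concrete, the following is a minimal sketch (in Python, assuming a list of hourly volume counts for one ATR site over a full year) of how the 30th or 50th highest hour factor might be computed; it is illustrative only, not an agency procedure:

    def k_factor(hourly_volumes, rank=30):
        # Design-hour factor: the rank-th highest hourly volume divided
        # by AADT.  Missing hours bias both the ranked hours and the
        # AADT estimate, which is why completeness matters here.
        if len(hourly_volumes) < rank:
            raise ValueError("not enough hourly records")
        aadt = sum(hourly_volumes) / (len(hourly_volumes) / 24.0)
        design_hour = sorted(hourly_volumes, reverse=True)[rank - 1]
        return design_hour / aadt

    # Comparing the 30th and 50th highest hours, as raised above:
    # k30, k50 = k_factor(volumes, 30), k_factor(volumes, 50)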

 

It was also suggested that the data quality measures in the white paper need to be customized by application and region.  Greg Oliver from Delaware DOT mentioned that summer periods are critical for traffic data collection in the state because of the increased flow of traffic during these months.  It is important that the data quality measure reflect this temporal component.

 

David Gardner, ODOT, questioned the usefulness of the data quality measures especially to the final user.  Most users of ODOT data expect a certain quality level to be met and do not necessarily need all the details regarding quality.  A suggestion was to have tiers of users and applications with different data quality documentation needs.

 

Andrew Pierson, URS, mentioned that it is often difficult to go back and verify data collection efforts especially since a consultant is unable to obtain the ground truth.  Data from the states typically lack metadata or the discussion of the context in which the data are produced.

 

Steve Jessberger from ODOT raised a question about the validity measure of data.  Specifically, what should be done with data collected during snow or construction?  Should agencies use the “real” but atypical data or try to collect only typical data?  Ralph Gillmann, FHWA, replied that FHWA would like to know why the data are abnormal and that while atypical conditions are not good for some applications like average annual daily traffic (AADT), metadata (data about data) for such cases would be helpful.  Metadata are not required by FHWA at this time.  None of the workshop participants indicated that the state agencies were collecting and reporting metadata.

On the question of metadata and its value, it was noted that agencies are unable to communicate effectively about data quality because there is usually no historical information or metadata that can be used for comparison; that is, there is no quality information associated with existing data. Some participants noted that their existing traffic analysis software or databases did not support the storage of metadata associated with traffic data.

 

On the issue of minimum acceptable data quality standards, the workshop participants suggested that the minimum acceptable standards vary by state, type of application, and data collection device.  Some minimum requirements are already in use by states for ATR data.  Ohio, Kentucky, and Indiana, for example, require two weeks of data per month from the ATRs.  Indiana also requires at least two days from each day of the week, per month.  There was no consensus as to whether it is necessary or feasible to set minimum acceptable data quality standards.
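A minimal sketch (in Python) of how such monthly minimums might be checked automatically; the 14-day and two-per-weekday thresholds mirror the rules mentioned in the discussion but are otherwise assumptions:

    from collections import Counter
    import datetime

    def meets_monthly_minimum(dates, min_days=14, min_per_weekday=2):
        # dates: calendar dates in one month with valid ATR data.
        weekday_counts = Counter(d.weekday() for d in dates)
        enough_days = len(set(dates)) >= min_days
        enough_weekdays = all(weekday_counts[w] >= min_per_weekday
                              for w in range(7))
        return enough_days and enough_weekdays

    # Example: valid data on the first 15 days of March 2003 satisfies
    # both rules (15 days; at least two of each day of the week).
    march = [datetime.date(2003, 3, d) for d in range(1, 16)]
    print(meets_monthly_minimum(march))  # True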

 

It was noted that the purposes of the traditional traffic monitoring groups and the ITS groups are different and that this affects their data collection and management philosophy.  Scott Evans from ARTIMIS stated that the cameras and the changeable message signs were their priority for their Traffic Management Center (TMC), and they were interested only in the change in traffic volumes.

 

Several participants expressed concerns about ITS data, including the following:

 

 

The planning division in Pennsylvania DOT has been trying to use TMC data and has encountered some challenges in educating the TMC about its data requirements.  It was also suggested that additional research be conducted to understand the value of ITS data.

 

Several traffic monitoring personnel stated that there was significant overhead involved in using ITS data including the pre-processing of data.  Ohio and Kentucky have a good relationship with ARTIMIS (the TMC in Cincinnati), and data sharing does exist between the TMC and the traffic monitoring groups.  The TMC is able to provide data to the traffic monitoring group at ODOT in a compatible Traffic Monitoring Guide (TMG) format.  While ITS groups require dense coverage, the traffic monitoring groups require coverage for a much larger area.  Dave Gardner, ODOT, cautioned that the availability of ITS data can sometimes overwhelm the resources of the traffic monitoring group in terms of the post-processing requirements.

 

All the participants agreed that guidelines are needed to explain the calculation of the suggested data quality measures.  The following observations were made regarding the need and usefulness of guidelines:


 

It was suggested that these guidelines should be similar to what is being done by ASTM (formerly American Society for Testing and Materials) for archived data.  It was also noted that standards about data quality might be useful and could be included in the AASHTO guidelines for data monitoring programs.

 

National benchmarks for data quality were also strongly encouraged.  It was noted that the concept of INFOstructure should be used in integrating all transportation-related data.  There should be greater emphasis on sharing and integrating data systems at state, local, and regional levels.  At minimum, these benchmarks should be set for loop-based detection systems.  These benchmarks also should be set based on the type of application.

3.2.3.2    Discussions – Utah Workshop

There was general agreement that the six fundamental measures of traffic data quality adequately describe all aspects.  Dr. Mark Hallenbeck of the University of Washington added that the measures presented are the right set of quality measures.

 

The workshop participants noted that the completeness measure was difficult to define as it may differ based on the application.  The assumptions and definitions for this measure also need to be explicit.  For example, 100 percent complete data for freeways is only a partial representation if the arterial system is also considered.  It was felt that the data quality measures need to be specified differently for different applications and the uses of data should decide the nature and necessity of quality measures.  It was suggested that data quality measures need to be fluid and flexible.  One of the participants requested additional clarification on the differences between completeness and coverage.  Shawn Turner explained that “completeness” refers to the temporal aspect and “coverage” refers to the spatial aspect of traffic monitoring.  As far as data quality is concerned, it was noted that there is a lack of guidance for deploying sensors, and they are deployed ad hoc based on operational needs.

 

Note:  After considering post-workshop comments, the research team agrees that completeness can represent more than just the temporal aspects of missing data.  “Completeness” can refer to both the temporal and spatial aspects of data quality, in the sense that completeness measures how much data is available compared to how much data should be available.  The “coverage” measure is most often used to refer to “how much data should be available” in terms of the extent of the transportation network.  For example, the “coverage” of a dataset could be 98 percent of the freeway system within an urban area with continuous data collection (24 hours per day, 365 days per year).  However, sensor downtime at a few locations and system downtime for a major system software failure might result in a completeness value of 75 percent, in which case the archive contains 75 percent of the data that should be available from the given coverage of 98 percent of the freeway system.
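To make the distinction concrete, the following short sketch (in Python) reproduces the arithmetic in the note above; the sensor count and reporting interval are hypothetical:

    # Coverage is spatial: the share of the network that is instrumented.
    freeway_miles, instrumented_miles = 100.0, 98.0
    coverage = instrumented_miles / freeway_miles          # 0.98

    # Completeness is how much of the expected data actually arrived.
    sensors = 200                                          # hypothetical
    records_per_year = 365 * 24 * 12                       # 5-minute records
    expected = sensors * records_per_year
    received = int(expected * 0.75)                        # downtime losses
    completeness = received / expected                     # 0.75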

Qing Xia of Maricopa Association of Governments in Arizona raised a question about the weighting or ranking of the data quality measures.  Shawn Turner noted that there are no rankings or weights associated with these measures, although that is an idea for future research.

 

Peter Martin from the University of Utah suggested adding two sub-measures for the accessibility measure of data quality.  The first sub-measure suggested was “portability,” to indicate the number of different formats in which the data are available to the user.  The second sub-measure would provide information on the level and type of manipulation applied to the data.  Researchers from the university would like information on whether the data are raw or processed and how to access and reformat the data.  Mark Hallenbeck indicated that the TMC in Seattle has status flags for its detector data that indicate problems and applied solutions at different levels of data aggregation.

 

Martin Knopp, Utah DOT, agreed with the data quality measures and noted that the accessibility measure could place unusual demands on the states to provide data in formats to satisfy all users.  It was suggested that this measure be stated as a philosophy instead.  If all users can be defined then their accessibility also can be defined.  The problem is that some uses for data may not be immediately known—future potential uses of data may have different requirements.

 

Meeting the quality goals of non-paying users is difficult for two reasons:  (i) the provider may have different perspectives on data quality and (ii) the requirements of the non-paying user may not be clearly defined in the budget.  It was felt that if all parties (potential users or beneficiaries) pool resources to secure sufficient funds, it may be possible to meet the data quality requirements of all users.

 

In response to a question about the institutional and technical barriers involved in calculating and reporting these measures, it was noted that cost and time are the two most important issues.  There could be a significant cost to modify software to report the quality measures.  Some participants would like information on the return on investment obtained by reporting these quality measures.  Raelene Viste (Idaho Transportation Department) commented that these measures could be very useful within the transportation group itself to monitor their performance even if the external users do not need these measures.  Texas DOT feels that there is a good return on investment if these measures are followed.

 

Institutional issues arise because different departments have different data needs, operating rules, and budgets.  There is no existing mechanism for effective communication and exchange of views relating to traffic data and its quality. 

 

It was suggested that guidelines and baseline instructions could be helpful in allowing the agencies to calculate and report data quality measures.  It was also suggested that these guidelines be provisional, which will give the impetus for the agencies to start collecting quality data, allow them to start reporting data in a certain way, and provide them time to overcome the institutional barriers.  Creating a traffic monitoring master plan was suggested to describe how different components work and how they coordinate within agencies.  Caltrans indicated that they have already started work in this area.  These guidelines should take into consideration that most agencies have legacy systems, which often can be problematic.  Another idea to formalize the data quality process was to include data quality requirements in the regional ITS architectures along with data flows.  The visibility and the relevance of data collection programs can benefit greatly from data quality reporting.

 

For a particular goal or program, there is the need for a minimum set of measures to assess the quality of the data.  However, while there was no consensus on the minimum set of standards among the participants for all the applications of traffic data, it was suggested that state DOTs need to start with provisional standards that include performance statistics that have visibility within the department. 

 

There was no general agreement for the need to establish national data quality benchmarks.  Some participants felt that there is no need for a national benchmark; others thought that perhaps “national benchmark” is too strong, suggesting the use of “national goal” instead.  National goals could be set for different uses of data.  It was agreed that normalizing or leveling the playing field may be difficult given the diverse application types and needs.  However, it was also noted that such goals could lead to uniformity in data quality reporting.  Caltrans indicated that it operates according to a performance level but sees some value in having a national goal.  Such national goals also would be helpful for vendors.  Another view indicated that each state could define its own use and its own goal and standard instead of adhering to an established national goal, which may be more difficult to set and achieve.  In this way goals would be defined and met at the state level.  States that do a good job in maintaining data quality should be recognized and rewarded.

3.3       Session 2 – State of the Practice in Traffic Data Quality

The white paper titled “State of the Practice in Traffic Data Quality” was written by Dr. Rich Margiotta (Cambridge Systematics) for this project.  The complete version of the white paper is provided in Appendix A.

3.3.1   Types and Applications for Traffic Data

Several types of traffic data are collected by both “traditional” and ITS means.  Where there is overlap between the two realms, the basic nature and definitions of the data collected are the same.  However, there are subtle differences in data collection methodologies that may lead to problems with data sharing and quality.  Among these are the polling rate and vehicle classification “bins”.
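As a hypothetical illustration of the classification “bins” issue, the sketch below (in Python) collapses the 13 FHWA vehicle classes into three length-based bins of the kind many ITS detectors report; the grouping shown is an assumption, and the fact that agencies group classes differently is exactly the data-sharing problem described above:

    # Assumed grouping: FHWA classes 1-3 -> short, 4-7 -> medium,
    # 8-13 -> long.  Actual groupings vary by agency and device.
    LENGTH_BIN = {}
    LENGTH_BIN.update({c: "short" for c in range(1, 4)})
    LENGTH_BIN.update({c: "medium" for c in range(4, 8)})
    LENGTH_BIN.update({c: "long" for c in range(8, 14)})

    def to_length_bins(class_counts):
        # Collapse per-class counts {fhwa_class: count} into length bins.
        bins = {"short": 0, "medium": 0, "long": 0}
        for cls, count in class_counts.items():
            bins[LENGTH_BIN[cls]] += count
        return bins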

3.3.2   Traffic Data Quality:  Characteristics

What Causes “Bad” Traffic Data:  Several sources contribute to inaccuracies in traffic data.  These relate to the nuances of specific equipment and how data are collected and transmitted from the field:

 

 

Detection of “Bad” Data:  The white paper “Defining and Measuring Traffic Data Quality” presents a full discussion of how questionable or inaccurate data are identified after they are collected from the field.  A variety of methods are used, including internal range checks, cross-checks, time series patterns, comparison to theory, and historical patterns.
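A minimal sketch (in Python) of the kinds of internal range checks and cross-checks mentioned above, applied to a single 30-second detector record; the thresholds are illustrative assumptions, not standard values:

    def validity_flags(volume, occupancy_pct, speed_mph):
        # Return the names of the rules the record fails (empty = passes).
        flags = []
        if not 0 <= volume <= 25:                # range check, 30-s count
            flags.append("volume_range")
        if not 0.0 <= occupancy_pct <= 100.0:    # range check
            flags.append("occupancy_range")
        if volume == 0 and occupancy_pct > 0:    # cross-check
            flags.append("occupancy_without_volume")
        if volume > 0 and speed_mph == 0:        # cross-check
            flags.append("volume_without_speed")
        return flags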

 

Correction of “Bad” Data:  Once suspect data are identified, the question then is what to do about them.  Most applications flag the records failing quality control or set the measurement values to missing or other special codes.  Editing the measurement values is far less common, although some experimentation with “imputing” values has taken place.  Imputation appears to be most applicable where small intermittent gaps appear in the data rather than large portions of time with missing or suspect data.  A variety of techniques have been explored, including time series smoothing and historical growth rates by location, day, and week.  However, there is little consensus in the profession on what techniques should be used, or whether imputation should be done at all.
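A sketch (in Python) of the kind of simple imputation described above: small intermittent gaps are filled by linear interpolation, while longer runs are left missing for downstream flagging; the three-interval limit is an assumption:

    def impute_small_gaps(values, max_gap=3):
        # values: numeric measurements with None marking missing intervals.
        out = list(values)
        i = 0
        while i < len(out):
            if out[i] is None:
                start = i
                while i < len(out) and out[i] is None:
                    i += 1
                gap = i - start
                # Interpolate only short, interior gaps.
                if start > 0 and i < len(out) and gap <= max_gap:
                    lo, hi = out[start - 1], out[i]
                    for k in range(gap):
                        out[start + k] = lo + (hi - lo) * (k + 1) / (gap + 1)
            else:
                i += 1
        return out

    # impute_small_gaps([10, None, None, 16]) -> [10, 12.0, 14.0, 16]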

3.3.3   Quality Issues for Using ITS-Generated Data for Traditional Uses

The applications that ITS-generated traffic data support – operational versus traditional uses – as well as the nuances of data collection in both cases, can have an impact on data quality.  Several differences exist based on these points:

 

3.3.4   Recommendations:  Possible Solutions

Sampling of ITS Locations and Data Streams:  Selecting certain strategic locations where both ITS and traffic monitoring groups can concentrate their efforts to correctly install, inspect, and maintain equipment.

 

Shared Resources:  The sharing of expertise and resources among the various agencies within the state DOTs to ensure that they benefit from one another’s strengths and help overcome weaknesses.

 

Maintenance, Calibration, and Performance Standards:  Undertaking formal studies of data quality by setting maintenance and calibration standards and goals for traffic monitoring devices.

 

Contractual Arrangements:  New and emerging business models such as outsourcing and use of private contractors for collecting and archiving data.

 

More Sophisticated Operations Applications as a Data Quality Leader:  The current generation of operational strategies does not require extremely accurate data – operators typically need to know where the big problems are, and their responses are geared to this.  New and emerging operations applications may drive the need for high quality data.

 

New Technologies:  The use of new technologies including non-intrusive devices and probe vehicles combined with innovative uses of existing inductive loop technologies.

3.3.5   Discussion Points

The possible solutions and recommendations (section 3.3.4) served as the main points for the session’s discussions.

3.3.5.1    Discussions – Ohio Workshop

Rich Margiotta initiated the discussion by asking the participants what they thought of the potential solutions listed in the white paper.  The participants agreed that sharing resources between the ITS and traffic monitoring groups is a good idea.  The Division of Planning in Kentucky described an example of shared resources:  the Division invested in equipment it likes and trusts, and ARTIMIS identified modifications to those devices so that the TMC also can use them for ITS applications.  James Pol, ITS/JPO, mentioned that there will be a greater need for sharing data in the future due to scarce resources.

 

On the question of whether there have been any observed cost savings due to data sharing, David Gardner, ODOT, responded that the data sharing with ARTIMIS was very recent and no cost information was available.  Indiana DOT commented that there should be some expected savings from a safety standpoint as they no longer have to place road tubes on the roadway.  It was suggested that TMCs start using ITS data only from select locations.  It was noted that the TMC in Cleveland is beginning to consider the use of ATR data for their operations.

 

One of the major themes of the discussion was the problems encountered during installation of traffic monitoring devices.  Proper installation is the most critical factor in ensuring that high quality data are obtained from a device.  It was noted that pre-qualification of contractors for installing loops and piezo-based detectors is not the usual practice.  Ohio does not have any pre-qualification standards for installation, and contractors install devices based on manufacturers’ instructions.  Indiana DOT calibrates its devices annually but does not have any standards for installation.  Pennsylvania DOT uses manual counts as the standard to assess the accuracy of ATR counts.  It is recognized, however, that manual counts also can be in error depending on the volume of traffic and thus may not be the most effective measure of ATR count accuracy.

 

David Gardner, ODOT, mentioned that Ohio DOT is working on a contract to maintain ATRs.  The contract would be a task order in which the successful contractor would be given maintenance tasks as needed.  ODOT hopes that such a contract would save time in fixing maintenance problems by having a contractor in place.

 

The overall consensus was that there is some existing information about installation and maintenance of equipment but more guidelines and standards are needed.

 

Quicker notification of sensor problems was discussed.  Today, in some cases, a problem might not be known for a period of four to six weeks (during data processing).  While in some instances it is possible to poll the devices daily (Kentucky polls its 77 sites daily), states with more sites usually poll less frequently.

 

On the question of whether the quality assurance software used by the traffic monitoring groups can be shared with the ITS groups, various states expressed an interest in the data validation rules used to check traffic data.  It was noted that state agencies had developed in-house software to validate traffic data using specific validation checks.  A synthesis of the data validation checks was suggested as a very important and desired research need.

 

It was also noted that some equipment does not have a sufficient level of accuracy, and it was recognized that vendors need to test their equipment better and make it more robust.  State DOTs also do not have information on the life-cycle cost of the equipment.  The participants further noted that the value of data to the customers was not clear.  In other words, what benefit would an increase in data quality provide to the customers?

3.3.5.2    Discussions – Utah Workshop

The participants felt that designating strategic ITS detector locations where the traffic monitoring groups and the ITS groups share resources and devices was a good idea.  Washington DOT already has started using a similar concept in which certain detectors are more important than others.  However, it was felt that these priority locations are politically driven and that land-use factors can change the priorities very quickly.  It is essential to include the planning groups in location selection and to reevaluate priorities periodically.

 

The participants also agreed that sharing resources is a good idea.  However, doing it well requires understanding what is possible and what is practical.  It is necessary to define the types of data needed and collected by all the agencies sharing the data and equipment.  Vehicle classification was discussed as an example.  The 13 vehicle classes used by FHWA are needed by very few analysis procedures, but traffic monitoring agencies are required to collect and report them.  However, ITS groups do not have the equipment to collect such detailed classification.  Some other groups within the DOT require information on body types and commodity hauled.  These discrepancies and specific needs should be understood and resolved to ensure synergies from the shared resources and equipment.

There were some concerns about sharing equipment, as different protocols and storage requirements used by different groups in the same agency make the use of the same devices difficult.

 

States have experienced problems with data collection equipment maintenance, primarily in inspecting installations once construction begins.  Coordinating with construction, planning, and operations groups to ensure proper installation and inspection is often a problem.  Joe Avis from Caltrans commented that devices that have had electrical inspections last longer than those that have not been inspected.  The biggest impediments to performing such inspections are time and cost.  Sharing resources to achieve this goal is very beneficial to everyone.

 

Various participants noted their frustrations with equipment installation.  Texas DOT is developing procedures for design, installation, and maintenance, and will make these available on the Internet so that contractors can access them.  They are also planning to train all their regional offices on the procedures related to installation and maintenance of traffic data collection devices.

 

The participants expressed interest in quality control and assurance software used by traditional traffic monitoring groups.  The software used by states varies greatly and is typically developed using their respective in-house business rules.  Mark Hallenbeck proposed creating an open-source software model or at least having the documentation of such software available on the web so that a DOT investing in such software knows what other agencies have used.  Martin Knopp (Utah DOT) mentioned a voluntary group of state agencies that encourages informal exchange of information.  Currently, the scope of this group is very limited.  There also has been a pooled fund study to look into the elements of quality assurance software.  There was a consensus that this is an area of great interest to participants.

3.4       Session 3 – Advances in Traffic Data Collection and Management

The third white paper, titled “Advances in Traffic Data Collection and Management”, was written by Dr. Dan Middleton (TTI) for this project.  The complete version of the white paper is provided in Appendix A.

3.4.1   Introduction

Without accurate and reliable detectors, traffic management decisions based upon real-time or historical data are compromised.  Many agencies use post-processing for quality assurance as opposed to quality control.  Quality assurance attempts to “fix the data” or identify defective data, rather than ensuring the accuracy and reliability of the equipment.  Quality control emphasizes good data by ensuring selection of the most accurate detector and then optimizing detector system performance.  This white paper identifies approaches for improving data quality through innovative contracting methods, standards, training for data collection, data sharing between agencies and states, and advanced traffic detection techniques.


3.4.2   Innovative Contracting Methods

A few agencies have already invested resources in developing new contracting methods as a means of ensuring data quality at its source.  Performance criteria in contracts, while not common, are being considered by DOTs as a method to transfer some of the risk and maintenance requirements to contractors. 

 

The Virginia Department of Transportation (VDOT) at the Hampton Roads Traffic Management Center uses contractors to support its day-to-day operations.  The TMC accomplishes the necessary maintenance on its detection system by hiring contractor personnel who are supervised by VDOT staff.  VDOT treats contractor personnel as an extension of its own staff, apparently giving the TMC director even more latitude to add or remove contractor personnel than VDOT staff.  The second example in Virginia is the VDOT Mobility Management Section (traditional data collection), which leases its traffic counters and modems from Digital Traffic Systems (DTS).  A state inspector checks the equipment once a year, but if there are substantial errors in the data, the contractor has to re-collect the data.  VDOT has established performance-based lease criteria for payment of data collection services.  Contractor compensation is based on the amount of acceptable data submitted by the contractor.

 

Another example of an innovative contracting method comes from the Ohio Department of Transportation’s Office of Technical Services, Traffic Monitoring Section.  ODOT is in the process of executing a task-order-type contract for maintenance to have contractors on board for anticipated and unanticipated maintenance requirements of the traditional data collection equipment statewide.  The contract is expected to begin in the summer of 2003.

3.4.3   Standards

Standards development is another aspect of traffic data quality.  The U.S. DOT ITS Standards Program is working toward the widespread use of standards to encourage the interoperability of ITS systems, including traffic data collection systems.  There is also a draft standard being developed by ASTM, entitled “Standard Specification and Test Methods for Highway Traffic Monitoring Devices” (ASTM, 2002), which will be available soon.  Standardization has occurred in Germany, the Netherlands, and France, where national standards for data collection equipment have been developed (U.S. DOT, 1997).  The process has increased the quality and accuracy of the data collected, decreased the effort needed to transfer data between agencies or offices, and increased the reliability of field equipment.  However, standardized equipment carries a higher initial cost than non-standard equipment.

3.4.4   Training for Data Collection

Training of personnel on the intricacies of the equipment is an essential part of ensuring data quality.  With improvements in non-intrusive detector hardware and software occurring at a rapid pace, maintenance personnel must be computer literate and must maintain an awareness of the latest changes for a variety of detection systems.  Initial training on new systems is often available through the vendor, but turnover in state DOT maintenance staff and the introduction of new models require an ongoing training program.

3.4.5   Data Sharing Between Agencies and States

Budget cuts are causing agencies to seek alternate means of meeting data supply needs, with one solution being to share data between agencies.  The Hampton Roads TMC currently shares video with the city of Norfolk and plans to share video, voice, and data with six other cities in the immediate area.  Norfolk also has a TMC, so the data sharing is mutually beneficial.  The New England states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont have cooperated to help each other and share transportation data.  ARTIMIS supplies data to the following agencies:  planning agencies within the Ohio DOT, the Kentucky Transportation Cabinet, the local MPO (Ohio-Kentucky-Indiana Regional Council of Governments), the City of Cincinnati Traffic Engineering office, local FHWA contacts, and the FHWA Mobility Monitoring project.  The agencies receiving ARTIMIS data perform their own analyses of data quality.

3.4.6   Advanced Traffic Detection Techniques

Quality control emphasizes data quality by ensuring selection of the most accurate detector and then optimizing detector system performance.  Two of the most recent research efforts focusing on the performance attributes of advanced detection techniques occurred at the Texas Transportation Institute (Middleton et al., 1999, 2000, 2002) and in Phase II of the Minnesota DOT Non-Intrusive Tests (MinnDOT & SRF Consulting, 2002).  Of the detectors recently tested by TTI and MinnDOT, the multi-lane detectors that are most competitive from a cost and accuracy standpoint are Autoscope Solo Pro, Iteris Vantage, RTMS by EIS, SAS-1 by SmarTek, Traficon NV, and 3M Microloops.

3.4.7   Discussion Points

The following points were suggested as discussion items at the end of the presentation:

 

·                     What are the equipment-related impediments to data sharing?

·                     What are the data accuracy concerns for ITS data?

·                     How many detectors can be “out” at any given time?

·                     Standards development takes time.  What do we do in the meantime?  Current standard output is “contact closure.”

·                     How should/will equipment vendors help (training, product consistency, information dissemination, diagnostics)?

3.4.7.1    Discussions – Ohio Workshop

Dan Middleton (Texas Transportation Institute) presented the paper on innovative approaches to traffic data collection and management.  European agencies have extensive experience with loop detectors and are satisfied with their performance.  These agencies are careful with installations and have national standards for loop installations.  Dan Middleton remarked that the specifications for the loop detectors themselves are not very different from those currently being followed by Texas DOT (TxDOT), but that there are stricter installation and maintenance standards in Europe.

 

The participants described the perfect detector as one that is easily installed off the road; weatherproof; self-diagnostic; and capable of collecting multi-lane volume, speed, and classification data.

 

There was also discussion of the appropriate spacing of detectors.  Participants felt that the current 0.5-mile spacing was driven primarily by ramp-metering applications and the one-mile spacing of urban interchanges.  For current applications at TMCs, 0.5-mile spacing is not required.  However, advanced traffic management applications might need such dense coverage.  Traditional traffic monitoring groups need data from only one location in each segment.  Thus, the spacing is determined by the potential application of the data.

 

In terms of contracting, it was noted that most manufacturers provide a one-year warranty on their equipment, and it might be useful if they provided longer warranties (e.g., five years).  Performance-based contracts were viewed as an interesting approach, but the participants needed more information on how to set up and manage these contracts.  Concerns were expressed about situations where the contractor and the state do not agree on the quality of the data, and about the increased costs of these contracts.  Currently, the primary mode of contracting is low-bid.  Another idea was to develop an asset management approach for certain devices.  It was noted during the discussions about contracting and business models that universities are now becoming archivists of traffic data.  The field operational test (FOT) being planned in Virginia would provide more information on such a framework and its advantages and disadvantages.

 

The participants also indicated the need for a clearinghouse of traffic detectors.  Ralph Gillmann mentioned the Vehicle Detector Clearinghouse (VDC), a pooled-fund project operated by New Mexico State University.  The clearinghouse has information on traffic detector tests conducted, and offers limited technical assistance.  It was noted that the clearinghouse is not a testing facility.  The need for such a testing facility was also expressed.

 

It was noted that vehicle classification was a problem for most of the detectors.  The 13 vehicle classes required by FHWA restrict the type of traffic detection device that can be used.  Also, length-based detectors have different classification schemes based on the manufacturer.  Ralph Gillmann mentioned that FHWA has worked with Illinois DOT to allow it to report length-based classification data.

3.4.7.2    Discussions – Utah Workshop

The participants were receptive to newer detection technologies as long as they are cost effective and approach the accuracy of inductive loops.  Participants from traffic monitoring groups indicated that they had tried non-intrusive technologies, including remote traffic microwave sensor (RTMS) and video-based detection, with varying degrees of success.  In terms of the cost-benefit of using newer detection technologies, it was felt that life-cycle costs for traffic detectors would be very valuable in decision-making; however, cost information is often not available.  It was also noted that while the costs of traffic control and maintenance are reduced in the case of non-intrusive detectors, there are still some costs which need to be considered in the cost-benefit analysis.

 

It is not uncommon for vendors to release new or modified equipment before it has been fully tested and before proper training is provided to the vendor’s own personnel.  A testing institute was suggested as a solution, and the Vehicle Detector Clearinghouse was suggested as a potential candidate to perform such a service.  Currently the clearinghouse provides information about detectors and tests conducted by the states, but it does not conduct independent testing.

 

Installation of devices was discussed again in this session as being critical.  Dan Middleton remarked that the Netherlands scanning tour indicated that the success of the inductive loops greatly depended on their installation.  There needs to be coordination during installation and even afterwards between different divisions of the same agency.  For example, milling operations to smooth the pavement can completely destroy loops, and lane-striping resulting in lane shifts can render the loops ineffective because they are no longer centered in the lanes. 

 

Each detector has its own issues and problems related to installation and calibration.  Location and set-up of these devices sometimes is more art than science.  While there are manufacturer’s instructions for set-up and installation, the installer must still use trial-and-error in some installations to achieve optimum performance.  Experience gained over time is helpful in correctly and efficiently setting up these devices.  A compilation of installation and maintenance procedures and best practices would also be very useful.

3.5       Action Plan Discussion

This section summarizes the results of brainstorming sessions conducted to identify and prioritize action items addressing the data quality issues discussed in the previous sessions.  The actions are organized by white paper topic.

3.5.1   Defining and Measuring Traffic Data Quality

3.5.1.1    Ohio Workshop

Following are the action items identified to address issues relating to defining and measuring traffic data quality:

 

 

 

 

3.5.1.2   Utah Workshop

Following are the action items identified to address issues relating to defining and measuring traffic data quality:

 

 

 

 

 

 

 

3.5.2   State of the Practice

3.5.2.1    Ohio Workshop

Following are the action items identified to address issues relating to the state of the practice:

 

 

 

 

 

3.5.2.2    Utah Workshop

Following are the action items identified to address issues relating to the state of the practice:

 

 

 

 

 

 

 

 

3.5.3   Innovative Approaches

3.5.3.1    Ohio Workshop

Following are the action items identified to address issues relating to innovative approaches to data quality:

 

 

 

3.5.3.2    Utah Workshop

Following are the action items identified to address issues relating to innovative approaches to data quality:

 

 

 

 

3.5.4   Responsibilities and Timeline

Responsibilities and timelines for implementing the action items were not discussed at the regional workshops.  Although responsibilities as to which agency should perform the action items were not explicitly identified, it was implicit that FHWA and state agencies would play leading roles.

 

 


4.0    Action Plan for Improving Traffic Data Quality

4.1       Introduction

As noted earlier, the primary objective of this project is to define an action plan with work items that can be executed through the U.S. Department of Transportation (DOT), stakeholder organizations (e.g., American Association of State Highway and Transportation Officials [AASHTO], ITS America), state agencies, and private industry.  Several action items were identified and prioritized at the workshops.  The action plan builds upon the findings in the white papers and inputs obtained from the regional workshops.  The action plan provides a blueprint for specific actions to address traffic data quality issues.

4.2       Partnerships and Coordination

Even though the regional workshops were not attended by representatives from every state, the plan is considered to reflect a broadly based consensus of the state DOTs and others involved in traffic monitoring activities on actions to address data quality issues.  Implementation of the plan will require collaboration among both public and private partners, with the FHWA and state DOTs playing leading roles.

 

Coordinators were identified for each action item and are expected to assume primary responsibility for implementing the specified action items.  Although specific agency responsibilities for action items were not explicitly identified, it was implicit that FHWA and state agencies will play leading roles.  For example, FHWA would lead development of data quality assessment guidelines, and the states would lead the use of task order contracting approaches.  In other areas, some FHWA assistance may be required in developing general guidance for the states.  States can then customize the approach to suit their individual circumstances.

 

There are three primary organizational units involved in the traffic monitoring activity:  Planning, Design, and Intelligent Transportation Systems (ITS) or Traffic Management Centers (TMC).  The degree of involvement in traffic monitoring activity can vary from conducting simple road tube counts to operating elaborate ITS installations.  Since methods, techniques, and equipment for conducting traffic monitoring activities are similar across the three organizational units, there is significant opportunity for partnering between the units.  These partnerships are critical in implementing some of the action items.

 

The plan identifies 10 priority action items distilled from the comments at the two regional workshops.

 

 

4.3       Action Items

This section describes the ten action items identified for improving traffic data quality from ITS and non-ITS sources.  These action items are presented in descending order of priority.  The plan includes descriptions of the action items and the issues they address.  For each action item, coordinating and collaborating agencies are specified. 

4.3.1   Guidelines and Standards for Calculating Data Quality Measures

Description:  Develop guidelines and standards for calculating traffic data quality measures.  The guidelines and standards are expected to contain methods to calculate and report the data quality measures for various applications and levels of aggregation.  In addition, the guidelines should also include:

 

·                    Examples or case studies of application of data quality methods 

·                    National goals (by application) – these data quality goals represent what state agencies can strive to achieve in their operations

·                    Guidance on how to construct and store quality measures

·                    Specifications and procedures for reporting data quality metadata

·                    Costs to calculate and report quality measures. 

 

Issues:  This action item was identified as top priority at the two regional workshops.  The action item addresses the following key issues:

 

·        Defining and measuring traffic data quality

·        Quantitative and qualitative metrics/levels of data quality

·        Acceptable levels of quality

·        Methodology for assessing traffic data quality.

 

Coordinators:  It was suggested that FHWA or AASHTO would be the appropriate agency to develop these guidelines.  A suggestion was to include guidelines for calculating data quality measures in the “AASHTO Guidelines for Traffic Data Programs” publication or in the Traffic Monitoring Guide. 

4.3.2   Compilation of Business Rules/Data Validity Checks and Quality Control Procedures

Description:  Synthesize validation procedures and rules used by various states and other agencies for traffic monitoring devices.  This synthesis report will also serve as a guide to DOTs and other agencies investing in new software for traffic data collection.  The synthesis document should also include quality control procedures for all types of applications and data management methods for maintaining high quality data.

 

The development and adoption of common software was identified as a possible approach to ensure uniformity among state agencies.  Recognizing that software development and testing is expensive and time-intensive, it was suggested that an immediate action would be to share documentation and knowledge of existing software among state agencies.

 


Issues:  This action item addresses the following key issues:

 

 

Coordinators:  FHWA, state DOTs

4.3.3   Best Practices for Equipment Installation and Maintenance

Description:  Develop a synthesis of best practices of installation and maintenance of traffic monitoring devices.  This document should, among other things, include:

 

 

Issues:  This action item addresses the following key issues:

 

 

Coordinators:  FHWA, state DOTs

4.3.4   Clearinghouse for Vehicle Detector Information

Description:  Establish an independent testing entity to test and verify the claims made for new and emerging traffic detection devices on the market.  Such an ongoing program would conduct periodic independent accuracy tests of new equipment.  Results from the independent tests should be stored in a clearinghouse that can be accessed by all potential users.

 

The clearinghouse would also provide technical guidelines on the capabilities of detectors by application and conditions.  The guidelines would enable agencies to select the appropriate devices for their applications, budgets, and environmental conditions.

 

It was noted that the capabilities of the existing Vehicle Detector Clearinghouse (VDC), operated out of New Mexico State University, could potentially be expanded to serve the needs expressed above.  In the short term, a web log or a moderated discussion forum should be added to the existing Vehicle Detector Clearinghouse to help users share experiences.

 

Issues:  This action item addresses the following key issues:

 

 

Coordinators:  FHWA, state DOTs, and VDC

4.3.5   Sensitivity Studies to Demonstrate “Value of Data”

Description:  Conduct extensive sensitivity analyses and document the results to illustrate the implications of data quality on user applications.  This action item is considered important because it would help document and demonstrate the “value of data” and highlight the effects of poor quality data on various applications.  Such a document would serve as a reference for potential users working with data of different levels of quality.  Some applications are extremely sensitive to data quality, whereas others are not.  The documentation should include the sensitivity of results for selected applications to variations in data quality measures such as accuracy, coverage (density of detectors), and completeness (missing values).

 

Based on the results of the sensitivity analysis, develop data quality “targets” or “benchmarks” for each application.  Also, the results of the sensitivity analysis would be used to provide guidance or procedures for imputing missing data points.

 

Issues:  This action item addresses the following key issues:

 

 

Coordinators:  FHWA, state DOTs

4.3.6   Guidelines for Sharing Resources

Description:  Develop guidelines for sharing resources for traffic monitoring activities, including shared equipment, personnel, funding, and cooperation among different agencies and departments.  These should also include guidelines for establishing public-private partnerships for sharing resources, as well as guidelines for assessing and validating traffic data collected by the private sector for public use and vice versa.

 

Information gathered from the regional workshops clearly indicated that budget cuts and financial considerations have forced different groups (within an agency or organization) to look into synergies that would lead to the use of other groups’ resources to meet their data needs.  Identifying opportunities for different groups within and outside state DOTs to work together to meet their data needs was mentioned as critical.  Furthermore, these guidelines will establish trust and confidence in private sources of data for use by the public sector and vice versa.

 

Issues:  This action item addresses the following key issues:

 

 

Coordinators:  State DOTs, FHWA

4.3.7   Life-cycle Costs of Detection Equipment

Description:  Develop a methodology for calculating life-cycle costs to enable states and other agencies to:

 

 

These costs include equipment, installation, training, and maintenance.  The costs of equipment and maintenance impact coverage and other measures of quality.  A better understanding of life-cycle costs, and guidance on how to estimate them, is expected to help agencies plan and invest in traffic monitoring activities.
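
A simple sketch of how the cost components listed above combine into a life-cycle figure follows; all dollar amounts and the service life are hypothetical, and a full methodology would also discount future costs.

# Illustrative life-cycle cost per detector station (all figures hypothetical).
equipment    = 8000.0   # purchase cost
installation = 5000.0   # lane closure, mounting, wiring
training     = 1000.0   # initial staff training (prorated per station)
maintenance  = 1500.0   # recurring cost per year
service_life = 7        # years

life_cycle_cost = equipment + installation + training + maintenance * service_life
annualized_cost = life_cycle_cost / service_life
print(f"Life-cycle cost: ${life_cycle_cost:,.0f}; annualized: ${annualized_cost:,.0f}/yr")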

 

Issues:  This action item addresses the following key issues:

 

Coordinators:  State DOTs, FHWA

4.3.8   Improved Contracting Approaches

Description:  Develop guidelines for innovative contracting approaches for traffic data collection.  This should include:

 

·                    Information regarding performance-based contracting approaches and management, and the associated costs and benefits

·                    Guidance on task-order-type contracts and cooperative agreements for equipment installation and maintenance

·                    Guidance on life-cycle-cost-based bidding approach.

 

The question of the contracting approach for data collection device procurement, installation, and maintenance was identified as one of the key issues impacting traffic data quality.  This action item is intended to address the issue by providing guidelines that would ensure that vendors are held accountable for the performance of their devices.

 

Issues:  The action item addresses the following key issues:

 

 

Coordinators:  State DOTs, FHWA

4.3.9   Case Study or Pilot Tests

Description:  Conduct a case study or a pilot test to observe a state DOT and TMCs working to improve data quality and evaluate the return on investment from the improved data quality.  Information gathered from such a case study is expected to help implement some of the action items outlined above.

 

The action item addresses the following key issues:

 

 

Coordinators:  FHWA, state DOTs

4.3.10 Guidance on Technologies and Applications

Description:  Provide guidance on the data elements to measure and report, since these dictate the type of device procured by the agency.  For example, the FHWA’s 13 vehicle categories should be revisited and length-based classifications explored.  Similarly, new and emerging applications might have additional data needs, which again influence the type of device.

 

Provide guidance on the innovative uses of loops and existing technologies.  Improvements in inductive loop technologies can expand their capabilities beyond volume and speeds (e.g., approaches to derive vehicle classifications from loop signatures). 

 

The action item addresses the following key issues:

 

 

Coordinators:  FHWA, state DOTs

4.4       Implementation and Work Items

As noted earlier in Section 4.2, the coordinators would assume primary responsibility for implementing the specified action items.  FHWA would play a leading role in the overall implementation of the action plan.  State DOT involvement, coordination, and participation are more critical for some action items than for others.  Following are the three potential groups of activities or work items to implement the action plan.

4.4.1   Research Studies

The majority of the action items relate to the development of guidelines, which are best implemented through research studies.  The findings of the research effort would then be disseminated to all potential users.  This would be followed by evaluation to assess the success of implementation and identify limitations and shortcomings.  FHWA would conduct these research activities with support from state DOTs and other agencies and organizations.

 

For action items falling into this category, the first activity would be to develop research topics and statements of work for each action item or combination of action items.  Action items in this category include the following (with report section identified):

 

·         Guidelines and standards for calculating data quality measures (4.3.1)

·         Compilation of business rules/data validity checks and quality control procedures (4.3.2)

·         Best practices for equipment installation and maintenance (4.3.3)

·         Sensitivity studies to demonstrate “value of data” (4.3.5)

·         Guidance on technologies and applications (4.3.10)


4.4.2   Workshops

Some of the action items could be implemented through regional workshops.  It is believed that action items in this category are those that require sharing of experiences and success stories where a workshop or similar forum provides the best environment.  FHWA would coordinate with the state DOTs to sponsor and organize such workshops.  The following are action items in this category:

 

·         Guidelines for sharing resources (4.3.6)

·         Life-cycle costs of detection equipment (4.3.7)

·         Improved contracting approaches (4.3.8)

4.4.3   Case Studies and Clearinghouse

Action items in this category require establishing or identifying an independent entity and conducting case studies.  These action items can be implemented only after some of those in the other categories have been completed.  It is expected that participation in the case studies would be voluntary.  It is envisaged that FHWA, state DOTs, and other agencies or organizations would work jointly to successfully complete these action items.  The following are the action items in this category:

 

·         Case study or pilot tests (4.3.9)

·         Clearinghouse for vehicle detector information (4.3.4)


5.0    CONCLUDING REMARKS

The action plan was developed based on information from published literature and discussions at two regional workshops.  Ten action items directed at addressing traffic data quality issues were identified.  Coordinators and work items have been suggested for the various action items.  The action items represent the general consensus of the workshop participants regarding the major traffic data quality issues.  Implementation of the action plan is seen as a major step towards enhancing the quality of traffic data and encouraging its use by federal, state, and local agencies and other organizations.

 

The action plan in its current form would serve as input for a national workshop on data quality for review and adoption.

REFERENCES

Battelle Memorial Institute, Sharing Data for Traveler Information:  Practices and Policies of Public Agencies, prepared for U.S. Department of Transportation, July 2001.

 

Closing the Data Gap:  Guidelines for Quality ATIS Data, prepared for ITS America and the U.S. Department of Transportation, April 2000.

 

D. Middleton and R. Parker.  Initial Evaluation of Selected Detectors to Replace Inductive Loops on Freeways, Research Report FHWA/TX1439-7, Texas Transportation Institute, College Station, Texas, April 2000.

 

D. Middleton, D. Jasek, and R. Parker, Evaluation of Some Existing Technologies for Vehicle Detection, Research Report FHWA/TX-00/1715-S, Texas Transportation Institute, College Station, Texas, September 1999.

 

D. Middleton and R. Parker.  Evaluation of Promising Vehicle Detection Systems, Research Report FHWA/TX-03/2119-1, Draft, Texas Transportation Institute, College Station, Texas, October 2002.

 

English, L.P.  7 Deadly Misconceptions about Information Quality.  INFORMATION IMPACT International, Inc., Brentwood, Tennessee, 1999(A).

 

English, L.P.  Improving Data Warehouse and Business Information Quality.  John Wiley & Sons, Inc., New York, New York, 1999(B).

 

FHWA Study Tour for European Traffic Monitoring Programs and Technologies, FHWA’s Scanning Program, U.S. Department of Transportation, Federal Highway Administration, Washington, D.C., August 1997.

 

MNDOT and SRF Consulting Group, NIT Phase II:  Evaluation of Non-Intrusive Technologies for Traffic Detection, Final Report, September 2002.

 

Strong, D.M., Y.W. Lee and R.Y. Wang.  10 Potholes in the Road to Information Quality.  Institute of Electrical and Electronics Engineers, August 1997(A), pp. 38-46.

 

Standard Specification and Test Methods for Highway Traffic Monitoring Devices, The American Society for Testing and Materials, Review Copy:  Version C for E17.52, Draft December 2002.

 


APPENDIX A

 

 

 

WHITE PAPERS


“Defining and Measuring Traffic Data Quality”

By Shawn Turner

Introduction

Although not specifically referring to intelligent transportation systems (ITS), a Wall Street Journal article speaks to the related subject of data quality:  “Thanks to computers, huge databases brimming with information are at our fingertips, just waiting to be tapped.  . . .  Just one problem:  Those huge databases may be full of junk.”  (Wand and Wang 1996)  As Alan Pisarski noted in his Transportation Research Board (TRB) Distinguished Lecture in 1999, “we are more and more capable of rapidly transferring and effectively manipulating less and less accurate information” (Pisarski 1999).

 

Recent research and analyses have identified several issues regarding the quality of traffic data available from intelligent transportation systems for transportation operations, planning, or other functions.  The Federal Highway Administration (FHWA) is developing an action plan to assist stakeholders in addressing traffic data quality issues.  Regional stakeholder workshops and white papers will serve as the basis for this action plan. 

 

As one of those white papers, this document presents recommendations for defining and measuring traffic data quality.  This white paper:

 

Recommended Definition for Data Quality

Several terms should be defined at the outset.  Data and information are sometimes used interchangeably.  Data typically refers to information in its earliest stages of collection and processing, and information refers to a product likely to be used by a consumer or stakeholder in making a decision.  For example, traffic volume and speed data may be collected from roadway-based sensors every 20 seconds.  This traffic data is then processed into information for the end consumer, such as travel time reports provided via the Internet or radio.  But the terms are also relative, as one person’s data could be another person’s information.  Throughout this paper the term data quality will be used to refer to both data and information quality.  No attempt is made to delineate the point at which data becomes information (or knowledge or wisdom, for that matter).

 

The literature contains two similar definitions for data quality.  Strong, Lee and Wang (1997A) define information quality as “fit for use by an information consumer” and indicate that this is a widely adopted criterion for data quality.  English (1999A) further clarifies this widely adopted definition by suggesting that information quality is “fitness for all purposes in the enterprise processes that require it.” English emphasizes that it is the “phenomenon of fitness for ‘my’ purpose that is the curse of every enterprise-wide data warehouse project and every data conversion project.”  In his book, English (1999B) defines information quality as “consistently meeting knowledge worker and end-customer expectations.” It is clear from these definitions that data quality is a relative concept that could have different meaning(s) to different consumers. For example, data considered to have acceptable quality by one consumer may be of unacceptable quality to another consumer with more stringent use requirements.  Thus it is important to consider and understand all intended uses of data before attempting to measure or prescribe data quality levels.

 

The recommended definition for traffic data quality is as follows:

 

Data quality is the fitness of data for all purposes that require it.  Measuring data quality requires an understanding of all intended purposes for that data.

Recommended Practices for Measuring Traffic Data Quality

Several data quality measures were consistently found in both current practice and data quality literature.  Based on the findings discussed later in this paper, the following data quality measures are recommended:

 

 

There are several other valid data quality measures that could be used for specific traffic data applications in some regions.  The five measures presented above, though, are fundamental measures that should be considered universally for measuring data quality in all traffic data applications.

 

At this time, we recommend that goals or target values for these traffic data quality measures be established at the regional level based on a better understanding of all intended uses of traffic data.  It is clear that data consumers’ needs and expectations, as well as available resources, vary significantly by region and preclude the recommendation for a national goal or standard for these traffic data quality measures.

 

The research team also recommends that if data quality is measured, the information should be made available and accessible with the data as metadata.  This practice of requiring a data quality report using standardized data quality measures is common in the GIS and other data communities.  The American Society for Testing and Materials (ASTM) is developing a data archive metadata standard that could be used to document and describe these data quality measures in sufficient detail for data consumers.  The ASTM metadata standard under development has been adapted from the GIS communities’ metadata standard (FGDC-STD-001-1998 and ISO DIS 19115) with their data quality reporting sections intact.
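
As a sketch of this practice – not the ASTM standard’s actual schema, which was still in draft – quality measures might travel with a dataset as a metadata record like the one below; all field names and values are hypothetical.

# Hypothetical metadata record attaching quality measures to a dataset;
# field names are illustrative, not the ASTM or FGDC schema.
metadata = {
    "dataset": "freeway_volumes_2002",
    "data_quality": {
        "accuracy": "counts within 5% of baseline (assumed)",
        "completeness_pct": 93.4,
        "validity_pct": 97.1,
        "timeliness": "loaded within 24 hours of collection",
        "coverage": "412 of 450 instrumented lane-detectors reporting",
    },
}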

Current Practices in Measuring Traffic Data Quality

Current practices in measuring traffic data quality are summarized below for three common consumer groups involved in highway transportation:

 

 

Our review of current practice found that, in general, consistent and widespread reporting of traffic data quality measures was not evident in any of these three consumer groups.  Efforts to address data quality were more evident in the latter two groups than in real-time monitoring and control.  A few data quality measures have been suggested or are used in each of these groups.  These data quality measures are discussed in the following sections.

Real-Time Traffic Monitoring and Control

Data consumers in this group are typically engaged in traffic management and control or the provision of traveler information.  Data uses are considered real-time and are generally concerned only with the most recent data available (e.g., typically five to fifteen minutes old). Some agencies are beginning to use historical data to provide additional value to traveler information.  In some cases field data collection hardware and software provide rudimentary data quality checks; in other cases, no data quality checks are made from the field to the application database.  Field hardware and software failures are common.  In some cases, equipment redundancy provides sufficient information to cover gaps in missing data.  In other cases, missing data is simply reported “as is” and decisions are made without this data.

 

Many agencies provide time-stamped traveler information via websites, thus providing an indication of the data timeliness.  Selected examples can be found at Houston TranStar (http://traffic.tamu.edu), WSDOT (http://www.wsdot.wa.gov/PugetSoundTraffic/), and Wisconsin DOT (http://www.dot.wisconsin.gov/travel/milwaukee/index.htm), just to name a few.

 

Several traffic management centers track failed field equipment through maintenance databases and report such things as the average percent of failed sensors.  The Michigan Intelligent Transportation Systems (MITS) Center has defined lane operability as the sensor-minutes of failure, which is a product of the number of failed sensors and the duration of the failure in minutes (Turner et al. 1999).  These measures can be classified as measures of coverage or completeness.
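
The MITS measure is straightforward to compute; the sketch below uses a hypothetical failure log and sensor count.

# Lane operability as sensor-minutes of failure (per the MITS definition):
# each failure contributes (one failed sensor) x (failure duration in minutes).
failures = [("S101", 240), ("S102", 45), ("S107", 1440)]  # (sensor_id, minutes down)

sensor_minutes = sum(minutes for _, minutes in failures)
print(sensor_minutes)  # 1725 sensor-minutes of failure

# A related coverage measure: percent of sensors currently failed
total_sensors = 500
print(100.0 * len(failures) / total_sensors)  # 0.6 percent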

 

Some traffic management centers evaluate the accuracy of new types of sensors before widespread deployment.  For example, the Arizona DOT traffic operations center in Phoenix used accuracy to measure the data quality from non-intrusive sensors it was considering for installation (Jonas 2001).  In this evaluation, ADOT compared traffic count and speed data from non-intrusive, passive acoustic detectors to calibrated inductance loop detectors under the assumption that the loop detector data represented the most error-free data obtainable.  The measures used in the evaluation were the absolute and percentage differences between traffic counts and speeds measured with the two sensor types.
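
A sketch of the comparison described above, with hypothetical paired counts; as in the ADOT evaluation, the loop data are treated as ground truth.

# Absolute and percentage count differences between a test detector and a
# calibrated baseline loop detector (paired 15-minute counts are hypothetical).
loop_counts     = [520, 610, 480, 700]
acoustic_counts = [505, 630, 470, 660]

for loop, test in zip(loop_counts, acoustic_counts):
    abs_diff = test - loop
    pct_diff = 100.0 * abs_diff / loop   # loop count taken as ground truth
    print(f"abs: {abs_diff:+d}  pct: {pct_diff:+.1f}%")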

 

ITS America and the U.S. DOT convened numerous stakeholders in 1999 and developed guidelines for quality advanced traveler information system (ATIS) data (ITS America 2000). The guidelines were developed in an effort to support the expansion of traveler information products and services.  One of the explicit purposes of the guidelines was to increase the quality of traffic data being collected.  The ITS America guidelines recommended seven data attributes, six of which can be considered data quality measures:

 

 

The ITS America guidelines further defined quality levels of “good”, “better”, and “best” and provided specific quality level criteria for each attribute.  For example, five to ten percent error in travel times and speeds was classified as a “better” quality level under the Accuracy attribute.
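
A sketch of how such a classification might be applied to a measured error follows.  Only the five-to-ten-percent “better” band is taken from the guidelines as quoted above; the other cut-offs are placeholders, not the published criteria.

# Mapping travel-time/speed error to an ITS America-style quality level.
def accuracy_level(pct_error):
    if pct_error < 5:        # assumed "best" band (placeholder threshold)
        return "best"
    elif pct_error <= 10:    # 5-10% error classified as "better" (per guidelines)
        return "better"
    else:                    # assumed "good" band (placeholder threshold)
        return "good"

print(accuracy_level(7.5))  # "better"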

 

In another white paper about data quality requirements for the INFOstructure (i.e., a national network of traffic information and other sensors), Tarnoff (2002) suggests the following data quality measures and possible requirements (Table 1):


Table 1.  Possible INFOstructure Performance Requirements

Measure           Application            Requirement
                                         Local Implementation    National Implementation
Speed Accuracy    Traffic Management     5-10%                   5-10%
                  Traveler Information   20%                     20%
Volume Accuracy   Traffic Management     10%                     N/a
                  Traveler Information   N/a                     N/a
Timeliness        All                    Delay < 1 minute        Delay < 5 minutes
Availability      All                    99.9% (approx. 10       99% (approx. 100
                                         hours per year)         hours per year)

Source:  Tarnoff 2002

 

 

Tarnoff presented these data quality requirements as a “starting point for the discussion of these issues” and suggested that there is a tendency in the ITS community to specify performance without a complete understanding of the actual application requirements or cost implications.  Thus Tarnoff suggests that any decisions about data quality requirements be grounded in actual application requirements and cost implications.
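
The availability requirements in Table 1 translate directly into annual downtime; the quick check below shows that the table’s parenthetical hour figures are rounded approximations.

# Annual downtime implied by the Table 1 availability requirements.
hours_per_year = 365 * 24  # 8,760
for availability in (0.999, 0.99):
    downtime_hours = (1 - availability) * hours_per_year
    print(f"{availability:.1%} available -> {downtime_hours:.1f} hours/year down")
# 99.9% -> 8.8 hours/year (Table 1: approx. 10); 99% -> 87.6 (Table 1: approx. 100)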

Operations/ITS Data Archives

Data consumers in this group are typically engaged in off-line analytical processing of data generated by traffic operations.  Archived data uses vary widely, from academic research (e.g., traffic flow theory) to traveler information (e.g., “normal” traffic conditions), operations evaluation (e.g., ramp meter algorithms), performance monitoring, and basic planning-level statistics.  Although the operations data in archives are generated in real time, most of the applications to date have been historical in nature and outside of the traffic operations area.  Data archive applications are still in relative infancy, and thus quality assurance procedures are still being established in most areas.  Several data archive managers have voiced concerns about the quality of the data generated by operations groups, presumably because the data archive managers have more stringent data quality requirements for their applications than the operations applications.  In fact, this concern about archived data quality is part of the genesis for this FHWA-sponsored project.  Most current archived data users recognize these data quality issues but maintain an optimistic attitude of “this is the best data I can get for free” and attempt to use the data for various applications.  However, interviews conducted in this project revealed several potential data archive consumers that were reluctant to use the data because of real or perceived data quality issues.

 

As noted previously, data archive applications are still in relative infancy, and thus data quality measures are not extensively or consistently used.  Data completeness, expressed as the number of data samples or the percent of available samples in a summary statistic, is the measure most often used in data archives.  The data completeness measure is used frequently because operations data is often aggregated or summarized when loaded into a data archive.  For example, the ARTIMIS center in Cincinnati, Ohio/Kentucky reports the number of 30-second data samples (the “Samp” column in Table 2) that have been used to compute each 15-minute summary statistic.

 

 

Table 2.  ARTIMIS Reporting of Data Completeness

Data for segment SEGK715001 for 07/15/2001

Number of Lanes: 4

 

#  Time   Samp   Speed   Vol   Occ

00:01:51    30     47    575     6

00:16:51    30     48    503     5

00:31:51    30     48    503     5

00:46:51    30     49    421     4

01:01:52    30     48    274     5

01:16:52    30     42    275    14

...

Source:  ARTIMIS Data Archives

 

 

The Washington State DOT reports data completeness as well as data validity measures for the Seattle data archives that are distributed on CD-ROM (Ishimaru 1998).  In their data archive, they report the number of 20-second data samples in a 5-minute summary statistic (e.g., maximum of 15 data samples possible).  A data validity flag (with values of good, bad, suspect, and disabled loop) is also included in data reports to indicate the validity of 5-minute statistics (Table 3).  Peak hour, peak period, and daily statistics generated by WSDOT’s CDR data extraction program also report data validity and completeness summary measures (Table 4).  The CDR software also has a data quality mapping utility that allows data users to create location-based summaries of data completeness and validity (Ishimaru and Hallenbeck 1999).  This utility is designed for data consumers who would like to analyze the underlying data quality for various purposes.
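
A sketch of the completeness calculation underlying these reports, assuming the WSDOT layout of 20-second samples within 5-minute statistics (so 15 samples are possible per interval):

# Completeness of a 5-minute statistic built from 20-second samples.
SAMPLES_POSSIBLE = 15  # 5 minutes / 20 seconds

def completeness_pct(n_samples):
    return 100.0 * n_samples / SAMPLES_POSSIBLE

# The rows in Table 3 report nPds = 15, i.e., 100 percent complete intervals.
print(completeness_pct(15))  # 100.0
print(completeness_pct(12))  # 80.0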

 

In the FHWA-sponsored Mobility Monitoring Program (http://mobility.tamu.edu/mmp), the Texas Transportation Institute and Cambridge Systematics, Inc. gather archived operations data from numerous traffic management centers nationwide and analyze the archived data to report mobility and reliability trends in the urban areas (Lomax, Turner and Margiotta 2001).  As such, the program is an archived data consumer with the primary application of performance monitoring.

 

The program team performs various data quality checks in the course of processing and analyzing the archived data.  In addition to summary statistics on mobility and reliability, performance reports also include information on the following data quality measures:

 


Table 3.  WSDOT Reporting of Data Validity and Completeness

***********************************

Filename: 5TO15.DAT

Creation Date: 02/2/98 (Wed)

Creation Time: 03:16:59

File Type: SPREADSHEET

***********************************

ES-145D:_MS___1 I-5 Lake City Way 170.80

09/01/97 (Mon)

---Raw Loop Data Listing---

Time Vol Occ Flg nPds

0:00 49 3.80% 1 15

0:05 37 2.90% 1 15

0:10 38 3.50% 1 15

0:15 34 2.60% 1 15

0:20 48 4.40% 1 15

0:25 44 3.60% 1 15

0:30 35 2.80% 1 15

0:35 33 3.30% 1 15

0:40 28 2.50% 1 15

0:45 30 2.30% 1 15

Source:  Ishimaru and Hallenbeck 1999

 

 

 

Table 4.  WSDOT Reporting of Data Validity and Completeness in Summary Statistics

***********************************

Filename: AADT.MDS

Creation Date: 02/2/98 (Thu)

Creation Time: 10:54:09

File Type: SPREADSHEET

***********************************

ES-145D:_MS___1 I-5 Lake City Way 170.80

Monthly Avg for 1996 Jan (Sun)

---Multi-Day Loop Summary Report---

Summary     Valid   Vol    Occ     G   S  B  D  Val Inv Mis

Daily        VAL   19392   7.50% 1133 18  1  0   4   0   0

AM Peak      VAL    1493   3.50%  142  2  0  0   4   0   0

PM Peak      VAL    5069  15.60%  190  2  0  0   4   0   0

AM Pk Hour   VAL    1381  10.00%   47  1  0  0   4   0   0 10:45 11:45

PM Pk Hour   VAL    1576  11.90%   48  0  0  0   4   0   0 13:45 14:45

Source:  Ishimaru and Hallenbeck 1999

 

 

For example, Figure 1 shows summary information for data validity and data completeness.  Significant detail for these data quality measures is also stored in databases.  For example, one could do time-based and location-based analyses of data quality using the full database.
Figure 1.  Data Quality Statistics for 10 Cities in 2000 Mobility Monitoring Program

 

 

 

Historical/Planning-Level Traffic Monitoring

 

Data consumers in this group are typically engaged in mid- to long-range (5 to 20-plus years) traffic planning and analysis.  Data uses are mostly of an historical nature, so in some cases annual average statistics may not be available (or needed) until six or more months after the past year ends.  Thus, the consumer groups’ frame of reference for data timeliness differs from the other two groups by an order of magnitude.  Whereas operations data consumers may consider data older than 5 minutes unacceptable, planning data consumers may consider waiting up to 9 months for annual statistics to be acceptable.  The use of data quality checks or “business rules” for determining the validity of traffic data appears to be fairly common among this group.  In many cases, these planning groups serve as the “official source” of traffic data for a particular jurisdiction.

 

Numerous state departments of transportation (DOTs) use data validation checks or “business rules” when they load traffic data into their information systems.  These data quality checks are typically based upon traffic capacity principles, typical traffic trends or patterns, or simply local traffic experience and insight.  Thus data validity is a common data quality measure used by many historical traffic monitoring groups.  For example, the Texas DOT (TxDOT) plans to use 23 business rules for continuous vehicle counts in its Statewide Traffic Analysis and Reporting System (STARS) (TxDOT 2001).  Once a data record has failed a business rule, that record is flagged as “suspect” and must be reviewed by a traffic data analyst prior to the beginning of the traffic monitoring program’s year-end process.  Additionally, STARS uses data integrity as a data quality measure, as checks are also run on data file and station integrity.
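
The flag-and-review workflow lends itself to a simple sketch.  The two rules below are invented examples for illustration, not any of TxDOT’s actual 23 business rules.

# Records failing any business rule are marked "suspect" for analyst review.
def exceeds_capacity(rec):      # hourly volume above a plausible lane capacity
    return rec["volume"] > 2400 * rec["lanes"]

def zero_volume_daytime(rec):   # no traffic at all during a daytime hour
    return rec["volume"] == 0 and 6 <= rec["hour"] <= 20

BUSINESS_RULES = [exceeds_capacity, zero_volume_daytime]

def screen(record):
    record["status"] = "suspect" if any(rule(record) for rule in BUSINESS_RULES) else "ok"
    return record

print(screen({"volume": 0, "lanes": 2, "hour": 14}))  # flagged suspect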


The traffic monitoring group in the Virginia DOT (VDOT) also uses established business rules to perform traffic data validity checks prior to loading data into its information system.  As with TxDOT’s process, data that fail the business rules are flagged as suspect and must be reviewed by a traffic data analyst.  If the traffic data are deemed erroneous, they are not loaded into the traffic information system.  VDOT has a unique contracting arrangement:  it leases the traffic data collection equipment from subcontractors, and lease payments are based upon the quality and completeness of the data collected by the subcontractors’ equipment.  For example, a full monthly payment is made for locations “where 25 or more days of useable (for factor creation) classification and volume traffic information are available during a calendar month”.  A partial lease payment of 50 percent is made “where 15 or more days of useable (for factor creation) volume traffic information, but less than 15 days (useable for factor creation) classification data are available.”  Thus, VDOT’s payment for traffic data collection is based on the quality measures of data validity and data completeness.

 

VDOT also designates quality levels for the traffic data it distributes.  The quality level codes and descriptions are as follows:

 

·        Code 0 - Not Reviewed

·        Code 1 - Acceptable for Nothing

·        Code 2 - Acceptable for Qualified Raw Data Distribution

·        Code 3 - Acceptable for Raw Data Distribution

·        Code 4 - Acceptable for use in AADT Calculation

·        Code 5 - Acceptable for all TMS uses

 

These quality codes are designed to indicate to data consumers what the data producers believe to be the fitness of the data for various purposes.
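
Because the codes are ordered by fitness for use, they lend themselves to a simple machine-readable form.  The sketch below is a minimal illustration; the class and function names are ours, not VDOT’s.

    # Sketch of VDOT's quality-level codes as an enumeration carried with each record.
    from enum import IntEnum

    class VdotQualityLevel(IntEnum):
        NOT_REVIEWED = 0
        ACCEPTABLE_FOR_NOTHING = 1
        QUALIFIED_RAW_DISTRIBUTION = 2
        RAW_DISTRIBUTION = 3
        AADT_CALCULATION = 4
        ALL_TMS_USES = 5

    def usable_for_aadt(level: VdotQualityLevel) -> bool:
        # Codes are ordered by fitness, so a comparison expresses
        # "good enough for AADT calculation or better."
        return level >= VdotQualityLevel.AADT_CALCULATION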

 

Similar software-based data validity checks are used in several other states.  The Pennsylvania and Ohio DOTs both use data validity checks in their traffic information systems.  These validity checks are performed on a daily basis for all traffic data.  The Michigan DOT uses Traffic Data Quality (TDQ), a software tool developed as a result of a pooled-fund study (Flinner and Horsey, no date).

 

The international experience with traffic data validity checks is comparable to the U.S. experience.  A European scanning tour found that several countries perform an automated validation of traffic data (FHWA 1997).  All ITS systems observed in the tour countries (the Netherlands, Switzerland, Germany, France, and the United Kingdom) perform some type of automated data validation, usually by comparing current data from a particular site with historical data from that same site during a similar time interval.  If an operator identifies questionable data, they use graphic displays to review the data and determine acceptability.

 

Several of the countries have fairly extensive data validation systems, and all of them require manual input.  Most cases involve validation methods based on site-specific “rules” developed from historical patterns by time of day, day of week, and lane for that site.  Data that fail the validation routines are brought to the attention of system operators, who then decide whether the data are correct.  Operators replace invalid data with data from previous time periods at that site, factoring the data with growth estimates (based on nearby counters that worked properly) when appropriate.  The discussion that follows covers the processes used in individual countries.

 

The Netherlands uses a software system called INTENS.  This system collects traffic data from the various traffic-monitoring sites, conducts automated validation checks, facilitates manual review of flagged data, and produces a variety of summary graphics and statistics.  The data validation process consists of a series of parameter checks comparing the data submitted for each site with confidence limits set specifically for that site.  Initial data checks ensure that data are labeled correctly (i.e., belong to a site for which data are expected), have the proper number of lanes, and pass other site identification checks.  The next set of checks, called “primary control”, is a series of maximum and minimum allowable data ranges for specific variables, based on historical data.
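
A primary-control check of this kind reduces to comparing each variable against site-specific limits.  The following sketch is a minimal illustration, assuming hypothetical site IDs, variable names, and limits rather than INTENS’s actual parameters.

    # Sketch of per-site min/max "primary control" checks; limits are hypothetical.
    SITE_LIMITS = {
        "NL-A10-12.3": {"volume": (0, 2200), "speed_kph": (5, 160)},
    }

    def primary_control(site_id, observation):
        """Return the variables whose values fall outside the site's limits."""
        limits = SITE_LIMITS.get(site_id, {})
        out_of_range = []
        for var, (lo, hi) in limits.items():
            value = observation.get(var)
            if value is None or not (lo <= value <= hi):
                out_of_range.append(var)
        return out_of_range

    # Example: a questionable speed is flagged for manual review.
    print(primary_control("NL-A10-12.3", {"volume": 1800, "speed_kph": 190}))
    # -> ['speed_kph']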

 

At the national level, Switzerland has two sets of data validation checks.  The first determines whether the telemetry system functioned properly.  The second set examines the submitted records and identifies those that are questionable based on several criteria.  These include:  zero volumes or other errors in the hourly records; hourly volumes that exceed a maximum percentile; variation in the ratio of 14-hour volumes to 24-hour volumes (14 hours from 6:00 a.m. to 8:00 p.m.) for weekdays; variation in the ratio of 5-hour volumes to 14-hour volumes (5 hours from 3:00 p.m. to 8:00 p.m.) per weekday; and variations in directional distribution.
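
The ratio checks can be illustrated with a short sketch.  The 14-hour window below follows the definition in the text (6:00 a.m. to 8:00 p.m.); the plausibility bounds are hypothetical, since the Swiss system derives its own limits from historical data.

    # Sketch of the 14-hour/24-hour ratio check; bounds are illustrative.
    def ratio_14_to_24(hourly_volumes, lo=0.75, hi=0.95):
        """hourly_volumes: 24 hourly counts for one weekday.
        Returns (ratio, is_plausible)."""
        total = sum(hourly_volumes)
        if total == 0:
            return 0.0, False              # zero volumes are themselves suspect
        daytime = sum(hourly_volumes[6:20])    # hours 06:00 through 19:59
        ratio = daytime / total
        return ratio, lo <= ratio <= hi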

 

Like other countries included in the scan tour, Germany utilizes multiple validation procedures.  The one included here is being developed for an ITS application in Hesse.  The system uses a combined fuzzy logic/expert system approach for data validation.  It is trained on data that are considered “valid” and then reports invalid data for subsequent manual review.  Data determined to be valid are then included in the training of the system, so that other data with those characteristics will be considered valid.

 

France uses a software system called MELODIE, which creates many of the basic reporting statistics needed for later analysis.  There are no specific algorithms within the system itself, but MELODIE generates graphical output that is viewed by an operator who makes decisions pertaining to its validity.  If the operator determines that some data are not valid, the program will use the previous month’s data for replacement.  The MELODIE system keeps track of the fact that invalid data have been replaced.

 

In the United Kingdom, the scan team found multiple validation techniques.  The one covered in this document is the Motorway Incident Detection and Analysis System (MIDAS).  It performs two levels of validation.  In the first level, the system has an internal validation method that indicates when the loop system needs recalibration or has failed (other details unavailable).  In the second level, the system plots volume, speed, or loop occupancy by geographic location and time of day.  The graphic provides an easy-to-use visual reference for detecting specific types of equipment errors.


Current Practices in Measuring Data Quality in Other Disciplines

Data quality literature is readily available in several other disciplines, especially the business management and data warehousing industries.  The research team conducted a literature review and identified at least two dozen resources relating directly to data quality measures.  Selected resources are summarized below, with an emphasis on their relevance to traffic data quality measures.

 

The geographic information systems (GIS) community has developed standards for documenting data quality in their Spatial Data Transfer Standard (SDTS) (O’Looney 2000; ANSI 1998).  The SDTS data quality categories are shown in Table 5.  The purpose of the data quality standard within SDTS is not to require acceptable levels of data quality, but to require a data quality report in all GIS data transfers.  Following are the SDTS standardized definitions and measures that are to be used in describing and documenting GIS data quality.

 

 
Table 5.  Five Categories for Data Quality in the Spatial Data Transfer Standard

Positional Accuracy
  Definition:  The degree of horizontal and vertical control in the coordinate system.
  Example:  The available precision or detail of longitude and latitude coordinates.

Attribute Accuracy
  Definition:  The degree of error associated with the way thematic data are categorized.
  Example:  The degree to which a soil description is likely to vary from a soil measurement taken at the corresponding location.

Completeness
  Definition:  The degree to which data are missing and the method of handling missing data.
  Example:  The ability to estimate crime rates in specific areas may be compromised if data are not available for those areas.

Logical Consistency
  Definition:  The degree to which there may be contradictory relations in the underlying database.
  Example:  Location data on some crimes may be based on the place where the crime occurred, while for other crimes the location might be the place where a crime report is taken.

Lineage
  Definition:  The degree to which there is a chronological set of similar data developed using the same modeling and processing methods.
  Example:  Population estimates may not be available for all years; may be estimated on different days of the year; or may be estimated using different estimation techniques and data sources.

Source:  O’Looney 2000 and ANSI 1998

 

 

Strong, Lee and Wang (1997A, 1997B) suggest four major categories of data quality, with 15 dimensions underlying these categories (Table 6).  The authors suggest that traditional quality control techniques (e.g., validity checks, integrity checks) mostly improve intrinsic data quality dimensions such as accuracy.  However, the authors caution that attention to accuracy alone does not address data consumers’ broader data quality concerns.  For example, they argue that conventional approaches treat accessibility as a technical systems issue rather than a data quality issue.  Some data custodians may insist that data are accessible if the physical and software connections are present.  The authors suggest, though, that accessibility goes beyond simple technical accessibility; it includes the ease with which data consumers can manipulate the data to meet their needs.

 

 

Table 6.  Data Quality Categories and Dimensions

Intrinsic:  Accuracy, Objectivity, Believability, Reputation
Accessibility:  Accessibility, Security
Contextual:  Relevancy, Value-Added, Timeliness, Completeness, Amount of Information
Representational:  Interpretability, Ease of Understanding, Concise Representation, Consistent Representation

Source:  Strong, Lee and Wang (1997A, 1997B)

 

 

A relevant analogy to this accessibility issue exists in current practice.  Several traffic management centers log detailed traffic data to “file-based archives” where file sizes reach 50-plus MB or the files for a day number in the thousands.  These file-based archives are then made available on CD or through the Internet.  Some may argue that this data is accessible because it is publicly available.  However, the size or nature of the data prevents many data consumers from easily manipulating the data to meet their needs.  Thus the authors would argue that these large file-based data archives are not easily accessible to many data consumers.

 

Wand and Wang (1996) suggest numerous data quality dimensions that distinguish between internal and external views of an information system.  External views are concerned with the use and effect of the information system, whereas internal views address the procedures necessary to attain the required functionality that is reflected in an external view.  Table 7 contains the various data quality dimensions for both internal and external views.

 


Table 7.  Data Quality Dimensions as Related to Internal or External Views

Internal View (design, operation)
  Data-related:  accuracy, reliability, timeliness, completeness, currency, consistency, precision
  System-related:  reliability

External View (use, value)
  Data-related:  timeliness, relevance, content, importance, sufficiency, usability, usefulness, clarity, conciseness, freedom from bias, informative, level of detail, quantitative level, scope, interpretability, understandability
  System-related:  timeliness, flexibility, format, efficiency

Source:  Wand and Wang (1996)

 

 

The Department of Defense (DoD) offers a more pragmatic core set of data quality measures for all automated information systems within DoD (Table 8).  The DoD also provides guidelines on a total data quality management process and how it can be implemented within the various service units.  The guidelines include several real-world examples of data quality management and use of the data quality measures.

 

The Department of Energy (DOE) has established a Data Quality Objectives (DQO) process and maintains a website on the DQO process at http://dqo.pnl.gov/index.htm.  The DQO process is a planning tool for environmental data collection activities that provides a basis for balancing decision uncertainty with available resources.  The DQO process is required for all significant data collection projects within DOE's Office of Environmental Management.  The DQO process defines 7 steps related to identifying problems, decisions, and inputs, but does not suggest or recommend any specific data quality measures.

 

 

 


Table 8.  DoD Core Set of Data Quality Requirements

Accuracy
  Description:  A quality of that which is free of error.  A qualitative assessment of freedom from error, with a high assessment corresponding to a small error.  (FIPS Pub 11-3)
  Example Metric:  Percent of values that are correct when compared to the actual value.  For example, M=Male when the subject is Male.

Completeness
  Description:  Completeness is the degree to which values are present in the attributes that require them.  (Data Quality Foundation)
  Example Metric:  Percent of data fields having values entered into them.

Consistency
  Description:  Consistency is a measure of the degree to which a set of data satisfies a set of constraints.  (Data Quality Management and Technology)
  Example Metric:  Percent of matching values across tables/files/records.

Timeliness
  Description:  As a synonym for currency, timeliness represents the degree to which specified data values are up to date.  (Data Quality Management and Technology)
  Example Metric:  Percent of data available within a specified threshold time frame (e.g., days, hours, minutes).

Uniqueness
  Description:  The state of being the only one of its kind.  Being without an equal or equivalent.
  Example Metric:  Percent of records having a unique primary key.

Validity
  Description:  The quality of data that is founded on an adequate system of classification and is rigorous enough to compel acceptance.  (DoD 8320.1-M)
  Example Metric:  Percent of data having values that fall within their respective domain of allowable values.

Source:  DoD Guidelines on Data Quality Management, no date.
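
Two of the metrics in Table 8 – completeness and validity – reduce to simple percentage calculations over a batch of records.  The sketch below is a minimal illustration with hypothetical field names and domain tests.

    # Sketch of DoD-style completeness and validity percentages; fields are illustrative.
    def completeness(records, required_fields):
        """Percent of required fields that are populated across all records."""
        filled = sum(1 for r in records for f in required_fields if r.get(f) not in (None, ""))
        total = len(records) * len(required_fields)
        return 100.0 * filled / total if total else 0.0

    def validity(records, domains):
        """domains maps a field name to a test for allowable values."""
        checked = valid = 0
        for r in records:
            for f, is_allowed in domains.items():
                if f in r:
                    checked += 1
                    valid += is_allowed(r[f])
        return 100.0 * valid / checked if checked else 0.0

    recs = [{"sex": "M", "lane": 1}, {"sex": "X", "lane": 9}]
    print(completeness(recs, ["sex", "lane"]))                 # 100.0
    print(validity(recs, {"sex": lambda v: v in {"M", "F"},
                          "lane": lambda v: 1 <= v <= 6}))     # 50.0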

 

 

The Ken Orr Institute, a systems/software research organization, provides a set of data quality measures very similar to the DoD’s data quality measures (Ken Orr Institute, no date).  They define these data quality measures as:

 

 


The institute also suggests that quality measures and standards be communicated in several ways:

 

Recommended Approaches to Defining and Measuring Traffic Data Quality

Based upon the reviews conducted for this white paper, we recommend the following definition for traffic data quality:

 

Data quality is the fitness of data for all purposes that require it.  Measuring data quality requires an understanding of all intended purposes for that data.

 

The following data quality measures are recommended:

 

 

There are several other valid data quality measures presented in this paper that could be used for specific traffic data applications in some regions.  The five measures presented above, though, are fundamental measures that should be considered universally for measuring data quality in all traffic data applications.

 

At this time, we recommend that goals or target values for these traffic data quality measures be established at the regional level, based on a better and clearer understanding of all intended uses of traffic data.  Data consumers’ needs and expectations, as well as available resources, vary significantly by region, precluding the recommendation of a national goal or standard for these traffic data quality measures.

 

The research team also recommends that, if data quality is measured, this data quality information be made available and accessible with the data as metadata.  The practice of requiring a data quality report using standardized data quality measures is common in the GIS and other data communities.  The American Society for Testing and Materials (ASTM) is currently developing a data archive metadata standard that could be used to document and describe these data quality measures in sufficient detail for data consumers.  The ASTM metadata standard under development has been adapted from the GIS community’s metadata standard (FGDC-STD-001-1998 and ISO DIS 19115) with its data quality reporting sections intact.
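
As an illustration of this recommendation, a data quality report could travel with a dataset as a small structured metadata record.  The field names and values below are hypothetical and are not drawn from the ASTM draft standard.

    # Sketch of a machine-readable data quality report attached as metadata.
    quality_report = {
        "completeness_pct": 95.0,   # share of expected records present
        "validity_pct": 98.7,       # share passing business rules
        "timeliness_s": 120,        # lag from field to database
    }

    dataset_metadata = {
        "title": "I-5 Lake City Way continuous counts",
        "period": "1996",
        "lineage": "20-s loop data aggregated to 5-min intervals",
        "data_quality": quality_report,   # the report travels with the data
    }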

Bibliography

American National Standards Institute (ANSI).  Spatial Data Transfer Standard (SDTS) –Part 1, Logical Specifications.  ANSI NCITS 320-1998, available at http://mcmcweb.er.usgs.gov/sdts/.

 

English, L.P.  7 Deadly Misconceptions about Information Quality.  INFORMATION IMPACT International, Inc., Brentwood, Tennessee, 1999.

 

English, L.P.  Improving Data Warehouse and Business Information Quality.  John Wiley & Sons, Inc., New York, New York, 1999.

 

Federal Highway Administration.  FHWA Study Tour for European Traffic Monitoring Programs and Technologies.  FHWA’s Scanning Program, U.S. Department of Transportation, Federal Highway Administration, Washington D.C., August 1997.

 

Flinner, M. and H. Horsey.  Traffic Data Editing Procedures:  Traffic Data Quality (TDQ) Final Report.  Available at http://www.nic-idt.com/tdq/tdq.html, no date.

 

Ishimaru, J.M. and M.E. Hallenbeck.  FLOW Evaluation Design Technical Report.  Washington State Transportation Center, Seattle, Washington, May 1999.

 

Ishimaru, J.M.  CDR User’s Guide.  Washington State Transportation Center, Seattle, Washington, Version 2.52, March 1998.

 

ITS America and U.S. Department of Transportation.  Closing the Data Gap:  Guidelines for Quality Advanced Traveler Information System (ATIS) Data.  Version 1.0, September 2000.

 

Jonas, G.  “Evaluation of SmarTek SAS-1 Passive Acoustic Detectors on Interstate 17.” Arizona Department of Transportation, Phoenix, Arizona, 2001.

 

Lomax, T., S. Turner and R. Margiotta.  Monitoring Urban Roadways:  Using Archived Operations Data for Reliability and Mobility Measurement.  Texas Transportation Institute and Cambridge Systematics, Inc., December 2001.

 

O’Looney, J.  Beyond Maps:  GIS and Decision Making in Local Government.  Environmental Systems Research Institute, Inc., Redlands, California, 2000.

 

Strong, D.M., Y.W. Lee and R.Y. Wang.  10 Potholes in the Road to Information Quality.  Computer.  Institute of Electrical and Electronics Engineers, August 1997(A), pp. 38-46.

 

Strong, D.M., Y.W. Lee and R.Y. Wang.  Data Quality in Context.  Communications of the ACM.  Association for Computing Machinery, Vol. 40, No. 5, May 1997(B), pp. 103-110.

 

Tarnoff, P.J.  Getting to the INFOstructure.  White Paper prepared for the TRB Roadway INFOstructure Conference, August 2002.

 

Turner, S.M., W.L. Eisele, B.J. Gajewski, L.P. Albert, and R.J. Benz.  ITS Data Archiving:  Case Study Analyses of San Antonio TransGuide® Data.  Report No. FHWA-PL-99-024.  Federal Highway Administration, Texas Transportation Institute, August 1999.

 

Wand, Y. and R.Y. Wang.  Anchoring Data Quality Dimensions in Ontological Foundations. Communications of the ACM.  Association for Computing Machinery, Vol. 39, No. 11, November 1996, pp. 86-95.


“State of the Practice for Traffic Data Quality”

By Rich Margiotta

Introduction

Purpose of Report

This White Paper documents the current state of the practice in the quality of traffic data generated by Intelligent Transportation Systems (ITS).  The current state of the practice is viewed from the perspectives of both Operations and Planning personnel; the distinction between these two groups is that Operations personnel use the data primarily for real-time or near real-time applications (e.g., incident management, ramp metering), while Planning personnel use the data for applications that are not nearly as time sensitive (e.g., monitoring trends in travel).  The paper considers:

 

 

For the purpose of this paper, when “Operations” or “ITS” is used, it is meant to refer to the activities of Traffic Management Centers (TMCs) in urban areas.  Rural ITS applications are emerging, but the current state of the practice in ITS-generated traffic data is clearly focused on urban TMC deployments.

 

Methodology

This report draws heavily on past work conducted for FHWA under the Archived Data User Service (ADUS) program.  Additional information was gathered from phone interviews with state transportation agency personnel from traffic monitoring programs (usually within Planning divisions) as well as ITS groups.  (ITS personnel were usually those directly involved in traffic management center (TMC) operation.)

Types and Applications for Traffic Data

Data Types

Several types of traffic data are collected by both “traditional” and ITS means.  Table 1 displays these types of data.  Where there is overlap between the two realms, the basic nature and definitions of the data collected are the same.  However, there are subtle differences in data collection methodologies that may lead to problems with data sharing and quality.  Among these are the polling rate and vehicle classification “bins”.  (Section 4 discusses these discrepancies in more depth.)


Table 1.  Types of Traffic Data Used by Transportation Agencies

Volume:  Total number of vehicles passing a point on the highway over a given time interval.
  Planning:  Collected continuously at a limited number of sites statewide; 24-48 hour counts cover most highway segments (but counts may be up to 3 years old on major highways, even older on lower classes); data usually aggregated to hours for reporting from the field.
  ITS:  Collected continuously on every segment (1/2-mile spacing is typical on urban freeways); data reported at 20-30 second intervals from the field; data aggregated for later use at anywhere from 20-30 seconds up to 15 minutes.

Vehicle Classification:  Same as volume except counts are made by individual vehicle classification.
  Planning:  Collected continuously at a limited number of sites statewide; 24-48 hour counts taken at selected locations; the FHWA 13-bin scheme based on number of axles, type of power unit, and trailering is the most common.
  ITS:  Urban TMCs rarely collect vehicle classification; where they do, 3-4 length-based bins are typically used.  (CVO deployments used primarily to capture intercity truck movements do collect vehicle classification.)

Truck Weight:  Total weight and individual axle weights and spacings of trucks.
  Planning:  Same as vehicle classification except that short counts are less frequent.
  ITS:  For urban TMCs, neither collected by ITS deployments nor used in ITS applications.  (CVO deployments used primarily to capture intercity truck movements do collect vehicle weights.)

Occupancy:  The percent of time that a roadway detection zone is “occupied” with vehicles.
  Planning:  Not collected.
  ITS:  Collected continuously on every segment (1/2-mile spacing is typical on urban freeways); data reported at 20-30 second intervals from the field; data aggregated for later use at anywhere from 20-30 seconds up to 15 minutes.  (The same equipment is used for both volume and occupancy measurements.)  Roadway density and average headways can be calculated from occupancy if the length of the detection zone and the average vehicle length are known (a sketch of this calculation follows the table).

Speed:  Speed of vehicles passing a point on the highway over a given time interval (also known as “time-mean speed”).
  Planning:  Newer equipment used to measure volumes, vehicle classifications, and truck weights is capable of collecting speeds, but the data are rarely used.
  ITS:  Either collected directly (same characteristics as for volume and occupancy) or estimated from volume and occupancy measurements (older “single roadway loop” systems).

Travel Time:  The measured time a vehicle takes to traverse a highway segment.
  Planning:  Rarely collected by state agencies; local agencies collect it using the “floating car” method (drivers specifically tasked to collect travel times).  License plate matching using imaging technology is becoming more prevalent.
  ITS:  Collected with vehicle-based technologies:  (1) GPS transmission of location and time, or (2) roadway-based “readers” of vehicle tags.  (Most of the vehicle “tags” in current use are from automated toll collection systems.  Readers may also be installed off of toll highways to detect the passage of “tagged” vehicles.)

Queues:  Stopped or slow-moving vehicles impeded by a bottleneck.
  Planning:  Not usually collected.
  ITS:  Where collected, restricted to queues at ramp meters.
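
The occupancy-to-density relation noted in Table 1 can be written out directly:  density per lane is approximately occupancy divided by the effective vehicle length (detection zone length plus average vehicle length).  The sketch below is illustrative, with assumed lengths in feet.

    # Sketch of deriving density and headway from occupancy; lengths are assumptions.
    def density_vpm(occupancy, zone_ft=6.0, avg_vehicle_ft=17.0):
        """Approximate density in vehicles per mile per lane from an
        occupancy fraction (e.g., 0.15 for 15 percent)."""
        return occupancy * 5280.0 / (zone_ft + avg_vehicle_ft)

    def avg_headway_s(volume_vph):
        """Average headway in seconds from an hourly volume."""
        return 3600.0 / volume_vph if volume_vph else float("inf")

    print(round(density_vpm(0.15), 1))   # ~34.4 veh/mi/lane at 15% occupancy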

 

Applications:  Planning-Related Traffic Monitoring

Planning-related traffic monitoring activities are usually conducted as a service to support a variety of other functions within transportation agencies.  Brief examinations of the Planning applications that use traffic data are presented in Table 2.  Also included in Table 2 is an assessment of the advantages of using ITS-generated traffic data for these applications.  It is clear that ITS-generated data potentially offer many advantages over general use traffic data:

 

·        The continuous nature and detailed geographic coverage of traffic data generated by ITS removes temporal sampling bias from traffic measurements.  The vast majority of traffic data currently collected for planning, administration, and research applications are based on short-duration traffic counts.  Although attempts are made to adjust or expand the sample, the procedures are imperfect.  With continuous data, there is no need to perform adjustments to control sample bias.  (Equipment-based errors are still present, though). 

 

·        Continuous data from ITS sources allows the direct study of variability in travel times.  This variability is often termed the reliability of travel times and it is becoming an important factor in both the operations and planning communities.  Continuous data also capture the full range of factors influencing reliability, most notably incidents and weather – short duration counts either completely miss these events or are unduly biased by them.  (Many agencies will discard short counts and floating car runs taken during “unusual” events.)

 

·        ITS-generated traffic data can supplement – and in some cases supplant – traffic data collected for Planning and general use.  Traffic monitoring on heavily traveled urban highways has become extremely difficult for field personnel.  Installing portable devices on the mainlines of these highways has become practically impossible for safety reasons, and the reliance on ramp-based methods requires that multiple devices be installed and that all devices be operating properly during the data collection.  By accessing data that already exist through ITS sources, these problems are avoided.  Recent work indicates that ITS data can be used as a source of volume data in these circumstances.

 

·        Data to meet emerging requirements and for input to new modeling procedures will have to be more detailed than what is now collected.  The next generation of Travel Demand Forecasting (TDF) models (e.g., TRANSIMS) and air quality models (modal emission models) will operate at a much higher level of granularity than existing models.  Traditional data sources are barely adequate for existing models and there is little doubt that they will be incapable of supporting the next generation of models.  ITS can provide many of the data types to support these models, especially at the detailed geographic and temporal resolutions that are required.  For example, roadway surveillance data (volumes, speeds, and occupancies) are typically reported every 20 seconds and GPS-instrumented vehicles can report positions and activity at time intervals as short as one second.  Also, GPS-derived locations can pinpoint incident locations to within a few meters.  This level of detail will be required for the input and calibration data used by the new models.  Finally, as data generated by ITS are used more frequently for non real-time purposes, it is likely that additional uses not currently foreseen will emerge.  In addition, data on activity patterns and how travelers respond to system conditions will be important for the next generation of models.

 

Applications:  Operations

In urban areas, Operational responses originate at TMCs whose primary focus is freeway performance.  Roadway surveillance is a typical feature of TMCs, both in terms of visual coverage (e.g., CCTV) and electronic traffic data.  Electronic traffic data always include volumes and detector zone occupancies and most TMCs also include measured traffic speeds.  (The same equipment is used to measure all three data types.)  Current TMC applications that potentially can use traffic data include:

 

·        Ramp meter control – most algorithms for dynamically adjusting ramp metering rates are based on occupancies.

·        Lane control – reduced speeds caused by bottlenecks are used to provide lane control guidance.

·        Traffic signal control – real-time traffic adaptive control strategies (e.g., SCOOT, SCATS) rely on detailed information about signal performance and mid-block speeds.

·        Incident detection – incident detection algorithms use speeds, occupancies, or some combination.[1]

·        Variable speed limits – adjusting speed limits based on current environmental and traffic conditions.

·        Evacuation, special event, and military deployment – these functions usually have special traffic control needs.

·        General bottleneck performance – speeds are used by TMC personnel to gain a general understanding of real-time system performance.

·        Traveler information – maps showing current speeds by link are a typical form of information disseminated by TMCs.  Also, messages of general congestion (based on speeds) and specific incidents are often posted on dynamic message signs and broadcast over highway advisory radio.

·        Evaluations and Performance Monitoring – where these are conducted, volumes and speeds are used.

·        Weather Management – includes detecting and forecasting weather-related hazards such as snowy/icy road conditions, dense fog, high winds, and approaching severe weather fronts.  This knowledge can be used to more effectively deploy road maintenance resources.  It can also be used in conjunction with other core functions such as traffic control (e.g., variable speed limits, signal coordination timings), incident management (e.g., routing response vehicles), and traveler information (e.g., general advisories, location-specific warnings).

 


Table 2.  Traditional Applications for Traffic Data

Travel Demand Forecasting Models
  Validation of predicted link volumes
    Current traffic data used:  AADTs for 24-hour forecasts (generally used in smaller areas); peak hour volumes in larger areas.
    Advantages of ITS-generated data:  Continuous data removes sampling and adjustment bias present in short counts and in developing peak hour volumes from K- and D-factors.
  Validation of predicted link speeds
    Current traffic data used:  None available for this purpose.
    Advantages of ITS-generated data:  Can be derived directly from measured data for either daily or peak hour.
  Free flow speeds
    Current traffic data used:  None available for this purpose; based on speed limit or judgment.
    Advantages of ITS-generated data:  Can be derived directly from measured data.
  Link capacities
    Current traffic data used:  None available for this purpose; based on judgment and (rarely) HCM analysis.
    Advantages of ITS-generated data:  Direct measurement of highest flow rates based on actual link conditions.
  Link truck percentages
    Current traffic data used:  Based on limited amount of urban vehicle classification.
    Advantages of ITS-generated data:  New technologies can provide much better estimates of urban vehicle classification (length-based, continuous, greater coverage).

Congestion Management Systems
  Performance measures (mobility-based)
    Current traffic data used:  Limited floating car data; synthetic methods based on volume estimates.
    Advantages of ITS-generated data:  Direct measurement of long-term performance and speeds, including the effects of incidents, weather, work zones, and other sources of non-recurring congestion missed with synthetic methods.

Emissions Models (MOBILE6)
  Hourly speed estimates by functional class
    Current traffic data used:  Synthetic methods based on volume estimates.
  VMT by 28 vehicle classes
    Current traffic data used:  Based on limited amount of urban vehicle classification and vehicle registrations.
    Advantages of ITS-generated data:  Length-based classifications can be a basis for developing these.

Highway Design
  Design volumes
    Current traffic data used:  Estimated using forecasted AADTs with areawide K- and D-factors.
    Advantages of ITS-generated data:  Facility-specific K- and D-factors can be derived.

Safety Analysis
  Crash rates for performance monitoring and specific studies
    Current traffic data used:  Exposure (typically VMT) derived from short-duration traffic and vehicle classification counts; traffic conditions under which crashes occurred must be inferred.
    Advantages of ITS-generated data:  Continuous volume counts, truck percentages, and speeds, leading to improved exposure estimation and measurement of the actual traffic conditions for crash studies.

Freight Analysis
  Truck travel patterns
    Current traffic data used:  Data collected through rare special surveys or implied from available vehicle classification.
    Advantages of ITS-generated data:  Electronic credentialing, AVI, and new roadway technologies for vehicle classification allow tracking, improved understanding of truck patterns, and better assessments of inter-modal access and highway design for heavily used truck highways.

Pavement and Bridge Management
  Historical and forecasted loadings
    Current traffic data used:  Volumes, vehicle classifications, and vehicle weights derived from short-duration counts (limited number of continuously operating sites).
    Advantages of ITS-generated data:  Continuous volume counts and vehicle classifications taken over a larger area.

 

 


Traffic Data Quality:  Characteristics

What Causes “Bad” Traffic Data

Several sources contribute to inaccuracies in traffic data.  These relate to the nuances of specific equipment and how data are collected and transmitted from the field.  A more thorough discussion of data quality issues associated with particular technologies is covered in the white paper, Innovative Approaches to Traffic Data Quality.  A few generalizations can be made about the sources of data quality problems:

 

 

 

 

 

 

 

Detection of “Bad” Data

The white paper, Defining and Measuring Traffic Data Quality, presents a full discussion of how questionable or inaccurate data are identified after they are collected from the field.  A variety of methods are used, including internal range checks, cross-checks, time series patterns, comparison to theory, and historical patterns.

Correction of “Bad” Data

Once suspect data are identified, the question then is what to do about them.  Most applications flag the records failing quality control or set the measurement values to missing or other special codes.  Editing the measurement values is far less common, although some experimentation with “imputing” values has taken place.  Imputation appears to be most applicable where intermittent gaps appear in the data, rather than where large portions of time have missing or suspect data.  A variety of techniques have been explored, including time series smoothing (2) and historical growth rates by location and day of week (3).  However, there is little consensus in the profession on what techniques should be used, or whether imputation should be done at all.
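
For the intermittent-gap case, one simple option is linear interpolation between valid neighbors, with longer outages left missing in keeping with the caution above.  The sketch below illustrates that idea; it is one possible technique, not a recommended standard.

    # Sketch of gap imputation by linear interpolation; None marks a missing interval.
    def impute_gaps(series, max_gap=3):
        """Fill runs of None no longer than max_gap by linear interpolation;
        longer runs are left missing."""
        out = list(series)
        i = 0
        while i < len(out):
            if out[i] is None:
                j = i
                while j < len(out) and out[j] is None:
                    j += 1
                run = j - i
                if 0 < i and j < len(out) and run <= max_gap:
                    left, right = out[i - 1], out[j]
                    for k in range(run):
                        out[i + k] = left + (right - left) * (k + 1) / (run + 1)
                i = j
            else:
                i += 1
        return out

    print(impute_gaps([40, None, None, 46, None]))  # [40, 42.0, 44.0, 46, None]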

Quality Issues for Using ITS-Generated Data for Traditional Uses

Operational vs. Traditional Uses of ITS-Generated Traffic Data

The applications that traffic data support in each of the realms – as well as the nuances of data collection in both cases – can have an impact on data quality.  Several differences exist based on these points, as discussed below.

 

Volumes vs. Speeds.  A review of operational and traditional applications was presented in Section 2.  Based on these applications, the most notable difference between operational and traditional use of traffic data is the emphasis on speeds and occupancies in the former and on volumes in the latter.  Traditional applications use volumes as their basis – speeds are often modeled after the fact in specific applications.  Yet most current operational uses do not use volume very much, if at all.  This lack of focus on volumes may lead to ignoring data quality problems related to volumes.  This situation is highlighted by the case of Houston’s TranStar system.  Originally, roadway-based traffic detection was installed on many of Houston’s freeways.  Later, as electronic toll tags were implemented, TranStar instrumented both toll and non-toll roads to monitor travel times of tag-equipped vehicles.  For its applications to date, TranStar has found the tag-based travel times to be sufficient and uses the roadway-based traffic data as a supplement.

 

Data Quality Control Methods.  The interviews with Operations and Planning personnel revealed that while Planning personnel are used to performing in-depth reviews of traffic data, including the use of QC software, Operations personnel rarely examine the data at this level of detail.  Data review from an Operations perspective is typically limited to verifying whether the detector is reporting any data at all and identifying obvious outliers.  Planning review of data is more likely to include more sophisticated range checks, cross-checks, checks against theory, checks against historical profiles, and checks for equipment quirks (e.g., consecutive values).

 

Level of Accuracy.  Data quality requirements (i.e., level of accuracy) also vary between the two realms.  In terms of volume, a review of the INFOstructure effort (4) reveals that for advanced traffic management purposes, volumes with a +/-10% accuracy would suffice.  (Presumably these are applications beyond the current state of the practice in traffic management.)  This level of accuracy corresponds roughly to that of Planning-oriented traffic monitoring for short-duration counts, considering the inherent problems in the adjustment process.  For continuous count data, however, +/-10% accuracy may be too lenient a threshold – most traffic monitoring units would like a much tighter error bound on these data.  Therefore, ITS-generated data with a +/-10% error tolerance are probably adequate for estimating AADTs on roadway segments, but other applications of continuous count data (factor and temporal distribution development) are questionable.

 

The INFOstructure’s estimates of speed accuracy requirements are 5-10% for traffic management and 20% for traveler information applications.  For performance monitoring purposes, an error tolerance of 5-10% is probably adequate.  However, the degree to which this tolerance is currently achieved is largely unknown and likely varies significantly from area to area.

 

Recent work by Mitretek Systems on data accuracy requirements for advanced traveler information systems (ATISs) indicates that familiar commuters benefit from knowing point-to-point travel times within 10-20 percent of their true values.  Travel time estimates with errors beyond the 20 percent range still benefit certain subsets of commuters, but most commuters would be better off relying on their own experience and sticking to a habitual route.  In the Mitretek study, reducing error below 5 percent did not appear to yield much additional benefit.  The Mitretek results correspond to the estimates subjectively developed in an earlier ATIS effort, which found that the desired error rate of travel times developed by aggregating point speeds should be “less than 15 percent”.  However, these results need to be tempered by the method used to estimate travel times.  Direct measurement systems – those that measure the passage of vehicles over extended highway segments (such as probes) – provide the most accurate estimates.  If point-to-point travel times are synthesized from a series of roadway-based detectors (spot speeds), then the accuracy of the individual measurements becomes more critical.  If the individual measurement errors are independent (unbiased), they will tend to cancel out, so the accuracy of any given detector can be in the 10-20 percent range.  If, however, the measurements are biased in one direction, the errors will be additive, and the accuracy of individual detectors will have to be more stringent.
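
The cancellation argument can be made concrete with a small simulation:  corridor travel time is the sum of segment times, each carrying either unbiased or biased multiplicative error.  All numbers below are illustrative.

    # Sketch: unbiased segment errors tend to cancel; a one-directional bias does not.
    import random

    def corridor_error(n_segments=20, seg_time=30.0, noise=0.15, bias=0.0, trials=5000):
        """Mean absolute corridor travel-time error (percent) when each segment
        time carries multiplicative error of bias + uniform(-noise, +noise)."""
        true_total = n_segments * seg_time
        errs = []
        for _ in range(trials):
            total = sum(seg_time * (1 + bias + random.uniform(-noise, noise))
                        for _ in range(n_segments))
            errs.append(abs(total - true_total) / true_total)
        return 100.0 * sum(errs) / trials

    random.seed(1)
    print(round(corridor_error(bias=0.0), 2))   # roughly 1-2%: errors cancel
    print(round(corridor_error(bias=0.10), 2))  # roughly 10%: bias accumulates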

 

Data Collection Nuances.  Differences in data collection methodology can also lead to quality problems.  One of the most significant is the polling rate and how communication failures interact with it.  In traffic monitoring programs, continuous traffic volumes are usually accumulated to hourly summaries by the field equipment and then transmitted to a central location every 24 hours.  If the communications link for this transmission fails, it is simply re-established.  ITS traffic data are typically accumulated to 20- or 30-second intervals by the field equipment and then transmitted immediately.  However, if the transmission fails, the field equipment is not likely to be re-polled, since it is well into its next reporting cycle.  This potentially leads to intermittent gaps in ITS-generated traffic data.

 

Data Management.  Related to the aggregation and polling issue is that of data management.  Because of the lower level of aggregation and the multitude of sensor locations in an urban area, the sheer volume of ITS-generated data can easily overwhelm Planning-oriented traffic monitoring programs.  While this is largely an issue that can be dealt with by increasing computer resources and developing software, it is still a barrier to the sharing of data between the two realms.

 

Level of Coverage.  Another problem raised by the differences in data collection methodology is that of coverage.  Detailed traffic data collection for operations currently covers only a portion of urban freeways (22% of urban freeway miles in the 76 largest metropolitan areas had electronic surveillance in 2000) and a smaller portion of signalized arterials.  (Generally, only advanced control systems like real-time traffic adaptive control collect the type of traffic data useful for traditional applications.)  While ITS deployments will continue to grow, they will still tend to be concentrated on congested freeway corridors because these are the ones in need of operational control strategies.  Thus, the data needs of Planning-oriented traffic monitoring programs can never be fully replaced by ITS sources, but ITS can supply information in areas where it has historically been difficult to place portable equipment.

 

Vehicle Classification Definitions.  It is possible that length-based vehicle classifications will become more prominent in ITS installations.  While the length-based bins are useful on their own for a variety of purposes, locally developed procedures for translating between length-based classes and both the axle/power unit/trailering (FHWA) and weight class/fuel type (EPA) classification schemes may also be possible.

 

Institutional and Data Sharing Issues.  As ITS deployments advance throughout the country, traffic management centers and traditional traffic departments are pursuing innovative approaches to collect, share, and disseminate data that are better in quality, more reliable, and more easily available.  Quality of data is critical, especially when sharing data between regions or jurisdictions, and when the data are made available to the public to support better informed decisions (mostly applicable to ITS-generated data).  A recent report addressed specific issues on the data sharing techniques, mechanisms, and policies that public agencies use to share data with other public agencies or private firms.  The report collected information from a literature search, enhanced by a total of 34 telephone interviews with the public sector.  Some of the salient findings regarding data sharing and its applicability to data quality include:

 

·        Most of the agencies that were interviewed are concerned with collecting traffic data and in some cases multi-modal data.

 

·        When asked what was the main reason for sharing data, most agencies responded that they were motivated to share public travel data to enhance coordination among the region’s transportation agencies and to improve overall travel conditions. 

 

 

·        Agencies did not distinguish what types and forms of data were shared based on who was receiving them.  Public agencies shared similar types of data with other public agencies and private enterprises. 

 

·        When asked with whom they share data most, 31 of the 33 agencies that answered the question named other public agencies.  The category “other public agencies” is followed by, in order of frequency mentioned, local TV, traffic reporting organizations, local radio, Internet service providers, other organizations, and local newspapers.  About a third of the data providers supply local newspapers with information.

 

·        In terms of the types of public sector organizations with which data were shared, the most frequently cited were other local jurisdictions, such as counties and cities, and more specific departments, such as departments of public works.  Other organizations frequently mentioned include the state police, 911 systems, the state DOT, and transit agencies.  Mentioned less frequently were emergency management departments, an airport, a university, and a state parks agency.

 

·        Addressing the need for data quality while data sharing, one public agency respondent mentioned that having a common format and protocol along with data consistency and reliability is necessary.

Recommendations:  Possible Solutions

Sampling of ITS Locations and Data Streams

Planning-oriented traffic monitoring programs have begun to recognize the value of ITS-generated traffic data.  However, the number of locations where ITS data are collected is quite large.  States accustomed to roughly 100 continuous count locations statewide could see that number doubled or tripled if they accepted data from all ITS sensor locations in a single urban area.  To get around this problem, some states have identified selected ITS sensor locations from which they accept continuous data.  The feeling is that, for the time being, continuous data collected at ½-mile intervals are not necessary for characterizing traffic in a corridor – short counts at other locations can suffice, especially if they can be adjusted with facility-specific factors from the continuous locations.  An extension of this strategy would be to take samples from the remaining ITS sensor locations (say, 48-hour counts once a month or season), but to our knowledge this has not been tested.

Shared Resources

Operations personnel are generally aware of data quality problems but routinely cite the lack of funding for maintenance as a barrier to correcting them, especially in light of the fact that most of their current applications do not require highly accurate data.  (As discussed later, this situation may be changing.)  Conversely, Planning-oriented traffic monitoring programs generally follow rigorous maintenance schedules and repair equipment that produces data of poor quality.  The difference is due to the missions of each group and the level of redundancy in equipment.  Traffic monitoring units are in business to collect data, while for operations personnel data collection is a tool used to implement operational response strategies.  Also, the high density of ITS equipment placement means there is a high degree of redundancy – if a sensor goes down, there are others located close by.  This is not a luxury available to traffic monitoring activities, where permanent equipment is isolated.

 

Given these facts, the potential exists for sharing maintenance resources.  Traffic monitoring units have accumulated a long history of maintenance experience that could be tapped by operations personnel if appropriate institutional and funding arrangements can be negotiated.  The data quality control methods used by traffic monitoring units are another potential shared resource, although the time scales for ITS-generated data (1- to 15-minute intervals) are typically much smaller than those used for Planning purposes (typically 1 hour).
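
The time-scale mismatch can be bridged by rolling short-interval ITS counts up to the hourly level used for Planning review.  The sketch below is illustrative; the 5-minute interval, the completeness rule, and the scaling choice are assumptions, not an established agency procedure.

    # Sketch of aggregating 5-minute ITS counts to the hourly level used by Planning QC.
    def to_hourly(five_min_counts, min_valid=10):
        """five_min_counts: 12 counts for one hour, None where an interval is
        missing.  Returns an hourly volume scaled for small gaps, or None
        when fewer than min_valid intervals are present."""
        valid = [c for c in five_min_counts if c is not None]
        if len(valid) < min_valid:
            return None                      # too incomplete for planning use
        # Scale the partial sum up to a full hour's worth of intervals.
        return round(sum(valid) * 12 / len(valid))

    print(to_hourly([42, 38, None, 40] + [39] * 8))  # scaled hourly volume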

Maintenance, Calibration, and Performance Standards

Data quality issues are increasingly creeping into the mindset of Operations personnel.  Part of the problem is that funding for equipment maintenance was not originally estimated accurately and has not been adequately documented since.  In response, some locations are undertaking formal studies of data quality by setting standards and goals for the quality of data they need to support operational strategies and the funding necessary to achieve these standards and goals.  This formalization of the process provides a basis for operations personnel to request the additional funding. 

 

Calibration methods and benchmarks are another area worth exploring.  Guidance on how to test newly installed equipment – as well as to perform periodic field checking – would be helpful to Operations personnel responsible for detector maintenance.

Contractual Arrangements

A noticeable trend in Planning-oriented traffic monitoring is the outsourcing of data collection activities to private firms.  Under such arrangements, contractors are responsible for maintaining equipment and data quality.  Some ITS deployments also use contractor personnel as staff extensions for data collection and maintenance.  An even more radical model is now being supported by FHWA under the Intelligent Transportation Infrastructure Program (ITIP), where a private firm collects and archives data using its own equipment.  The firm then builds traveler information products for sale in the consumer market.  Presumably these data can also be made available to public agencies for other types of operational strategies.  (ITIP currently operates in 2 cities, with 21 more to be added in the next 2 years.)  However, the current ITIP effort is subsidized by FHWA – the long-term independent viability of this business model remains uncertain.  When the private sector is involved in data collection, there exists a potential for using formal data quality performance standards as an incentive.

More Sophisticated Operations Applications as a Data Quality Leader

Perhaps the best way to influence the quality of ITS-generated traffic data is to foster the development of more sophisticated operational response strategies that require more accurate and timely data.  In truth, the current generation of operational strategies does not require extremely accurate data – operators typically need to know where the big problems are, and their responses are geared to this. 

 

However, there are indications that the situation is changing.  Information on system performance in real time is at the core of implementing Operational strategies.  As recently noted in an FHWA-sponsored effort:  “As more transportation agencies move aggressively toward system operations and performance measurement, the need for comprehensive quality data becomes imperative”.  In addition to Operations, the same information can also be used in a historical sense to develop performance monitoring statistics.  Recent Federal efforts on specifying the so-called INFOstructure and the “data gap” for traveler information systems have taken a big step toward identifying data requirements for Operations.  Performance monitoring has also been advanced by efforts such as FHWA’s Mobility Monitoring Program.  However, it is clear that these efforts are built around the current state of the practice.  The Future Strategic Highway Research Program (F-SHRP), a proposed multiyear effort that includes improved Operations as one of its four focus areas (under the heading of “travel time reliability”), offers the potential for advancing Operations practice significantly.  The Reliability portion of F-SHRP includes several proposed projects on performance monitoring, improved data use, and advanced data collection technologies that, if implemented, will improve the long-term prospects for data quality.

 

Even without the benefit of F-SHRP, other Federal and state efforts are considering more advanced forms of Operational control strategies.  As Operational strategies become more sophisticated – and performance monitoring becomes more detailed – data requirements are expected to increase.  Specifically, several applications on the short-term horizon can be identified as driving the need for more intricate and accurate data:

 

·        Posting estimated travel times to common destinations on dynamic message signs (DMSs).

·        Real-time predictive models that forecast short-range traffic conditions rather than simply providing a snapshot of current conditions (e.g., the expected queue build-up 15 minutes after an incident that has just occurred).

·        Customized traveler information, including alternative and dynamic route guidance.

·        Decomposition of delay into its component sources for performance monitoring purposes.

·        Integrated freeway/arterial traffic control as well as cross-jurisdictional traffic control.

·        Advanced forms of evacuation and military deployment routing.

 

The recent field operational test on TMC use of archived data is seen as a mechanism for highlighting many of these emerging applications.  This operational test is an excellent opportunity to promote data quality, especially with regard to TMC applications, and should be monitored closely.

New Technologies

Monitoring of traffic conditions in real time is a crucial component of Operational response strategies.  When ITS deployment was originally initiated, inductive loop detectors embedded in pavement were the predominant technology used to monitor vehicle speeds, volumes, and (indirectly) roadway density.  In the past decade, increasing use has been made of “non-intrusive” technologies such as video image processing, radar, and acoustic devices to collect the same data.  These are termed “non-intrusive” because the devices are mounted on the side of the roadway or overhead, thus avoiding the damaging effects of traffic and the maintenance difficulties associated with loops.  Some areas are using data from probe vehicles (usually toll-tag equipped) to generate travel times.  Despite these advances, a number of issues remain that must be addressed if Operational strategies are to reach their full potential:

 

·        Capital, installation, and maintenance costs – there is a need to reduce these costs so that greater deployment can be achieved.  A better understanding/documentation of these costs would also lead to better deployments.

·        Coverage – instrumentation is usually done on only roadways of great interest.  However, knowledge of traffic conditions on alternative routes as well as the entire system is necessary for sophisticated Operational strategies to have an effect.

·        Signalized highway conditions – point-based detectors provide adequate data for freeway performance but are not very useful on signalized highways where most delay occurs at the signal itself.

·        Data types – point-based detectors provide spot speeds, yet travel times over roadway segments are more useful for many Operational strategies (e.g., traveler information).

·        Probe vehicle shortcomings – unless a substantial portion of the fleet is equipped as probes, accuracy may be a problem; roadside readers need to be placed at relatively short distances to provide the level of detail required; volumes are not collected (these are expected to be required for advanced short-term predictive algorithms).


BIBLIOGRAPHY

 

1.   MnDOT and SRF Consulting Group, NIT Phase II:  Evaluation of Non-Intrusive Technologies for Traffic Detection, Final Report, September 2002.

2.   Hu, Pat, et al., Proof of Concept of ITS as an Alternate Data Source:  A Demonstration Project of Florida and New York Data, prepared for FHWA, September 30, 2001, http://www-cta.ornl.gov/Publications/Proof_of_Concept.pdf

3.   Battelle Memorial Institute and Cambridge Systematics, Inc., Potential Use of Archived Intelligent Transportation Systems Data for Government Reporting, prepared for FHWA, September 2002.

4.   Tarnoff, P.J., Getting to the INFOstructure, White Paper prepared for the TRB Roadway INFOstructure Conference, August 2002.

5.   Conversation with Karl Wunderlich.  Publication of results is forthcoming under the HOWLATE series of documents developed by Mitretek.

6.   Closing the Data Gap:  Guidelines for Quality ATIS Data, prepared for ITS America and the United States Department of Transportation, April 2000.

7.   Battelle Memorial Institute, Sharing Data for Traveler Information:  Practices and Policies of Public Agencies, prepared for USDOT, July 2001.

8.   Schumann, Rick, Summary of Transportation Operations Data Issues, PBS&J, August 2001.

9.   Cambridge Systematics, Inc., et al., Research Plan for Providing a Highway System with Reliable Travel Times (Draft), prepared for NCHRP 20-58(3), December 2002.

 


“Advances in Traffic Data Collection and Management”

By Dan Middleton, Deepak Gopalakrishna, and Mala Raman

Introduction

Since the first known vehicle detector was introduced in 1928 at a signalized intersection, there have been hundreds of attempts to improve and create systems that monitor vehicle presence and passage at strategic locations on the nation’s streets and highways.  Without accurate and reliable detectors, traffic management decisions based upon real-time or historical data are compromised.  Many agencies rely on post-processing for Quality Assurance as opposed to Quality Control.  Quality Assurance attempts to “fix the data” or identify defective data after the fact rather than ensuring the accuracy and reliability of the equipment.  Quality Control emphasizes good data by ensuring selection of the most accurate detector and then optimizing detector system performance.  This white paper identifies innovative approaches for improving data quality through Quality Control.  These approaches include innovative contracting methods, standards, training for data collection, data sharing between agencies and states, and advanced traffic detection techniques.

Background

The first known installation of a vehicle detection device occurred at a Baltimore intersection, forming the first semi-actuated signal installation.  The detector required drivers on the side street to sound their horn to activate the device, which consisted of a microphone mounted in a small box on a nearby utility pole.  Another device introduced at about this same time was a pressure-sensitive pavement detector using two metal plates, acting as electrical contacts, that were forced together by the weight of a passing vehicle.  This treadle-type detector proved more popular than the horn-activated detector, enjoying widespread use for over 30 years and becoming the primary means of vehicle detection at actuated signals (1).

 

Ongoing problems with the contact plate detector led to the introduction of an electro-pneumatic detector.  It was not a final solution either, because of its installation cost and because it was capable only of passage or motion detection.  Inductive loops were introduced as a vehicle detection system in the early 1960s and have become the most widespread detection system to date (1).  However, the well-documented problems with inductive loops have led to the introduction of numerous non-intrusive devices, utilizing a variety of technologies, to replace many of the failing loops.

 

By the late 1980s, video imaging detection systems were being marketed in the U.S. and elsewhere, generating sufficient interest to warrant research into their viability as an inductive loop replacement.  In 1990, California Polytechnic State University began testing 10 commercial or prototype video image processing systems available in the United States.  Evaluation results indicated that most systems generated vehicle count and speed errors of less than 20 percent over a mix of low, moderate, and high traffic densities under ideal conditions.  However, occlusion, transitional light conditions, and high-density, slow-moving traffic further reduced the accuracy of these new systems (2).

 

Hughes Aircraft Company conducted an extensive test of non-intrusive sensors for the Federal Highway Administration (FHWA).  The objectives of the study, Detection Technology for IVHS (3), included determining traffic parameters and accuracy specifications, performing laboratory and field tests of non-intrusive detector technologies, and determining the needs and feasibility of establishing permanent vehicle detector test facilities.  This research went beyond testing of video imaging systems, covering a total of nine detector technologies and including both freeway and surface street test sites in a variety of climatic and environmental conditions.  Conclusions indicated that video imaging systems were not among the better performers in inclement weather.

 

In another study sponsored by FHWA, the Jet Propulsion Laboratory (JPL) conducted research to identify the functional and technical requirements for traffic surveillance and detection systems in an Intelligent Transportation System (ITS) environment.  The report, Traffic Surveillance and Detection Technology Development, Sensor Development Final Report (4), published in 1997, presented details on the development and performance capabilities of seven detection systems.  JPL focused on video imaging, radar, and laser detection systems and utilized the work performed by Hughes (3, 5) to assess current technology capabilities.

 

The Minnesota DOT and SRF Consulting conducted a two-year test of non-intrusive traffic detection technologies.  This test, initiated by FHWA, evaluated non-intrusive detection technologies under a variety of conditions.  The researchers tested 17 devices representing eight technologies.  The test site was an urban freeway interchange in Minnesota that provided both signalized intersection and freeway main lane test conditions.  Inductive loops were used for baseline calibration.  The test consisted of two phases, with Phase 1 running from November 1995 to January 1996 and Phase 2 from February 1996 to January 1997 (6, 7, 8).  Phase 2 is discussed in more detail later in this paper.

 

A critical finding of this research was that mounting video detection devices is a more complex procedure than that required for other types of devices.  Camera placement is crucial to the success and optimal performance of this detection device.  Lighting variations were the most significant weather-related condition that impacted the video devices.  Shadows from vehicles and other sources and transitions between day and night also impacted count accuracy (8).

 

The Texas Transportation Institute (TTI) has been involved in detector research for more than 10 years, with early research addressing inductive loops and more recent research emphasizing non-intrusive detectors.  Most of the research included field investigations, and some also included a state-of-the-practice review to identify success stories.  Even though installation and maintenance practice for inductive loops should be well established due to product maturity, performance and service life attributes were still deficient at the outset of this series of research activities.  One of the early detector research projects developed a Traffic Signal Detector Manual primarily for inductive loop installers.  The manual presents:  1) installation procedures that ensure reliable performance, and 2) suggested practices to reduce loop installation time and maintenance costs (9).  Other TTI research investigated the use of acoustic and active infrared detectors at traffic signals for reducing stops and delays to trucks, finding that inductive loops were still more reliable for these applications (10, 11).

 

More recent TTI research projects investigated the accuracy, reliability, cost, and user-friendliness of various non-intrusive detectors in seeking viable replacements for inductive loops (12, 13, 14).  TTI tested the Autoscope Solo Pro video image detection system (VID), the Iteris Vantage (VID), the SAS-1 by SmarTek (acoustic), and the RTMS by EIS (radar).  TTI initially field-tested devices in low-volume conditions at one of its testbeds in College Station, with subsequent, more demanding tests at another testbed on I-35 in Austin.  More information on the results of the latest tests appears in the Advanced Traffic Detection Techniques section of this paper.

Innovative Contracting Methods

A few agencies around the country have already invested resources in developing new contracting methods as a means of ensuring data quality at its source.  Performance criteria in contracts, while not yet common, are beginning to be considered by DOTs as a method of transferring some of the risk and maintenance requirements to contractors.  The following examples from Virginia and Ohio show the potential that can be tapped through innovative contracting methods.

 

The Virginia Department of Transportation (VDOT) at the Hampton Roads Traffic Management Center uses contractors to support its day-to-day operations.  The TMC monitors 19 centerline miles (soon to be 50 centerline miles) of freeway and collects all the data in-house for its own use in freeway operations.  The TMC accomplishes the necessary maintenance on the detection system by hiring contractor personnel who are supervised by VDOT personnel.  The contractor staff answers to the TMC director and two other VDOT personnel in conducting field maintenance and operations and maintaining detection equipment.  VDOT plans to continue using its own staff to maintain some items, such as surveillance cameras.  VDOT determines when maintenance is needed, using both a preventive and a reactive maintenance program for detectors and related equipment (15).

 

For the reactive maintenance mode, problems are identified in various ways.  A few problem notifications come from motorists, but the more common method of identifying problems is through an alarm system built into the TMC.  The alarm alerts “controllers” in the TMC who monitor system health in real time.  If a camera fails, for example, controllers notice it first.  For the routine maintenance mode, VDOT conducts comprehensive diagnostic checks in the field when contractor personnel visit a site.

 

VDOT treats contractor personnel as an extension of its own staff, and the TMC director apparently has even more latitude to add or remove contractor personnel than VDOT staff.  If contractor personnel are not performing to VDOT’s expectations, they can be removed immediately.  By the same token, VDOT recognizes above-average contractor performance by acknowledging contractor staff, as it does VDOT employees, in its periodic newsletters.  VDOT offers no cash incentives, however, for good performance (15).

 

Training of contractor personnel is accomplished in different ways.  For field maintenance, VDOT provides training to both its own and contractor personnel with no distinction.  The contractor is also responsible for providing a staff that is technically competent.  Sometimes the training provided by VDOT comes from the Virginia Transportation Research Council (VTRC) as workshops are made available, or from other organizations such as FHWA that make training available in the local area.  VDOT also occasionally sends people out of state for training.  The TMC operation does not borrow from others within VDOT (e.g., traditional data collection) for maintenance needs (15).

 

The second example from Virginia is the VDOT Mobility Management Section (traditional data collection), which leases its traffic counters and modems from Digital Traffic Systems (DTS); VDOT owns the sensors, such as inductive loops and piezoelectric sensors.  Since 1996, VDOT has contracted out the data collection activity and leased data collection equipment.  The current maintenance agreement with DTS is carefully written to assign responsibilities and minimize “finger pointing” in cases where difficulties might otherwise arise, such as traffic counters that did not work because of faulty piezoelectric sensors.  A state inspector checks the equipment once a year, but if there are substantial errors in the data, the contractor must re-collect the data (16).

 

VDOT has established performance-based lease criteria for payment of data collection services.  Contractor compensation is based on the amount of acceptable data submitted by the contractor.  Furthermore, VDOT requires a certain quantity of acceptable data from each site in order to use that site for traffic factor creation.  The list below summarizes some key elements of the agreement (16); a small sketch of the payment tiers follows the list.

 

·        There will be full payment for all Automatic Traffic Recorders (ATRs) and modems at sites with 25 or more days of useable classification and volume data (for factor creation) during a calendar month.

·        There will be 75 percent payment for 15 or more days of acceptable data, and lesser payment for fewer days, except that no monthly payment will be made for sites with fewer than 15 days of volume-only data during a calendar month.

·        For service calls for maintenance purposes, the contractor will not be reimbursed a separate charge (pay item) for the service calls related to ATR/modem equipment problems, telephone line problems, or failed sensors, as costs associated with the service calls are included in the price of the monthly lease charge. 

·        The contractor is given seven calendar days to investigate, make site visits, make repairs and respond back to VDOT after notification/receipt of a service call.
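As a reading aid, the payment tiers above can be written as a small function.  This is an illustrative sketch of the rules as summarized here, not VDOT contract language; the “lesser payment” scale is not specified in the summary, so the sketch leaves it open.

    # Minimal sketch of the lease-payment tiers summarized above.
    # Only the 25-day and 15-day thresholds appear in the text; the
    # "lesser payment" scale for other cases is otherwise unspecified.

    def monthly_payment_fraction(days_acceptable, volume_data_only=False):
        if days_acceptable >= 25:
            return 1.00      # full payment
        if days_acceptable >= 15:
            return 0.75      # 75 percent payment
        if volume_data_only:
            return 0.00      # no payment: under 15 days, volume data only
        return None          # "lesser payment" -- scale not given

    print(monthly_payment_fraction(27))                          # 1.0
    print(monthly_payment_fraction(18))                          # 0.75
    print(monthly_payment_fraction(10, volume_data_only=True))   # 0.0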

 

Another example of an innovative contracting method comes from the Ohio Department of Transportation’s Office of Technical Services, Traffic Monitoring Section.  In the past, ODOT used small personnel service contracts to maintain pavement sensors.  ODOT is now executing a task-order contract for maintenance so that contractors are on board for anticipated and unanticipated maintenance of the traditional data collection equipment statewide.  The contract is expected to begin in the summer of 2003 and will cover a period of two years (17).

Standards

Standards development is still at an early stage in the United States.  The U.S. DOT ITS Standards Program is working toward the widespread use of standards to encourage the interoperability of ITS systems, including traffic data collection systems (18).  The National Transportation Communications for ITS Protocol (NTCIP) committee is the Standards Development Organization (SDO) for traffic data collection and sensor standards.  NTCIP 1201 through NTCIP 1209 are standards documents that deal with roadside traffic data collection and traffic sensors.  These standards are at various stages in the development process.  More information on the NTCIP standards can be found at the NTCIP website (19).

 

There is also a draft standard being developed by the American Society for Testing and Materials (ASTM), entitled “Standard Specification and Test Methods for Highway Traffic Monitoring Devices,” which will be available soon (20).  In its current form, the standard includes, among other items, device classifications, performance requirements, user requirements for tests, and test methods.  Devices are classified by the functions they perform and the data required to carry out those functions.  The seven primary functions are 1) traffic counting, 2) traffic counting/classifying, 3) incident detection, 4) speed monitoring, 5) metering (ramp, mainline, or freeway-to-freeway), 6) signal control, and 7) enforcement.

 

Based on an FHWA Scan Tour of European countries (21), standardization has occurred in Germany, the Netherlands, and France, where national standards for data collection equipment have been developed.  All equipment purchased for national traffic data collection uses the same formats and protocols for communication purposes.  Standardization has increased the quality and accuracy of the data collected, decreased the effort needed to transfer data between agencies or offices, and increased the reliability of field equipment.  The downside is the increased initial cost of the equipment compared with non-standard equipment.

Training for Data Collection

Training of personnel on the intricacies of the equipment is an essential part of ensuring data quality.  With improvements in non-intrusive detector hardware and software occurring at a rapid pace, maintenance personnel must be computer literate and must maintain an awareness of the latest changes for a variety of detection systems.  Initial training on new systems is often available through the vendor, but turnover in maintenance staff and the introduction of new models require an ongoing training program.

 

If data sharing is to be effective, the training program must also encourage employees to develop positive relationships and a sharing attitude with agencies that need data and those serving as resources.  The goal is to convey the synergism of sharing data with others, rather than looking only at one’s own needs.  Familiarity with the equipment is critical to success.  Troubleshooting training must cover the right equipment along with ways of immediately identifying problems.

Advanced Traffic Detection Techniques

Quality Control emphasizes data quality by ensuring selection of the most accurate detector and then optimizing detector system performance.  Most evaluations of advanced or newer non-intrusive detectors compare them with inductive loops because loops are a mature technology and, when properly installed, serve as an adequate benchmark for test purposes.  In other words, loops are being replaced in the U.S. for reasons other than accuracy, such as the high expense of traffic control, the danger of exposing installation crews to traffic, and excess motorist delay and fuel consumption.  Several studies conducted in the 1980s found that most failures originate in the loop wire, but the wire itself is not necessarily the initiating cause of failure.  Results from studies conducted in Minnesota, New York, Oregon, and Washington indicate that improper sealing, pavement deterioration, and foreign material in the saw slot were most prominent in explaining loop failure (22).

 

Even though most U.S. jurisdictions are seeking alternatives to inductive loops to fill the traffic monitoring need, the same is not true of European countries.  According to findings of a scanning tour sponsored by the Federal Highway Administration, while each of the five countries visited is conducting research into new detection systems, none is seeking to replace inductive loops as the primary means of traffic data collection.  The main reason is that inductive loops continue to serve their needs adequately (21).

 

Now that decision-makers have a choice of detectors, they must know the performance, cost, and user interface characteristics of the alternatives in order to choose wisely.  Many agencies purchase new and unfamiliar detectors based on limited knowledge of these factors because they lack the resources for testing (sometimes relying on vendor claims) and/or face an immediate need for detection at a critical location.  Two recent research initiatives described below provide useful input to this process.

 

The most recent research into the performance attributes of advanced detection techniques has occurred at the Texas Transportation Institute (14) and in Phase II of the MinnDOT Non-Intrusive Tests (23).  As noted in the Background section of this paper, TTI tested the Autoscope Solo Pro, Iteris Vantage, SAS-1 by SmarTek, and RTMS by EIS.  In its Phase II tests, MinnDOT evaluated the Autosense II by Schwartz Electro-Optics (active infrared), 3M Microloops (magnetic), the ECM Loren (radar), the SAS-1 by SmarTek (acoustic), the IR 254 by ASIM (passive infrared (PIR)), the DT 272 by ASIM (PIR/ultrasonic), the TT 262 by ASIM (PIR/ultrasonic/radar), the Autoscope Solo by ISS (VID), and the VIP by Traficon (VID).  The text that follows summarizes findings, organized alphabetically by detector name.

ASIM IR 254

The IR 254 is a passive infrared sensor made by ASIM Technology Ltd of Switzerland.  The sensor monitors only one lane, and it can be mounted either over the lane or slightly to the side of the roadway, but it must face oncoming traffic.  Its alignment requirements make optimum performance difficult to obtain, so overhead mounting is preferred.  MinnDOT tests found the IR 254 simple and straightforward to use, small, and easy to mount.  Detection accuracy was better during free-flow conditions; the device undercounted by 10 percent during heavy traffic and consistently underestimated speed by 10 percent on average (23).


ASIM DT 272 Passive IR/Pulse Ultrasonic

This sensor incorporates two technologies:  pulse ultrasonic and passive infrared.  It is a single-lane detector that can be installed either overhead or in sidefire, and it is designed to detect vehicles at a short distance (no more than 39 ft).  This requirement is met by installing it at 20 ft above the lane and 20 ft to the side.  MinnDOT 24-hour test findings indicate that its absolute percent difference compared to loops was 8.7 percent for overhead mounting and 0.8 percent for sidefire.  It demonstrated unstable performance during parts of the sidefire testing.  Test documents did not show speed comparisons (23).

ASIM TT 262 PIR/Pulse Ultrasonic/Doppler Radar

This sensor incorporates three technologies:  passive infrared, pulse ultrasonic, and Doppler radar.  For this test, MinnDOT mounted the detector overhead, oriented downward and tilted 5 degrees toward oncoming traffic.  The detector is not intended for sidefire orientation.  The setup was straightforward, requiring only 30 minutes.  The count results were good, showing an absolute percent difference between sensor and baseline of 2.8 percent at a 21-ft mounting height and 4.9 percent at 17 ft.  For speed accuracy, its absolute average percent difference between sensor and loops was 4.4 percent at 21 ft and 3 percent at 17 ft.  In summary, the triple-technology detector showed excellent performance, and its installation and calibration were simple (23).

Autoscope Solo

The Autoscope Solo is a video imaging system whose cameras can be mounted either overhead or to the side of the road.  MinnDOT tests of the Autoscope mounted 30 ft over the center of the lanes indicated excellent performance.  The absolute percent volume difference between the sensor data and loop data was under 5 percent for all three lanes.  The detector also performed well for speed detection:  the absolute average percent difference was 7 percent in lane one, 3.1 percent in lane two, and 2.5 percent in lane three.  For mounting locations beside the roadway, the detector performed best when mounted high and close to the roadway (23).

Autoscope Solo Pro

The Autoscope Solo Pro is the latest version of the integrated camera and processor from ISS.  TTI tested this detector both in College Station on S.H. 6 (all low- to moderate-volume free-flow conditions) and in Austin on I-35 (high-volume with some stop-and-go traffic).  The results reported in this paper come from the I-35 testbed and are based primarily on 5-minute samples of count and speed data.  The I-35 site has five southbound lanes with lane 1 (the median lane) being farthest from the detector.  Tests placed the Solo Pro on a pole 35 ft above the pavement and 6 ft from the nearest lane (14). 

 

The Autoscope Solo Pro count accuracy was within 5 to 10 percent of the baseline counts during free-flow conditions, but accuracy generally diminished in all lanes when 5-minute interval speeds dropped below 40 mph, especially during stop-and-go conditions.  On all four monitored lanes, it overcounted during free flow, but almost always within 10 percent of baseline counts.  During the peak periods, however, it undercounted.  On lane 1, its error was always within 10 percent.  On lane 2, about half of its undercounts were within 10 percent and half were between 10 and 20 percent.  On lane 3 (closer to the camera), two-thirds of its undercounts were within 10 percent and one-third were from 10 to 20 percent of baseline counts.  On lane 4, nine of 10 intervals were within 10 percent and one of 10 was between 10 and 20 percent.  Speeds were almost always within 0 to 3 mph of the baseline system.  Its 15-minute cumulative occupancy values differed from loops by as much as 3.9 percent, but during most intervals the difference was less than 1 percent (14).

Autosense II

The Autosense II by SEO is an active infrared sensor that monitors a single lane and must be mounted over the lane at a height between 19.5 and 23 ft.  The MinnDOT tests of volume indicated excellent agreement with the baseline inductive loop system.  The absolute percent difference between sensor data and loop data averaged 0.7 percent, which is within the accuracy level of loops.  The 24-hour tests indicated that its absolute percent difference of average speed between the sensor and the baseline system was 5.8 percent.  The sensor consistently overestimated speed.  The sensor performed consistently during the entire six months of continuous testing (23).

ECM Loren

MinnDOT tests of the ECM Loren microwave detector indicated that it did not function properly.  It is a relatively new detector and needs further development (23). 

Iteris Vantage

The Iteris Vantage had the highest standard deviation of count differences between baseline and test device during free flow of all devices recently tested by TTI, indicating that its counts were more erratic than those of other devices.  Like the Autoscope, the Iteris undercounted during peak periods and overcounted during free flow.  In lane 1, 95 percent of its counts were within 12 percent of baseline counts.  In lane 2, three-fourths of its counts were within 20 percent of baseline and one-fourth were between 20 and 40 percent of baseline.  In lane 3, its count performance was better, with 95 percent of the count intervals no more than 10 percent different from baseline counts.  It was not monitored in lane 4.  Free-flow results were very similar to peak results.  The standard deviation of speed differences between baseline and test device for the Iteris was among the lowest of the devices tested on all but one lane.  The Iteris speed estimates were almost always within 5 mph during both peak and off-peak periods, with a few intervals erring by as much as 15 mph on one lane.  The higher errors were hypothesized to be a function of calibration.  Of the three non-intrusive devices tested for occupancy output in lanes 3 and 4, the Iteris Vantage was the second most accurate.  Its 15-minute cumulative occupancy values differed from loops by as much as 8.1 percent, but during most intervals the difference was less than 6 percent (14).

Other considerations for the Iteris Vantage include its relative newness for freeway detection.  This newness is a factor to consider, since most new devices need modifications following their release for public use; it could therefore become an even better detector as the manufacturer makes further refinements.  One specific problem identified in this research is that it loses calibration after a short time (14).

Peek ADR-6000

TTI tested the new Peek ADR-6000 vehicle classification system, partly because of its potential for simultaneously generating classification and speed output.  The ADR-6000 uses inductive loop signatures for its classification algorithm, so its speed, count, and classification results were expected to exceed previous experience.  TTI designed the test site architecture such that the Peek system contact closure output fed into a Local Control Unit (LCU) – a component of TxDOT’s legacy freeway monitoring system – which in turn communicated with the Austin District Traffic Operations Center.  The ADR stored classification data internally to be downloaded later to a site computer or to other computers via the Internet using FTP (14). 

 

The site selected for the test was the same I-35 testbed site noted earlier in downtown Austin, which frequently experienced stop-and-go traffic.  TTI developed and equipped a freeway testbed for this and future TxDOT-sponsored research with equipment cabinets, computers, baseline inductive loops, CCD cameras, and Digital Subscriber Line (DSL) communication.

 

TTI findings indicated that the ADR-6000 was very accurate as a classifier, counter, and speed detection device and as a generator of simultaneous contact closure output.  However, its recent introduction into the U.S. market and its adaptation from a toll application mean that it needs further refinement.  Table 1 shows the classification results for a dataset of 1,923 vehicles, indicating only 21 errors and a classification accuracy of 99 percent (ignoring Class 2 and 3 discrepancies).  This data sample occurred during the morning peak and included some stop-and-go traffic.  For count accuracy, the Peek missed only one vehicle in this same dataset (it accurately accounts for vehicles changing lanes).  Figure 1 shows the close agreement of the ADR with two other test systems, using one-minute speeds from the Peek, an overhead Doppler radar system, and an Autoscope Solo Pro.  The graphic indicates discrepancies only at slow speeds (below about 15 mph), where Doppler radar accuracy is known to decline and Autoscope speed accuracy decreases slightly.  Peek needs to continue refining the ADR-6000 to improve its stability in the harsh environment of a field equipment cabinet and to improve its user interface.  Its unit cost for future applications is currently unknown but is expected to be under $10,000, depending on the number of units purchased (14).
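As a quick check on the stated figure, the 99 percent classification accuracy follows directly from the totals reported in Table 1; the snippet below is a simple recomputation, not additional data.

    # Recomputing the classification accuracy from the Table 1 totals:
    # 1,923 vehicles with 21 classification errors.
    errors, total = 21, 1923
    print(round((total - errors) / total * 100, 1))   # -> 98.9, i.e., ~99 percent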

 

The future of the ADR-6000 in Texas and elsewhere in similar applications is expected to be a function of its cost, the willingness of agencies to continue installing inductive loops, and the willingness of multiple agencies to develop agreements to share maintenance responsibilities.  Its ability to serve this dual role is expected to be a positive factor in its installation, especially at more demanding locations with extremely high volumes, where both traffic operations and traditional data needs can be served.

Table 1.  Peek ADR-6000 Classification Accuracy Comparison

                           Vehicle Classification
                 1     2    3   4   5   6   7   8   9  10  11  12  Total  Errors
Lane 1 Count     0   330  118   1   9   0   0   2  15   0   1   0    476
   Errors        0     0    0   0   1   0   0   0   2   0   0   0             3
Lane 2 Count     0   299   84   0  16   3   1  11  23   0   1   0    438
   Errors        2     1    -   3   1   -   -   -   1   -   -   -             8
Lane 3 Count     2   306   96   1  11   3   0   7   6   0   0   0    432
   Errors        -     1    -   -   2   1   -   -   1   -   -   -             5
Lane 4 Count     0   312   88   1  14   1   0   4   2   0   0   0    422
   Errors        -     -    1   1   1   1   -   -   -   -   -   -             4
Lane 5 Count     0   106   36   0   5   3   0   0   5   0   0   0    155
   Errors        -     1    -   -   -   -   -   -   -   -   -   -             1
Totals           4  1356  423   7  60  12   1  24  55   0   2   0   1923
Total Errors     2     3    1   4   5   2   0   0   4   0   0   0            21

(A dash indicates no errors recorded in that class.)

Source:  Reference (14)


Figure 1.  Speed Accuracy of the ADR-6000

Source:  Reference (14)

 

RTMS by EIS

Results of TTI research indicate that the RTMS is more accurate in both counts and speeds in the overhead position, although it covers only one lane from overhead.  The more popular application is sidefire, so the following discussion focuses on its sidefire accuracy.  In sidefire, the RTMS can generate speeds and counts for five or more lanes with reasonable accuracy.  Its advantages also include ease of setup, a mounting height of only 17 ft above the roadway, and a good user interface.  Its coverage and initial cost make the RTMS an economical means of monitoring several lanes.  In fact, in previous research, TTI found it to have the lowest life-cycle cost for freeway applications of the detectors included in that research (13).

 

TTI findings based on RTMS serial output indicate that the detector’s count accuracy was best on lanes 2, 3, and 4, where its counts were almost always within 5 percent of loop counts.  On lane 1, its counts were always within 10 percent of loops during the off-peak periods.  During peak periods on all lanes, RTMS counts varied more from baseline counts than during off-peak periods, but it was still usually within 10 percent.  Speed estimates by the RTMS in sidefire were usually within 5 to 10 mph of baseline speeds during the off-peak.  This research did not include occupancy tests on the RTMS (14).

 

The RTMS is an even more accurate count device in the overhead position, but it only covers one lane.  In TTI tests, the overhead RTMS generated excellent speeds until prevailing traffic speeds dropped below about 15 mph.  It is a mature product and is not significantly affected by weather or lighting conditions (14).

SAS-1 by SmarTek

The SAS-1 is a passive acoustic detector that monitors vehicular noise (primarily tire noise) as vehicles pass the detection area.  The detector can monitor as many as five lanes and must be oriented in a sidefire position.  Precise alignment is not critical because the sensor covers a wide area.  Heights recommended by the vendor range from 25 ft to 40 ft, and the recommended offset range is 10 ft to 20 ft.  Higher mounting positions can reduce the effects of occlusion in multiple-lane applications.  MinnDOT tests found that the absolute percent volume differences for lanes two and three were under 8 percent at all test heights, and between 12 and 16 percent for lane one at heights less than 30 ft.  The detector provided good results under free-flow traffic but undercounted during congested flow with slow speeds.  For 15-minute intervals, its absolute percent differences were between 0 and 5 percent during off-peak periods and between 10 and 50 percent during congested periods.  For speed accuracy, the SAS-1 showed an absolute average percent difference under 8 percent for most mounting locations and between 12 and 16 percent for lane one at heights less than 30 ft.  These tests concluded that the optimal installation position has equal vertical height and horizontal offset between the sensor and the centerline of the monitored lanes (45 degrees from horizontal) (23).

 

TTI research found that the SAS-1 predominantly undercounted in both peak and off-peak conditions.  In lane 1, all time intervals showed counts below the baseline system, in the range of zero to 20 percent.  In lane 2 during the peak period, two-thirds of its undercounts were between zero and 10 percent below baseline counts; during the off-peak, 80 percent of its time intervals were undercounts and 20 percent were overcounts, by as much as 30 percent over baseline counts.  In lane 3 during the peaks, 80 percent of its time intervals were undercounts (zero to –10 percent) and 20 percent were overcounts (zero to 5 percent).  During the off-peak on lane 3, 95 percent of its time intervals were undercounts (zero to –25 percent) while 5 percent were overcounts (zero to 30 percent).  Its counts in lane 4 were undercounts in both peak and off-peak periods, ranging from zero to –15 percent in both cases (14).

 

The SAS-1 speed estimates were within 5 to 10 mph of baseline during some peak periods but differed by as much as 20 to 25 mph in others.  Free-flow speed estimates were usually within 5 mph of baseline speeds.  Its 15-minute cumulative occupancy values differed from loops by as much as 14.7 percent, but during most intervals the difference was less than 4 percent.  Heavy rain caused a significant reduction in SAS-1 detection accuracy.  In summary, the SAS-1 has undergone many improvements and performed well in free-flowing traffic, but its slow-speed accuracy and degraded performance in rain need to be addressed (14).

Traficon NV

MinnDOT tests mounted the Traficon video image detector directly over the lanes at heights of 21 ft and 30 ft, facing downstream.  The preferred orientation faces oncoming vehicles, but site features precluded this orientation.  At the 21-ft height, the absolute percent difference between the sensor data and loop volume data was under 5 percent for all three lanes.  At the 30-ft height, its off-peak performance was similar, but it undercounted during congested flow, with the absolute percent difference for some 15-minute intervals ranging from 10 percent to as high as 50 percent.  Suspected reasons for the reduced accuracy were snow flurries and sub-optimal calibration.  Its speed accuracy at 21 ft indicated good performance:  its absolute average percent difference was 3 percent in lane one, 5.8 percent in lane two, and 7.2 percent in lane three.  During snowfall, its speed accuracy declined to a range of 8.9 to 13 percent (23).

3M Microloops

The 3M system consisted of three components:  Canoga Model 702 Non-Invasive Microloop probes, Canoga C800 series vehicle detectors, and 3M ITS Link Suite application software.  The Microloop probes can monitor traffic from a three-inch non-metallic conduit 18 to 34 inches below the road surface or from underneath a bridge structure.  Installers must use a magnetometer underneath bridges to determine proper placement of the probes; otherwise, optimum performance requires trial and error.  Probes installed in a “lead” and “lag” configuration under pavements or bridges can monitor speeds by creating speed traps in each lane (a worked speed-trap sketch follows this paragraph).  One requirement of this system is that the probes remain relatively vertical, so keeping the horizontal bores straight is critical; probes placed in a non-vertical orientation can lead to speed errors.  MinnDOT tests under pavement indicated excellent volume and speed results.  The absolute percent volume difference between sensor and baseline was under 2.5 percent, which is within the accuracy capability of the baseline loop system.  For speeds, the test system generated 24-hour data with an absolute percent difference of average speed between baseline and test system from 1.4 to 4.8 percent for all three lanes (23).
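The lead/lag arrangement works as a conventional speed trap:  a vehicle’s speed is the probe spacing divided by the time between its lead and lag detections.  The sketch below illustrates the arithmetic with hypothetical spacing and timestamps; it is not 3M’s algorithm.

    # Minimal sketch (hypothetical values): speed from a lead/lag
    # Microloop speed trap.  speed = probe spacing / traversal time.

    FT_PER_SEC_TO_MPH = 3600.0 / 5280.0

    def trap_speed_mph(spacing_ft, t_lead_s, t_lag_s):
        traversal_s = t_lag_s - t_lead_s     # time between detections
        return spacing_ft / traversal_s * FT_PER_SEC_TO_MPH

    # 16-ft probe spacing, detections 0.182 s apart:
    print(round(trap_speed_mph(16.0, 10.000, 10.182), 1))   # -> 59.9 mph

A non-vertical probe effectively shifts its detection point, distorting the traversal time and hence the speed, which is consistent with the caution above about keeping the bores straight.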

 

At a relatively low- to moderate-volume site in College Station, Texas, TTI found that, over a six-day count period, 3M Microloops were almost always within 5 percent of baseline counts.  In the right lane, all except two of the 330 15-minute intervals were within 5 percent of baseline counts; the remaining two were within 10 percent.  Therefore, Microloop counts were within 5 percent of baseline counts 99.4 percent of the time in the right lane (dual probes).  In the left lane (single probes), 94.5 percent of the 15-minute intervals were within 5 percent of baseline, 4.5 percent were between 5 and 10 percent, and 1.0 percent differed from baseline by more than 10 percent (12).

 

Table 2 summarizes performance results of MinnDOT’s Phase II tests, while Table 3 summarizes selected TTI data collected during off-peak, free-flow, daylight, and dry-pavement conditions.  TTI took a random single block of time using 5-minute data intervals to develop this summary (except that the RTMS count data were from 15-minute intervals).  The analysis took the absolute value of the percent difference for each selected interval, summed these percent differences, and divided by the total number of intervals (a short sketch of this computation follows).  Table 4 summarizes detector costs based on MinnDOT research.
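For clarity, the accuracy statistic used in Tables 2 and 3, the mean of the absolute percent differences between sensor and baseline intervals, can be written out as below.  This is a sketch of the computation as described in the text; the interval counts are made up.

    # Sketch of the accuracy statistic described above: the average of
    # absolute percent differences between test-device and baseline
    # counts over fixed (5- or 15-minute) intervals.  Data are made up.

    baseline = [120, 135, 128, 140]    # baseline loop counts per interval
    sensor   = [118, 139, 125, 146]    # test-device counts per interval

    pct_diffs = [abs(s - b) / b * 100.0 for s, b in zip(sensor, baseline)]
    print(round(sum(pct_diffs) / len(pct_diffs), 1))   # -> 2.8 percent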

Table 2.  Summary of MinnDOT Detector Test Results (1)

Sensor               Technology        Mount Location  Lane  Vol. Accuracy (2)  Speed Accuracy (2)
ASIM IR 254          PIR               OH                1       10.0%              10.8%
ASIM DT 272          PIR/Ultrasonic    OH                1        8.7%               N/A
                                       Sidefire          1        0.8%               N/A
ASIM TT 262          PIR/Ult/Radar     OH                1        2.8%               4.4%
ISS Autoscope Solo   VID               Sidefire          1        2.3%               5.7%
                                                         2        2.7%               6.0%
                                                         3        2.0%               7.4%
                                       OH                1        2.2%               7.0%
                                                         2        1.5%               3.1%
                                                         3        1.6%               2.5%
SEO Autosense II     Active Infrared   OH                1        0.7%               5.8%
SmarTek SAS-1        Acoustic          Sidefire          1       12.0%               5.4%
                                                         2        6.7%               6.3%
                                                         3        7.3%               4.8%
Traficon NV          VID               Sidefire          1        3.4%               7.7%
                                                         2        1.9%               4.4%
                                                         3        3.7%               2.3%
                                       OH                1        4.4%               3.3%
                                                         2        2.7%               5.8%
                                                         3        4.8%               7.2%
3M Microloop         Magnetic          Under Pvmt        1        2.4%               4.9%
                                                         2        2.5%               2.2%
                                                         3        2.3%               1.4%
                                       Under Bridge      1        1.2%               1.8%

Source:  Reference (23)

(1) The results in this table represent a single test at an optimal mounting location for each sensor.

(2) Volume and speed accuracy are measured by the absolute percent difference between sensor data and baseline loop data in 15-minute intervals.

Table 3.  Non-Intrusive Detector Test Results Based on Selected TTI Data (1)

Sensor                   Technology  Mount Location  Lane  Vol. Accuracy (2)  Speed Accuracy (2)
EIS RTMS                 Radar       Sidefire          1        6.1%               5.9%
                                                       2        2.0%               3.4%
                                                       3        2.0%               2.6%
                                                       4        1.3%               4.7%
ISS Autoscope Solo Pro   VID         Sidefire          1        2.7%               0.8%
                                                       2        2.8%               1.5%
                                                       3        3.5%               1.8%
                                                       4        2.1%               3.1%
                                                       5        2.8%               2.1%
SmarTek SAS-1            Acoustic    Sidefire          1        6.7%               4.8%
                                                       2        5.9%               3.8%
                                                       3        6.8%               3.4%
                                                       4        5.8%               3.9%
                                                       5        4.0%               4.7%
Iteris Vantage Pro       VID         Sidefire          1       12.5%               5.4%
                                                       2        5.1%               2.6%
                                                       3        7.3%               1.2%

Source:  Reference (14)

(1) The results in this table represent a single test at an optimal mounting location for each sensor.

(2) Volume and speed accuracy are measured by the absolute percent difference between sensor data and baseline loop data in 5-minute intervals (15-minute volume intervals for the RTMS).

 

Table 4.  Detector Cost Summary

Vendor                          Detector                        Unit Cost and Notes
ASIM Technologies Ltd           ASIM IR 254                     $700
                                ASIM DT 272                     $700
                                ASIM TT 262                     $1,600
ISS, Traffic Control Corp.      Autoscope Solo (intersection)   $7,000 (includes Solo unit, Minihub, interface panel, and cable)
                                Autoscope Solo (freeway)        $6,155 (includes Solo unit, interface panel, and cable)
Schwartz Electro-Optics, Inc.   Autosense II                    $6,000 - $7,500, depending on configuration/functionality desired
SmarTek Systems, Inc.           SAS-1                           $3,500 ($3,080/unit in quantities over 10)
Traficon NV                     Traficon                        Contact vendor
3M NIM                          Canoga C822F detector (2 ch.)   $546 (installation kit $114 each; carriers $354/pkg of 50;
                                                                C30003 home-run cable $390/1,000-ft spool)
                                Canoga C824F detector (4 ch.)   $703.50
                                702 Microloop probe             $159.50/probe (plus $0.39/ft for lead-in cable)
                                701 Microloop probe             $137.50/probe (plus $0.39/ft for lead-in cable)

Source:  Reference (23)

Data Sharing Between Agencies and States

Budget cuts are causing agencies to seek alternate means of meeting data quality needs, one solution being to share data between agencies.  The Hampton Roads TMC currently shares video with the city of Norfolk, and there are plans to share with other jurisdictions in this seven-city metropolitan area.  Norfolk has its own TMC, and there is mutual benefit in sharing each other’s data.  Hampton Roads has interfaced with Norfolk and plans to share video, voice, and data with the other six cities.  Because it currently has only a video-sharing agreement, Hampton Roads is now investigating sharing traffic data as well.  Under the video agreement, each agency has access to the other’s camera feeds and to control of the cameras on a priority basis; if another organization has a higher priority, it takes control of a camera (15).

 

The New England states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont have cooperated to help each other and share transportation data.  Applications include inventory, travel monitoring, and performance data used by the states and reported to FHWA.  By working together for many years, these states have improved data quality in a more efficient and cooperative environment (24).

 

ARTIMIS supplies data to planning agencies within the Ohio DOT, the Kentucky Transportation Cabinet, and the FHWA Mobility Monitoring project.  These agencies perform their own analyses of data quality.  The data can be provided in several formats to suit the customer; the formats typically used are ASCII text files, FHWA Type 3 and C records, and new record types developed by ODOT and KYTC (Types S, V, and L).  ARTIMIS also shares data with the local MPO (Ohio-Kentucky-Indiana Regional Council of Governments), the City of Cincinnati Traffic Engineering office, and local FHWA contacts.  The ARTIMIS staff makes the data available on an internal FTP site for their use.  The ASCII text files and the Type 3, S, and V records contain some simple flags that indicate completeness of the data.  There are currently no formal arrangements to share personnel or other resources to fix problems (25).

Summary

This white paper has identified innovative approaches for improving data quality through Quality Control:  innovative contracting methods, standards, training for data collection, data sharing between agencies and states, and advanced traffic detection techniques.

 

The states of Virginia and Ohio are using innovative contracting methods to improve data quality.  VDOT’s Hampton Roads Traffic Management Center hires contractor personnel who are supervised by VDOT personnel.  In another example, VDOT has established performance-based lease criteria for payment of traditional data collection services; contractor compensation is based on the amount of acceptable data submitted by the contractor.  Ohio DOT is planning an innovative venture by executing a two-year statewide task-order agreement for maintenance of traffic monitoring equipment for planning or historical data.

 

There are many reasons for adopting data and equipment standards, not the least of which is facilitating sharing of data across agencies.  The U.S. DOT ITS Standards Program is encouraging development of standards to facilitate interoperability of ITS systems, including traffic data collection systems.  In its current form, the forthcoming ASTM standard includes, among other items, device classifications, performance requirements, user requirements for tests, and test methods.  In some European countries, all equipment purchased for national traffic data collection must utilize the same formats and protocols for communication purposes.  The process has increased the quality and accuracy of the data collected, decreased the effort needed to transfer data between agencies or offices, and increased the reliability of field equipment, but the overall standardization effort has increased equipment costs. 

 

Advanced traffic data collection techniques include the oldest technology, inductive loops. Results from studies conducted in Minnesota, New York, Oregon, and Washington indicate that improper sealing, pavement deterioration, and foreign material in the saw slot were most prominent in explaining loop failure.

 

Of the detectors recently tested by TTI and MinnDOT, the multi-lane detectors that are most competitive from a cost and accuracy standpoint are the Autoscope Solo Pro, Iteris Vantage, RTMS by EIS, SAS-1 by SmarTek, Traficon NV, and 3M Microloops.  Based upon initial cost information, the SAS-1 and RTMS are less expensive than other units, but their count and speed accuracies were sometimes inferior to those of other, more expensive devices.  Video imaging systems also provide an image of traffic, which is often useful in spot-checking traffic conditions.  The initial cost of 3M Microloops is relatively high (due largely to horizontal boring costs when installed under pavements), but their life-cycle costs should make them competitive with other technologies.  Of the video imaging systems tested, the Iteris Vantage is the newest; it has potential but needs further development.  The count accuracy of all non-intrusive devices tested by TTI declined when 5-minute average speeds dropped below about 30 mph (possibly including some stop-and-go conditions).  The Peek ADR-6000 is a high-end classifier that is extremely accurate, but its recent introduction into the U.S. market means it needs further refinement.

 

 

REFERENCES

      1.     Traffic Detector Handbook, Institute of Transportation Engineers, Washington D.C., 1991. 

 

      2.     C. A. MacCarley, S. L. M. Hockaday, D. Need, and S. Taff, “Evaluation of Video Image Processing Systems for Traffic Detection,” Transportation Research Record 1360 -- Traffic Operations, Transportation Research Board, Washington, D.C., 1992.

 

      3.     L. A. Klein and M. R. Kelley, Detection Technology for IVHS, Volume 1 Final Report, FHWA-RD-96-100, Performed by Hughes Aircraft Company, Turner-Fairbank Research Center, Federal Highway Administration Research and Development, U.S. Department of Transportation, Washington, D.C., 1996.

 

      4.     Jet Propulsion Laboratory, Traffic Surveillance and Detection Technology Development, Sensor Development Final Report, Federal Highway Administration, U.S. Department of Transportation, Washington, D.C., March 1997.

 

      5.     L. A. Klein, Vehicle Detector Technologies for Traffic Management Applications, Part 2, ITS Online, The Independent Forum for Intelligent Transportation Systems, http://www.itsonline.com/, June 1997.

 

      6.     Minnesota Department of Transportation – Minnesota Guidestar and SRF Consulting Group, Field Test of Monitoring of Urban Vehicle Operations Using Non-Intrusive Technologies, Volume 4, Task Two Report:  Initial Field Test Results, Minnesota Department of Transportation - Minnesota Guidestar, St. Paul, MN, and SRF Consulting Group, Minneapolis, MN, May 1996.

 

      7.     Minnesota Department of Transportation - Minnesota Guidestar and SRF Consulting Group, Field Test of Monitoring of Urban Vehicle Operations Using Non-Intrusive Technologies, Volume 5, Task Three Report:  Extended Field Tests, Minnesota Department of Transportation - Minnesota Guidestar, St. Paul, MN, and SRF Consulting Group, Minneapolis, MN, December 1996.

 

      8.     J. Kranig, E. Minge, and C. Jones, Field Test for Monitoring of Urban Vehicle Operations Using Non-Intrusive Technologies, Report Number FHWA-PL-97-018, Minnesota Department of Transportation - Minnesota Guidestar, St. Paul, MN, and SRF Consulting Group, Minneapolis, MN, May 1997.

 

      9.     D. Woods, Texas Traffic Signal Detector Manual, Report No. FHWA/TX-90/1163-1, Texas Transportation Institute, Texas A&M University, College Station, TX, July 1992.

 

  10.     D. Middleton, D. Jasek, H. Charara, and D. Morris, Evaluation of Innovative Methods to Reduce Stops to Trucks at Isolated Intersections, Research Report FHWA/TX-97/2972-1F, August 1997.

 

  11.     J. Bonneson, D. Middleton, K. Zimmerman, H. Charara, and M. Abbas.  Intelligent Detection-Control System for Rural Signalized Intersections, Research Report FHWA/TX-02/4022-2, Texas Transportation Institute, Texas A&M University, College Station, TX, August 2002.

 

  12.     D. Middleton and R. Parker.  Initial Evaluation of Selected Detectors to Replace Inductive Loops on Freeways, Research Report FHWA/TX1439-7, Texas Transportation Institute, College Station, TX, April 2000.

 

  13.     D. Middleton, D. Jasek, and R. Parker, “Evaluation of Some Existing Technologies for Vehicle Detection,” Research Report FHWA/TX-00/1715-S, Texas Transportation Institute, College Station, TX, September 1999.

 

  14.     D. Middleton and R. Parker.  Evaluation of Promising Vehicle Detection Systems, Research Report FHWA/TX-03/2119-1, Draft, Texas Transportation Institute, College Station, TX, October 2002.

 

  15.     Telephone interview with Mr. Stephany Hanshaw, Hampton Roads Traffic Management Center, Virginia Department of Transportation, October 18, 2002.

 

  16.     Telephone interview with Mr. Tom Schinkel, Virginia Department of Transportation, October 1, 2002.

 

  17.     Telephone interview with Ohio Department of Transportation, November 2002.

 

  18.     Web Site for ITS Standards, www.its-standards.net, November 2002.

 

  19.     Web Site for NTCIP Standards, www.ntcip.org, November 2002.

 

  20.     Standard Specification and Test Methods for Highway Traffic Monitoring Devices, The American Society for Testing and Materials, Review Copy:  Version C for E17.52, Draft December 2002.

 

  21.     FHWA Study Tour for European Traffic Monitoring Programs and Technologies, FHWA’s Scanning Program, U.S. Department of Transportation, Federal Highway Administration, Washington D.C., August 1997. 

 

  22.     Traffic Detector Handbook, Draft 2, Federal Highway Administration, Washington, D.C., July 2002.

 

  23.     NIT Phase II Evaluation of Non-Intrusive Technologies for Traffic Detection, Final Report, Minnesota Department of Transportation, St. Paul, MN, September 2002. 

 

  24.     F. Orloski, “New England Data Quality Partnerships,” Presented at the North American Travel Monitoring Exhibition and Conference, Orlando, FL, May 2002. 

 

  25.     Interview with Mr. Scott Evans of ARTIMIS, October 2002.

 


APPENDIX B

 

 

 

INTERVIEWEE CONTACT LIST
AND INTERVIEW GUIDE


List of Interviewees

 


Glen Jonas (Operations)

Transportation Technology Group

Arizona DOT (located in traffic operations center)

Phoenix, AZ

Ph: (602) 712-6587

Gjonas@dot.state.az.us

 

Nick Thompson

Operations Manager, TMC

Minnesota Department of Transportation

Ph: 612-341-7269

nick.thompson@dot.state.mn.us

 

David Gardner

Manager, Traffic Monitoring Section

Ohio Department of Transportation

Ph: 614-752-5740

dgardner@dot.state.oh.us

 

Scott Evans

TRW/ARTIMIS

Ph: 513-564-6113

E-mail: scott.evans@trw.com

 

Kevin Barron

Virginia Department of Transportation

804-786-1278 

 

Catherine C. McGhee
Research Scientist Sr.
Virginia Transportation Research Council
Ph: 434-293-1973

Fax: 434-293-1990
Cathy.McGhee@VirginiaDOT.org

 

Kim Ferroni

Traffic Analysis Unit

Ph: 717-214-8685

Fax: 717-783-9152

kferron@dot.state.pa.us


Dennis Starr

Transportation Planning Specialist

Traffic Analysis Unit

Ph: (717) 787-4574

Fax: (717) 783-9152

Dstarr@dot.state.pa.us

 

Martin Knopp

Director, ITS

Utah DOT

Ph: 801-965-4894

Fax: 801-965-4338

E-mail: mknopp@dot.state.ut.us

 

Stephany Hanshaw

Smart Traffic Center Facility Manager

Hampton Roads

Ph: 757-424-9907

Fax: 757-424-9911

E-mail: hanshaw_sd@vdot.state.va.us

 

Tom Schinkel

Virginia DOT, Planning

Ph:  04-225-3123

Fax: 804-371-0190

Tom.Schinkel@VirginiaDOT.org


TRAFFIC DATA QUALITY WORKSHOP PROJECT

Interview Guide

Purpose Of Study And Interview Objectives

Recent research and analysis have identified several issues regarding the quality of traffic data available from intelligent transportation systems (ITS) for transportation operations, planning, or other functions.  The Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) are generating large amounts of traffic data that could be used in other applications, such as performance monitoring.  The ITS Archived Data User Service (ADUS) promotes reuse of traffic data collected for real-time operations for potential transportation planning applications. 

 

However, initial experience with ITS traffic data has identified serious data gaps and data quality deficiencies.  Data can be edited after the fact to remove errors but the problem still remains at the source.  It is recognized that the quality of the traffic data and the information produced from the data are critical factors that affect the abilities of transportation agencies to ensure the security of transportation and the management of the nation’s transportation resources.  The focus of data quality is on establishing a consistent methodology for ensuring that data are managed so that a measure of reliability is sustained. 

 

Various factors affect data quality, including coverage deficiencies, data compatibility across different software/hardware platforms, ensuring that data elements are efficiently matched with coordinated location and time elements, installation and maintenance issues, and funding constraints.

         

The purpose of this interview is to gather information to help address issues associated with traffic data quality and to define an action plan with work items that can be executed through the U.S. Department of Transportation (DOT), stakeholder organizations (e.g., American Association of State Highway and Transportation Officials (AASHTO), ITS America), State agencies, and private industry. 

General

The purpose of this section is to gather background information on the interviewee and types of traffic data used by the organization.

 

1.    Name:

2.    Official Title/Position:

3.    Name of agency:

4.    What is your agency’s major traffic-related activity?

5.    Describe the types of traffic data used by your organization.

6.    Describe your sources of traffic data.  Does your agency collect all of its needed traffic data?

Traffic Data Collection and Sharing Practices

7.    What types of data do you collect (e.g., volumes, speeds, occupancies, travel times)?

8.    What traffic monitoring equipment has the potential to be highly accurate and cost-effective, and would you recommend it to other agencies?  Describe the accuracy test and the test outcome for this device.  How many of these units does your agency own?

9.    For what applications does your agency use non-intrusive traffic monitoring devices?

10.   Describe the inspection and maintenance process for newly installed traffic monitoring equipment:  How is equipment maintenance handled?  Who does it?  Is there a maintenance contractor?  If so, was maintenance part of the original purchase agreement?  How well is the equipment maintained?  How quickly are problems identified and corrected?  What percentage of detectors is down at any point in time?

11.   Is there a formal policy of maintaining either equipment to a performance standard or data to a quality standard?

12.   Do you have traffic monitoring equipment that can simultaneously serve both real-time and historical monitoring needs in all traffic conditions?  Describe the equipment model number, cost, data output format, performance in different weather and lighting conditions, and any other pertinent information you have discovered.  What are its strengths and weaknesses?

13.   Is your agency required to purchase on a low-bid basis?  If not, how is purchasing done?

14.   Does your agency (or company) require or provide a warranty period to ensure that equipment performs according to your needs?  What are the length and stipulations of the warranty?

15.   How does your agency check newly purchased traffic monitoring equipment to determine that it meets the purchase specification for vehicle speed, vehicle counts, and lane occupancy (and perhaps other parameters)?

16.   What contractor incentives does your agency use to optimize equipment performance?

17.   Do you in any way acknowledge and reward excellence in the operation of traffic monitoring equipment?  How?

18.   What items are covered in the training of agency/contractor personnel to ensure accurate and consistent operation of traffic monitoring equipment?

19.   Other than obvious/glaring equipment problems, do you review the data for quality/accuracy?  Describe.

20.   Are the data from traffic monitoring equipment required to fit a particular data protocol?  Is the protocol the same for real-time data as for historical data?

21.   If you share data with other units or agencies:  What is the institutional arrangement?  Is there a process for communicating quality problems they may have with the data?  Is there a provision to share resources, either monetary or personnel, to fix data problems?

22.   If you do not share data with other units or agencies:  Have other units or agencies expressed interest?  What are the barriers to sharing?  Technical?  Institutional?

23.   For each application, have you ever had to duplicate data collection because the original data were found to be of insufficient quality?  Describe.

Defining/Quantifying Traffic Data Quality

24.   Does your agency define traffic data quality, either informally (as implied through certain data collection procedures/sample sizes) or formally (as written in contracts or performance reporting requirements, etc.)?

25.   If there has been a formal study of traffic data quality, is it possible to obtain the report?

26.   What “attributes” are used to describe data quality?  Examples might include accuracy, timeliness, completeness, coverage, downtime, cost to review/revise, etc.

27.   Has your agency developed any measures or methods to quantify data quality?  If so, what measures or methods are used?  Are there different quality measures or standards for different applications?

28.   If YES to 27, how does your agency use these quantitative data quality measurements or methods?

29.   If YES to 27, has your agency defined acceptable levels for these data quality measures?  If so, what are the acceptable levels for the different data quality measures?  How were these “acceptable levels” determined?

30.   What data quality control procedures do you apply (if not answered above)?  Is software used?

31.   How do you ensure that field equipment is operating properly and generating accurate data?  How often does your agency perform this check?

APPENDIX C

 

 

 

REGIONAL WORKSHOP ATTENDEES


Ohio DOT, Columbus, Ohio – March 11, 2003

Chris Allison                             Pennsylvania Department of Transportation

Diane Boso                              Ohio Department of Transportation Technical Services

Rob Bostrom                            Kentucky Transportation Cabinet

Joe Cole                                   Northeast Ohio Areawide Coordinating Agency (NOACA)

Scott Evans                              ARTIMIS (Advanced Regional Traffic Interactive Management & Information System)

Edward Fekpe                         Battelle

Kim Ferroni                              Pennsylvania Department of Transportation

David Franke                           Kentucky Transportation Cabinet, Division of Planning

Ralph Gillmann                         FHWA Office of Policy

Deepak Gopalakrishna             Battelle

Gary Grano                              Northeast Ohio Areawide Coordinating Agency (NOACA)

Dan Inabnitt                              Kentucky Transportation Cabinet, Division of Planning

Steven Jessberger                     Ohio Department of Transportation Technical Services

David Kuebler                          Northeast Ohio Areawide Coordinating Agency (NOACA)

Emiliano Lopez                         FHWA – NRC East

Tony Manch                             Ohio Department of Transportation Technical Services

Kirk Mangold                           Indiana Department of Transportation

Rich Margiotta                          Cambridge Systematics Inc.

Scott McGuire                          FHWA-Tennessee

Jim McQuirt                             Ohio Department of Transportation Technical Services

Dan Middleton                         Texas Transportation Institute (TTI)

Greg Morris                             FHWA – West Virginia Division

Gregory Oliver                         Delaware Department of Transportation, Planning Division

Dennis O’Neil                          Ohio Department of Transportation, District 12

Andrew Pierson                        URS Corporation

James Pol                                 FHWA, ITS Joint Program Office (JPO)

Mala Raman                             Battelle

Stew Sonnenberg                     FHWA-Ohio

Amy Slagle                               AMATS (Akron Metropolitan Area Transportation Study)

Dennis Starr                             Pennsylvania Department of Transportation

Dave Stewart                           Ohio Department of Transportation Technical Services

Darren Swingle                         Ohio Department of Transportation Technical Services

John Tolle                                 FHWA NRC - Midwest

Cheng-I Tsai                            Ohio-Kentucky-Indiana Regional Council of Governments (OKI)

Shawn Turner                           Texas Transportation Institute (TTI)

Debbie Watson                        Kentucky Transportation Cabinet, Division of Planning

Jeff Young                                Kentucky Transportation Cabinet, Division of Planning

 

 

 

 


Utah DOT, Salt Lake City – March 13, 2003

Joe Avis                                   CalTrans

Kelli Bacon                              Utah Department of Transportation

Wayne Bennion                        WFRC

Brian Burk                                Texas Department of Transportation

Stan Burns                                Utah Department of Transportation

Mack Christensen                     Utah Department of Transportation

Dawn Doyle                             Texas Department of Transportation

Edward Fekpe                         Battelle

Michael Forbis                         Washington Department of Transportation

Deepak Gopalakrishna             Battelle

John Grant                                Transcore

Mark Hallenbeck                      TRAC-UW

Blake Hansen                           Transcore

Mike Kaczorowski                   Utah Department of Transportation

Martin Knopp                          Utah Department of Transportation

Gary Kuhl                                Utah Department of Transportation

Sean Lingwall                           Salt Lake City

Richard Manser                        Utah DOT

Rich Margiotta                          Cambridge Systematics Inc.

Peter Martin                             University of Utah

Joe McBridge                           Utah Department of Transportation

Rick McKeague                       Utah Department of Transportation

Bryan Meenen                          Salt Lake City

Dan Middleton                         Texas Transportation Institute (TTI)

Mark Parry                              Utah Department of Transportation

Joseph Perrin                            University of Utah

Karl Petty                                 CCIT

James Pol                                 FHWA, ITS Joint Program Office (JPO)

Mala Raman                             Battelle

Russell Robertson                     FHWA

John Rosen                               Washington Department of Transportation

Aleksander Stevanovic             University of Utah

Robert Stewart                         Utah Department of Transportation

J Max Tate                               FHWA

Lee Thobald                             Utah Department of Transportation

Shawn Turner                           Texas Transportation Institute (TTI)

Raelene Viste                           Idaho Transportation Department

Keith Wilde                              Utah Department of Transportation

Dian Williams                           Utah Department of Transportation

Qing Xia                                   Maricopa Association of Governments, Arizona


APPENDIX D

 

 

 

RELEVANT TRAFFIC DATA
QUALITY LITERATURE


D.1      Status of ITS Traffic Data for HPMS, Memo

 

D.2      Identifying the Scope of State Traffic Monitoring Activities

 

D.3      Memo on Reporting of Length Based Vehicle Classification Data to the Highway Performance Monitoring System (HPMS)

 

D.4      Traffic Data Collection, Management and Reporting from ITS and Traditional Traffic Sites, White Paper

 

D.5      Quality Attributes used by Virginia Department of Transportation

 

D.6      Virginia Department of Transportation – Contracting Agreement for Traffic Data Quality – Excerpt

 

D.7      Inductive Loop Detector Failures, Chapter 5, Traffic Detector Handbook


D.1      Status of ITS Traffic Data for HPMS, Memo

Ralph Gillmann, HPPI-30, August 2002

 

On June 18, 2002, a memorandum was sent from the FHWA Office of Highway Policy Information to the FHWA Division Offices on “Traffic Data for the Highway Performance Monitoring System (HPMS).”  The body of the memorandum states:

 

At the recent North American Travel Monitoring Exhibition and Conference (NATMEC), we showed a map of traffic detectors used for intelligent transportation systems (ITS) in the Cincinnati, Ohio area.  The map also showed the locations of automatic traffic recorders (ATRs) in the same area.  The point was to demonstrate the opportunity for ITS traffic detectors to provide traffic data for HPMS reporting.  For example, the annual average daily traffic (AADT) on an HPMS segment could be determined from an ITS detector on that segment rather than factoring a short count or previous year figure.  This would improve the quality of HPMS traffic data significantly.  It also would provide cost savings and reduced staffing requirements for the States’ traffic monitoring programs.

 

There are ITS deployments in every State and many could be used for HPMS reporting purposes as well.  While there are concerns about incorporating data from ITS detectors into traditional counting programs, they are a tremendous resource for traffic data collection, especially in urban areas where it is difficult to get traffic counts.  We fully support the use of ITS detectors for multiple purposes which is the goal of the ITS Archived Data User Service (ADUS).  We are asking the Divisions to provide us with the following information about the States’ use of ITS traffic data for HPMS reporting:

 

1.      Is the State traffic monitoring office aware of ITS detectors?

2.      Is the State using ITS detectors for HPMS reporting purposes?

3.      If the State is not yet using ITS detectors for HPMS, why not?

 

If you have any questions about this, please contact Mr. Ralph Gillmann of my staff at 202-366-5042 or Ralph.Gillmann@fhwa.dot.gov.

 

Several respondents asked for clarification about the meaning of “ITS detectors.”  Some thought it referred to the detector technology.  Our response was that they are traffic detectors that are used as part of an ITS project or were paid for by ITS funds.

 

The intent of the first question was to determine whether or not there are ITS traffic detectors in the State, at least as far as the State’s traffic monitoring office is aware.  If the Division said there were not any ITS detectors in the State, the answer was recorded as a No.  In four cases, the State answered Yes, but then said that ITS detectors were not available at this time.  These answers were changed to a No since that reflects the intent of the question.  The North Carolina contact was not sure if they had any ITS detectors, but since detectors were installed for the CARAT ITS project in Charlotte, the answer was recorded as a Yes.

 

The three questions are of course related:  If the answer to the first question is No, then the answer to the second question must be No and the answer to the third question is that ITS detectors do not exist in the State.  On the other hand, if the answer to the first two questions is Yes, then the third question doesn’t apply.  So there are three cases to consider depending on the answers to the first two questions:  Yes-Yes, Yes-No, and No-No.

 

Answers were available for 43 States.  The results were 14 States Yes-Yes, 16 States Yes-No, and 13 States No-No.  Percentages are shown in Figure D-1.

Figure D-1.  Answers to the First and Second Questions
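
The percentages in Figure D-1, and the shares quoted in the following paragraphs, follow directly from the Yes-Yes/Yes-No/No-No tallies in Table D-1.  As an illustration, the arithmetic can be reproduced with a short Python sketch (the tally values are taken from Table D-1):

```python
# Tallies of the 43 State answers to the first two questions (Table D-1).
responses = {"Yes-Yes": 14, "Yes-No": 16, "No-No": 13}

total = sum(responses.values())                      # 43 States responding
for pair, count in responses.items():
    print(f"{pair}: {count} States ({count / total:.0%})")

# States with ITS detectors available, and the share of those using them.
have_detectors = responses["Yes-Yes"] + responses["Yes-No"]
print(f"Detectors available: {have_detectors / total:.0%}")                      # ~70%
print(f"Of those, using for HPMS: {responses['Yes-Yes'] / have_detectors:.0%}")  # ~47%
```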

 

 

So one-third of these States are using some ITS traffic detectors to supply HPMS traffic data.  Several noted that the number of ITS detectors available was currently limited but was expected to increase in the future.

 

A plurality answered Yes-No and their most common reason for not yet using ITS traffic detectors for HPMS was that they’re still working on it.  Other answers were that the data quality was poor or that it’s still under consideration.

 

Thirty percent of these States currently have no ITS traffic detectors.  Several said they were willing to use them or expected to have them in the future.

 

Thus 70 percent of the States have ITS traffic detectors available and almost one-half of these States are currently using some of them for HPMS reporting purposes.

 

Table D-1 gives a summary of all the responses.  Overall, the responses were positive and showed that ITS traffic detectors are being considered for traffic monitoring and HPMS.  There is clearly a trend toward increasing use and this will likely become a standard practice in the future.

 

Table D-1.  State-by-State Summary of Responses

State             Question 1      Question 2      Question 3                   Division Contact
Alabama           Yes             No              Interested, under review     Alabama FHWA
Alaska            No*             No              No TMC                       Al Fletcher
Arizona
Arkansas                                                                       Gary DalPorto
California        Yes             No              Under development
Colorado          Yes             No              Working on it                Craig Larson
Connecticut       Yes             No              Poor data quality            Michael Chong
Delaware
DC                No              No              Willing                      Sandra Jackson
Florida           Yes             Yes             District 5                   Kwame Arhin
Georgia           Yes             No              Low accuracy                 Marcus Wilner
Hawaii            Yes             Yes             One site                     Jon Young
Idaho             No              No              Plan to                      Scott Frey
Illinois          Yes             Yes             TSC                          Janis Piland
Indiana           Yes             Yes             Borman Expressway            Clem Ligocki
Iowa              No*             No              Have none                    Mark Johnson
Kansas            No              No              Waiting for 2003             Stephen Faust
Kentucky          Yes
Louisiana
Maine             No              No              Willing                      John Perry
Maryland
Massachusetts     No              No              Should in future             Ed Silva
Michigan          Yes             Yes             MITS
Minnesota         Yes             Yes             TMC                          Gerald Liibbe
Mississippi       Yes             Yes                                          Larkin Wellborn
Missouri          Yes             Yes             Branson; expect more         Jim Radmacher
Montana                                                                        Bob Burkhardt
Nebraska          No              No              Don't exist                  Stephen Burnham
Nevada            No              No              Intend to                    Randy Bellard
New Hampshire     No              No              Don't exist                  Martin Calawa
New Jersey
New Mexico        Yes             No              Working on it                Stan Mattingly
New York          Yes             Yes             Limited                      Tom Kearney
North Carolina    Yes             No              Funding                      Bill Marley
North Dakota      Yes             No              Waiting for ITS plan         Robert Griffith
Ohio              Yes             No              Probably next year           Stew Sonnenberg
Oklahoma
Oregon            Yes             No              Working on it                Kim Hoovestol
Pennsylvania      Yes             Yes                                          Eugene Olinger
Puerto Rico       No*             No              None available               Sam Herrera-Diaz
Rhode Island
South Carolina    Yes             Yes                                          David Morris
South Dakota      No              No              None available               Mark Hoines
Tennessee         No*             No              Not installed yet            Scott McGuire
Texas             Yes             No              Working on it                Kirk Fauver
Utah              Yes             No              Working on it                Harlan Miller
Vermont           Yes             Yes             One site                     Jim Bush
Virginia          Yes             No              Working on it                Jennifer DeBruhl
Washington        Yes             No              Seattle
West Virginia     Yes             Yes                                          Greg Morris
Wisconsin         Yes             Yes                                          John Berg
Wyoming           Yes             No              Under consideration          James Bonds
Total             30 Yes, 13 No   14 Yes, 29 No

* No is recorded even though the State said Yes because no ITS traffic detectors are available at this time.

 

 


Table D-2.  State-by-State Summary of Responses

Q1:  Is the State traffic monitoring office aware of ITS detectors?
Q2:  Is the State using ITS detectors for HPMS reporting purposes?
Q3:  If the State is not yet using ITS detectors for HPMS, why not?

Alabama – Q1: Yes.  Q2: No.
Q3: The State is interested in the potential for use of ITS detectors to contribute to the HPMS traffic data, but plans for installation of detectors are currently under review.

Alaska – Q1: No.  Q2: No.
Q3: Currently the Anchorage MPO does not have a Traffic Management Center and is not collecting or archiving data.  The Truck Enforcement Group will begin installing a WIM this summer; the data will be included in the state’s data warehouse for WIM data, which is currently under development.

California – Q1: Yes.  Q2: No.
Q3: Staff of UC Berkeley are currently developing a program for us to process PEMS data into our standard format for input into our database.  Once the data are in our database, we will be able to calculate AADTs.

Colorado – Q1: Yes.  Q2: No.
Q3: Working on it.

Connecticut – Q1: Yes (Traffic Monitoring is aware of the ITS detectors and conducted a thorough investigation of their value to the traffic monitoring program in 1997).  Q2: No.
Q3: In 1997 this office compared data from the ITS detectors with similarly positioned ATRs or road tube counters.  The output from the ITS detectors did not correspond closely enough with the ATR or road tube counts (both the ATRs and the tube counters are regularly checked for count accuracy) to lead us to further pursue the use of ITS detectors as an integral part of our counting program.  The office is considering additional data testing on the Department’s new ITS software once it is installed and operational.

District of Columbia – Q1: No.  Q2: No.
Q3: The State is not yet using ITS detectors for HPMS primarily for the same reason noted in Q1; however, it is willing to use them if the technology is made available.

Florida – Q1: Yes.  Q2: Yes, but only in FDOT District #5, and expanding.

Georgia – Q1: Yes.  Q2: No.
Q3: A research study has been initiated to determine the accuracy of traffic data from the State DOT’s Autoscope locations.  This study will be completed during FY 2003.  If the results of this study are favorable, the State DOT’s ITS data will be used to support the calculation of AADT for the traffic monitoring program, and therefore the HPMS.

Hawaii – Q1: Yes.  Q2: Yes.
Comment: There is only one site from which they can get useful traffic data, at the Halawa interchange on H-3.  The ITS data storage devices do not work at the site, so the traffic monitoring office disconnected and powered down the site, then hooked up its own portable ATRs to collect data there.  In essence, only the sensors of the ITS site were used by the traffic monitoring office.  There is also a large live-camera system on the State and county principal arterials in Honolulu; however, these cameras do not collect traffic data.  The camera feeds are live on the Internet and are used at the County’s traffic management center during workdays to monitor traffic conditions.  The public has access and can view traffic conditions before making their trips.  That is about all the system is used for at this time.

Idaho – Q1: No.  Q2: No.
Q3: We are not yet using any data from ITS traffic detectors for HPMS, but we have plans to do so.  Ada County Highway Department has a small traffic management center in Boise.  There is a joint ITS project underway to instrument the I-84/I-184 Flying Wye with traffic detectors.  That project will include some existing ATR sites along with several additional detectors.  The Wye is currently under construction, so it will be a while yet before we are able to collect all of the data.  We are also working with the ports of entry to get truck information from their weigh-in-motion sites.

Illinois – Q1: Yes (we do, and have done so since long before IVHS, ITS, ATMS, ATIS, ADUS, etc.; in fact, nearly since the start of real-time data collection using magnetic induction loops in about 160 centerline miles of the expressway system, in operation since 1960 by the IDOT Traffic Systems Center (TSC)).  Q2: Yes.
Comment: These data are “archived” in an ASCII file format after statistical summaries, including AADT, are computed.  The “archived” data are used by IDOT in the Illinois Roadway Information System (IRIS), which includes traffic statistics for roadway segments; IRIS also includes much other physical, geometric, control, and similar information about the roadway.  The archived traffic data are used to produce the AADT for about 60 HPMS sample sections on the expressways.  This does not include other HPMS traffic data such as truck information and K and D factors for these sections; to do so would require a complete year’s worth of the detailed base data as well as some technological innovations to get classification counts from the single-loop stations that predominate in the TSC.  The “archived” data are also used by the Chicago Area Transportation Study to produce a “Travel Atlas.”

Indiana – Q1: Yes.  Q2: Yes.
Comment: Indiana counts the Interstate system every two years.  This year that study utilized ITS sites on the Borman Expressway (I-80/I-94) to obtain 48-hour counts.  As other ITS sites come online they will be utilized in the same manner.  The ITS operations are utilizing counting equipment and software that meet their specific needs and are not necessarily compatible with the equipment/software in use in the traffic counting operations.  Currently there are no plans to incorporate ITS data into the traffic counting programs except for use of 48-hour volumes as part of the coverage count and/or biennial Interstate counting program.

Iowa – Q1: No.  Q2: No.

Kansas – Q1: No (the Traffic and Field Operations Unit and the ITS Unit, both part of the KDOT Bureau of Transportation Planning, are working together on the KC Scout ITS project, a freeway management system in the Kansas City bi-state metropolitan area; the design phase of the project is complete and the construction phase is under way; the Interstate 435 part of KC Scout is expected to be collecting traffic data in the summer of 2003, and data from the rest of the project, including Interstate 35, will not become available until the end of 2003 or the beginning of 2004).  Q2: No.
Q3: Data from ITS detectors will not be available until the summer of 2003.  The KDOT Traffic and Field Operations Unit expects the ITS information on I-435 and I-35 to be very useful and to aid in the gathering of data for HPMS purposes.

Maine – Q1: No.  Q2: No.

Massachusetts – Q1: No.  Q2: No.
Q3: Massachusetts has not yet deployed any ITS projects to the point where we can use the information from ITS detectors for HPMS.  We currently have a couple of big ITS projects under construction, the Route 128 ITS Project and the Central Artery, which has a major ITS component.  Both of these projects are still 1-2 years from being operational.  When they become operational we will make every effort to get the State to make dual use of the data collected.

Michigan – Q1: Yes.  Q2: Yes (data from the permanent pavement loops that are routinely collected by the MITS Center are summarized into hourly totals and electronically transmitted to Transportation Planning in the central office; this is the principal means for providing the traffic data used in the estimation of AADT in the Detroit area).

Minnesota – Q1: Yes (detectors have been in Minnesota for 15 years).  Q2: Yes (we have been using Traffic Management Center (TMC) detector data for many of our ATRs and for short-duration sampling on instrumented segments throughout the Minneapolis/Saint Paul metropolitan area; data from the detectors are used to estimate AADT for all TMC-instrumented segments; these data supplement other ATR data and other short-duration sampling throughout the state).

Mississippi – Q1: Yes (most of these sites are WIM sites that can be used for monitoring traffic).  Q2: Yes.

Missouri – Q1: Yes (the Analysis and Report Unit and the System Analysis Engineer, both part of MoDOT Transportation Planning, are aware of ITS detectors; the KC Scout ITS project is a freeway management system in the Kansas City bi-state metropolitan area, and the Gateway Guide in St. Louis and TRIP in Branson round out the ITS projects in Missouri).  Q2: Yes (in Branson, TRIP is used for a portion of reporting, and the St. Louis Gateway Guide, when operational, will be incorporated into the reporting process; as the ITS becomes operational, traffic data information, including historical data, will be incorporated into the State correlated database).

Nebraska – Q1: No (ITS detectors have not been installed).  Q2: No.

Nevada – Q1: No.  Q2: No.
Q3: Nevada intends to use ITS data in the development of AADT estimates and ultimately to populate the HPMS.  This may begin as early as next year with the implementation of the FAST project in Las Vegas, facilitated via the planned ADUS.  However, use of these data is contingent upon validation of ITS-based count data, with specific regard to accuracy and reliability.  While ITS-based sensors do indeed monitor and store traffic data, they are not necessarily placed in locations that capture the data needed to derive accurate AADT estimates.  It has also been my experience with arrays in similar systems (LVACTS) that sensors are not maintained (i.e., loops fail or overcount in one lane and it is ignored because it is not critical to the operation of the system, or the time stamps are off, etc.).  Another potential stumbling block would be the sheer volume of data collected and the storage intervals that ITS systems provide.  Upon implementation (as I understand it), FAST will produce 5-minute increments of data for each sensor.  Many of these sensors are redundant from a traffic counting point of view; therefore, identifying specific sensors and where they physically exist in the system will be the first challenge.  The second challenge that I foresee is the summarization of these data into meaningful periods (15-minute or hourly) and groups (by direction or roadway), all of which require post-processing of some form.

New Hampshire – Q1: No.  Q2: No.

New Mexico – Q1: Yes.  Q2: No.
Q3: A current project will employ ITS detectors and develop a method of providing count/speed data to the Planning Division as well as video/incident management data to the ITS Engineer in District 3 (Albuquerque metro area).  The Planning Division and ITS unit are working closely to ensure that ITS and HPMS deployments will serve both purposes when and where appropriate.

New York – Q1: Yes (the Division has facilitated meetings between ITS Program Managers and Traffic Monitoring staff designed to enhance coordination between the program areas; the dialogue the Division promotes is two-way, and when Traffic Monitoring was expanding its continuous counters by deploying sixty new sites, the locations for the new sites were shared with ITS staff so they were aware of assets in the roadway that could be included in the regional ITS constructs).  Q2: Yes (to a limited degree; acoustic-sensor-based data have been included in the traffic data sets reported through the State’s HPMS, and detector-based volume data have been prepared and submitted to Traffic Monitoring by the Albany TMC; FHWA staff has been briefed by the Albany MPO staff, who described accessibility to the TMC traffic volume data and their intention of using the data in their demand modeling; hopefully, more to come; NY is currently in the early stages of Regional Architecture development for the non-TMA urban areas and rural areas).

North Carolina – Q1: Yes (the ITS Sections and Traffic Control personnel have started investigating different detection technologies used to monitor traffic for incident/congestion management and construction work zones; note that the CARAT project in Charlotte has ITS detectors).  Q2: No.
Q3: Funding.

North Dakota – Q1: Yes (to date, while the traffic data analysis section is aware that there are traffic data collected from ITS detectors throughout the State, the data are not used as input to the HPMS).  Q2: No.
Q3: Traffic data are collected by different jurisdictions throughout the State.  Agreements regarding quality and distribution of the data have not been established.  NDDOT has contracted with North Dakota State University to prepare the statewide ITS plan.  It is anticipated that this plan will provide the architecture for the shared collection of traffic data.

Ohio – Q1: Yes.  Q2: No.
Q3: Ohio is working jointly with the Kentucky Transportation Cabinet to improve access to the shared ARTIMIS ITS system.  ARTIMIS has recently implemented an FTP site that allows us to gain access to vehicle volume data in TMG 3-card format.  Ohio is currently working on completing counts for the Hamilton County area in which ARTIMIS is located.  Data from ARTIMIS have been gathered and will be reviewed for incorporation into the counts for this county.  Although the information was not incorporated into this year’s HPMS submittal, we feel we will be able to better utilize the data for next year’s submittal.

Oregon – Q1: Yes (inductive loops, video detectors, weigh-in-motion detectors, acoustical sensors, and radar sensors).  Q2: No (the traffic monitoring office has used ITS surveillance cameras to perform manual counts).
Q3: The traffic monitoring office has been working with the ITS offices for approximately the last two years in striving to make use of existing ITS detectors for HPMS purposes.

Hurdles: The traffic monitoring office has tested and compared ITS ramp meter counts with ATR counts and manual counts in the same location.  While there was nearly exact agreement between the ATR count and the manual count, there were large differences in the ramp meter counts.  The State attributed these differences to inaccurate tuning of the ramp meter loop amplifiers.  It has been a struggle for the traffic monitoring office to obtain adjusted and accurate ITS ramp meter data in an easily programmable format.  Another obstacle is that some of the ramp meter inductive loop sensors only collect data in one direction of the highway.  Weigh-in-motion detectors provide an overwhelming amount of data to the mainframe, and the recent conversion to the new Traffic Monitoring Guide has created conflicts in using these data.

Endeavors: In some cases the traffic monitoring office has provided information and technology to the ITS office.  The offices are currently working together in testing RTMS and radar systems.  Also, the traffic monitoring office has been involved in the development of a statewide data clearinghouse project that will consist of data from all State agencies and will be made available for many uses.  The traffic monitoring office is aware of several current and future opportunities for data sharing as well as power and technology sharing.  They are very interested in continuing to work with the ITS office in pursuing and implementing these opportunities, and look forward to FHWA encouragement and/or guidance.

Pennsylvania – Q1: Yes.  Q2: Yes (we are using the traffic data being gathered by ITS for HPMS when we feel that the data are good; not all data are automatically accepted, and they are evaluated just as data collected through more traditional methods are).

Puerto Rico – Q1: No (the Commonwealth traffic monitoring office is aware of the ITS projects being planned and implemented by the PR Highway and Transportation Authority (HTA); however, no ITS detectors that can be used for traffic counting have been installed yet, and the ITS projects are under design).  Q2: No.
Q3: The Commonwealth is not using ITS detectors for HPMS reporting purposes.  However, there has been coordination between the offices of traffic operations and traffic monitoring to design and install the capability for traffic counting in the ITS detectors that will be constructed in the future.

South Carolina – Q1: Yes.  Q2: Yes.

South Dakota – Q1: No (South Dakota does not have any ITS detectors, with the exception of limited Autoscope at intersections in Sioux Falls).  Q2: No.
Q3: None available.

Tennessee – Q1: No (don't have any installed yet; October 2002 is the earliest implementation).  Q2: No.

Texas – Q1: Yes.  Q2: No.
Q3: TxDOT is in the process of developing an enterprise software database (the Statewide Traffic Analysis & Reporting System, STARS), which includes re-engineering of the traffic monitoring program.  The use of ITS data falls into a release later than Release 1.0 (basic traffic analysis functionality).  STARS is broken up into releases to make the workload and production more manageable and to avoid an all-or-nothing approach.

ITS data use falls into a later release to provide time to work with TTI to determine in what format the ITS data are produced; what it takes to bring the data over, convert them to XML, and download them; and how to receive and statistically process them (e.g., 364 days, 15 days per month, one week a quarter?).  Also, the companion functionality, ramp balancing, comes up in a release later than Release 1.0.  The work with TTI is currently ongoing.

STARS Work Program: Release 1.0 blueprints are currently scheduled to be completed November 2002, with construction completed November 2003.  Sometime in the latter half of 2003, design of Release 2.0 should begin.  It is anticipated, but not yet approved by the STARS Steering Committee, that ITS data use and ramp balancing will fall into Release 2.0.

Among the conclusions is the statement:  “As a result of their participation in this research project, the North Central Texas Council of Governments (NCTCOG) has committed to developing a regional data archive in Dallas-Ft. Worth.  As of November 2001, NCTCOG has allocated some of its resources and is preparing a budget and scope for this archive development.”

Utah – Q1: Yes (UDOT’s traffic monitoring office is very much aware of the ITS system and ITS detectors; UDOT has developed a comprehensive ITS system for the greater Salt Lake urbanized area that includes ITS detectors on the Interstate System and on many of the major arterial streets, and this system has been under development since 1996 and is now fully operational).  Q2: No.
Q3: We expect to have selected ATR detector locations and a systematic process for archiving the data and using it for HPMS within two years.

Vermont – Q1: Yes.  Q2: Yes (the only ITS equipment installed in VT as an ITS deployment is one WIM on US 7 in Brandon, VT; that one WIM, together with the other WIMs that were SPR funded (not as ITS deployments), is used for coverage counts used to develop the HPMS traffic information).

Virginia – Q1: Yes (VDOT’s traffic monitoring office is aware of ITS detectors).  Q2: No.
Q3: VDOT remains committed to using ITS data for multiple purposes.  There is currently an effort underway at VDOT to develop a “Mobility Data Store” that is intended to make a variety of data available to many different users with different data needs.  While VDOT staff are discussing the idea of coordinating ITS detectors with their traffic monitoring efforts, they face obstacles, including data quality, usage, format, and transfer issues.  Work on the integration of data from many sources, including ITS, continues.

Washington – Q1: Yes.  Q2: No.
Q3: Traditionally, the Headquarters Data Office (called the TDO) has sent crews around to set tube counters on the ramps to count in the urban area, and then did ramp balancing.  (It's the usual story of “your counters aren't accurate, ours are, although we've never actually tested ours.”)

Starting this year, the TDO will use a subset of its normal urban counter-setting money to validate (and tune if necessary) a subset of the freeway loops to ensure their accuracy.  These loop locations will then become the primary source for urban freeway HPMS data.  The “loop validation” will be done by videotaping the freeway at the loop locations and using that tape to perform short manual counts.  These data will then be compared against the recorded loop volumes.  Bad results will result in a request for loop tuning and/or repair.

This change in plans was caused by the confluence of several actions:

1)   Because the Department's budget keeps shrinking, the TDO was looking to save money.

2)   The new Secretary is now heavily using the freeway ops data for his own purposes and wants consistency in reporting.

3)   There was a minor controversy when we moved up something like 12 places in the “best DOT performance” report that a North Carolina professor produces, thanks in large part to his poor handling of urban freeway HPMS data, and that raised major concerns about the accuracy of the data the Department was using and/or publishing.  (We ran around and figured out what caused the numbers he was using to change so dramatically.  It was a coding change that he didn't handle correctly more than a major change in reported volumes, but the “run around frantically” exercise brought the whole “why aren't you using the freeway data” and “freeway data quality” issues to a head.)

4)   There was a personnel change in the TDO as the new Secretary works to get better numbers for performance monitoring, and that removed some of the old personnel issues.

West Virginia – Q1: Yes.  Q2: Yes (they are used to conduct automatic traffic counts at various locations throughout the state, which are then used for HPMS purposes).

Wisconsin – Q1: Yes.  Q2: Yes.

Wyoming – Q1: Yes.  Q2: No.
Q3: The State is considering this as a possibility with future ITS activities.


D.2      Identifying the Scope of State Traffic Monitoring Activities

Jeff Patten, June 2001, FHWA

 

The attached June 11, 2001 memorandum “Identifying the Scope of State Traffic Monitoring Activities” was used to identify primary organizational units involved in traffic monitoring activities.  As the memorandum states, “Within a State Department of Transportation (DOT), it is not unusual to have many organizational units responsible for various aspects of traffic monitoring in response to a wide variety of needs ranging from policy development to project design and system operations.  In addition, there may be organizational units that have responsibility for traffic monitoring equipment installation or repair.  As an initial step in gaining assurance that traffic monitoring programs are responsive to national performance measurement needs, it is necessary to identify those organizational units within each State DOT, that have a traffic monitoring responsibility.”

 

State responses indicated that there are three primary organizational units involved in the traffic monitoring activity:  Planning, Design, and Intelligent Transportation Systems (ITS) or Traffic Management Centers (TMC).  These units are not the only organizational units involved in traffic monitoring activities, but they are the most frequently identified as being involved in this activity.  The degree of involvement in traffic monitoring can vary from conducting simple road tube counts to operating elaborate ITS / TMC installations.  Since methods, techniques, and equipment for conducting traffic monitoring activities are similar across the three organizational units, there is significant opportunity for partnering between the units.

The following is a description of the responsibilities and activities managed by each organizational unit involved in the traffic monitoring program.

 

Planning Unit

In most States, the Planning Unit has the responsibility for the States’ traffic monitoring programs.  Generally the unit is responsible for:

 

1)      Equipment (either permanent or portable)

a)      Selection

b)      Testing

c)      Deployment

d)      Maintenance

 

2)      Data

a)      Processing

b)      Analyzing

c)      Reporting

d)      Archiving

 

State DOT personnel accomplish the majority of the traffic monitoring activities, but many State DOTs rely on contractors for part of this work.  Thirteen of the 40 State respondents use contractors to some extent to carry out activities such as installing and maintaining equipment and processing permanent automatic traffic counter (PATC) data.  Other activities supported by contractors include portable traffic counts for the statewide traffic count program and special study counts. 

 

The Central Office of the State DOT is responsible for the overall management of the traffic monitoring program.  Many different agencies or combinations of agencies can be involved in the collection of traffic data in the field such as Central Office personnel, state district and division personnel, county or city personnel, or contractors.  The processing, analyzing, reporting, and archiving of this collected traffic data is accomplished by Central Office personnel at the State DOTs.

 

The majority of traffic data are collected by either portable or permanent automatic traffic counters, with manual traffic counts conducted in high-volume or congested locations or on multilane facilities.  The lifting of the Federal requirement for speed data has resulted in speed studies being conducted only when needed to support traffic operations.  Data needs for the Highway Performance Monitoring System (HPMS), design, and traffic operation projects dictate what types of traffic data are monitored by the Planning Unit; incident detection and real-time traffic monitoring are generally not among them.  The one exception is the Florida DOT’s Planning Unit, which uses incident detection data to verify traffic density at a few selected traffic monitoring sites, and then only for emergency evacuations. 

Intelligent Transportation System Unit / Traffic Management Center Unit
(ITS / TMC)

ITS / TMC Units have the majority of the responsibility for incident detection and real-time traffic monitoring under the State’s traffic monitoring program.  Sixteen State responses indicated some degree of ITS / TMC activities being conducted by the State DOTs or their metropolitan areas.  The States of Washington, Michigan, Missouri, and Rhode Island are currently archiving and making use of ITS / TMC generated traffic data for planning purposes, and the States of Kansas and Utah are currently developing plans for archiving and using such traffic data for planning purposes.

ITS / TMC Units have been in existence since the 1960s, but because most of these Units were established recently, State responses indicate that new equipment, such as improved video cameras and radar and microwave sensors, is being used for incident detection.  All of the equipment is deployed as permanent installations.

 

The State responses indicated that either State personnel or contractors are responsible for selection, testing, deployment, and maintenance of traffic monitoring equipment.  Although the processing, analyzing, and reporting of the archived traffic data was not addressed in the memorandum, a few State DOTs volunteered information indicating that the ITS / TMC Unit relies on the Planning Unit to accomplish this activity.

 

 

 

Design Unit

Design Units are established in every State DOT, but out of the 40 responses only nine Design Units are involved in the Traffic Monitoring Program.  The State responses indicated that design or operation engineers use traffic data for signal timing studies, speed studies, capacity analysis, highway design, and signal warrant studies.  In some States, the Design Unit collects traffic data, but the majority of the States use the Planning Unit to supply such data.  None of the Design Unit’s responses indicated that they were involved in incident detection or real-time traffic monitoring.

 

Conclusion

State responses documented that there are many organizational units responsible for various aspects of traffic monitoring, with the Planning Unit, Design Unit, and ITS / TMC Unit being the most notable.  The distinction among the three organizational units is that the ITS / TMC Unit uses traffic data primarily in real time to better operate and manage the system, while the Planning Unit and Design Unit use archived traffic data for project and system designs.  The traffic data needs of the design and operation engineers are critical inputs for the design of a traffic data collection program.  The ability of the three organizational units to share ideas, methods, techniques, and equipment for traffic monitoring will help to ensure that the traffic monitoring programs are managed as cost-effectively as possible. 

Partnering can best be accomplished between the Planning and ITS / TMC Units, since both have been involved in the selection, testing, deployment, and maintenance of traffic monitoring equipment for many years.  Another avenue for partnering would be sharing or using the same State personnel or contractors for installing and maintaining the permanent traffic monitoring equipment.  The ITS / TMC Unit could benefit from the Planning Unit’s knowledge and experience in the processing, analyzing, reporting, and archiving of traffic data.  Partnering between these organizational units would help to further advance the Archived Data User Service (ADUS).  Partnering is already being conducted by a number of State DOTs, and lessons learned from their experiences could be used to help advance or develop partnerships in other State DOTs.  The first step in any of this partnering is to make sure that the ITS and operations engineers and the planners in the FHWA division offices are aware of each other’s activities with regard to traffic monitoring.  The ITS and operations engineers and planners for the divisions could use this information to develop partnerships between the State’s Planning, Design, and ITS / TMC Units.


D.3      Memo on Reporting of Length Based Vehicle Classification Data
to the Highway Performance Monitoring System (HPMS)

FHWA, February 24, 2000

Director, Office of Highway Policy Information, HPPI-30

 

The HPMS calls for the annual reporting of various types of vehicle classification data.  This reporting ranges from the percent of single-unit and combination trucks on HPMS sample sections to highway functional class level summary reporting of 13 vehicle classes.  Because collecting such data on multi-lane or high volume facilities is difficult, some States have proposed collecting vehicle classification data using a limited number of vehicle length categories.  To date, we have not seen information that objectively compares the data collected through length based methods with that collected through the more traditional methods based on the 13 categories described in the Travel Data by Vehicle Type section of Chapter III of the HPMS Field Manual.  Without the review and approval of such information by my office, vehicle length based classification data are not to be reported to the HPMS.

 

The States proposing to report vehicle length based classification data to the HPMS must provide the following information.

1.   A description of the length categories to be used and how they relate to the 13 categories.  For example, if four length categories are used, the description should explain how each of the 13 categories relates to a particular length category;

2.   A description of the method used to test how well each of the length categories captures the vehicle classes identified in point 1, and the results of those tests;

3.   If a State intends to disaggregate length based data into the 13 categories, the imputation method must also be described; and

4.   Documentation of the situations in which length classification will be used.  For example, a State might propose to use such techniques only on high volume urban streets, or a State may want to use length based classification for collecting information on percent trucks for reporting on HPMS sample sections but will use other methods to report the Travel Data by Vehicle Type.
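
The documentation called for in item 1 amounts to a mapping from measured vehicle lengths to a small set of bins, together with a statement of how the 13 classes relate to those bins.  The sketch below is purely illustrative: the four bin boundaries and the class groupings in the comments are assumed values, not FHWA guidance, and a State’s actual scheme would come from the testing described in item 2.

```python
# Hypothetical four-bin length classification scheme (illustrative only;
# a State's real boundaries and class assignments would come from testing).
LENGTH_BINS = {                      # bin name -> (min_ft, max_ft)
    "short":      (0.0, 13.0),      # e.g., motorcycles and passenger cars
    "medium":     (13.0, 30.0),     # e.g., pickups, vans, single-unit trucks
    "long":       (30.0, 60.0),     # e.g., buses, single-trailer combinations
    "extra_long": (60.0, 120.0),    # e.g., multi-trailer combinations
}

def bin_for_length(length_ft: float) -> str:
    """Assign a measured vehicle length to one of the length bins."""
    for name, (lo, hi) in LENGTH_BINS.items():
        if lo <= length_ft < hi:
            return name
    raise ValueError(f"length {length_ft} ft falls outside the binning scheme")

print(bin_for_length(18.5))   # -> "medium"
```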


D.4      Traffic Data Collection, Management and Reporting from ITS
and Traditional Traffic Sites, White Paper

John Rosen, Highway Usage Branch Manager, Transportation Data Office,

Planning and Capital Program Management, Washington State Department of Transportation

March 25, 2003

Introduction

Traffic data is collected for a variety of purposes, including planning, measuring the performance of the transportation system, and highway operations.  While the use of the data may differ by purpose, some data can be collected for multiple purposes, and some data collection resources can be shared to collect the data more efficiently.  The traffic volume data on some Central Puget Sound freeways is collected by two different organizations, often using different data collection equipment, and each organization has had concerns about the utility and accuracy of the other’s data.  The barriers between planning traffic data collection and traffic operations data collection are now being scrutinized and, where appropriate, evaluated to see what data can be collected with the appropriate accuracy for both purposes.  Both disciplines need traffic data, but until recently technical differences prevented the sharing of equipment and data.  This paper presents background on some of the purposes for the data collected, the barriers that have prevented a single data collection system, and some possible alternatives for sharing a data collection system.  For some solutions, FHWA may need to be contacted and informed of a change in our process or procedure/policy prior to implementation.

Background

For years traffic operations and planning offices have collected various forms of traffic data.  The traffic data is used for a multitude of reasons:

Traffic Operations

·        Ramp metering – For the purposes of ramp metering, traffic volume data is collected in real time to adjust the metering rates.  For this purpose, traffic volumes are not required to have the same high level of accuracy required for calculating volumes on an annual basis.  This is because metering rates are adjusted frequently and errors in data do not compound for the next metering rate.  This data is stored in 5-minute volumes.
For this operational purpose volumes must be measured for each segment of a freeway and each exit and entrance to the freeway.  This requires an extensive system of loops that is not needed for planning purposes as described below.

·        Traffic flow (speed) – To detect traffic flow bottlenecks and possible incidents, traffic speeds are estimated from traffic loops.  Again this data is needed for each segment of roadway to effectively monitor traffic flow.  The estimate of speed for indicating the range of speed (green, yellow, red, or black on a map) does not require the level of accuracy previously required by FHWA in monitoring speeds affecting a national speed limit.  This data is stored in 5-minute average speeds.

·        Travel times – To estimate travel times, spot speeds are estimated from loop occupancy and aggregated to estimate corridor travel time (a sketch of these spot-speed and travel-time estimates follows this list).

·        Arterial traffic signal flow – Controlling traffic at signals requires detecting vehicles at intersection approaches.  This data is not stored since traffic volumes are not collected.  The majority of loops used in Washington are at signalized intersections, which generally do not archive any data.
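
The spot-speed and travel-time estimates in the bullets above rest on the standard single-loop identity: over an interval of T seconds with occupancy fraction o and N vehicle passages, each vehicle of effective length g occupies the loop for g/v seconds, so v = N·g/(o·T).  The sketch below applies this identity; the 22-ft effective vehicle length and the example segments are assumed values, not WSDOT parameters.

```python
# Sketch of single-loop spot-speed estimation and corridor travel time.
G_FT = 22.0          # assumed effective vehicle length (vehicle + loop), feet
INTERVAL_S = 300.0   # 5-minute intervals, as the flow system stores them

def spot_speed_mph(count: int, occupancy: float) -> float:
    """Estimate speed from a per-lane 5-minute count and occupancy (0-1 fraction)."""
    feet_per_second = count * G_FT / (occupancy * INTERVAL_S)
    return feet_per_second * 3600.0 / 5280.0

def corridor_travel_time_min(segments) -> float:
    """Sum segment traversal times; segments are (length_miles, count, occupancy)."""
    return sum(miles / spot_speed_mph(count, occ) * 60.0
               for miles, count, occ in segments)

# Assumed example: three freeway segments in one 5-minute interval.
segments = [(1.2, 150, 0.12), (0.8, 140, 0.20), (1.5, 130, 0.10)]
print(f"{corridor_travel_time_min(segments):.1f} minutes")   # about 3.9
```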

Planning

·        HPMS reporting – Average daily traffic volumes are collected at sample locations to determine regional and statewide estimates of traffic flow.  The accuracy of this volume data is very important since the purpose is to detect small changes in travel volumes, typically less than 3%.  Since these data are aggregated into hourly, daily, and yearly volumes, small inaccuracies in the data from loops will compound, causing inaccuracies in the calculation of average daily traffic volumes (a sketch follows this list).
The data needed to estimate regional and statewide traffic volume trends are a statistically valid sampling of certain locations on state and local roadways.  A much smaller sample of loop locations is needed for this purpose than for operational purposes.  Determining trends in a region like the Puget Sound may require only 20 to 30 locations in the entire region.

·        Speed Monitoring – Quarterly speed monitoring of major freeways was accomplished for many years as a federal mandate to enforce a national maximum speed limit of 55 mph and was tied to federal funding of the highway infrastructure.  To accurately measure small changes in average speeds, strategically placed and accurately calibrated sets of speed loops were built.  This data is primarily used to inform law enforcement on the overall trends of speeds on highways.

·        Vehicle Classification – Sets of loops are used to sort vehicles into “bins” of vehicle classification, determining the percentage and type of different vehicles using the road, which is important for design of future projects.  The technology and accuracy of equipment collecting data from loops to determine vehicle classification is needed for planning purposes rather than for operational purposes.

·        Project forecasts – Project and corridor levels traffic volumes need to be forecast for future projects.  Again, with forecasts being very sensitive to small changes in the rate of growth, very accurate trend data is needed in measuring hourly, daily and yearly volumes.
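
To make the compounding concern in the HPMS reporting bullet concrete: random counting noise largely averages out over a year, but a systematic sensor bias passes straight through to the annual average.  A minimal sketch, with assumed daily volumes, an assumed 2 percent undercount, and the AADT computation simplified to a plain mean of daily totals:

```python
# Assumed daily volumes over a year (weekly pattern, for illustration only).
daily_true = [48_000 + (day % 7) * 1_500 for day in range(365)]

BIAS = 0.02   # assumed 2% systematic undercount at the loop
daily_measured = [v * (1 - BIAS) for v in daily_true]

aadt_true = sum(daily_true) / len(daily_true)
aadt_measured = sum(daily_measured) / len(daily_measured)
error = (aadt_measured - aadt_true) / aadt_true
print(f"AADT true {aadt_true:,.0f}, measured {aadt_measured:,.0f}, error {error:.1%}")
# The 2% bias survives aggregation intact, which is comparable in magnitude
# to the <3% volume trends of interest.
```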

 

Operations staff are primarily interested in real-time or near real-time applications while planning staff are primarily interested in traffic monitoring and trends over time.  While traffic data are required for each of these offices the type and level of accuracy required may vary.  Because of these differences both disciplines have accomplished their activities independently.

 

Over the last several years the two disciplines have reviewed current practices and requirements.  In the last year, FHWA started placing an emphasis on reducing redundancy in traffic data collection efforts, which should eliminate some costs where equipment or data can be shared.  I attended a workshop in Salt Lake City on March 13, 2003 to discuss what traffic data quality is and how we might consolidate ITS and planning traffic data collection efforts.  Both ITS and planning were well represented at the workshop, and a number of key areas and concerns were discussed.  Based on this and one other workshop, held in Columbus, Ohio last week, a finalized set of white papers will be developed by Battelle and forwarded to FHWA.

Washington State Experiences

Until recently, the WSDOT Traffic Operations and Planning Offices had not combined efforts on traffic data collection.  This paper presents some of the reasons, reviews where we can improve our efforts, and suggests how the department can achieve a win-win situation.  One reason data collection has been done separately is the different controller equipment used to collect the data: the equipment and protocols used by each office are different.  The NW Region TSMC collects real-time traffic data and archives it to a data silo; the equipment used to collect this data is the 170 controller.  The TDO uses Diamond Traffic Products Phoenix traffic counters and International Road Dynamics 1060 Weigh-in-Motion (WIM) counters to record traffic (usually in hour-long periods).

Each of these pieces of equipment was designed to receive an input signal and either store it or communicate it to a central location where it is archived.  Neither type of existing equipment can communicate the input signal to more than one recorder/controller.  Because of this, the two disciplines have developed separate traffic databases.

 

Starting in 2001, the TDO, in cooperation with the Northwest Region Traffic Systems Management Center (TSMC), implemented a strategy to capture the urban traffic data from the TSMC data silo for congestion measurement and other “planning” purposes.  For 14 ITS sites, a manual count was performed to validate the traffic data.  This validation is required on an annual basis by the AASHTO Guidelines for Traffic Data Programs and the FHWA Traffic Monitoring Guide (TMG).  The sites were reviewed, and those that met the tests of accuracy and quantity (2 days of every weekday for each month, per AASHTO and TMG recommendations) were included in the department’s annual traffic report and in the department’s Highway Performance Monitoring System (HPMS) submittal.  As additional sites are identified, the same criteria will be applied, and those that pass the tests will be included in the department’s reports and submittals.  The TDO will coordinate with all regions that have, or are in the process of establishing, TMCs so the department can minimize costs and still have sufficient traffic data with which staff can make informed decisions.
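
A minimal sketch of the quantity test described above, which checks whether a month of site data includes at least two data days for each weekday.  It interprets “weekday” as Monday through Friday; extending WEEKDAYS to range(7) would cover all seven days of the week.  The function name and sample dates are illustrative, not part of any WSDOT system.

# Sketch: the "2 days of every weekday per month" completeness test,
# applied to the set of dates for which a site has valid data.
from collections import Counter
from datetime import date

WEEKDAYS = range(5)  # Monday=0 ... Friday=4

def meets_quantity_test(valid_dates, year, month):
    """True if the month has at least 2 valid data days per weekday."""
    counts = Counter(
        d.weekday() for d in valid_dates
        if d.year == year and d.month == month and d.weekday() in WEEKDAYS
    )
    return all(counts[dow] >= 2 for dow in WEEKDAYS)

# Example: a site with valid data for the first two full weeks of March 2003.
valid = [date(2003, 3, day) for day in range(3, 15)]
print(meets_quantity_test(valid, 2003, 3))  # True: two of each weekday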

 

Requirements for loop deployment and maintenance are different for each discipline, although the sensors installed in the roadway use the same technology, as discussed above.  If a loop becomes disabled or inoperable, it is not as imperative for operational purposes to repair or reinstall it quickly, because the flow system can use an upstream or downstream loop to determine occupancy and flow, and can use adjoining lanes to extrapolate or interpolate occupancy and flow.  In addition, given the existing maintenance budget and the high number of loops in a TMC system, replacing failed loops and tuning the systems regularly has not been possible.  The TDO, in contrast, traditionally collects traffic data that is historical in nature, which requires all loops in all lanes at a particular location to be working full-time.
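
The adjoining-lane fallback described above amounts to simple imputation: when one lane's detector fails, its value is estimated from the lanes still reporting.  A minimal sketch follows; averaging the working neighbors is an assumed rule for illustration only, not WSDOT's actual algorithm.

# Sketch: imputing a failed lane's occupancy from adjoining lanes at one
# station. Averaging the working lanes is an assumed rule for illustration.

def impute_lanes(occupancies):
    """Replace None (failed detector) with the mean of working lanes."""
    working = [v for v in occupancies if v is not None]
    if not working:
        return occupancies  # whole station down; nothing to impute from
    fallback = sum(working) / len(working)
    return [v if v is not None else fallback for v in occupancies]

# Lane 2's detector has failed at this station (occupancy in percent):
print(impute_lanes([8.5, None, 11.0, 9.5]))  # [8.5, 9.67, 11.0, 9.5]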

 

Proposals/Recommendations

To reduce the redundancy of sensors in the roadway and to share more data for enhanced operations and improved trend data, we have been working with our traffic counter vendor to develop new input equipment for the controller that receives the signals from the loops.  This equipment will allow the signal from the loops to be split, so for sites the TDO currently maintains and reports on, we will be able to share the sensor input signal with a TMC so it receives the real-time data it needs.  We have tested a prototype and found it successful.  We plan to place an order in the near future to acquire this equipment and deploy it in the field.  This consolidation will eliminate the need for two sets of sensors in the same location.

 

The TDO and the Regional Traffic Offices will need to coordinate any new additions to traffic reporting sites.  Where possible, we will pool resources and eliminate redundancy.  We will need to connect the TDO systems into the existing TMC telecommunications system, which will require the installation of some conduit, cable, and communications equipment.  As future projects install ITS communication systems, links to the sites will be provided.

 

The TDO will continue its efforts to validate existing traffic reporting sites and will develop a priority list of existing sites that could be calibrated for HPMS purposes.  We will need to coordinate with the regional offices on the priority sites and combine resources to keep the sites in good working order.

 

The TDO will review the methods of collecting, estimating, and reporting TMC site traffic data in urban areas.  We will then schedule a meeting to discuss current practices and future endeavors.  Based on the agreement reached at the meeting, more of the traffic data collected in the regions may be included in traditional traffic data reporting (HPMS).  If the TMC traffic data is not flagged as bad or suspect, the TDO should collect it, edit it as needed, and report it (assuming there are at least 2 days of data for each weekday in each month).  As new sites are developed for operational purposes, the data collection at selected locations will be designed to provide additional traffic planning data for HPMS.

 

 


D.5      Quality Attributes Used by Virginia Department of Transportation

http://www.virginiadot.org/projects/resources/(IAP)AADT.pdf, Glossary of Terms

 

QA:     Quality of AADT:

A    Average of Complete Continuous Count Data
B    Average of Selected Continuous Count Data
F    Factored Short Term Traffic Count Data
G    Factored Short Term Traffic Count Data with Growth Element
H    Historical Estimate
M    Manual Uncounted Estimate
N    AADT of Similar Neighboring Traffic Link
O    Provided by External Source
R    Raw Traffic Count, Unfactored

QW:     Quality of AAWDT:

A    Average of Complete Continuous Count Data
B    Average of Selected Continuous Count Data
F    Factored Short Term Traffic Count Data
G    Factored Short Term Traffic Count Data with Growth Element
M    Manual Uncounted Estimate
N    AAWDT of Similar Neighboring Traffic Link
O    Provided by External Source

 

QC:     Quality of Classification Data:

A    Average of Complete Continuous Count Data
B    Average of Selected Continuous Count Data
C    Short Term Classified Traffic Count Data
F    Factored Short Term Traffic Count Data
H    Historical Estimate
M    Mass Collective Average
N    Classification Estimates of Similar Neighboring Traffic Link

QK:     Quality of the Design Hour estimate:

A    30th Highest Hour Observed During 12 Months of Continuous Traffic Data
B    30th Highest Hour Observed During Less than 12 Months of Continuous Traffic Data
F    Factored Highest Hour Collected in a 48 Hour Weekday Period
G    Factored Highest Hour Collected in a 48 Hour Weekday Period with Growth Element
M    Manual Estimate of 30th Highest Hour
N    Design Hour of Similar Neighboring Traffic Link
O    Provided by External Source
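
When quality codes like these are published alongside volume estimates, downstream users can screen records programmatically.  A minimal sketch of how an analyst might encode the QA codes above and filter AADT records: the code-to-description mapping mirrors the glossary, but the records, link IDs, and the “count-based only” acceptance rule are illustrative assumptions, not VDOT practice.

# Sketch: encoding VDOT's QA (Quality of AADT) codes and screening records.

QA_CODES = {
    "A": "Average of Complete Continuous Count Data",
    "B": "Average of Selected Continuous Count Data",
    "F": "Factored Short Term Traffic Count Data",
    "G": "Factored Short Term Traffic Count Data with Growth Element",
    "H": "Historical Estimate",
    "M": "Manual Uncounted Estimate",
    "N": "AADT of Similar Neighboring Traffic Link",
    "O": "Provided by External Source",
    "R": "Raw Traffic Count, Unfactored",
}

# Illustrative rule: accept only count-based estimates for a trend analysis.
COUNT_BASED = {"A", "B", "F", "G"}

records = [
    {"link": "0001-012", "aadt": 24500, "qa": "A"},
    {"link": "0001-013", "aadt": 18200, "qa": "N"},
    {"link": "0001-014", "aadt": 31000, "qa": "F"},
]

for rec in records:
    status = "use" if rec["qa"] in COUNT_BASED else "exclude"
    print(f'{rec["link"]}: {rec["aadt"]} ({QA_CODES[rec["qa"]]}) -> {status}')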

 


D.6      Virginia Department of Transportation – Contracting Agreement
for Traffic Data Quality – Excerpt

VDOT requires a certain quantity of acceptable data from each site in order to use that site for traffic factor creation.  Lease payments under this contract shall be structured to encourage the contractor to make every effort to ensure that the required quantity of data is provided.  The following payment criteria will be followed (a worked sketch of the schedule appears after the list):

 

a)   Full monthly payment will be made for all ATRs and modems at sites where 25 or more days of useable (for factor creation) classification and volume traffic information are available during a calendar month. 

 

b)   Seventy-five percent monthly payment will be made for all ATRs and modems at sites where 15 or more days of useable (for factor creation) classification and volume traffic information are available during a calendar month.

 

c)   Seventy-five percent monthly payment will be made for all ATRs and modems at sites where 25 or more days of useable (for factor creation) volume traffic information, but fewer than 25 days of useable (for factor creation) classification data, are available.  If the classification data shortfall continues for three months, the payment rate will drop to fifty percent for the fourth month and the following months until the classification data problem is corrected.

 

d)   Fifty percent monthly payment will be made for all ATRs and modems at sites where 15 or more days of useable (for factor creation) volume traffic information, but fewer than 15 days of useable (for factor creation) classification data, are available.  If the classification data shortfall continues for three months, the payment rate will drop to twenty-five percent for the fourth month and the following months until the classification data problem is corrected.

 

e)   At sites where two ATRs and modems are located, the data from each are considered jointly, and payment will be made on the combined data availability for the entire site.  For example, if ATR number 1 has data available from the 1st through the 15th of the month, and ATR number 2 has data available from the 16th through the 30th of the month, payment will not be authorized, as no complete days of data for the entire CCS are available.  Exception: if one side of the road has 25 or more days of valid data while the other side does not have sufficient data to qualify for payment, a ten percent payment will be made for the side that does have data.

 

f)    Monthly payment will not be made for sites that have fewer than 15 days of volume data available during a calendar month.
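
For illustration, a minimal sketch of the payment-percentage schedule in criteria a) through d) and f), assuming a single-ATR site and omitting the two-ATR provision in e) and the three-month shortfall escalation in c) and d); the function name and example values are not part of the contract.

# Sketch: monthly lease payment percentage for a single-ATR site under
# criteria a) through d) and f) above. Where c) and d) overlap (25+ volume
# days but fewer than 15 classification days), c) is applied as written.

def payment_percent(volume_days, class_days):
    """Return the monthly payment percentage for one site."""
    if volume_days >= 25 and class_days >= 25:
        return 100  # criterion a)
    if volume_days >= 25:
        return 75   # criterion c): volume complete, classification short
    if volume_days >= 15 and class_days >= 15:
        return 75   # criterion b)
    if volume_days >= 15:
        return 50   # criterion d): volume adequate, classification short
    return 0        # criterion f): fewer than 15 days of volume data

print(payment_percent(28, 28))  # 100
print(payment_percent(28, 12))  # 75
print(payment_percent(18, 18))  # 75
print(payment_percent(18, 10))  # 50
print(payment_percent(10, 10))  # 0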

 

The Contract Administrator, or his representative, will process the monthly report detailing which sites fall into the various payment categories within 2 working days of the end of the calendar month and provide that information to the contractor to facilitate invoice preparation.  If data transmission problems exist and the contractor wishes to collect and submit data manually, he may request an extension.  All manually submitted data shall be submitted by the 10th day of the month to be considered for lease payment purposes.

Monthly payment for the lease portion of the contract is defined as the annual cost bid/proposal divided by twelve.

Service Call Procedures

As part of the lease agreement and payment, the contractor shall maintain the ATR and modem equipment and respond to VDOT “service calls”.  VDOT will submit a service call to the contractor whenever data analysis indicates a potential problem, a specific problem is discovered during a VDOT site inspection visit, and/or a communications or data transmission problem occurs.  There will not be a separate charge (pay item) for service calls related to ATR/modem equipment problems, telephone line problems, or failed sensors, as the costs associated with these service calls shall be included in the price of the monthly lease charge or, in the case of failed sensors that require replacement, in the replacement cost.  A charge will be allowed for service calls that result from VDOT road maintenance (repaving or milling) or damage from vehicle accidents.  This information shall be included in the contractor’s response to the VDOT Contract Administrator.

 

The contractor shall have 7 calendar days after notification/receipt of a service call to investigate, make site visits, make repairs, and respond back to VDOT.  The response shall include the date and time of on-site visits, the technician’s name, and a summary of the nature of the problem found and the action taken.  All lost days of data shall be used to compute the monthly ATR and modem lease payment in accordance with the procedures outlined in paragraph 3-10.  If the service call site visit finds that sensors require replacement, the contractor shall notify the Contract Administrator, who will arrange for verification of the requirement.  After verification, the Contract Administrator will contact the contractor with scheduling instructions.  The sensor shall then be scheduled for replacement per paragraph 3-14 of this document.

 

A log sheet shall be maintained in the cabinet at each CCS.  Each time a site visit is made, the technician shall make a log sheet entry including the technician’s name, the date, the time, the amount of time on site, the purpose of the visit, and any actions taken.  VDOT technicians will also make entries on this log sheet.  Completed log sheets will be submitted to the Contract Administrator.  A sample log sheet can be found in Appendix H.

 

If the findings of a service call indicate that VDOT road maintenance is the cause of the data problem (i.e., the roadway has recently been paved or sensors have been destroyed by milling), the VDOT Contract Administrator shall be notified immediately.  Traffic count site “down time” (the site is non-operational or produces inaccurate data) resulting from VDOT road maintenance will not be counted against the contractor’s operational readiness requirements, as long as repairs are made in a timely manner (within 15 days after direction is received from the Contract Administrator).  However, if repairs are not made in a timely manner, all down time will be computed and counted against the days-of-data requirement (see previous section) for ATR and modem lease payments.


D.7      Inductive Loop Detector Failures, Chapter 5,
Traffic Detector Handbook

The number of inductive loop detector failures nationwide has created deep concern in the traffic engineering community, resulting in an aggressive effort to determine the major causes and to eliminate or minimize these failures.  During the 1980s, FHWA, in cooperation with various state agencies, funded a number of studies of inductive loop detector failures.  The objectives were to quantify the scope of inductive loop detector failures, identify the causes of failure, and evaluate various installation procedures (e.g., sawcutting and cleaning slots) and materials (e.g., sealants, conduits, wires, and cables).  The results of these studies are briefly discussed below and are presented in more detail in Appendix M of the Handbook.

Causes of Inductive Loop Detector Failures

Inductive loop detector system failures can usually be traced to the in-road loop wire or to the splice between the loop wire lead-in and the lead-in cable.  Since the introduction of digital self-tuning electronics units, failures attributed to the amplifier/oscillator unit have all but disappeared.  Failures continue to plague agencies using older electronics units, which do not adjust to changes in temperature or moisture, or to the number of turns of loop wire and the type and length of the lead-in cable.

 

Loop failure literature is difficult to synthesize because of the different terminology used to define failures.  For example, one report may categorize a failure as a “break in loop wire.”  The break may be caused by crumbling pavement, failure of the sealant, a foreign substance in the slot, or any number of other reasons.  A report from another agency may attribute the same failure to “deteriorated pavement.”

 

No matter how failures are categorized, the inescapable conclusion is that the predominant causes of failures in the inductive loop detector system can be ameliorated by improved installation techniques and by vigilant supervision and inspection.

Failure Frequency

 

Inductive loop detector failure rates differ from agency to agency because of the large number of variables that contribute to failures.  In addition, until recently, very few agencies maintained comprehensive records.  If a loop in a traffic signal control system failed, it was repaired or replaced as a routine signal maintenance activity; the cause of the failure, the age of the loop, the condition of the pavement, and so on were not recorded.  Consequently, many of the surveys reported in the literature were based on subjective, after-the-fact judgments.

 

Perhaps the largest of the FHWA studies was conducted by the State of New York.  It found that of the 15,000 inductive loop detectors maintained by the State, 25 percent were not operating at any given time.  It also found that, on average, loop installations operated maintenance-free for only about 2 years.  This high failure rate encouraged New York State to develop the improved installation methods described later in the chapter.

 

The failure rate reported by New York is consistent with the rest of the failure rate literature.  For example, one district in Minnesota reported an annual failure rate of 24 percent, and Cincinnati, Ohio reported 29 percent failures per year.  Although these areas have cold-weather climates, failure rates in the sun-belt states are about the same; only the causal factors differ.

Failure Mechanisms

Although most failures originate in the loop wire, the wire itself is not necessarily the precipitating cause.  The failure is usually caused by one of several breakdown mechanisms, such as poor pavement or poor installation of sealant, that allow the wire to float to the top of the slot and thus become vulnerable to traffic.

 

The following table summarizes the results of an inductive loop detector failure survey of eight western states. 

Table D-3.  Summary of Loop Detector Failures

 

 



[1] Experience with incident detection algorithms has been mixed.  Many areas have found that the algorithms produce too many “false alarms” and no longer rely on them.  Other areas still use them as a screening mechanism.  In general, incident detection can be performed efficiently by fielding cell phone calls from motorists, especially if a dedicated number for reporting incidents exists.