Evaluating Online Learning: Challenges and Strategies for Success
July 2008

Solving Data Collection Problems

Evaluators of any kind of program frequently face resistance to data collection efforts. Surveys or activity logs can be burdensome, and analyzing test scores or grades can seem invasive. In the online arena, there can be additional obstacles. The innovative nature of online programs can sometimes create problems: Some online program administrators simply may be struggling to get teachers or students to use a new technology, let alone respond to a questionnaire about the experience. And in the context of launching a new program or instructional tool, program staffers often have little time to spend on such data collection matters as tracking survey takers or following up with nonrespondents.

Still more difficult is gaining cooperation from people who are disconnected from the program or evaluation—not uncommon when a new high-tech program is launched in a decades-old institution. Other difficulties can arise when online programs serve students in more than one school district or state. Collecting test scores or attendance data, for example, from multiple bureaucracies can impose a formidable burden. Privacy laws present another common hurdle when online evaluators must deal with regulations in multiple jurisdictions to access secondary data. The problem can be compounded when officials are unfamiliar with a new program and do not understand the evaluation goals.

When faced with these data collection challenges, how should evaluators respond, and how can they avoid such problems in the first place?

Among the evaluations featured in this guide, data collection problems were common. When study participants did not cooperate with data collection efforts, the evaluators of Chicago Public Schools' Virtual High School and Louisiana's Algebra I Online program handled the problem by redesigning their evaluations. Thinkport evaluators headed off the same problem by taking proactive steps to ensure cooperation and obtain high data collection rates in their study. When evaluators of Washington's Digital Learning Commons (DLC) struggled to collect and aggregate data across multiple private vendors who provided courses through the program, the program took steps to define indicators and improve future evaluation efforts. As these examples show, each of these challenges can be lessened with advance planning and communication. While these steps can be time-consuming and difficult, they are essential for collecting the data needed to improve programs.

Redesign or Refocus the Evaluation When Necessary

In 2004, Chicago Public Schools (CPS) established the Virtual High School (CPS/VHS) to provide students with access to a wide range of online courses taught by credentialed teachers. The program seemed like an economical way to meet several district goals: Providing all students with highly qualified teachers; expanding access to a wide range of courses, especially for traditionally underserved students; and addressing the problem of low enrollment in certain high school courses. Concerned about their students' course completion rates, CPS/VHS administrators wanted to learn how to strengthen student preparedness and performance in their program. Seeing student readiness as critical to a successful online learning experience, the project's independent evaluator, TA Consulting, and district administrators focused on student orientation and support; in particular, they wanted to assess the effectiveness of a tutorial tool developed to orient students to online course taking.

At first, evaluators wanted a random selection of students assigned to participate in the orientation tutorial in order to create treatment and control groups (see Glossary of Common Evaluation Terms, p. 65) for an experimental study. While the district approved of the random assignment plan, many school sites were not familiar with the evaluation and did not follow through on getting students to take the tutorial or on tracking who did take it.

To address this problem, the researchers changed course midway through the study, refocusing it on the preparedness of in-class mentors who supported the online courses. This change kept the focus on student support and preparation, but sidestepped the problem of assigning students randomly to the tutorial. In the revised design, evaluators collected data directly from participating students and mentors, gathering information about students' ability to manage time, the amount of on-task time students needed for success, and the level of student and mentor technology skills. The evaluators conducted surveys and focus groups with participants, as well as interviews with the administrators of CPS/VHS and Illinois Virtual High School (IVHS), the umbrella organization that provides online courses through CPS/VHS and other districts in the state. They also collected data on student grades and participation in orientation activities. To retain a comparative element in the study, evaluators analyzed online course completion data for the entire state: They found that CPS/VHS course completion rates were 10 to 15 percent lower than comparable rates for all students in IVHS in the fall of 2004 and spring of 2005, but in the fall of 2005, when fewer students enrolled, CPS/VHS showed its highest completion rate ever, at 83.6 percent, surpassing IVHS's informal target of 70 percent.10

Through these varied efforts, the researchers were able to get the information they needed. When their planned data collection effort stalled, they went back to their study goals and identified a different indicator for student support and a different means of collecting data on it. And although the experimental study of the student orientation tutorial was abandoned, the evaluators ultimately provided useful information about this tool by observing and reporting on its use to program administrators.

Evaluators very commonly have difficulty collecting data from respondents who do not feel personally invested in the evaluation, and online program evaluators are no exception. In the case of Louisiana's Algebra I Online program, evaluators faced problems getting data from control group teachers. Initially, the state department of education had hoped to conduct a quasi-experimental study (see Glossary of Common Evaluation Terms, p. 65) comparing the performance of students in the online program with students in face-to-face settings. Knowing it might be a challenge to find and collect data from control groups across the state, the program administrators required participating districts to agree up front to identify traditional classrooms (with student demographics matching those of online courses) that would participate in the collection of data necessary for ongoing program evaluation. It was a proactive move, but even with this agreement in place, the external evaluator found it difficult to get control teachers to administer posttests at the end of their courses. The control teachers had been identified, but they were far removed from the program and its evaluation, and many had valid concerns about giving up a day of instruction to issue the test. In the end, many of the students in the comparison classrooms did not complete posttests; only about 64 percent of control students were tested compared to 89 percent of online students. In 2005, hurricanes Katrina and Rita created additional problems, as many of the control group classrooms were scattered and data were lost.

In neither the Chicago nor the Louisiana case could the problem of collecting data from unmotivated respondents be tackled head-on, with incentives or redoubled efforts to follow up with nonrespondents, for example. (As is often the case, such efforts were precluded by the projects' budgets.) Instead, evaluators turned to other, more readily available data. When planning an evaluation, evaluators are wise to try to anticipate likely response rates and patterns and to develop a "Plan B" in case data collection efforts do not go as planned. In some cases, the best approach may be to minimize or eliminate any assessments that are unique to the evaluation and rely instead on existing state or district assessment data that can be collected without burdening students or teachers participating in control groups. The Family Educational Rights and Privacy Act (FERPA) allows districts and schools to release student records to a third party for the purpose of evaluations.11

Both the Chicago and Louisiana examples serve as cautionary tales for evaluators who plan to undertake an experimental design. If evaluators plan to collect data from those who do not see themselves benefiting from the program or evaluation, the evaluation will need adequate money to provide significant incentives or, at a minimum, to spend substantial time and effort on communication with these individuals.

Take Proactive Steps to Boost Response Rates

In the evaluation of Thinkport's electronic field trip, Maryland Public Television was more successful in collecting data from control group classrooms. This program's evaluation efforts differed from others in that it did not rely on participation from parties outside of the program. Instead, evaluators first chose participating schools, then assigned them to either a treatment group that participated in the electronic field trip right away or to a control group that could use the lessons in a later semester. Teachers in both groups received a one-hour introduction to the study, where they learned about what content they were expected to cover with their students, and about the data collection activities in which they were expected to participate. Teachers were informed on that day whether their classroom would be a treatment or control classroom; then control teachers were dismissed, and treatment teachers received an additional two-hour training on how to use the electronic field trip with their students. Control teachers were given the option of receiving the training at the conclusion of the study, so they could use the tool in their classrooms at a future point. As an incentive, the program offered approximately $2,000 to the social studies departments that housed both the treatment and control classes.

Evaluators took other steps that may have paved the way for strong compliance among the control group teachers. They developed enthusiasm for the project at the ground level by first approaching social studies coordinators who became advocates for and facilitators of the project. By approaching these content experts, the evaluators were better able to promote the advantages of participating in the program and its evaluation. Evaluators reported that coordinators were eager to have their teachers receive training on a new classroom tool, especially given the reputation of Maryland Public Television. In turn, the social studies coordinators presented information about the study to social studies teachers during opening meetings and served as contacts for interested teachers. The approach worked: Evaluators were largely able to meet their goal of having all eighth-grade social studies teachers from nine schools participate, rather than having teachers scattered throughout a larger number of schools. Besides having logistical benefits for the evaluators, this accomplishment may have boosted compliance with the study's requirements among participating teachers.

Evaluators also took the time to convince teachers of the importance of the electronic field trip. Before meeting with teachers, evaluators mapped the field trip's academic content to the specific standards of the state and participating schools' counties. They could then say to teachers: "These are the things your students have to do, and this is how the electronic field trip helps them do it." Finally, evaluators spent time explaining the purpose and benefits of the evaluation itself and communicating to teachers that they played an important role in discovering whether a new concept worked.

The overall approach was even more successful than the evaluators anticipated and helped garner commitment among the participating teachers, including those assigned to the control group. The teachers fulfilled the data collection expectations outlined at the outset of the study, including keeping daily reports for the six days of the instructional unit being evaluated and completing forms about the standards that they were teaching, as well as documenting general information about their lessons. By involving control teachers in the experimental process and giving them full access to the treatment, the evaluators could collect the data required to complete the comparative analysis they planned.

Besides facing the challenge of collecting data from control groups, many evaluators, whether of online programs or not, struggle even to collect information from regular program participants. Cognizant of this problem, evaluators are always looking for ways to collect data that are unobtrusive and easy for the respondent to supply. This is an area where online programs can actually present some advantages. For example, Appleton eSchool evaluators have found that online courses offer an ideal opportunity to collect survey data from participants. In some instances, they have required that students complete surveys (or remind their parents to do so) before they can take the final exam for the course. Surveys are e-mailed directly to students or parents, ensuring high response rates and eliminating the need to enter data into a database. Appleton's system keeps individual results anonymous but does allow evaluators to see which individuals have responded.

With the help of its curriculum provider, K12 Inc., the Arizona Virtual Academy (AZVA) also frequently uses Web-based surveys to gather feedback from parents or teachers about particular events or trainings they have offered. Surveys are kept short and are e-mailed to respondents immediately after the event. AZVA administrators believe the burden on respondents is relatively minimal, and report that the strategy consistently leads to response rates of about 60 percent. The ease of AZVA's survey strategy encourages program staff to survey frequently. Furthermore, K12 Inc. compiles the results in Microsoft Excel, making them easy for any staff member to read or merge into a PowerPoint presentation (see fig. 2, Example of Tabulated Results From an Arizona Virtual Academy Online Parent Survey, p. 38). Because the school receives anonymous data from K12 Inc., parents know they may be candid in their comments. With immediate, easy-to-understand feedback, AZVA staff are able to fine-tune their program on a regular basis.
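Tabulations like the one shown in figure 2 are straightforward to produce. The short sketch below, in Python, computes a response rate and the percentage of respondents selecting each item; the counts echo figure 2, but the code itself is only an illustration, not K12 Inc.'s actual reporting process.

```python
# Tabulate "select all that apply" survey results, similar to figure 2.
# The counts echo figure 2; the code is illustrative only.
surveys_deployed = 12
responses_received = 12
item_counts = {
    "My child enjoyed the student workshop.": 10,
    "My child enjoyed socializing with peers at the workshop.": 12,
    "The activities were too easy.": 1,
}

print(f"{responses_received} of {surveys_deployed} surveys returned "
      f"({responses_received / surveys_deployed:.0%}).")
for item, count in item_counts.items():
    # Percentage of respondents who selected each item
    print(f"{item}  {count}  {count / responses_received:.0%}")
```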

Web sites and online courses can offer other opportunities to collect important information with no burden on the participant. For example, evaluators could analyze the different pathways users take as they navigate through a particular online tool or Web site, or how much time is spent on different portions of a Web site or online course. If users are participants in a program and are asked to enter an identifying number when they sign on to the site, this type of information could also be linked to other data on participants, such as school records.
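For example, a few dozen lines of code can summarize a usage log exported from an online course or Web site. The sketch below, in Python, totals the time spent on each page and counts the most common three-page navigation sequences; the log format, file name, and field layout are assumptions for illustration, not the export of any program described in this guide.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical usage log: one tab-separated row per page view, in the form
# user_id <TAB> page <TAB> ISO-8601 timestamp. The format and file name are
# assumptions for illustration only.
LOG_PATH = "usage_log.tsv"

def load_events(path):
    """Read page-view events and group them by user, ordered by time."""
    events = defaultdict(list)
    with open(path, newline="") as f:
        for user_id, page, timestamp in csv.reader(f, delimiter="\t"):
            events[user_id].append((datetime.fromisoformat(timestamp), page))
    for visits in events.values():
        visits.sort()
    return events

def time_per_page(events):
    """Estimate seconds spent on each page as the gap until the next page view."""
    totals = defaultdict(float)
    for visits in events.values():
        for (start, page), (next_start, _) in zip(visits, visits[1:]):
            totals[page] += (next_start - start).total_seconds()
    return totals

def common_pathways(events, length=3):
    """Count the most frequent sequences of `length` consecutive pages."""
    counts = defaultdict(int)
    for visits in events.values():
        pages = [page for _, page in visits]
        for i in range(len(pages) - length + 1):
            counts[tuple(pages[i:i + length])] += 1
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    events = load_events(LOG_PATH)
    for page, seconds in sorted(time_per_page(events).items()):
        print(f"{page}: {seconds / 60:.1f} minutes total")
    for pathway, count in common_pathways(events)[:5]:
        print(" -> ".join(pathway), f"({count} occurrences)")
```

If users sign on with an identifying number, the same user_id field could be joined to other participant data, such as school records, subject to the privacy protections discussed later in this section.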

Define Data Elements Across Many Sources

Another common problem for online evaluators is the challenge of collecting and aggregating data from multiple sources. As noted earlier, Washington state's DLC offers a wide array of online resources for students and educators. To understand how online courses contribute to students' progress toward a high school diploma and college readiness, the program evaluators conducted site visits to several high schools offering courses through DLC. In reviewing student transcripts, however, the evaluators discovered that schools did not have the same practices for maintaining course completion data and that DLC courses were awarded varying numbers of credits at different schools—the same DLC course might earn a student 0.5 credit, 1 credit, 1.5 credits, or 2 credits. This lack of consistency made it difficult to aggregate data across sites and to determine the extent to which DLC courses helped students to graduate from high school or complete a college preparation curriculum.

Figure 2. Example of Tabulated Results From an Arizona Virtual Academy Online Parent Survey About the Quality of a Recent Student Workshop*

Title I Student Workshop Survey–Phoenix

At this time 12 responses have been received. 100% returned, 12 surveys deployed via e-mail.

Please select all that apply regarding the Phoenix student workshop. (Select all that apply)

Response                                                    Total Count    % of Total
My child enjoyed the student workshop.                      10             83%
My child enjoyed socializing with peers at the workshop.    12             100%
The activities were varied.                                 10             83%
My child enjoyed working with the teachers.                 11             92%
My child had a chance to learn some new math strategies.    8              67%
My child had the opportunity to review math skills.         10             83%
The activities were too hard.                               0              0%
The activities were too easy.                               1              8%
 
Please use the space below to let us know your thoughts about the student workshop. (Text limited to 250 characters.)
Recipient Response
  All 3 math teachers were very nice and understanding and they did not yell at you like the brick and mortar schools, which made learning fun.
  My son did not want to go when I took him. When I picked him up he told me he was glad he went.
  She was bored, "it was stuff she already knew," she stated. She liked the toothpick activity. She could have been challenged more. She LOVED lunch!
  My child really enjoyed the chance to work in a group setting. She seemed to learn a lot and the games helped things "click."
  She was happy she went and said she learned a lot and had lots of fun. That is what I was hoping for when I enrolled her. She is happy and better off for going.
  THANKS FOR THE WORKSHOP NEED MORE OF THEM
  The teachers were fun, smart and awesome, and the children learn new math strategies.
  Thank you keep up the good work.

Source: Arizona Virtual Academy

* The U.S. Department of Education does not mandate or prescribe particular curricula or lesson plans. The information in this figure was provided by the identified site or program and is included here as an illustration of only one of many resources that educators may find helpful and use at their option. The Department cannot ensure its accuracy. Furthermore, the inclusion of information in this figure does not reflect the relevance, timeliness, or completeness of this information; nor is it intended to endorse any views, approaches, products, or services mentioned in the figure.

The evaluation's final report recommended developing guidelines to show schools how to properly gather student-level data on each participating DLC student in order to help with future evaluations and local assessment of the program's impact.
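One way to act on such a recommendation is to publish a common record layout and a credit-conversion rule that every participating school applies before submitting data. The sketch below illustrates the idea in Python; the field names, the 0.5-credit common scale, and the per-school conversion table are assumptions for illustration, not DLC's actual guidelines or data elements.

```python
# A sketch of a harmonization step for course records gathered from many schools.
# Field names and the 0.5-credit common scale are illustrative assumptions.
from dataclasses import dataclass, replace

STANDARD_CREDIT = 0.5  # assumed common value of one semester-long online course

@dataclass
class CourseRecord:
    student_id: str
    course_id: str
    completed: bool
    credits: float  # credits as awarded by the student's school (0.5, 1, 1.5, 2, ...)
    school: str

def normalize_credits(record: CourseRecord, local_credit_per_course: dict) -> CourseRecord:
    """Rescale a school's local credit award onto the agreed common scale."""
    factor = STANDARD_CREDIT / local_credit_per_course[record.school]
    return replace(record, credits=record.credits * factor)

# Example: a school that awards 1.0 local credit per course maps onto the 0.5 scale.
local_scale = {"School A": 1.0, "School B": 0.5}
rec = CourseRecord("S001", "DLC-ALG1", True, 1.0, "School A")
print(normalize_credits(rec, local_scale).credits)  # 0.5
```

Agreeing on definitions like these before data collection begins makes it possible to aggregate completion and credit data across sites without revisiting each school's transcript conventions after the fact.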

Plan Ahead When Handling Sensitive Data

Any time evaluators are dealing with student data, there are certain to be concerns about privacy. Districts are well aware of this issue, and most have guidelines and data request processes in place to make sure that student data are handled properly. Of course, it is critically important to protect student privacy, but for evaluators, these regulations can create difficulties. For example, in Chicago, Tom Clark, of TA Consulting, had a close partnership with the district's Office of Technology Services, yet still had difficulty navigating through the district's privacy regulations and lengthy data request process. Ultimately, there was no easy solution to the problem: This evaluator did not get all of the data he wanted and had to make do with less. Still, he did receive a limited amount of coded student demographic and performance data and, by combining it with information from his other data collection activities, he was able to complete the study.

District protocols are just one layer of protection for sensitive student data; researchers also must abide by privacy laws and regulations at the state and federal levels. These protections can present obstacles for program evaluations, either by limiting evaluators' access to certain data sets or by requiring a rigorous or lengthy data request process. For some online program evaluators, the problem is exacerbated because they are studying student performance at multiple sites in different jurisdictions.

Of course, evaluators must adhere to all relevant privacy protections. To lessen impacts on the study's progress, researchers should consider privacy protections during the design phase and budget their time and money accordingly. Sometimes special arrangements also can be made to gain access to sensitive data while still protecting student privacy. Researchers might sign confidentiality agreements, physically travel to a specific site to analyze data (rather than have data released to them in electronic or hardcopy format), or even employ a neutral third party to receive data sets and strip out any identifying information before passing it to researchers.

Evaluators also can incorporate a variety of precautions in their own study protocol to protect students' privacy. In the Thinkport evaluation, for example, researchers scrupulously avoided all contact with students' names, referring to them only through a student identification number. The teachers participating in the study took attendance on a special form that included name and student identification number, then tore the names off the perforated page and mailed the attendance sheet to the evaluators. Through such procedures, evaluators can maintain students' privacy while still conducting the research needed to improve programs and instructional tools.
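A minimal sketch of this kind of de-identification step, assuming a hypothetical roster file whose column names are invented for illustration (they are not the Thinkport study's actual forms), might look like this:

```python
import csv

# Hypothetical roster exported by a school. The column names below are
# assumptions for illustration, not the actual study instruments.
IDENTIFYING_COLUMNS = {"student_name", "date_of_birth", "address"}

def deidentify(in_path, out_path):
    """Copy a roster, dropping direct identifiers and keeping the study ID."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c not in IDENTIFYING_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=kept)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: row[c] for c in kept})

# Example: deidentify("attendance_with_names.csv", "attendance_deidentified.csv")
```

A neutral third party could run a step like this before releasing data sets to researchers, so that analysts only ever see records keyed to study identification numbers.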

Summary

As these examples show, program leaders and evaluators can often take proactive steps to avoid and address data collection problems. To ensure cooperation from study participants and boost data collection rates, they should build in plenty of time (and funds, if necessary) to communicate their evaluation goals to anyone in charge of collecting data from control groups. Program leaders also should communicate with students participating in evaluation studies, both to explain the study's goals and to describe how students ultimately will benefit from the evaluation. These communication efforts should begin early and continue for the project's duration. Additionally, program leaders should plan to offer incentives to data collectors and study participants.

Evaluators may want to consider administering surveys online, to boost response rates and easily compile results. In addition, there may be ways they can collect data electronically to better understand how online resources are used (e.g., by tracking different pathways users take as they navigate through a particular online tool or Web site, or how much time is spent on different portions of a Web site or online course).

When collecting data from multiple agencies, evaluators should consider in advance the format and comparability of the data, making sure to define precisely what information should be collected and to communicate these definitions to all those who are collecting and maintaining it.

Evaluators should research relevant privacy laws and guidelines well in advance and build in adequate time to navigate the process of requesting and collecting student data from states, districts, and schools. They may need to consider creative and flexible arrangements with agencies holding student data, and should incorporate privacy protections into their study protocol from the beginning, such as referring to students only through identification numbers.

Finally, if data collection problems cannot be avoided, and evaluators simply do not have what they need to complete a planned analysis, sometimes the best response is to redesign or refocus the evaluation. One strategy involves finding other sources of data that put more control into the hands of evaluators (e.g., observations, focus groups).

