Lessons Learned from FIPSE Projects III - June 1996 - University of Connecticut

Assessing General Education Outcomes
An Institution-Specific Approach

Purpose

There is much discussion and bewilderment in higher education about how to measure curriculum improvement, especially in general education. The University of Connecticut embarked on an ambitious project to test student performance in each of its six general education areas: science and technology, foreign languages, culture and modern society, philosophy and ethical analysis, social science and comparative analysis, and literature and the arts.

The university wanted to answer two questions: whether a student's performance in the new general education curriculum improves with time in the program, and whether the number of courses taken in a general education area affects that performance.

Innovative Features

Just a few years after the general education curriculum was adopted, the university set out to study its benefits to students and to further refine its goals, structure and course content. From the beginning, this three-year project was viewed not as a one-time activity, but as a continuing faculty-directed process of goal and curriculum improvement. Since the university had no prior institutional structure for assessment, the project required the creation of an ad hoc committee on assessment within the Faculty Senate.

Evaluation

The assessment process stressed the faculty's role in generating clear and concrete general education goals and in deciding what constitutes evidence that they have been achieved. Fifty faculty members working in six different goal committees (each of which included a behavioral measurement expert) examined all the course syllabi and relevant standardized and commercial tests, and arrived at a set of 14 locally-developed assessment instruments that matched the new goals. With two exceptions, each general education area was assigned two forms with complementary content. Another instrument, the Cornell Test, was purchased to measure critical thinking skills.

The major questions that these activities raised involved the validity of individual closed-ended test items and the grading reliability of open-ended items. In the first data collection phase, project staff pilot-tested the assessment instruments on 1,694 incoming freshmen (almost the entire class) and on 601 randomly-selected upper division students, and ran a series of videotaped focus groups with a much smaller number of students. Another 724 students were asked how they perceived their academic performance in the general education areas. Beyond this, staff surveyed 676 faculty on their level of agreement with the general education goals.

To test the effectiveness of the general education curriculum, statistical controls were used for general ability levels, maturation, attrition, and overall performance in non-general education courses. These controls separated potentially contaminating effects from the actual effects of the curriculum. Multiple regression models were constructed with SAT scores, semester standing and grade points in both general education courses and non-general education courses as predictors of student performance on the assessment instruments.
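
The report does not reproduce the project's model specification or software, but a regression of this kind is straightforward to sketch. The following Python fragment (using pandas and statsmodels, with hypothetical variable names and synthetic placeholder data, since the project's actual dataset and coding scheme are not described here) illustrates the general form of such a model:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic placeholder data; each row stands for one student. The column
    # names are hypothetical -- the report does not give the actual variables.
    rng = np.random.default_rng(0)
    n = 500
    students = pd.DataFrame({
        "assessment_score": rng.normal(50, 10, n),   # score on a locally developed instrument
        "sat_verbal": rng.normal(500, 90, n),
        "sat_math": rng.normal(520, 90, n),
        "semester_standing": rng.integers(1, 9, n),  # 1 = first semester, ..., 8 = eighth
        "gened_gpa": rng.uniform(1.5, 4.0, n),       # grade points in general education courses
        "other_gpa": rng.uniform(1.5, 4.0, n),       # grade points in all other courses
    })

    # Ordinary least squares model in the spirit of the report: predict performance
    # on an assessment instrument from SAT scores, semester standing, and grade
    # points in general education and non-general education courses.
    model = smf.ols(
        "assessment_score ~ sat_verbal + sat_math + semester_standing"
        " + gened_gpa + other_gpa",
        data=students,
    ).fit()

    # With the other predictors held constant, the coefficient on semester_standing
    # estimates the per-semester gain associated with time in the program.
    print(model.summary())

Controlling for SAT scores and non-general education grades in this way is what allows the semester-standing and general education grade coefficients to be read as curriculum effects rather than as differences in students' initial ability.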

Staff distributed summary findings to the six general education committees and to department heads, and these findings were discussed at special evaluation meetings. The faculty committees were charged with converting the results into recommendations for modifying or creating courses. Project personnel formally presented key results to the University's top administrators for their reactions.

Project Impact

The accumulated testing data from 4,500 students showed that student performance improved as students moved through the university, even when controlling for differences in initial abilities (as measured by SAT scores) and for the attrition of poorer students. Averaging across all general education areas, upper division students scored significantly higher (53 percent) than did freshmen (45 percent). The better performance of upper division students was statistically significant in five of the six general education areas, the exception being Philosophy and Ethical Analysis. The cumulative number of general education courses taken also had a modest effect on student performance. After SAT verbal scores, one of the strongest predictors of performance on the tests was the grade point average students earned in general education courses.

Project faculty concluded that the general education curriculum, as a whole, has a modest but positive effect on student performance. Results of the faculty survey showed that the general education goals themselves were essentially noncontroversial; the means of implementing and assessing them were not. Despite the breadth of the liberal arts goals, students who commented on the worth of general education most often cited its practical and potential value for their future work, for selecting a major, and for learning writing and language skills. Students generally held positive views about their general education skills, and felt most competent in social science courses and least competent in philosophy.

Lessons Learned

Some unexpected experiences during the project shed light on faculty reactions and institutional readiness for assessment. According to one of the project directors, the process was an eye-opener for many faculty participants. Not only were they asked to look at courses in their category in relation to a set of "student goals" that they barely knew existed, but they were also supposed to look for commonalities across the courses within a given category -- a daunting task for faculty who had seldom thought beyond individual courses. It was news enough to realize that there were even supposed to be these connections, never mind finding and describing them, and then building an assessment instrument around them.

The project directors believe that the project disproved the commonplace assumption that assessment cannot work in large research universities and can only be effective in small liberal arts colleges with well-defined goals. In fact, they found that certain characteristics of their university worked in assessment's favor: in a place as large as UConn, it was possible to find many individuals with both a strong research interest and a genuine involvement in teaching and learning, people who hungered for a project that legitimized their concern for students.

Project Continuation and Recognition

The university administration, under a resolution of the Faculty Senate, decided to continue funding assessment of general education and instrument revision through the Provost's Office. The newly-formed University Committee on Assessment continues to coordinate work in this area. The section of the university's five-year assessment plan describing the FIPSE project was singled out for high praise by the Connecticut Board of Governors for Higher Education. Staff have made presentations to the full University Senate, the University Dean's Council and all academic department heads. At least 50 requests from colleges and universities for summary reports and assessment instruments have been filled.

Available Information

Further information about this project may be obtained from:

James Watt
Department of Communication Sciences, U-85
University of Connecticut
Storrs, CT 06269
860-486-4078

Barbara Wright
Department of Modern and Classical Languages, U-57
University of Connecticut
Storrs, CT 06269
860-486-1531
