

Program Evaluation Glossary

 



"C"

Case Study
A method for learning about a complex instance, based on a comprehensive understanding of that instance, obtained by extensive description and analysis of the instance, taken as a whole and in its context.
Categorical Measure
A measure that places data into a limited number of groups or categories.
Causal Analysis
A method for analyzing the possible causal associations among a set of variables.
Causal Association
A relationship between two variables in which a change in one brings about a change in the other.
Causal Model
A model or portrayal of the theorized causal relationships between concepts or variables.
Causal Relationship
The relationship of cause and effect. The cause is the act or event that produces the effect. The cause is necessary to produce the effect.
Closed Question
A question offering a fixed set of possible answers, from which one or more must be selected.
Closed-Ended Question
A question that limits responses to predetermined categories.
Cluster Sample
A probability sample for which groups, or geographic areas comprising groups, were randomly selected.
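As a minimal sketch of the selection step, the following Python draws a hypothetical cluster sample. The clusters (invented classrooms) and their members are assumptions for illustration; whole clusters are selected at random, and every member of a selected cluster enters the sample.

```python
import random

# Hypothetical sampling frame: every classroom (cluster) in a district,
# each listing the students who would actually be surveyed.
clusters = {
    "School A - Room 101": ["s1", "s2", "s3"],
    "School A - Room 102": ["s4", "s5"],
    "School B - Room 201": ["s6", "s7", "s8"],
    "School B - Room 202": ["s9", "s10"],
}

random.seed(42)  # fixed seed so the draw is reproducible

# Randomly select 2 of the 4 clusters; every member of a chosen
# cluster is included in the sample.
chosen = random.sample(sorted(clusters), k=2)
sample = [person for c in chosen for person in clusters[c]]
print(chosen)
print(sample)
```

Note the contrast with a simple random sample of individuals: randomness operates at the cluster level, and individuals within a chosen cluster are taken as a group.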
Clustering
Identifying similar characteristics and grouping samples with similar characteristics together.
Codebook
A document that lists the variables in a dataset, the possible values for each variable, and the definitions of the codes that have been assigned to these values.
Coding
The process of converting information obtained on a subject or unit into coded values (typically numeric) for the purpose of data storage, management, and analysis.
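The two entries above can be sketched together: a hypothetical codebook maps each possible value of a survey variable to a numeric code, and coding applies that mapping to raw responses. The variable name, values, and codes are invented for illustration.

```python
# Hypothetical codebook for one survey variable: it lists the variable,
# its possible values, and the numeric code assigned to each value.
codebook = {
    "employment_status": {
        "employed full-time": 1,
        "employed part-time": 2,
        "unemployed": 3,
        "not in labor force": 4,
    }
}

# Raw responses as recorded on the survey forms.
responses = ["unemployed", "employed full-time", "not in labor force"]

# Coding: convert each response to its numeric code for storage,
# management, and analysis.
codes = [codebook["employment_status"][r] for r in responses]
print(codes)  # -> [3, 1, 4]
```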
Coefficient
A value expressing the degree to which some characteristic or relation is to be found in specified instances.
Collaborative Evaluation
See participatory evaluation.
Comparative Change Design
The quasi-experimental design known as the comparative change design allows for the measurement of change in relevant outcome factors (using a pre- and post-test) and provides for comparison of this change between a treatment group and a non-random comparison group. Because comparison and treatment groups are not randomly selected, alternate explanations due to prior differences between groups continue to be a threat.
Comparative Post-test Design
The elementary quasi-experimental design known as the comparative post-test design involves the measurement of outcomes for both the treatment group and a comparison group. The selection of participants into the treatment and comparison groups is not done randomly. While such a design to some extent overcomes the issues of a one-shot study by allowing comparisons of success, it is typically plagued by threats due to selection bias. That is, an alternate explanation for differences between group outcomes is that some other factor, related to the selection process, has actually caused the differences in outcomes.
Comparative Time-Series Design
The quasi-experimental design known as the comparative time series tracks some outcome of interest for periods before and after program implementation for both the treatment group and a non-randomly selected comparison group. Because comparison and treatment groups are not randomly selected, alternate explanations due to prior differences between groups continue to be a threat.
Comparison Group
A group of individuals whose characteristics are similar to those of a program's participants. These individuals may not receive any services, or they may receive a different set of services, activities, or products; in no instance do they receive the same services as those being evaluated. As part of the evaluation process, the experimental group (those receiving program services) and the comparison group are assessed to determine which types of services, activities, or products provided by the program produced the expected changes.
Composite Measure
A measure constructed using several alternate measures of the same phenomenon.
Comprehensive Evaluation
An assessment of a social program that covers the need for the program, its design, implementation, impact, and efficiency.
Concept
An abstract or symbolic tag that attempts to capture the essence of reality. The "concept" is later converted into variables to be measured.
Conditional Distribution
The distribution of one or more variables given that one or more other variables have specified values. For example, a distribution of height might be made conditional on gender and age, allowing you to find the distribution of height for men aged 18-22.
Confidence Interval
An estimate of a population parameter that consists of a range of values bounded by statistics called upper and lower confidence limits, within which the value of the parameter is expected to be located.
Confidence Level
The level of certainty to which an estimate can be trusted. The degree of certainty is expressed as the chance that a true value will be included within a specified range, called a confidence interval.
Confidence Limits
Two statistics that form the upper and lower bounds of a confidence interval.
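The three entries above fit together in a short sketch: from invented sample scores, a 95% confidence level is paired with the common normal-approximation z-value of 1.96 (an assumption of this sketch; small samples would usually use a t-value) to produce upper and lower confidence limits bounding the confidence interval.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of program-outcome scores.
scores = [72, 75, 68, 80, 77, 74, 71, 79, 76, 73]

n = len(scores)
m = mean(scores)
se = stdev(scores) / sqrt(n)  # standard error of the sample mean

z = 1.96                                 # z-value for a 95% confidence level
lower, upper = m - z * se, m + z * se    # the confidence limits
print(f"95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```

The interval (lower, upper) is the range within which the population mean is expected to lie at the stated confidence level.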
Confidentiality
Secrecy. In research, this involves not revealing the identity of research subjects, or any factors that may lead to the identification of individual research subjects.
Confidentiality Form
A written form that assures evaluation participants that information they provide will not be openly disclosed nor associated with them by name. Since an evaluation may entail exchanging or gathering privileged or sensitive information about residents or other individuals, a confidentiality form ensures that the participants' privacy will be maintained.
Confounding
An inability to distinguish the separate impacts of two or more individual variables on a single outcome.
Consensus Building Outcome
The production of a common understanding among participants about issues and programs.
Constraint
A limitation of any kind to be considered in planning, programming, scheduling, implementing, or evaluating programs.
Construct
A concept that describes and includes a number of characteristics or attributes. The concepts are often unobservable ideas or abstractions.
Construct Validity
The extent to which a measurement method accurately represents a construct and produces an observation distinct from that produced by a measure of another construct.
Consultant
An individual who provides expert or professional advice or services, often in a paid capacity.
Contamination
The tainting of members of the comparison or control group with elements from the program. Contamination threatens the validity of the study because the group is no longer untreated for purposes of comparison.
Content Analysis
A set of procedures for collecting and organizing nonstructured information into a standardized format that allows one to make inferences about the characteristics and meaning of written and otherwise recorded material.
Content Validity
The ability of the items in a measuring instrument or test to adequately measure or represent the content of the property that the investigator wishes to measure.
Context (of an evaluation)
The combination of the factors accompanying the study that may have influenced its results. These factors include the geographic location of the study, its timing, the political and social climate in the region at that time, the other relevant professional activities that were in progress, and any existing pertinent economic conditions.
Continuous Variable
A quantitative variable with an infinite number of attributes.
Contract
A written or oral agreement between the evaluator and client that is enforceable by law. It is a mutual understanding of expectations and responsibilities for both parties.
Control Group
A group whose characteristics are similar to those of the program's participants but who do not receive the program services, products, or activities being evaluated. Participants are randomly assigned to either the experimental group (those receiving program services) or the control group. A control group is used to assess the effect of program activities on participants who are receiving the services, products, or activities being evaluated. The same information is collected for people in the control group and those in the experimental group.
Control Variable
A variable that is held constant or whose impact is removed in order to analyze the relationship between other variables without interference, or within subgroups of the control variable.
Convenience Sample
A sample for which cases are selected only on the basis of feasibility or ease of data collection. This type of sample is rarely useful in evaluation and is usually hazardous.
Correlation
A synonym for association or the relationship between variables.
Correlation Coefficient
A numerical value that identifies the strength of relationship between variables.
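One common such coefficient (an assumption of this sketch; the entry does not name a specific one) is Pearson's r, computed here from hypothetical paired observations. It ranges from -1 to 1, with values near the extremes indicating a strong linear relationship.

```python
from math import sqrt
from statistics import mean

# Hypothetical paired observations: hours of program participation (x)
# and an outcome score (y).
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

mx, my = mean(x), mean(y)
# Pearson's r: covariation of x and y, scaled by their spreads.
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
r = num / den
print(round(r, 3))
```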
Cost-Benefit
A criterion for comparing programs and alternatives when benefits can be valued in dollars. Cost-benefit is the ratio of dollar value of benefit divided by cost. It allows comparison between programs and alternative methods.
Cost-Benefit Analysis
An analysis that compares present values of all benefits less those of related costs when benefits can be valued in dollars the same way as costs. A cost-benefit analysis is performed in order to select the alternative that maximizes the benefits of a program.
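A sketch of the comparison described in the two entries above, using an invented upfront cost, an invented stream of yearly dollar benefits, and an assumed 5% discount rate: benefits are converted to present value, then expressed both as a ratio to cost and as a net difference.

```python
# Hypothetical program: one upfront cost, then five years of dollar
# benefits, discounted to present value at an assumed 5% rate.
cost = 100_000.0
yearly_benefits = [30_000.0] * 5
rate = 0.05

# Present value of benefits: each year's benefit discounted back to today.
pv_benefits = sum(b / (1 + rate) ** t
                  for t, b in enumerate(yearly_benefits, start=1))

ratio = pv_benefits / cost   # cost-benefit ratio (> 1 favors the program)
net = pv_benefits - cost     # present value of benefits less costs
print(f"PV of benefits: {pv_benefits:.0f}, ratio: {ratio:.2f}, net: {net:.0f}")
```

In a cost-benefit analysis across several alternatives, the one with the largest net present benefit (or highest ratio, depending on the decision rule) would be selected.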
Cost-Effectiveness
A criterion for comparing alternatives when benefits or outputs cannot be valued in dollars. This relates costs of programs to performance by measuring outcomes in nonmonetary form. It is useful in comparing methods of attaining an explicit objective on the basis of least cost or greatest effectiveness for a given level of cost.
Costs
Inputs, both direct and indirect, required to produce an intervention.
Covariation
The degree to which two measures vary together.
Coverage
The extent to which a program reaches its intended target population.
Cross-Sectional Data
Observations collected on subjects or events at a single point in time.
Cues
The alternative responses to questions that increase or decrease in intensity in an ordered fashion. The interviewee is asked to select one answer to the question.
Cultural Competency
A set of academic and interpersonal skills that allow individuals to increase their understanding and appreciation of cultural differences and similarities within, among, and between groups.

