A.1: SAMPLING AREAS ARRAYED ALPHABETICALLY WITHIN TYPES
28 Largest Metropolitan Areas
24 Smaller Metropolitan Areas
24 Rural Areas (Non-Metropolitan Areas)
A program was defined for NSHAPC as a set of services offered to the same group of people at a
single location. To be considered a program, a provider had to offer services or assistance that
were: (1) managed or administered by the agency (i.e., the agency provides the staff and
funding); (2) designed to accomplish a particular mission or goal; (3) offered on an ongoing
basis; (4) focused on homeless persons as an intended population (although not always the only
population); and (5) not limited to referrals or administrative functions.
This definition of "program" was used in metropolitan areas. However, because rural areas often
lack homeless-specific services, the definition was expanded in rural areas to include agencies
serving some homeless people even if this was not a focus of the agency. About one-fourth of
the rural programs in NSHAPC were included as a result of this expanded definition.
NSHAPC covered 16 types of homeless assistance programs, defined as follows:
Emergency shelter programs provide short-term housing on a first-come first-served basis where
people must leave in the morning and have no guaranteed bed for the next night OR provide beds for
a specified period of time, regardless of whether or not people leave the building. Facilities which
provide temporary shelter during extremely cold weather (such as churches) and emergency shelters
or host homes for runaway or neglected children and youth, and victims of domestic violence were
also included.
Transitional housing programs have a maximum stay for clients of two years and offer support
services to promote self-sufficiency and to help them obtain permanent housing. They may target any
homeless subpopulation such as persons with mental illnesses, persons with AIDS, runaway youths,
victims of domestic violence, homeless veterans, etc.
Permanent housing programs for homeless people provide long-term housing assistance with
support services for which homelessness is a primary requirement for program eligibility. Examples
include the Shelter Plus Care Program, the Section 8 Moderate Rehabilitation Program for Single-Room Occupancy (SRO) Dwellings, and the Permanent Housing for the Handicapped Homeless
Program administered by the Department of Housing and Urban Development (HUD). These
programs also include specific set-asides of assisted housing units or housing vouchers for homeless
persons by public housing agencies or others as a matter of policy, or in connection with a specific
program (e.g., the HUD-VA Supported Housing Program, "HUD-VASH"). A permanent housing
program for homeless people does NOT include public housing, Section 8, or federal, state, or local
housing assistance programs for low-income persons that do not include a specific set-aside for
homeless persons, or for which homelessness is not a basic eligibility requirement.
Voucher distribution programs provide homeless persons with a voucher, certificate, or coupon that
can be redeemed to pay for a specific amount of time in a hotel, motel, or other similar facility.
Programs that accept vouchers for temporary accommodation provide homeless persons with
accommodation, usually in a hotel, motel, board and care, or other for-profit facility, in exchange for
a voucher, certificate, or coupon offered by a homeless assistance program.
Food pantry programs distribute uncooked food in boxes or bags directly to low-income people, including homeless people.
Soup kitchen programs include soup kitchens, food lines, and programs distributing prepared
breakfasts, lunches, or dinners. These programs may be organized as food service lines, bag or box
lunches, or tables where people are seated, then served by program personnel. These programs may
or may not have a place to sit and eat the meal.
Mobile food programs visit designated street locations for the primary purpose of providing food to homeless people.
Physical health care programs provide health care to homeless persons, including health screenings,
immunizations, treatment for acute health problems, and other services that address physical health
issues. Services are often provided in shelters, soup kitchens, or other programs frequented by
homeless people.
Mental health care programs provide services for homeless persons to improve their mental or
psychological health or their ability to function well on a day-to-day basis. Specific services may
include case management, assertive community treatment, intervention or hospitalization during a
moment of crisis, counseling, psychotherapy, psychiatric services, and psychiatric medication
monitoring.
Alcohol/drug programs provide services to assist a homeless individual to reduce his/her level of
alcohol or other drug addiction, or to prevent substance abuse among homeless persons. This may
include services such as detoxification services, sobering facilities, rehabilitation programs,
counseling, treatment, and prevention and education services.
HIV/AIDS programs provide services for homeless persons where the services provided specifically
respond to the fact that clients have HIV/AIDS, or are at risk of getting HIV/AIDS. Services may
include health assessment, adult day care, nutritional services, medications, intensive medical care
when required, health, mental health, and substance abuse services, referral to other benefits and
services, and HIV/AIDS prevention and education services.
Drop-in center programs provide daytime services primarily for homeless persons such as television,
laundry facilities, showers, support groups, and service referrals, but do not provide overnight
accommodations.
Outreach programs contact homeless persons in settings such as on the streets, in subways, under
bridges, and in parks to offer food, blankets, or other necessities; to assess needs and attempt to
engage them in services; to offer medical, mental health, and/or substance abuse services; and/or to
offer other assistance on a regular basis (at least once a week) for the purpose of improving their
health, mental health, or social functioning, or increasing their use of human services and resources.
Services may be provided during the day or at night.
Migrant housing is housing that is seasonally occupied by migrating farm workers. During off-season periods it may be vacant and available for use by homeless persons.
Other programs: providers could describe other programs they offered, as long as the programs met
the basic NSHAPC definition of a homeless assistance program. Types of programs actually
identified through the survey include housing/financial assistance (e.g., from Community Action,
county welfare, or housing agencies); Emergency Food and Shelter Program agencies; job training for
the homeless, clothing distribution, and other programs.
National Survey of Homeless Assistance Providers and Clients:
Data Collection Methods
1997
Authors:
Steven Tourkin
Dave Hubble
This paper reports the results of research and analysis undertaken by Census Bureau staff. It has
undergone a more limited review than official Census Bureau publications. This report is released
to inform interested parties of research and to encourage discussion.
I. Background and Objectives
   A. Background
   B. Objectives of this Paper
II. Provider Phase
   A. PSU Design
   B. List Building
   C. Provider Data
III. Client Phase
   A. Provider-Program Selection Phase
   B. Provider Arrangements: MOU/APV Phase
   C. Sampling Procedures
   D. Data Collection
I. BACKGROUND AND OBJECTIVES
A. Background
The 1996 National Survey of Homeless Assistance Providers and Clients (hereinafter
"survey") was designed to provide information about the providers of homeless assistance and
the characteristics of homeless persons who use services based on a statistical sample of 76
metropolitan and nonmetropolitan areas. Data for the survey were collected by the Census
Bureau between October 1995 and November 1996. Analysis of the data is underway. The
survey is being sponsored by 12 Federal agencies(1) under the auspices of the Interagency
Council on the Homeless, a working group of the White House Domestic Policy Council.
These agencies worked extensively to develop the information requirements and definitions
for the surveys, and to provide the Census Bureau guidance in designing the questionnaires
and survey procedures.
The survey is important because no national studies have been conducted to produce
information on the characteristics of persons participating in homeless assistance programs
since a 1987 study by the Urban Institute. The 1996 survey used a methodology that was
similar to the 1987 study. However, it used a larger sample, included nonmetropolitan areas,
and collected more comprehensive information. The 1996 survey also included a wider
variety of locations than the 1987 study in order to more accurately and fully reflect the
characteristics of homeless people who use services nationwide. The 76 geographic areas in the
1996 national sample comprised the 28 largest metropolitan areas, 24 randomly selected
medium and small metropolitan areas, and 24 randomly selected nonmetropolitan areas (small
cities and rural areas).
The survey will not provide a count of the number of people who are homeless. The survey
will provide information about the providers of homeless assistance, the characteristics of the
homeless population who use services, and how this population has changed in metropolitan
areas since the 1987 study. This information is critical for developing effective public policy
responses needed to break the cycle of homelessness.
For example, the survey will:
1. Provide information on the types of programs and services (e.g., housing, food assistance,
health care) available to homeless persons in both metropolitan and nonmetropolitan areas,
including population groups primarily served (e.g., veterans, people with mental illness);
days of operation; occupancy levels; and sources of funding.
2. Provide a comprehensive profile of the homeless population who use services and allow
comparisons of the characteristics of this population with the findings of the 1987 study.
3. Collect additional information related to prevalence of drug use, mental illness, HIV/AIDS,
tuberculosis, and previous episodes of homelessness.
4. Provide information on issues not addressed by the last national study in 1987 such as:
What are the triggering events that precipitate homelessness? Where were homeless
people living before they became homeless?
Methodology
The national survey involved two phases. The first phase--the "provider survey"--was
conducted from October 1995 through October 1996. It involved telephone interviews and a
mail survey of assistance providers in the 76 geographic areas. Included were providers
administering 16 categories of programs, including those that are specifically targeted to
homeless people (e.g., shelters, soup kitchens, and outreach programs), as well as certain
"mainstream" assistance programs that offer services targeted to homeless persons. The
purpose of this survey of service providers is to identify the types of programs and services
available to homeless persons in metropolitan and nonmetropolitan areas and to assess the
emerging continuum of care.
The second phase--the "client survey"--was conducted over a four-week period in late October
and early November 1996. It included interviews with a sample of approximately 4,000
persons who were using services in emergency shelters, soup kitchens, outreach programs,
and other locations where assistance is provided. In addition to providing data on
characteristics of the portion of the homeless population who use services, this phase of the
survey will identify population subgroups and help determine their use of various types of
assistance programs. It will also provide limited comparative data on housed persons with
very low incomes who also rely on soup kitchens and other emergency assistance.
The client survey will produce data on client characteristics at the national level and for
metropolitan versus nonmetropolitan populations. The sample size is not large enough to
produce estimates of client characteristics at the regional or local levels, nor was it designed to
produce a count or other estimates of the number of homeless people.
B. Objectives of This Paper
This paper provides a description of the methods and procedures that the Census Bureau
developed and used to conduct Phase 1 and Phase 2 of the survey. Under the direction of the
sponsoring agencies, Census staff was responsible for the survey operations described in the
sections that follow.
The sponsoring agencies are responsible for data analysis and reports on the survey results.
The issues for analysis discussed among sponsoring agency staff and their contractors
provided direction to the survey design and development of survey definitions.
Census Bureau staff faced many challenges in developing these procedures. While the
following sections provide an overview of the entire survey, the discussion treats the more
difficult procedures in greater detail.
II. PROVIDER PHASE
This section describes the sample design, the development of a complete list of providers in
designated areas, and the collection of data from providers by telephone and mail. Providers
are referred to as agencies and/or organizations. These terms are used interchangeably.
The design of the frame involved five steps, which are discussed in detail below.
A. PSU Design
1. Overview
The sample design for the National Survey of Homeless Assistance Providers and Clients
(NSHAPC) consists of a stratified multistage cluster sample, with 76 geographic areas as
primary sampling units (PSUs) allocated as follows: 52 in metropolitan areas and 24 in
nonmetropolitan areas. The 28 largest metropolitan statistical areas (MSAs) in the
country were included in the sample with certainty because of their large population
size; they are thus self-representing (SR). The remaining 48 sample
PSUs (24 MSAs and 24 non-MSAs) are non-self-representing (NSR) since they are
selected from strata representing other PSUs. The strata in the NSR portion of the sample
are groups of homogeneous areas based on geography and population size.
This section summarizes the procedures used for formation of the PSUs for the entire
nation and for the stratification and selection of the 76 sample PSUs.
2. PSU Formation Rules
The NSHAPC PSU is defined in the metropolitan areas to be the MSA, as defined by the
Office of Management and Budget using the 1990 census results. In the nonmetropolitan
areas, the NSHAPC PSU is generally the Community Action Agency (CAA) catchment
area(2), either the whole or a fraction, or a group of CAA catchment areas. In a few
nonmetropolitan areas without CAAs, the PSU is a county or a group of counties.
Sometimes, the counties were grouped with adjacent CAA catchment areas to form PSUs.
All CAA catchment areas containing both MSA and non-MSA areas were redefined as
PSUs containing only the portion outside the MSA boundary. If the population size
(1990 census population) was too small, that is, lower than 10,000, then counties from
more than one CAA catchment area were collapsed to form the PSU. Otherwise, the
CAA catchment areas define the non-MSA PSUs. CAA catchment areas that were
extremely large geographically were broken up into smaller segments for practical field
travel. These smaller areas were then the PSUs.
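The 10,000-person floor on non-MSA PSUs can be sketched as a simple collapsing pass over catchment areas listed in geographic order; the area names and populations below are hypothetical, not from the survey files.

```python
def form_nonmsa_psus(catchment_areas, floor=10_000):
    """Group adjacent CAA catchment areas (given in geographic order)
    so that every resulting PSU has at least `floor` 1990 census population."""
    psus, current, pop = [], [], 0
    for area in catchment_areas:
        current.append(area["name"])
        pop += area["pop"]
        if pop >= floor:
            psus.append({"areas": current, "pop": pop})
            current, pop = [], 0
    if current:                       # fold a small remainder into the last PSU
        if psus:
            psus[-1]["areas"] += current
            psus[-1]["pop"] += pop
        else:
            psus.append({"areas": current, "pop": pop})
    return psus

# Hypothetical catchment areas in geographic order.
areas = [{"name": "A", "pop": 4_000}, {"name": "B", "pop": 7_000},
         {"name": "C", "pop": 12_000}, {"name": "D", "pop": 3_000}]
psus = form_nonmsa_psus(areas)
```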
3. Stratification and Sample Selection
The list of 28 SR MSAs and all NSR PSUs is contained in Appendix A to this report.
Within the NSR portion, geographic strata were formed to create homogeneous groups
whose characteristics with regard to homeless assistance programs were expected to be
similar. The sample design for the NSR MSA PSUs and the NSR non-MSA PSUs is
described below.
a. The NSR MSA PSUs
The assumption in the NSR MSA portion of the sample was that NSR MSAs of similar
size in the same geographical area of the country are similar with respect to homeless
assistance providers and clients. Therefore, the
United States was stratified into 12 geographical areas within which 2 substrata were
formed based on MSA population size for a total of 24 strata. Within each of these 24
strata, one NSR MSA PSU was selected with probability proportional to the population
size within the stratum.
b. The NSR non-MSA PSUs
The assumption in the NSR non-MSA portion of the sample was that geography,
climate of the area and agriculture intensity may have an effect on the distribution of
homeless assistance programs and clients. Therefore, the objective was to sample from
smaller geographical regions of the United States. The country was divided into 23
strata based on a population count of approximately 2 million per stratum in the NSR
non-MSA PSUs and one additional stratum of areas with large Indian reservations. This
was created in light of the differences in the way Indian tribes administer homeless
assistance programs.
The NSR non-MSA PSUs were selected as follows: within each of the 24 strata, one
PSU was randomly selected with probability proportional to the square root of its
population size.
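Both PPS rules (probability proportional to population for the NSR MSA strata, and to the square root of population for the NSR non-MSA strata) amount to a single probability-proportional-to-size draw per stratum. A minimal sketch, with a made-up stratum (the PSU names and populations are not from the survey files):

```python
import random

def select_pps(psus, size_of):
    """Select one PSU from a stratum with probability
    proportional to size_of(psu) -- a single PPS draw."""
    total = sum(size_of(p) for p in psus)
    r = random.uniform(0, total)      # random point on the cumulative size line
    cum = 0.0
    for p in psus:
        cum += size_of(p)
        if r <= cum:
            return p
    return psus[-1]                   # guard against floating-point rounding

# Hypothetical stratum of NSR PSUs with 1990 census populations.
stratum = [{"name": "PSU-A", "pop": 250_000},
           {"name": "PSU-B", "pop": 100_000},
           {"name": "PSU-C", "pop": 50_000}]

# MSA strata: probability proportional to population.
msa_pick = select_pps(stratum, lambda p: p["pop"])
# Non-MSA strata: probability proportional to the square root of population.
nonmsa_pick = select_pps(stratum, lambda p: p["pop"] ** 0.5)
```

The square-root measure flattens the size differences, so very large rural PSUs do not dominate the selection as strongly as they would under straight population-proportional sampling.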
B. List Building
1. Overview
The initial frame of providers was intended to include names of providers and
'knowledgeable persons and organizations'. This universe was developed from extensive
list development procedures that were implemented over several months. Initial lists of
potential providers were obtained from a variety of sources, intending to account for every
program that served homeless people. These were screened and unduplicated to form one
consolidated list of potential providers in the 76 PSUs. As these organizations were
contacted, a survey definition of "provider" was given to the respondent, and on that basis,
the organization was determined to be a provider (or not) for the purposes of the survey.
Organizations continued to be contacted to update and add to the list of potential providers.
By the end of this process, names, addresses, contacts and telephone numbers of potential
providers were identified in the 76 areas.
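The screening and unduplication of source lists can be sketched with a crude normalization key; the actual matching involved clerical review, and the record field names here are assumptions.

```python
import re

def normalize(name, address):
    """Crude matching key: lowercase, strip punctuation and extra spaces."""
    key = f"{name} {address}".lower()
    key = re.sub(r"[^a-z0-9 ]", "", key)
    return re.sub(r"\s+", " ", key).strip()

def unduplicate(records):
    """Merge source lists into one master file, keeping the first
    occurrence of each (name, address) key."""
    master = {}
    for rec in records:
        key = normalize(rec["name"], rec["address"])
        master.setdefault(key, rec)
    return list(master.values())
```

A key like this catches only near-exact duplicates; as the report notes later, agencies known under multiple names or changed addresses required far more effort to resolve.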
2. Defining 'Provider'
This study focused on a particular group of the population -- homeless persons. The goal
of list building was to identify all the organizations that have programs
that serve homeless persons. The data are limited to the extent that they will exclude
information for persons who use no services at all; however, survey designers felt that it
would be possible to develop a frame of organizations that cover the vast majority of the
target group. The study also recognized that some suburban, and most rural, agencies have
programs that assist poor people generally, a population that may include homeless persons
but is not limited to them.
The NSHAPC included 15 program types, such as shelters and soup kitchens (see
Appendix B of the larger report for a complete listing of programs and definitions), and an
'other' category to capture other programs serving homeless persons. Sponsoring agencies
determined which programs would be included in the survey, and provided specific
definitions for each program.
In general, a "program" was defined as the provision of services or assistance that was:
(1) managed or administered by the agency; (2) designed to accomplish a particular mission
or goal; (3) offered on an ongoing basis; (4) focused on homeless persons as an intended
population (although not always the only population); and (5) not limited to referrals or
administrative functions.
One of the programs was Outreach, which provides services to persons at street locations.
In this way, the study included homeless persons who do not go to any organization
directly to receive services.
"Homeless" was defined in accordance with the Stewart B. McKinney Homeless
Assistance Act of 1987.
The definitions of "program" and "homeless" were given to every agency and organization
during the course of the list building.
The intent of list building was to identify all the organizations or agencies that had one or
more of the distinct programs which they provided directly to the clients. Administrative
offices, which provide funding or guidance, or agencies and organizations which only refer
clients to providers, but offer nothing directly to clients, would not be included.
3. Initial List Development
Census staff attempted to obtain all existing lists of agencies and organizations that provide
the in-scope programs and/or services, from every known source. Staff also obtained the
names and telephone numbers of 'knowledgeable persons' who would be able to identify
potential providers in their areas. The first lists obtained were national lists from federal
agencies that provide funding for the various program types. Subsequent lists were
received from a variety of sources, including statewide homeless and housing advocacy
organizations, state homeless contact persons, local catchment area contact persons,
advocacy groups and national coalitions, national organizations, and knowledgeable
persons.
A complete initial list of potential providers was compiled in several steps.
4. Screening Potential Providers
The potential providers were contacted by telephone, first to screen them and, for
agencies that screened in as providers, to collect additional information.
The screening was accomplished by asking questions to determine if the agency offered
one or more of the programs included in the survey. They were provided definitions of
'homeless' and 'program' as part of the questions. Most agencies were able to respond yes
(screen in) or no (screen out) at this point. If they responded yes, then they were asked if
they provided each of 15 program types, being given each program definition. They were
also allowed to report under an 'other' category. Some agencies realized that they did not
offer any of the programs, and screened out.
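The screen-in/screen-out logic can be sketched as follows; the program-type list is abbreviated and the data structures are assumptions, not the CATI instrument's actual layout.

```python
# Abbreviated; the survey used 15 defined program types plus an 'other' category.
PROGRAM_TYPES = ["emergency shelter", "transitional housing", "soup kitchen",
                 "food pantry", "outreach"]

def screen_agency(offers_any, offered_types, other_description=None):
    """Return the screening outcome for one agency.
    offers_any: the agency's yes/no to the initial screening question.
    offered_types: program types the agency confirms, definition by definition."""
    if not offers_any:
        return {"status": "screened out", "programs": []}
    programs = [t for t in offered_types if t in PROGRAM_TYPES]
    if other_description:                       # the 'other' reporting category
        programs.append(f"other: {other_description}")
    if not programs:                            # realized none of the programs apply
        return {"status": "screened out", "programs": []}
    return {"status": "screened in", "programs": programs}
```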
To better understand the specific reasons that organizations were screening out of scope, a
sample of the screen-outs was recontacted. In the first contact, these organizations had
been given a lengthy explanation of all the criteria to determine whether or not they were a
provider, and asked to answer yes or no. When re-contacted, the organizations were asked
the screening criteria questions separately, instead of collectively (as one long question), to
determine whether they had screened out because of one or more of the following reasons:
5. List Updating
The provider universe was updated in stages. Beginning with the initial group of potential
providers on the unduplicated master file, agencies were contacted and screened. Whether
they met the survey definition or not, they were sent a comprehensive list of potential
providers in their PSU to review, along with a list update sheet. As providers returned list
update sheets (or in some cases, new lists of their own), each agency on the list was
matched against the master file. If it was a newly identified potential provider, the
information was keyed into a file. At set stages during the provider screening, the
following actions were performed:
During the list updating process, staff used an automated check-in system to record the
status of all responses (e.g., list with no changes, list with corrections only, list with new
providers). Telephone followup was conducted with agencies that did not respond to the list
update request to ensure adequate response within each PSU.
6. Difficulties Encountered During Listing
The initial sets of lists obtained in mid-1995 were of limited use. Many were out of date or
contained the names of organizations and providers who ultimately did not meet the survey
definition. Many others were duplicates, according to survey definitions. The initial,
unduplicated file that we used to begin the screening operation did not contain many of the
potential providers that were ultimately identified. Most of the potential providers were
identified during the November-December 1995 List Update process. Because of this, staff
spent much more time matching the list update cases against our up-to-date reference lists
than originally anticipated. The update process was more time-consuming than the initial
list preparation because each provider had to be matched against the universe.
Because of the government furlough in December 1995 and January 1996, and the
resulting delay of the Client Survey from February to October 1996, it was necessary to
conduct an additional List Update process in May-July 1996. This operation identified
many additional potential providers, after matching Update cases to the reference lists and
unduplicating them. This showed that the turnover of providers, in the real world, was
greater than anyone expected. This resulted in more work to update the universe. The high
turnover rate among providers again was evident when providers were selected to
participate in the Client Survey. When we contacted those providers in September-October
1996, we found that many of the original and substitute selections were no longer in-scope
as providers for various reasons.
Other difficulties resulted from real-world variation in how agencies are organized.
Specifically, an agency providing one or more programs can be organized in any of the
following ways:
Attempting to apply definitions of the various program types within this framework
resulted in the potential for agencies and/or programs to be reported more than once, under
the same or slightly different names. The fact that many agencies began, ended or changed
their programs over the course of data collection (from October 1995 through November
1996) further complicated efforts. These two problems -- duplicates and program changes
-- confounded and complicated each stage of data collection and data processing.
C. Provider Data
1. Core data on programs
As mentioned, potential providers were contacted by telephone. Staff used a Computer
Assisted Telephone Interview (CATI) instrument to screen providers. Those who screened
in were asked to report which programs they offered, provide core information on each
program, and identify a contact person who could answer more detailed questions about
each program reported. Providers could report information on more than one program run
by their agency (e.g., the same agency might run four programs: an emergency shelter,
soup kitchen, mobile food program, and specialized HIV/AIDS unit).
The basic information available from the CATI interview for each program reported
includes:
Interviewers made many calls to potential providers to screen them and obtain interviews.
By the end of the process, approximately 94 percent of the agencies were contacted and
interviewed. Approximately one percent were non-interviews because they refused to
participate.
2. Detailed data on each program reported by providers
After the CATI interviewing was completed, a questionnaire specific to each program type
reported was mailed to the contact who was designated during the CATI interview. The
questionnaire asked for information on the needs of clients (of that program) for a great
variety of services, and if needed, information about the level of need for the service
among the agency's clients, probable receipt of service, and source of service. Topics
covered included:
For housing programs only, the survey also asked about the following for unaccompanied
persons and families who used the program:
During CATI, a contact person could have been named for more than one program (usually
within one provider, but in some cases, for more than one provider if the agencies were
related). Questionnaires were grouped by contact person prior to mailout, so that the
designated person would receive all the forms together.
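Grouping questionnaires by designated contact person before mailout can be sketched as follows; the questionnaire records are hypothetical.

```python
from collections import defaultdict

def group_by_contact(questionnaires):
    """Bundle program questionnaires by contact person so each designated
    contact receives all of his or her forms in one mailing."""
    mailouts = defaultdict(list)
    for q in questionnaires:
        mailouts[q["contact"]].append(q["program"])
    return dict(mailouts)
```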
Questionnaires were edited by clerks for accuracy and completeness, and staff telephoned
respondents to resolve incomplete questionnaires or inconsistent answers. Initially,
procedures were set up so that a questionnaire would fail the edit check, and require a call,
if approximately ten percent or more of the items were missing and/or inconsistent. Later,
the threshold was relaxed to reduce the number of calls required.
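The edit-check threshold can be sketched as follows; this sketch counts only missing items (the actual edit also flagged inconsistent answers), and the item names are hypothetical.

```python
def fails_edit(responses, items, threshold=0.10):
    """Flag a questionnaire for telephone followup when the share of
    missing items meets or exceeds the threshold (initially about 10%)."""
    missing = sum(1 for item in items if responses.get(item) in (None, ""))
    return missing / len(items) >= threshold
```

Relaxing the threshold, as the report describes, corresponds to raising the `threshold` argument so fewer questionnaires fail the check and require a call.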
Staff also conducted telephone followup to agency contacts that did not return
questionnaires. Because of a lower than expected response, several thousand cases were
assigned for telephone followup. Census staff ultimately subsampled the non-responding
providers for telephone followup, and did not attempt to obtain responses from
approximately 1,000 of the 6,400 providers (encompassing 2,069 programs). Followup
efforts continued for the remaining providers, who will have weighting adjustments to
represent the providers not selected for followup. When providers were contacted, one of
several outcomes resulted. The provider:
Providers who still had not responded by September 1996 were mailed a letter, signed by
the heads of several homeless advocacy groups, requesting their cooperation. At the end of
this process, the response rate was approximately 70 percent for programs in the mail
survey, excluding those that were found to be out-of-scope.
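The weighting adjustment for the followup subsample works, in outline, like inverse-probability weighting: nonrespondents retained for followup carry extra weight to represent those dropped. A minimal sketch with made-up counts (the survey's actual adjustment cells and factors are not shown in this paper):

```python
def followup_weight(base_weight, n_nonrespondents, n_followed_up):
    """Inflate the weight of a nonrespondent kept in the followup subsample
    so it also represents the nonrespondents not selected for followup."""
    return base_weight * n_nonrespondents / n_followed_up
```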
3. Problems with collecting provider data
Because of many unique situations and the complex ways that providers are organized and
related, respondents could report themselves as providers (or not), and as having one or
more programs (or not), in inconsistent ways. Providers also may have reported different
programs during the CATI and mail surveys. In addition, agencies are often
known under more than one name, and change location(s) more often than many other
kinds of organizations.
For these reasons, a tremendous amount of effort was spent determining the inventory of
providers and programs during CATI interviewing and the mail survey. It often was not
clear which agencies or programs might be duplicates of another during these operations.
These efforts were complicated further by the fact that different respondents from the same
agencies were interviewed during list building, CATI, and mail data collection.
III. CLIENT PHASE
Client sampling and interviewing for the NSHAPC was conducted during the months of
September, October and November 1996 and resulted in the completion of over 4,000
interviews with persons using assistance from homeless providers. This part of the survey
involved three phases: (1) selection of the provider-programs where client sampling and
interviewing would take place; (2) the Memorandum of Understanding (MOU)/Advance
Preparation Visit (APV) phase, conducted from the beginning of September to mid-October;
and (3) the client interviewing phase, conducted from October 18 to November 14, 1996.
These phases are described in more detail below.
A. Provider-Program Selection Phase
1. In Scope Provider-Programs
For this phase, the types of provider-programs believed to contribute to the coverage of
homeless persons were eligible for client interviewing. Based on this, the
following provider-programs were in scope for the NSHAPC client interviewing:
All of these types of provider-programs were in scope for client interviewing in the 52
metropolitan and 24 nonmetropolitan sample areas, with some exceptions for certain food
pantries and street outreach programs. Food pantries in metropolitan areas, as well as
those that deliver food to people's homes or distribute food stamps or vouchers, were not
in scope. Street outreach programs that serve clients primarily from other programs
(e.g., emergency shelters and soup kitchens) were also out of scope. These particular
food pantry and street outreach programs were not believed to be helpful in improving the
coverage of homeless persons.
2. Selection of Provider-Programs for Client Interviewing
Within each of the 76 sample areas, the in-scope provider-programs were sorted by type
of program and by size, where size was measured as the number of services provided in a
specified month, as estimated from the CATI data. A systematic sample of programs was
then selected with probability proportional to size. Due to their sheer size, some programs
were selected multiple times, which would ordinarily require multiple visits to the
program for client interviewing. However, for logistical and safety reasons, the maximum
number of visits was limited to two; programs selected more than twice received an
additional weighting adjustment.
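Systematic selection with probability proportional to size can be sketched as follows. This is a minimal illustration, not the Census Bureau's actual procedure; the function name and program data are hypothetical.

```python
import random

def pps_systematic_sample(programs, n_draws, rng=random):
    """Systematic PPS sampling over a sorted list of (name, size) pairs.
    A program whose size spans more than one sampling interval can be
    hit multiple times, as happened for some large NSHAPC programs."""
    total = sum(size for _, size in programs)
    interval = total / n_draws            # sampling interval on the size scale
    start = rng.uniform(0, interval)      # random start within the first interval
    hit_points = [start + k * interval for k in range(n_draws)]
    cumulative, selections = 0.0, []
    for name, size in programs:           # walk the cumulated size scale
        hits = sum(1 for h in hit_points if cumulative <= h < cumulative + size)
        if hits:
            selections.append((name, hits))
        cumulative += size
    return selections
```

A program "hit" more than twice would, per the text, still be visited only twice and receive a weighting adjustment instead.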
It was not known at what rate selected programs would refuse, be closed, be out of scope,
etc., and therefore, not participate in this phase of the survey. To address this
uncertainty, lists of randomly selected substitute programs were provided to the field
staff. Within each PSU, the next program on the substitution list would be added to the
sample, whenever an originally selected program was dropped from the sample.
B. Provider Arrangements: MOU/APV Phase
The Memorandum of Understanding (MOU)/Advance Preparation Visit (APV) Phase
consisted of visiting the selected survey providers to complete an agreement to participate
in the survey and observe the selected program in operation.
1. Memorandum of Understanding (MOU)
The purpose of the MOU was to establish a formal, written agreement of cooperation
between the Census Bureau and the provider. This agreement outlined the general
procedures for interviewing the clients, the use of provider staff to pay the clients,
payments to the providers, and payments to the clients.
These arrangements included the critical need for a provider staff member to be present at
all times to assist in gaining cooperation with selected clients and to pay the interviewed
clients $10.00 as an incentive. The Census Bureau arranged to use funds from the
sponsoring agencies to pay the provider $200 for the use of their facility and staff, and to
have their staff make the $10.00 payments to the clients.
2. Advance Preparation Visit (APV)
Once there was a completed agreement (MOU), the Senior Field Representative (SFR)
completed an Advance Preparation Visit (APV); this allowed the SFR to see first hand
how the selected program worked on an everyday basis. It was critical to observe the
operation of selected programs in which clients receive services on a flow basis. This
observation provided valuable insights on exactly how clients receive services, answered
operational questions that may not have been clear from the discussion with the provider,
and ultimately provided a foundation for preparing for client sampling and interviewing.
The APV was critical in selecting the sampling procedure to be used for client sampling
and interviewing.
In addition, the APV was used to find out the dates and times that the program would be
operating during the October 18-November 14, 1996 period and how many clients the
program typically served. At this time arrangements for space to interview clients and
any other special procedures needed to conduct client interviewing were also determined.
Using the information on the dates and times that the program would be in operation, a
random day (and a random time, if needed) were selected between October 18 -
November 14. Attempts were made to maintain a uniform distribution of the selected
"sampling reference dates" across the 28 day period.
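One way to keep the selected sampling reference dates roughly uniform across the 28-day window is to favor, for each program, its least-loaded operating day. The balancing rule below is a hypothetical sketch; the appendix does not specify the actual algorithm.

```python
import random
from collections import Counter

def assign_reference_dates(operating_days, seed=None):
    """Assign each program one random sampling reference date (day index
    0-27), choosing among its operating days with the fewest assignments
    so far, to keep the overall distribution of dates roughly uniform.
    `operating_days` maps a program name to its list of operating days."""
    rng = random.Random(seed)
    load = Counter()                  # assignments per day so far
    assignment = {}
    for program, days in operating_days.items():
        least = min(load[d] for d in days)
        assignment[program] = rng.choice([d for d in days if load[d] == least])
        load[assignment[program]] += 1
    return assignment
```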
C. Sampling Procedures
The next phase of the client sampling procedure was to ultimately select a random sample
of clients using the services on the randomly selected date and time. There are many ways
that different types of programs may operate in terms of how their clients come to use their
services. In fact, the flow of clients may differ by provider for the same type of program.
Based on discussions with the provider manager, the APV observations, and information
collected on the selected program, several operational decisions were made concerning the
program configuration and the client sampling (listing and selection) method used to
randomly select the sample clients.
1. Program Configurations
In order to determine which method of sampling clients was best, it was necessary to
understand the flow of clients as they receive services. This flow is called the program
configuration. In the first stage of this process, four major categories of programs were
defined, based on the types of programs in the sample:
Shelter and Housing Programs:
Emergency Shelters
Transitional Housing
Permanent Housing
Migrant Workers' Camp
Voucher for Shelter
Food Programs:
Soup Kitchens
Food Pantries
Outreach Programs:
Street Outreach
Mobile Food
Drop-in Centers:
Drop-in Centers
Within these 4 major categories, a more specific configuration needed to be specified;
this next level of classification differed across the 4 major categories. Suppose, for
example, that an emergency shelter's program configuration needed to be determined. The
additional pieces of information needed include whether clients' "intake hours" are
specific or open-ended, whether admission is on a "first-come, first-served" or
"bed-reserved" basis, and so on. Based on this additional information, the type of program
configuration was determined and used as input in deciding on the best method for
sampling the clients.
Also, if the program was a shelter or housing program, the provider was asked if a roster
of clients was available. In order for the roster to be usable, it had to be current and
complete.
2. Client Sampling/Listing Methods
Once the appropriate program configuration and roster use, if any, were determined, a
more specific method for sampling the clients was chosen. Because there
are many different ways in which clients flow through the provider when receiving
services, many different client sampling methods needed to be developed.
The main distinguishing feature at this stage of the various sampling methods is how the
listing of the clients is accomplished. Listing is the process of accounting for every
eligible client using the program as of the sampling reference date and time for sample
selection purposes. There are 3 major categories of methods: provider listing methods,
SFR listing methods, and special SFR sampling methods.
a. Provider Listing Methods
Typically, provider-generated lists were used for programs where the clients' names can
be recorded either several days prior to client sampling or just before the actual hours of
client sampling. The 3 provider listing methods developed were the overnight roster
method, the advance roster method, and the existing roster method.
i. Overnight Roster Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each night;
- the facility is open very late into the night or all night.
The purpose of this method is to allow all clients to enter during the night, with
client sampling and interviewing conducted very early the following morning.
ii. Advance Roster Method
This client sampling method was used when:
- clients using the program on the sampling reference date are known in
advance;
- clients are the same for at least a week at a time;
- the facility does not keep its own client list or the list is not current or
complete.
iii. Existing Roster Method
This client sampling method was used when:
- clients using the program on the sampling reference date are known in
advance;
- clients are the same for at least a week at a time;
- the facility keeps its own client list, and the list is current, complete and
determined to be usable.
b. SFR Listing Methods
The SFR conducted the listing in programs where the clients are not necessarily the
same each day or night, or where they are not known in advance. The 4 SFR listing
methods developed were the sign-in method, the bed listing method, the chair/seat
listing method, and the line method.
i. Sign-In Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each day or night;
- there are specific hours during which clients may enter the facility;
- the facility routinely uses a sign-in procedure (or agrees to do so), listing the
clients' names as they enter.
ii. Bed Listing Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each night;
- the facility has a limited number of beds that may be occupied on any given
night.
iii. Chair/Seat Listing Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each day;
- the facility has a limited number of chairs/seats that may be occupied at any
given time;
- the facility routinely uses a systematic seating procedure.
For example, a soup kitchen that systematically seats a certain number of clients at a
given meal.
iv. Line Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each day or night;
- there are specific hours during which clients may enter the facility;
- there is not enough time to list the clients' names as they use the service.
For example, most soup kitchens.
c. Special SFR Sampling Methods
i. Fixed Route/Stops Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each time;
- the program operates using fixed stops along fixed routes.
For example, a mobile food van that visits certain stops each day.
ii. Non-fixed Route/Stops Method
This client sampling method was used when:
- clients' names are not necessarily known in advance;
- clients are not necessarily the same each time;
- at least one person travels along the street making spontaneous contact with
clients.
For example, a team which searches for homeless people on the street and attempts
to distribute goods or services to whomever they contact.
The operational difference between these two special SFR sampling methods is that
the Fixed Route/Stops Method confines client sampling to a particular 'stop',
whereas the Non-Fixed Route/Stops Method lists clients for an entire day or block
of time.
iii. Time Interval Method
This client sampling method was used only as a last resort method when:
- it is impossible to predict even a range for the number of clients expected;
- there are specific hours during which clients may receive goods or services or
enter the facility;
- contact with clients is sporadic.
For example, a food pantry that is open from 8:00am-8:00pm, the selected client
sampling hours are 8:00am-2:00pm, the program serves a minimal number of clients
during that time, and the clients enter sporadically.
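The bullet criteria above amount to a decision rule. A compressed sketch follows; the attribute names are hypothetical, and the survey's actual decision process weighed more detail gathered during the APV.

```python
def choose_listing_method(cfg):
    """Map observed program-flow attributes to the matching NSHAPC
    listing/sampling method, checking conditions in rough order of
    specificity. `cfg` is a dict of booleans; keys are hypothetical."""
    if cfg.get("clients_known_in_advance"):
        return "existing roster" if cfg.get("usable_roster") else "advance roster"
    if cfg.get("open_all_night"):
        return "overnight roster"
    if cfg.get("fixed_route"):
        return "fixed route/stops"
    if cfg.get("mobile_outreach"):
        return "non-fixed route/stops"
    if cfg.get("uses_sign_in"):
        return "sign-in"
    if cfg.get("limited_beds"):
        return "bed listing"
    if cfg.get("systematic_seating"):
        return "chair/seat listing"
    if cfg.get("specific_hours"):
        return "line"
    return "time interval"  # last-resort method
```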
3. Client Sampling/Selection Method
Client sample selection was the process of drawing the random sample of clients from
those listed for interviewing purposes. A targeted number of clients were selected from
each provider-program in sample. In the non-MSA PSUs, the target was 8 clients. In
MSA PSUs, the target was 6 clients, except when interviewing was not conducted
immediately following the sampling, in which case a target of 8 clients was used.
On the day of interviewing, the provider was again asked to provide an estimate of the
number of clients that they thought would be using the service that day or night. The
SFR, using the "hit number" worksheet, would then determine which clients were to be
selected for interviewing. The hit numbers were randomly generated. In the event that
more clients showed up than were expected, the worksheets supplied additional hit
numbers. The maximum was set at 9 and 12 sampled clients for the expected 6 and 8
sample clients worksheets, respectively.
As clients flowed through the program, those corresponding to the hit numbers
(including any extra hits) were handed a card indicating that they had been selected to be
interviewed. In most cases, a provider staff member or SFR would briefly explain the
survey and introduce the client to an interviewer.
If substitution was necessary and possible (e.g., the selected client immediately refused
and an appropriate substitute client was still available), then the client immediately
following the originally selected client was taken into the sample.
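The hit-number logic can be sketched as follows. This is a hypothetical reconstruction of the worksheet mechanics (the actual worksheets were pre-generated), and the function name and extra-hit pool are assumptions.

```python
import random

def draw_hit_numbers(expected_clients, target, maximum, seed=None):
    """Pick which positions in the client flow are sampled: `target` hit
    numbers within the expected flow, plus extra hits (up to `maximum`
    total sampled clients) covering clients who arrive beyond the
    expected count."""
    rng = random.Random(seed)
    target = min(target, expected_clients)
    hits = sorted(rng.sample(range(1, expected_clients + 1), target))
    n_extra = maximum - target
    extra_pool = list(range(expected_clients + 1, expected_clients + 1 + 2 * n_extra))
    extras = sorted(rng.sample(extra_pool, n_extra)) if n_extra > 0 else []
    return hits, extras
```

Per the text, the maximum was 12 sampled clients for the 8-client worksheet and 9 for the 6-client worksheet.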
D. Data Collection
Once a client had been selected and the purpose of the survey explained, a Census field
representative (FR) completed an interview that averaged 45 minutes. At the
conclusion of the interview, the respondent was provided $10.00 by the cooperating staff from
the provider.
Census staff were organized into groups to accomplish the sampling and interviewing. A
senior field representative (SFR) managed the selection of each respondent, and up to five
FRs were present to interview selected clients immediately. In this way, the entire visit could
be completed in 1 1/2 to 2 hours. As time permitted, questionnaires were reviewed for
completeness on-the-spot, as there would be no opportunity to recontact the respondent.
One of the most challenging sampling/interview situations involved outreach programs that
served clients during the evening or night. In these cases, the SFR accompanied the outreach
worker(s) to designated stops during the evening and/or night, and the selected clients were
given appointment cards and asked to show up at a designated location to be interviewed in
the morning. The $10.00 incentive worked well in these situations, as most of the selected
clients did show up to be interviewed.
Topics covered in the interview included:
More than 4,200 clients were interviewed nationally, with few substitutions for refusals (or
other reasons) necessary. In metropolitan areas, an average of 6 interviews per program was
completed, and in rural areas, an average of 8 interviews per program was completed.
APPENDIX D: NSHAPC WEIGHTS
This document describes the weighting process for the National Survey of Homeless Assistance
Providers and Clients (NSHAPC). Three major data collection operations and data sets require
weights. These are: (1) the computer-assisted telephone interview (CATI) data set consisting of
service locations offering homeless assistance programs, (2) the mail survey data set containing
detailed information about programs included in NSHAPC, and (3) the client data set based on
in-person interviews with clients of NSHAPC programs and covering characteristics of clients
who use services. The weighting procedures described below produce weights that allow one to
generate nationally representative estimates of homeless assistance service locations (based on
CATI), programs (based on both the CATI and the mail survey), and clients (based on the client
survey).
I. CATI Weights
A. Overview
The NSHAPC sample design started with the selection of 76 primary sampling units (PSUs):
52 in metropolitan areas and 24 in nonmetropolitan areas. In metropolitan areas, the
NSHAPC PSU was the Metropolitan Statistical Area (MSA), as defined by the Office of
Management and Budget using the 1990 Census. In nonmetropolitan areas, the NSHAPC
PSU was usually the Community Action Agency (CAA) catchment area, either the whole or a
fraction, or a group of CAA catchment areas.(3) In a few nonmetropolitan areas without CAAs,
the PSU was a county or a group of counties.
Due to their large population size, the largest 28 MSAs in the country were included in the
sample with certainty, and are therefore self-representing. The remaining 48 PSUs in the
sample (24 MSAs and 24 non-MSAs) were randomly selected in proportion to their
population from strata of PSUs. The strata are groups of homogeneous areas based on region
(northeast, south, midwest, and west) and population size (small and medium sized MSAs).
B. CATI Sample
The objective of the CATI data collection phase was to conduct a census of all providers of
homeless assistance services in the 76 PSUs meeting the criteria set by the sponsors.
Respondents were interviewed by telephone using a CATI protocol. The first two or three
CATI questions determined if the case was in-scope or out-of-scope for the survey. Reasons
for being declared "out-of-scope" all had to do with the organization not operating any
homeless assistance programs at the location called that met the survey's definition. A
program was defined as the provision of services or assistance which: are managed or
administered by the agency (i.e., the agency provides the staff and funding); are designed to
accomplish a particular mission or goal; are offered on an ongoing, regular basis; focus on
homeless persons as an intended target population; and are not a referral service only. These
criteria were used for all cases in MSAs. In the non-MSA PSUs, a less restrictive definition
was used: non-MSA service locations were included if they provided any services to homeless
persons other than referrals.
C. Calculating the CATI Weight
The CATI weight is a service location level weight calculated for each case in the CATI
sample. The factors of this weight were calculated by PSU and/or other cells (i.e., groups of
cases with similar characteristics). Cells were formed to reduce the variance of the weights.
The final CATI weight for each case was calculated as the product of five separate factors: (1)
the CATI base weight, (2) the CATI sub-sampling factor, (3) the CATI self-responders factor,
(4) the CATI undetermined provider status factor, and (5) the CATI in-scope status factor.
Each of these is described below.
1. The CATI Base Weight
Each NSHAPC PSU had a probability of being selected into the sample based on the
population of the PSU. In the self-representing MSAs, the probability of being selected
was one. In the non-self-representing MSAs and non-MSAs, the probability of selecting a
PSU is given by the ratio of the PSU's population to the total population of its stratum,
multiplied by the number of PSUs selected from that stratum.
The CATI base weight for a given PSU is the inverse of the probability of the PSU being
selected: it is equal to one in self-representing MSAs, and to the reciprocal of the ratio
above in all other cases. This factor is the first component of the weight assigned to each
case within the PSU.
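Under a standard probability-proportional-to-size design with $n_h$ PSUs drawn from stratum $h$ (an assumed notation; the symbols are not from the original), these quantities take the form:

```latex
P_{hi} \;=\; n_h \cdot \frac{\mathrm{pop}_{hi}}{\sum_{j \in h} \mathrm{pop}_{hj}},
\qquad
W^{\mathrm{base}}_{hi} \;=\; \frac{1}{P_{hi}},
\quad\text{with } W^{\mathrm{base}}_{hi} = 1 \text{ for self-representing MSAs.}
```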
2. The CATI Sub-Sampling Factor
Of the 17,732 cases identified for CATI processing, all but 4,612 were completed before
the government shutdown in December 1995. Due to the schedule of the survey, tight time
constraints were put on the CATI staff when the government re-opened. There was not
enough time to complete all of the 4,612 unresolved cases, therefore a sub-sample of these
cases was selected for follow-up when operations resumed. All unresolved cases from the
non-self-representing MSAs and non-MSAs were selected for follow-up (i.e., sampled at a
rate of one in one). Cases in self-representing MSAs were sampled at a rate of one in four.
Each case not selected for follow-up in the self-representing MSAs was put "on hold."
These procedures resulted in 1,549 cases being selected for follow-up.
The CATI sub-sampling factor is needed because of this requirement to sample the
unresolved cases after the government shutdown. It was calculated as follows: in self-representing MSAs, "unresolved" cases selected for follow-up received a value of four; all
other cases received a value of one for this factor.
3. CATI Self-Responders Factor
The CATI self-responders factor is needed because some of the unresolved cases in the
self-representing MSAs that had not been selected for follow-up ended up being "resolved"
when the CATI operations were completed: these cases are called self-responders.(4) This
factor of the weight adjusts for such self-responder cases.(5) The resolved/unresolved status
of each case was captured in a final outcome code.
To include these additional cases, a factor was computed and applied to the cases selected
for follow-up and to those that are "self-responders." To calculate this factor, the cases are
divided into three groups: (1) completed CATI cases, (2) cases selected for follow-up (all
unresolved cases in non-self-representing PSUs and sampled unresolved self-representing
PSUs), and (3) cases put on hold (unresolved cases from self-representing PSUs that were
not sampled). Cases in group 1 and those in group 2 that were not from self-representing
PSUs received a CATI self-responders factor of one. The selected cases (group 2) and the
"self-respondent" cases (all from self-representing MSAs) received the following factor: the
weighted number of all unresolved cases in the cell, divided by the weighted number of the
cases selected for follow-up plus the self-responder cases. The "weighted number" is the
value of the weight thus far (i.e., the product of the CATI base weight and the CATI
sub-sampling factor). "Unresolved" cases not selected for follow-up (other than
self-responders) received a factor of zero.
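A cell-level sketch of this adjustment, assuming the factor is the weighted count of all unresolved cases divided by the weighted count of the follow-up and self-responder cases (that reading, the function name, and the group labels are assumptions):

```python
def self_responders_factor(cases):
    """Redistribute the weight of on-hold unresolved cases to the cases
    selected for follow-up and to the self-responders, within one cell.
    `cases` is a list of (weight_so_far, group) pairs for the cell's
    unresolved cases, with group in {"followup", "self", "hold"}."""
    unresolved = sum(w for w, _ in cases)
    covered = sum(w for w, g in cases if g in ("followup", "self"))
    return unresolved / covered
```

A case selected for follow-up already carries the sub-sampling factor of four in its weight; on-hold non-responders receive a factor of zero instead of this ratio.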
The CATI self-responders factor can be calculated at several different levels (or cells) and
is applied to each provider case. At a minimum, the factor had to be calculated at the PSU
level. Lower levels, such as groups of providers with similar characteristics or a location
in central cities or balance of MSAs, could have also been formed. For this analysis, a
variety of variables were explored, including several geographical variables and two
program-configuration variables. The resulting analyses suggested that some cell
definitions appeared to be important for some sub-groups of cases but not others; therefore,
different cell formation strategies were used for different sub-groups of cases.
Final cells were constructed as follows. The first group was in-scope CATI respondents
who provide homeless assistance services at more than one location and should have
reported CATI information for each location in separate interviews but reported the
information for all locations in a single CATI interview. An example would be a CATI
respondent in a central office where all administrative operations occur and who oversees
numerous satellite locations where services are delivered.(6) The second group contains all
the remaining in-scope CATI respondents. In calculating the CATI self-responders factor,
CATI respondents who reported information on several service locations within a single
interview were classified into cells by urban-rural status (central city/balance MSA/non-MSA). For all other cases, the factor was calculated separately by PSU with some PSU
cells combined due to small sample sizes.
4. CATI Undetermined Provider Status Factor
The CATI undetermined provider status factor was computed using the weighted number
of cases where the case status as in- or out-of-scope is known, plus the weighted number of
cases where the status cannot be determined, all divided by the weighted number of cases
where the status is known. The weighted numbers reflect the product of the previous
adjustments (the CATI base weight, the CATI sub-sampling factor, and the CATI self-responders factor) within each cell.
Final cells for this factor were formed using PSU and urban-rural status (central
city/balance MSA/non-MSA), with some cells combined because of small sample sizes.
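As described, this factor is a simple weighted ratio; a one-line sketch (the function name is hypothetical):

```python
def undetermined_status_factor(w_known, w_undetermined):
    """(weighted cases with known in-/out-of-scope status + weighted
    cases with undetermined status) divided by the weighted known-status
    cases: the weight of undetermined cases is spread over the cases
    whose status is known."""
    return (w_known + w_undetermined) / w_known
```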
5. CATI In-Scope Status Factor
The CATI in-scope status factor distinguishes between cases that are considered to be in-scope interviews (and should have a final weight greater than zero) and all other cases.
Cases determined to be in-scope received a value of one for this factor, and all other cases
(those that were out-of-scope, noninterviews, and refusals) received a value of zero.
II. Mail Survey Weights
A. Overview
Detailed program data were collected through a mail survey to programs identified during the
CATI interview. Separate mail survey questionnaires were sent to each homeless assistance
program identified in the CATI survey as of the end of March 1996. Respondents for these
mail surveys were also identified through CATI by asking respondents for the name and
address of the person who could best answer detailed questions about each program at that
service location. The mail surveys were sent to that person.
B. Calculating the Mail Survey Weight
The weight for the NSHAPC mail survey is the product of five factors: (1) the CATI weight,
(2) a non-response follow-up sub-sampling factor, (3) a questionnaire self-responders factor,
(4) a questionnaire non-interview adjustment factor, and (5) a questionnaire in-scope status
factor.
1. CATI Weight
This factor was described in the previous section.
2. The Non-Response Follow-Up Sub-Sampling Factor
Providers received a questionnaire for each program identified in the CATI interview. Due
to high non-response to the mailed questionnaires, a sample of cases was selected for
follow-up. All service locations with at least one program questionnaire outstanding were
eligible for follow-up selection. Sample selection was done at the service location level
and was independent of the number or type of programs for which questionnaires were
outstanding. All cases from non-self-representing MSAs and non-MSAs were selected for
follow-up (i.e., sampled at a rate of one in one). Programs at service locations in self-representing MSAs that were eligible for selection were sampled at a rate of one in two.
Mail survey cases were classified into three groups: (1) program questionnaire returned,
(2) program questionnaire not returned and not selected into the follow-up sample, and (3)
program questionnaire not returned and selected into the follow-up sample. In self-representing MSAs, selected eligible cases received a factor of two. Eligible cases not
selected for follow-up received a factor of one, as did all programs for which a
questionnaire was returned and those cases in non-self-representing MSAs and non-MSAs
(groups 1 and 2).
3. Questionnaire Self-Responders Factor
As with the CATI survey, a questionnaire self-responders factor was necessary because
some programs not sampled for follow-up ended up being completed and returned at a later
date anyway. To include these additional cases, a factor was computed and applied to
cases selected for sub-sampling and to these "self-responders."(7) To calculate this factor,
the cases were grouped into the same three groups described for the non-response follow-up
sub-sampling factor. Programs that returned a questionnaire (group 1), and those that did
not but were selected for the follow-up sample (group 3) and were not from self-representing
MSAs, received a value of one for this factor. The selected group 3 cases from
self-representing MSAs and the "self-respondent" cases (group 2 cases from self-representing
MSAs that had not returned their questionnaires by the time follow-up sampling was done but
returned them later) received a factor equal to the following ratio: the weighted number of
cases with questionnaires outstanding at the time of follow-up sampling, divided by the
weighted number of cases selected for follow-up plus the self-responder cases. All remaining
cases not selected for follow-up (the remaining cases from group 2) received a factor of
zero. Final cells were calculated separately by PSU.
4. Questionnaire Non-Interview Adjustment Factor
The questionnaire non-interview adjustment factor was calculated, within each cell, as the weighted number of all cases (complete interviews plus non-interviews) divided by the weighted number of complete interviews.(8)
The factor was calculated separately by program type (a variable found to be strongly
related to non-response while allowing for a sufficient number of cases in each cell).
Note that for in-scope cases, this factor is equal to the multiplicative inverse of the
proportion of cases in a cell that are complete interviews. It increases weights for
completed cases to compensate for incomplete cases.
5. Questionnaire In-Scope Status Factor
The questionnaire in-scope status factor distinguishes between cases that are considered to
be in-scope interviews (and should have a final weight greater than zero) and all other
cases. Mail survey cases determined to be in-scope received a value of one for this factor,
and all other cases (those that were out-of-scope, noninterviews, and refusals) received a
value of zero.
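Putting the five mail-survey factors together can be sketched as follows. This is an illustration only; the non-interview adjustment follows the multiplicative-inverse description in the text, and the parameter names are hypothetical.

```python
def mail_survey_weight(cati_weight, subsample_factor, self_resp_factor,
                       w_complete_cell, w_noninterview_cell, in_scope):
    """Final mail survey weight as the product of the five factors.
    The non-interview adjustment inflates completed cases' weights by
    the inverse of the weighted completion proportion in the case's
    cell (cells were formed by program type)."""
    noninterview_factor = (w_complete_cell + w_noninterview_cell) / w_complete_cell
    in_scope_factor = 1.0 if in_scope else 0.0
    return (cati_weight * subsample_factor * self_resp_factor
            * noninterview_factor * in_scope_factor)
```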
III. Client Weights
A. Overview
The basic client weight should be used for all analyses intended to represent the universe of
clients of homeless assistance programs across the country during an average week in the
reference period (last two weeks in October 1996 and the first two weeks in November 1996).
Analyses done without this weight will only describe the sample of clients who were
interviewed for NSHAPC and will not represent the nation as a whole.
It is important to understand that this weight should not be used to estimate either the number
of NSHAPC clients or the characteristics of clients drawn from a subset of sampling frames
(i.e., a subset of the provider-programs where NSHAPC identified and interviewed clients).
Analysts interested in characterizing clients who use certain types of programs (e.g.,
emergency shelter stayers) should identify such clients based on their responses to survey
questions, rather than the frame from which they were sampled.
B. Client Sample
The sampling frame for the NSHAPC client interviews (see Appendix A) was developed
based on estimates of the number of service units delivered through ten distinct types of
homeless assistance programs (or frames): emergency shelters, transitional housing,
permanent housing programs for formerly homeless persons, migrant workers camps used to
house homeless persons during the off season, voucher distribution programs for temporary
housing, soup kitchens/meal distribution programs, food pantries (in nonmetropolitan areas
only), drop-in centers, mobile food programs, and street outreach programs. A critical step in
developing a client-based weight involves converting units of service use to units of
service users. This is described in more detail below.
C. Calculating the Client Weight
The NSHAPC client weight is based on five factors: (1) a base weight that reflects the
probability of selecting a provider to visit for client interviewing (based on an estimated
number of service units of a given program type in a given PSU in February 1996 and the
probability of selecting a client to interview), (2) a factor that adjusts for client interviews
having taken place in late October/early November rather than February 1996, as had
originally been planned (a measure of size adjustment), (3) a factor that converts the estimate
of service units in a month to service units in a week (also a measure of size adjustment), (4) a
factor that converts service units to service users (based on the number of service units a client
reports using during the seven-day period preceding his or her interview), and (5) final
adjustment factors.
1. Client Service Base Weight
The Client Service Base Weight was given to the Urban Institute by the Census Bureau.
This base weight takes into account the probability of selecting a program to visit for client
interviewing and the probability of selecting a client to interview. The weight reflects the
total number of service units delivered in February 1996 irrespective of when the client
was actually interviewed.
2. Client Interviewing Timing Adjustment Factor
The purpose of the Client Interviewing Timing Adjustment Factor was to adjust the
relative measure of size of the program. In the CATI phase of the survey, respondents
were asked to estimate the number of clients who used each program at the service location
on an average day in February. The client survey occurred in late October/early November
rather than in February of 1996. On the day the program was visited to conduct client
interviews in late October/early November, therefore, a follow-up question was asked
about the estimated number of people who were going to use the program that day and at
that location (if multiple locations of service distribution were involved). This factor is
equal to the ratio of the estimated number of clients using the program on the day of the
visit to the estimated number of clients using the program on an average day in February
1996.
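Assuming the factor is the simple ratio of the two estimates just described (the variable names below are illustrative, not NSHAPC field names), a minimal sketch:

```python
# Hedged sketch: the timing adjustment factor as the ratio of the
# estimated client count on the day of the interviewing visit to the
# estimated average-day client count for February 1996 (from CATI).

def timing_adjustment_factor(clients_on_visit_day: float,
                             clients_avg_feb_day: float) -> float:
    """Rescale a program's measure of size from February 1996 to the
    late October/early November day the program was actually visited."""
    return clients_on_visit_day / clients_avg_feb_day

# A program serving 80 clients on the visit day, against an estimated
# 100 on an average February day, gets a factor of 0.8.
```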
3. Month-to-Week Adjustment Factor
The original Client Service Base Weight reflected the total estimated number of service
units in a month (February). To convert this figure to a weekly estimate, the weight was
adjusted by a factor of 7/29 (February 1996 had 29 days because 1996 was a leap year).(9)
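Putting the first three factors together, the weekly service-unit weight can be sketched as follows (a simplified illustration; the actual Census Bureau computation is not reproduced here):

```python
# Combine the Client Service Base Weight with the timing adjustment
# factor and the month-to-week factor of 7/29 (February 1996 had 29 days).
MONTH_TO_WEEK = 7 / 29

def weekly_service_unit_weight(base_weight: float,
                               timing_factor: float) -> float:
    """Weekly service-unit weight: monthly base weight, adjusted to the
    fall interviewing period, then scaled from a month to a week."""
    return base_weight * timing_factor * MONTH_TO_WEEK
```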
4. Unduplicating the Client Base Weight
The basic unit of the original Client Service Base Weight was a service unit, rather than a
service user (or client). To the extent that clients of homeless assistance programs use
more than one program during a week, or use the same program more than once, a weight
that reflects service units rather than service users will overrepresent the characteristics of
frequent service users and underrepresent the characteristics of infrequent service users.
To generate an accurate picture of client characteristics, one must convert (reduce) the
client base weight from service units to service users. This is also referred to as
"unduplicating" since several service units may belong to a single service user and are,
therefore, "duplicates" if one is really interested in service users.
To unduplicate the weight, the weight attached to each client in the sample is divided by
the total number of services used during the week prior to their interview.(10) Thus weighted
client responses using this weight represent clients using homeless assistance programs
during an average week in mid-fall 1996.
A potential problem arose when a client reported not having used any homeless assistance
programs in the prior week, which would require dividing the weight by zero (an operation
that is undefined). A decision was needed about how to treat these particular
cases. One option was simply to assign them an overall weight of zero. This obviously
produces a lower numerical estimate (compared to a scenario where their weights are
nonzero) but it effectively excludes from the estimate all clients who are infrequent users
of homeless assistance programs. It also affects estimates of users' characteristics if the
infrequent users differ in some systematic fashion from other clients in the survey.
An alternative approach is to preserve infrequent users in the sample by assigning them
a value of one for the number of service uses in the prior week. Doing so means making
the assumption that each of the service units (reflected in the original Client Service Base
Weight) associated with that client was used by another distinct individual who is also an
infrequent user of program services. Obviously, this yields different characteristics (and a
higher numerical estimate) but one which is likely to reflect more accurately the true
universe of NSHAPC clients.(11) In summary, in generating the service user NSHAPC
weights, infrequent service users have been retained in the sample.
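The unduplication rule described above, including the retention of clients who reported zero service uses, can be sketched as follows (illustrative only):

```python
# Divide each client's weekly service-unit weight by the number of
# service units the client reported using in the prior seven days.
# Clients reporting zero uses are assigned one use rather than being
# dropped, per the decision described in the text.

def unduplicated_weight(weekly_weight: float,
                        services_used_past_week: int) -> float:
    uses = max(services_used_past_week, 1)  # retain infrequent users
    return weekly_weight / uses

# A client with weight 210 who reported 7 service uses represents
# 30 service users; a client reporting 0 uses keeps the full 210.
```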
5. Final Adjustment Factors Applied to Client Weights
A number of special adjustments were made to the client weights. The first adjustment
was confined to some clients interviewed at transitional housing and permanent housing
programs. A number of these clients did not report using any transitional/permanent
housing programs in the week prior to the interview but reported staying in their "own
house, apartment, or room" (including foster and adult group homes) all seven days. This
discrepancy may have resulted from (a) clients in transitional/permanent housing programs
considering their living accommodations to be "their own house, apartment, or room," or
(b) including in the sampling frame housing programs that were not really transitional/
permanent housing programs for homeless or formerly homeless people. In either case,
each night of housing is treated as a "service use" and the base weight for each of these
clients was reduced by an additional factor of one-seventh.
Additional adjustments to the weights were necessary because preliminary analyses revealed that a small number of cases with unusually high weights were driving the results for subpopulations of interest. To mitigate the effect of these cases, all weights were capped at 3,000. As a last step, the weights were rescaled to sum to the final client sample size of 4,207, so that statistical significance tests reflect the actual number of completed client interviews.
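These final adjustments might be sketched as follows (an illustration under the assumptions stated in the text; the flag marking the transitional/permanent housing discrepancy cases is hypothetical, and the sketch does not re-apply the cap after rescaling):

```python
WEIGHT_CAP = 3_000.0
FINAL_SAMPLE_SIZE = 4_207.0  # final client sample size

def finalize_weights(weights, housing_discrepancy_flags):
    """Apply the extra one-seventh factor to flagged transitional/
    permanent housing cases, cap each weight at 3,000, and rescale
    so the weights sum to the final sample size of 4,207."""
    adjusted = [w / 7.0 if flagged else w
                for w, flagged in zip(weights, housing_discrepancy_flags)]
    capped = [min(w, WEIGHT_CAP) for w in adjusted]
    scale = FINAL_SAMPLE_SIZE / sum(capped)
    return [w * scale for w in capped]
```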
1. The 12 Federal agency sponsors include the Departments of Housing and Urban Development, Health and Human Services, Veterans Affairs, Agriculture, Commerce, Education, Energy, Justice, Labor, and Transportation as well as the Social Security Administration and the Federal Emergency Management Agency.
2. CAAs are public agencies administering services to the community. They are knowledgeable about service options for the homeless and often offer those services themselves. Typically, a CAA covers a multi-county "catchment" area or jurisdiction.
4. Several scenarios account for this possibility. In some cases, service providers had received a message from the Census Bureau (prior to the government shutdown) asking them to call back and they did (after the government shutdown); in some cases, an interview may have been initiated prior to the government shutdown but completed afterwards; and finally, as explained later, some CATI respondents reported information for more than one service location in a single CATI interview. In these cases, if one of the "sibling" cases was selected for follow-up, the other siblings were classified as self-responders.
5. One way of handling these cases would have been simply to ignore them and treat them as if no data had been collected because the Census Bureau did not actively pursue them. This option was rejected and the data from these cases are included in the NSHAPC database.
6. Note that the CATI file is constructed in such a way that there is a CATI case record for each distinct service location even though the data for two or more service locations may be combined within a single "parent" record. Thus, tabulations of numbers of service locations and numbers of services of various types are unbiased but data on the number of unique service locations and specific combinations of services at single locations may be affected by these parent-child situations.
7. These self-responding cases could have been dropped. This option was rejected, however, and the data preserved by including this factor of the mail survey weight.
8. Only in-scope CATI cases were included in this calculation.
9. There are several reasons why the final client weight reflects a seven-day period rather than a single day. First, as the next section on unduplicating the weights makes clear, it takes advantage of the program use data collected from clients as part of the NSHAPC interview. Doing so assures that no single day of idiosyncratic program use exercises an undue influence on the weight. Second, it recognizes and compensates for the fact that some people who are homeless on a given day might not be represented if they did not use a program on that day, but are more likely to be represented if a longer time frame is used. One does not therefore underrepresent homeless clients who use services infrequently. Finally, the construction of a seven-day weight parallels the approach taken in the 1987 Urban Institute study of national homelessness (Burt and Cohen 1989).
10. This could be done because respondents were asked about the exact number of service units they used during the preceding seven days for each type of service used in the client frames (breakfast, lunch, and dinner meals at soup kitchens were treated as distinct services).
11. Under this option, one does not assume that the volume of service delivery is equal across all locations (such an assumption would grossly overestimate the number of service users). Geographic (and other) differences in the number of service units delivered are reflected in the original Census Base Weights. In other words, if users of soup kitchens in central cities have many more opportunities to use soup kitchens compared to their counterparts in rural areas, then the Census Base Weight attached to a respondent found in a central city soup kitchen will be larger than the weight attached to a respondent found in a rural soup kitchen.