Coastal Services Center

National Oceanic and Atmospheric Administration

General FAQs


What is remote sensing?

Remote sensing is the measurement or acquisition of information about the Earth by a recording device that is not in physical contact with the Earth. Remote sensing information can be collected in many different ways, including optical (the sun's reflected light), photographic, laser, RADAR, acoustic, and fluorescence sensors. See the Committee on Earth Observation Satellites Glossary for more information.


What is the difference between land use and land cover?

Land cover is the natural landscape recorded as surface components: forest, water, wetlands, urban, etc. Land cover can be documented by analyzing spectral signatures of satellite and aerial imagery.

Land use is the documentation of human uses of the landscape: residential, commercial, agricultural, etc. Land use can be inferred but not explicitly derived from satellite and aerial imagery. There is no spectral basis for land use determination in satellite imagery.


What is the difference between film and digital sensors?

The difference between film and digital sensors is much like the difference between traditional film cameras and today's digital cameras. The major difference between the two is that film capture is a photochemical process and is therefore limited to the visible and near-infrared portions of the electromagnetic spectrum (EMS). In a digital sensor, light is never exposed to film or chemicals. Light is recorded directly as a digital value, making it ideal for working in the computer environment. The digital sensor uses a prism to split light into its individual components, or bands, and records the number of photons (packets of light) in each band. Therefore, digital sensors can collect light from the upper end of the ultraviolet to the thermal portions of the spectrum.
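
As a rough illustration of this idea, a digital image can be thought of as a grid of numeric values, one per band per pixel. The Python sketch below is purely hypothetical; the band names and 8-bit value range are assumptions, not a description of any particular sensor:

    import numpy as np

    # A hypothetical 3-band digital image: 4 x 4 pixels, one digital
    # number (DN) per band per pixel, stored as 8-bit values (0-255).
    bands = ["blue", "green", "red"]  # assumed band set
    image = np.random.randint(0, 256, size=(len(bands), 4, 4), dtype=np.uint8)

    # Each pixel is simply a vector of recorded light intensities,
    # one value per band; no film or chemistry is involved.
    pixel = image[:, 0, 0]
    for name, dn in zip(bands, pixel):
        print(f"{name}: DN = {dn}")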


What are the different platforms for sensors?

Two of the most common sensor platforms are satellites and airplanes. Satellites orbit the Earth and return to the same point on a regular schedule. Airplanes can fly to specific areas and regions at a specific time to capture imagery. Sensors mounted on an airplane can be digital or photographic.


What is a pixel?

The term pixel is derived from the phrase "picture element" and represents the smallest unit of information in an image. Different sensors have different pixel sizes. In geographic terms, a pixel represents an area of ground. The pixel size is governed by the characteristics of the sensor, specifically its instantaneous field of view (IFOV).

The spatial resolution, or pixel size, of an image determines the smallest object or feature detectable by the sensor. Landsat Thematic Mapper (TM) collects 30-meter x 30-meter pixels. This means that anything on the landscape smaller than 30 meters across, or 900 square meters in area, will be generalized and cannot be discerned from other small features in the same area. For example, a Landsat TM image shows much less detail than a scanned aerial photograph with a 1-meter pixel.
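
A minimal sketch of this arithmetic, assuming square pixels whose size equals the ground sample distance (a simplification):

    # Ground area covered by one square pixel.
    def pixel_area_m2(pixel_size_m: float) -> float:
        return pixel_size_m ** 2

    # Landsat TM: 30 m x 30 m pixels -> 900 square meters each.
    print(pixel_area_m2(30))  # 900

    # A 1-meter scanned aerial photo packs 900 pixels into the same
    # ground area covered by a single Landsat TM pixel.
    print(pixel_area_m2(30) / pixel_area_m2(1))  # 900.0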

[Graphic comparing resolutions of satellite imagery and aerial photography]


How is a vector different from a raster data format?

A vector data model is based on points, lines, and areas defined by drawing boundaries around features and labeling or attributing them. The National Wetlands Inventory (NWI) is an example of a vector database. Environmental scientists manually delineate wetlands from aerial photography and attribute them with the type of wetland for use in a vector-based geographic information system (GIS).

The vector data format is well suited for representing linear features and the human landscape. A road is a linear feature that can easily be attributed with size, condition, pavement type, etc. as a vector. Political boundaries are also well handled in a vector data format.

A raster data format divides the Earth's surface into evenly spaced and equally sized units (normally squares), each representing an area. Each unit, or cell (pixel), has an attribute value, such as elevation or land cover.

The raster data format is well suited for representing the natural environment. There are very few natural linear boundaries. Therefore, the raster format is better able to document the natural landscape because each pixel can have a different value associated with it.

At one time, the use of vector or raster defined which software and analysis the user was able to employ, but that is no longer the case. Most major GIS software packages use vector and raster data formats fluidly today.
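
For illustration, here is a toy Python sketch of the two data models; the feature, its attributes, and the grid values are invented for this example:

    # Vector model: geometry plus attributes.
    road = {
        "type": "line",
        "coordinates": [(0.0, 0.0), (1.5, 2.0), (3.0, 2.5)],  # vertices
        "attributes": {"pavement": "asphalt", "lanes": 2},
    }

    # Raster model: an evenly spaced grid in which every cell holds a
    # value, here hypothetical land cover codes
    # (1 = water, 2 = forest, 3 = urban).
    land_cover = [
        [2, 2, 1, 1],
        [2, 3, 3, 1],
        [2, 3, 3, 1],
    ]

    # Each raster cell can take a different value, which suits the
    # continuously varying natural landscape.
    print(road["attributes"]["lanes"])  # 2
    print(land_cover[1][2])             # 3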


Where can I find remote sensing tutorials and information?

There are a number of Web-based tutorials and information pages available on remote sensing and geographic information systems (GIS).


What sensors are available commercially? Government sources? Free sources?

There are a number of commercial and noncommercial sensors and data sources. Here are a few sites to begin your search:

  • Landsat Thematic Mapper (TM) - commercial 30-meter multispectral resolution
  • Landsat 7 - United States Geological Survey (USGS) government 30-meter multispectral and 15-meter panchromatic resolution
  • SPOT - commercial 20-meter multispectral and 10-meter panchromatic
  • RADARSAT - commercial RADAR sensor with resolutions from 5 to 70 meters
  • IKONOS - commercial 4-meter multispectral and 1-meter panchromatic
  • India Remote Sensing - commercial 25-meter multispectral and 5-meter panchromatic imagery
  • SeaWiFS - commercial 1-kilometer multispectral imagery
  • GOES - NOAA government 4-kilometer multispectral imagery
  • MODIS - government 250- and 500-meter multispectral (free) imagery
  • Hyperion - government 30-meter hyperspectral imagery (220 bands from 0.4 to 2.4 micrometers)
  • USGS Earth Resources Observation Systems (EROS) Data Center - Satellite and aerial imagery


How is land cover derived from satellite imagery?

All surfaces reflect, absorb, or transmit incident light. Different materials reflect and absorb different amounts and wavelengths of light along the electromagnetic spectrum. This is the basis for identifying surface components with remote sensing.

High-resolution analog aerial photography can be used to delineate geographic themes of information, such as wetlands. The National Wetlands Inventory (NWI) Program is an example of using aerial photography to delineate wetlands in the landscape by manual interpretation and delineation.

Digital sensors, such as most satellite-based sensors, can collect multiple wavelengths of light in regions of the electromagnetic spectrum that an analog photographic medium cannot capture. It is possible to manipulate and statistically analyze these wavelengths of light to determine unique characteristics of the landscape and ground surface. These characteristics can be turned into information such as land cover.

Digital raster images are analogous to spreadsheets. They are cells filled with meaningful numeric observations, in this case digital values representing the intensity of light reflected from surface materials. Therefore, it is possible to statistically analyze digital imagery to determine land cover. Spectral signatures are developed from the reflective characteristics of each cell, and cells with similar signatures are grouped together to form land cover classes. In a perfect world, each spectral signature would represent a unique landscape component. This is rarely the case, so many processes and applications have been designed to extract information from digital imagery.
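
A minimal sketch of this idea using unsupervised k-means clustering on synthetic pixel values; the band count, cluster count, and data are assumptions for illustration, not any operational land cover method:

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic 6-band image: 50 x 50 pixels of reflectance-like values.
    rng = np.random.default_rng(0)
    image = rng.random((50, 50, 6))

    # Treat every pixel as a 6-value spectral signature (one spreadsheet row).
    pixels = image.reshape(-1, 6)

    # Group pixels with similar signatures; each cluster becomes a
    # candidate land cover class (4 classes assumed here).
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)

    # Reshape back into map form: a raster of class labels.
    class_map = labels.reshape(50, 50)
    print(np.unique(class_map))  # [0 1 2 3]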


What does georeferencing or rectification mean?

Rectification or georeferencing is the process of assigning real-world coordinates (projection and datum) to geographic data to tie it to the Earth. When data are collected by cameras or digital sensors, they have inherent forms of distortion from the sensor and terrain. These various forms of distortion must be removed before images will spatially match other geographically referenced data sets. Once the distortion is removed and real-world coordinates are assigned, the data should accurately represent a given portion of the Earth, allowing for analysis with other properly referenced themes, layers, and images.
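
A simplified sketch of the coordinate-assignment step using a plain affine transform; real rectification also removes the sensor and terrain distortion described above, which this omits, and all numbers below are made up:

    # Map a pixel (column, row) to real-world coordinates using an
    # affine transform: an upper-left origin plus a pixel size.
    def pixel_to_world(col, row, x_origin, y_origin, pixel_size):
        x = x_origin + col * pixel_size
        y = y_origin - row * pixel_size  # rows increase downward
        return x, y

    # Hypothetical georeferenced image: 30-meter pixels, UTM-like origin.
    print(pixel_to_world(0, 0, 500000.0, 4600000.0, 30.0))
    # (500000.0, 4600000.0)
    print(pixel_to_world(100, 50, 500000.0, 4600000.0, 30.0))
    # (503000.0, 4598500.0)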


What does multispectral mean? Hyperspectral?

Multispectral refers to the sensor's spectral resolution. Sensors collecting between 2 and 16 portions (bands) of the electromagnetic spectrum (EMS) are typically considered multispectral. Landsat and SPOT are examples of multispectral sensors; both collect discrete observations over ranges of reflected light, or bands. Landsat collects one band each in the blue, green, red, and near-infrared wavelengths, two in the shortwave infrared, and one in the thermal infrared.

Hyperspectral typically denotes continuous sampling along the EMS with more than 16 bands. Hyperspectral sensors collect a set of contiguous observations across a large range of reflected light. Hyperion is an example of a hyperspectral imaging sensor (based upon the Airborne Visible/Infrared Imaging Spectrometer [AVIRIS]). Hyperion collects 220 bands from the blue to the shortwave infrared wavelengths in equally spaced steps, including the areas of the EMS where light is completely absorbed by atmospheric components. These areas are used to help correct atmospheric effects in the reflected portions of the EMS.
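
The multispectral-versus-hyperspectral contrast can be sketched numerically; the center wavelengths below are assumptions, loosely modeled on the sensors described above:

    import numpy as np

    # Multispectral: a handful of discrete, fairly broad bands
    # (center wavelengths in micrometers, loosely Landsat-TM-like).
    multispectral = [0.485, 0.56, 0.66, 0.83, 1.65, 2.22]

    # Hyperspectral: many contiguous, equally spaced narrow bands
    # across the same range (220 bands assumed, after Hyperion).
    hyperspectral = np.linspace(0.4, 2.4, 220)

    print(len(multispectral), "bands vs", len(hyperspectral), "bands")
    print("step:", round(hyperspectral[1] - hyperspectral[0], 4), "micrometers")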


What does spectral resolution mean?

Spectral resolution is the number and width (wavelength range) of bands, or meaningful portions, of electromagnetic energy detectable by a given sensor. For analog photography, it is defined by the number of emulsion layers in the photographic film (i.e., one for black-and-white photographs and three for color). Because film relies on photochemical processes, its spectral resolution is limited to the visible and low end of the near-infrared portion of the EMS. A digital sensor, by contrast, can have from one to hundreds of channels, ranging from the upper end of the ultraviolet to the thermal infrared for optical sensors.


What does temporal resolution mean?

Temporal resolution refers to how often the same geographic area is revisited by a sensor and is governed by the orbital characteristics of the satellite vehicle. In a sun-synchronous orbit, the satellite travels on repeated pole-to-pole orbits coordinated with the circle of illumination (the hemisphere receiving the sun's incident light), so that it collects imagery at approximately the same local time of day at every point on the Earth.

  • National Oceanic and Atmospheric Administration's (NOAA) Advanced Very High Resolution Radiometer (AVHRR) revisits the same geographic point two times daily at a 1.1-kilometer resolution.
  • Landsat Thematic Mapper revisits the same point every 16 days at a 30-meter resolution.
  • SPOT revisits the same point every 14 days, collecting 20-meter multispectral and 10-meter panchromatic imagery.
  • Space Imaging's IKONOS revisits the same point every 16 days, collecting 4-meter multispectral and 1-meter panchromatic imagery.
  • USGS National Aerial Photography Program (NAPP) is flown every five years and scanned at a 1-meter resolution for green, red, and near infrared or panchromatic bands.

There is a general trend in optical remote sensing: the higher the spatial resolution, the longer the revisit time. There are, however, notable exceptions. Geostationary satellites maintain an orbit fixed over one portion of the Earth's surface. Additionally, some satellites, such as IKONOS and SPOT, have pointable sensors. IKONOS can point away from nadir (the point directly below the sensor) to revisit any point every three and a half days.


What are the advantages of airborne sensors?

Aerial platforms can be modified quickly and easily and are valued for this flexibility. The spatial resolution of aerial photography is governed by the film grain size and the flight altitude: the lower the altitude, the higher the resolution. Aerial sensors can collect higher spatial resolution imagery than is currently allowed by the governments of the world for commercial satellite-based sensors. However, this is rapidly changing.


What are the advantages of space-based sensors?

Depending on their orbital characteristics, space-based sensors can collect data anywhere on the earth. They collect data in specific wavelengths at specific times at specific resolutions. Because they are primarily digital sensors, they can also be calibrated to allow for atmospheric correction, change detection analysis, statistical analysis, and time series analysis.


What is the difference between active and passive sensors?

  • Active sensors pulse energy at a target and measure the return, or reflection. Light Detection and Ranging (LIDAR) and Radio Detection and Ranging (RADAR) are two examples of active sensors.
  • Passive sensors collect the sun's ambient light energy reflected from a surface, with no active pulse of energy from the sensor. Film-based and digital cameras, both aerial and handheld, as well as digital satellite sensors such as Landsat Thematic Mapper, are examples of passive sensors.


What are some useful conversion factors in remote sensing?

  • 3.2808 feet = 1 meter
  • 30 meters = 98.424 feet
  • 1 kilometer = 0.62 miles
  • one 30-meter pixel (900 square meters) = 0.22 acres
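
These factors are easy to apply or double-check in code; a small sketch (the acreage constant assumes 1 acre = 4,046.86 square meters):

    # Handy remote sensing unit conversions from the list above.
    FEET_PER_METER = 3.2808
    MILES_PER_KM = 0.62
    SQ_METERS_PER_ACRE = 4046.86

    print(30 * FEET_PER_METER)             # ~98.4 feet in 30 meters
    print(1 * MILES_PER_KM)                # ~0.62 miles in 1 kilometer
    print((30 * 30) / SQ_METERS_PER_ACRE)  # ~0.22 acres in one 30-m pixel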
