Land Cover Analysis
General FAQs
What is remote sensing?
Remote sensing is the measurement or acquisition of information about the Earth by a recording device that is not in physical contact with the Earth. Remote sensing information can be collected in many different ways, including optical (the sun's reflected light), photographic, laser, RADAR, acoustic, and fluorescent sensors. See the Committee on Earth Observation Satellites Glossary for more information.

What is the difference between land use and land cover?
Land cover is the natural landscape recorded as surface components: forest, water, wetlands, urban, etc. Land cover can be documented by analyzing the spectral signatures of satellite and aerial imagery. Land use is the documentation of human uses of the landscape: residential, commercial, agricultural, etc. Land use can be inferred, but not explicitly derived, from satellite and aerial imagery; there is no spectral basis for land use determination in satellite imagery.

What is the difference between film and digital sensors?
The difference between film and digital sensors is much the same as the difference between traditional film cameras and the digital cameras of today. The major difference between the two is that film capture is limited to the visible and near-infrared portions of the electromagnetic spectrum (EMS: ultraviolet, visible, infrared, and thermal energy) because it is a photochemical process. In a digital sensor, the light is never exposed to film or chemicals; it is recorded directly as a digital value, making it ideal for working in the computer environment. A digital sensor uses a prism to split light into its individual components, or bands, and records the number of photons (packets of light) in each band. Digital sensors can therefore collect light from the upper end of the ultraviolet to the thermal portions of the spectrum.
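The band-by-band recording a digital sensor performs can be sketched in a few lines. The band names and 8-bit values below are hypothetical, chosen only to illustrate the idea:

```python
# A digital sensor records one digital number (DN) per band for each pixel.
# These band names and DN values are hypothetical, for illustration only.
pixel = {
    "blue": 52,
    "green": 63,
    "red": 41,
    "near_infrared": 140,
}

# The pixel's spectral signature is simply the ordered set of its band values.
signature = [pixel[b] for b in ("blue", "green", "red", "near_infrared")]
print(signature)  # [52, 63, 41, 140]
```

A film photograph collapses this information into emulsion layers; the digital representation keeps each band separate and ready for numerical analysis.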
What are the different platforms for sensors?
Two of the most common sensor platforms are satellites and airplanes. Satellites orbit the Earth and are able to return to the same point on a regular schedule. Airplanes can fly to specific areas and regions at a specific time to capture imagery. Sensors mounted on an airplane can be digital or photographic.

What is a pixel?
A pixel (a term derived from the phrase "picture element") represents the smallest unit of information in an image. Different sensors have different pixel sizes. In geographic terms, a pixel represents an area of ground. The pixel size is governed by the characteristics of the sensor, specifically its instantaneous field of view (IFOV). The spatial resolution, or pixel size, of an image is the smallest object or feature detectable by the sensor. Landsat Thematic Mapper (TM) collects 30-meter by 30-meter pixels. This means that anything on the landscape smaller than a 30-meter by 30-meter area (900 square meters) will be generalized and cannot be discerned from other small features in the same area. For example, a Landsat TM image shows much less detail than a scanned aerial photograph with a 1-meter pixel.

How is a vector different from a raster data format?
A vector data model is based on points, lines, and areas defined by drawing boundaries around features and labeling, or attributing, them. The National Wetlands Inventory (NWI) is an example of a vector database: environmental scientists manually delineate wetlands from aerial photography and attribute them with the type of wetland for use in a vector-based geographic information system (GIS). The vector data format is well suited for representing linear features and the human landscape. A road is a linear feature that can easily be attributed with size, condition, pavement type, etc. as a vector. Political boundaries are also well handled in a vector data format.
A raster data format divides the Earth's surface into evenly spaced, equally sized units (normally squares) representing an area. Each unit (cell, or pixel) has attributes such as elevation or land cover. The raster data format is well suited for representing the natural environment, which contains very few natural linear boundaries; because each pixel can carry a different value, the raster format is better able to document the natural landscape. At one time, the choice of vector or raster defined which software and analyses a user could employ, but that is no longer the case: most major GIS software packages now handle vector and raster data formats fluidly.

Where can I find remote sensing tutorials and information?
There are a number of Web-based tutorials and information pages available on remote sensing and geographic information systems (GIS).
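The raster model described above can be sketched as a small grid of coded cells. The class codes here are hypothetical; the 30-meter pixel size matches Landsat TM:

```python
# A tiny raster: each cell holds a land cover code.
# Codes are hypothetical: 1 = forest, 2 = water, 3 = urban.
grid = [
    [1, 1, 2],
    [1, 3, 2],
    [3, 3, 2],
]

# With a known pixel size, class areas fall out of a simple cell count.
PIXEL_SIZE_M = 30                 # Landsat TM pixels are 30 m x 30 m
cell_area_m2 = PIXEL_SIZE_M ** 2  # 900 square meters per cell

counts = {}
for row in grid:
    for code in row:
        counts[code] = counts.get(code, 0) + 1

forest_area_m2 = counts[1] * cell_area_m2
print(counts)          # {1: 3, 2: 3, 3: 3}
print(forest_area_m2)  # 2700
```

This is why raster suits the natural landscape: every cell can hold its own value, and summary statistics are just arithmetic over the grid.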
What sensors are available commercially? Government sources? Free sources?
There are a number of commercial and noncommercial sensors and data sources. Here are a few sites to begin your search:
How is land cover derived from satellite imagery?
All surfaces reflect, absorb, or transmit incident light, and different materials reflect and absorb different amounts and wavelengths of light along the electromagnetic spectrum. This is the basis for identifying surface components with remote sensing. High-resolution analog aerial photography can be used to delineate geographic themes of information, such as wetlands; the National Wetlands Inventory (NWI) Program is an example of using aerial photography to delineate wetlands in the landscape by manual interpretation and delineation. Digital sensors, such as most satellite-based sensors, can collect multiple wavelengths of light in regions of the electromagnetic spectrum not possible with an analog photographic medium. These wavelengths can be manipulated and statistically analyzed to determine unique characteristics of the landscape and ground surface, and those characteristics can be turned into information such as land cover. Digital raster images are analogous to spreadsheets: they are cells filled with meaningful numeric observations, in this case digital values representing the intensity of light reflected from surface materials. It is therefore possible to statistically analyze digital imagery to determine land cover. Spectral signatures are developed from the reflective characteristics of each cell, and cells are compared and grouped together to form land cover classes. In a perfect world, each spectral signature would represent a unique landscape component; this is rarely the case, so many processes and applications have been designed to extract information from digital imagery.

What does georeferencing or rectification mean?
Rectification, or georeferencing, is the process of assigning real-world coordinates (projection and datum) to geographic data to tie it to the Earth.
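One of the simplest versions of the signature-matching described in the land cover answer above is a minimum-distance classifier: each pixel is assigned to the class whose reference signature it most closely resembles in spectral space. The signatures and band values below are hypothetical, a sketch of the idea rather than a production classifier:

```python
import math

# Hypothetical reference spectral signatures: mean digital number per band
# (green, red, near infrared) for three land cover classes.
signatures = {
    "water":  [30, 20, 10],
    "forest": [40, 30, 120],
    "urban":  [90, 95, 100],
}

def classify(pixel):
    """Assign the class whose reference signature is nearest to the pixel."""
    return min(signatures, key=lambda name: math.dist(pixel, signatures[name]))

# Bright near infrared with dark red is characteristic of vegetation.
print(classify([45, 28, 115]))  # forest
print(classify([28, 22, 12]))   # water
```

Real classification workflows use many more bands, statistically derived signatures, and more sophisticated decision rules, but the core comparison of each cell against candidate signatures is the same.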
When data are collected by cameras or digital sensors, they contain inherent forms of distortion from the sensor and the terrain. These forms of distortion must be removed before images will spatially match other geographically referenced data sets. Once the distortion is removed and real-world coordinates are assigned, the data should accurately represent a given portion of the Earth, allowing for analysis with other properly referenced themes, layers, and images.

What does multispectral mean? Hyperspectral?
Multispectral refers to a sensor's spectral resolution. Sensors collecting between 2 and 16 portions (bands) of the electromagnetic spectrum (EMS) are typically considered multispectral. Landsat and SPOT are examples of multispectral sensors; both collect discrete observations over ranges of reflected light, or bands. Landsat collects one observation in each of the blue, green, red, near-infrared, two shortwave-infrared, and thermal wavelengths. Hyperspectral typically denotes continuous sampling along the EMS with more than 16 bands; hyperspectral sensors collect a set of contiguous observations across a large range of reflected light. Hyperion (based upon the Airborne Visible Infrared Imaging Spectrometer [AVIRIS]) is an example of a hyperspectral imaging sensor. Hyperion collects 224 bands from the blue to the shortwave-infrared wavelengths in equally spaced steps, including the areas of the EMS where light is completely absorbed by atmospheric components; these areas are used to help correct atmospheric effects in the reflected portions of the EMS.

What does spectral resolution mean?
Spectral resolution is defined as the number and width (wavelength) of bands (meaningful portions) of electromagnetic energy detectable by a given sensor. For analog photography, it is defined by the number of emulsion layers in the photographic film (i.e., one for black-and-white photographs and three for color).
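The coordinate assignment described in the rectification answer above is commonly expressed as an affine geotransform: six numbers that map row and column indices to map coordinates (GDAL, for example, uses this convention). The origin and pixel size below are hypothetical values for a north-up raster:

```python
# A six-parameter affine geotransform (GDAL ordering):
# (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)
# Hypothetical values: 30 m pixels, upper-left corner at easting 500000,
# northing 4600000; pixel_height is negative because rows increase southward.
gt = (500000.0, 30.0, 0.0, 4600000.0, 0.0, -30.0)

def pixel_to_map(col, row, gt):
    """Map a (col, row) pixel index to real-world (x, y) coordinates."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

print(pixel_to_map(0, 0, gt))      # (500000.0, 4600000.0), upper-left corner
print(pixel_to_map(100, 200, gt))  # (503000.0, 4594000.0)
```

Once every pixel can be mapped this way, the image can be overlaid and analyzed with any other data set referenced to the same projection and datum.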
For photographic film, spectral resolution is limited by photochemical processes to the visible and the low end of the near-infrared portion of the EMS. A digital sensor, however, can have from one to hundreds of channels, ranging from the upper end of the ultraviolet to the thermal infrared for optical sensors.

What does temporal resolution mean?
Temporal resolution refers to how often the same geographic area is revisited by a sensor. Temporal resolution is governed by the orbital characteristics of the satellite vehicle. A sun-synchronous orbit means that the satellite travels on multiple pole-to-pole orbits coordinated with the circle of illumination (the hemisphere of the sun's incident light), such that the satellite collects at approximately the same local time of day at every point on the Earth on every orbit.
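The revisit schedule of such a satellite is set by its orbital period, which follows from Kepler's third law. As a sketch, assuming a Landsat-like orbital altitude of roughly 705 kilometers (the altitude is an assumption for illustration):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6_371_000  # mean Earth radius, m

altitude_m = 705_000        # assumed Landsat-like orbital altitude
semi_major_axis = EARTH_RADIUS_M + altitude_m

# Kepler's third law for a circular orbit: T = 2 * pi * sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(semi_major_axis**3 / MU_EARTH)
period_min = period_s / 60
orbits_per_day = 86_400 / period_s

print(round(period_min, 1))      # roughly 99 minutes per orbit
print(round(orbits_per_day, 1))  # about 14.6 orbits per day
```

With roughly fourteen and a half orbits per day and the Earth rotating beneath them, successive ground tracks shift westward, which is why a fixed-looking sensor needs many days to revisit the same spot.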
There is a general trend in optical remote sensing: the higher the spatial resolution, the longer the revisit time. There are, however, some notable exceptions. Geostationary satellites have an orbit fixed over one portion of the Earth's surface, so they view the same area continuously. Additionally, some satellites, such as IKONOS and SPOT, have pointable sensors. IKONOS can point off-nadir (away from the point directly beneath the satellite) to revisit any point every three-and-a-half days.

What are the advantages of airborne sensors?
Aerial platforms can be modified quickly and easily and are valued for this flexibility. The spatial resolution of aerial photography is governed by the film grain size and the flight altitude: the lower the altitude, the higher the resolution. Aerial sensors can collect higher spatial resolution imagery than the governments of the world currently allow for commercial satellite-based sensors; however, this is rapidly changing.

What are the advantages of space-based sensors?
Depending on their orbital characteristics, space-based sensors can collect data anywhere on the Earth. They collect data in specific wavelengths at specific times at specific resolutions. Because they are primarily digital sensors, they can also be calibrated to allow for atmospheric correction, change detection analysis, statistical analysis, and time series analysis.

What is the difference between active and passive sensors?
Passive sensors record energy that is naturally available, such as reflected sunlight or emitted thermal energy. Active sensors, such as RADAR and laser (lidar) systems, emit their own energy and measure the portion returned from the surface.
What are some useful conversion factors in remote sensing?
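A few unit conversions come up constantly in remote sensing work; the sketch below collects some standard values (the pixel example assumes Landsat TM's 30-meter pixels):

```python
# Standard unit conversions common in remote sensing work.
METERS_PER_FOOT = 0.3048           # exact by definition
METERS_PER_MILE = 1609.344         # exact (international mile)
SQ_METERS_PER_HECTARE = 10_000.0   # exact
SQ_METERS_PER_ACRE = 4046.8564224  # exact (international acre)

acres_per_hectare = SQ_METERS_PER_HECTARE / SQ_METERS_PER_ACRE

# Example: how many Landsat TM pixels (30 m x 30 m = 900 sq m) per hectare?
pixel_area_m2 = 30 * 30
pixels_per_hectare = SQ_METERS_PER_HECTARE / pixel_area_m2

print(round(acres_per_hectare, 3))   # 2.471
print(round(pixels_per_hectare, 1))  # 11.1
```

Conversions like these are handy when translating pixel counts from a classified raster into the acres or hectares used in land management reports.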