11
Remote Sensing Satellites

Remote sensing is a technology used for obtaining information about the characteristics of an object through the analysis of data acquired from it at a distance. Satellites play an important role in remote sensing. Some of the important and better known applications of satellites in this area include providing information about features of the Earth's surface, such as land cover mapping and the classification of land cover features like vegetation, soil, water and forests. In this chapter, various topics related to remote sensing satellites will be covered, including their principle of operation, the payloads on board these satellites and their use in acquiring images, the processing and analysis of these images using various digital imaging techniques and finally the interpretation of these images for studying different features of Earth for varied applications. The chapter gives a descriptive view of the above-mentioned topics with relevant illustrations wherever needed. Some of the major remote sensing satellite systems will also be introduced towards the end of the chapter.

11.1 Remote Sensing – An Overview

Remote sensing is defined as the science of identifying, measuring and analysing the characteristics of objects of interest without actually being in contact with them. It is done by sensing and recording the energy reflected or emitted by these objects and then processing and interpreting that information. Remote sensing makes use of the fact that every object has a unique characteristic reflection and emission spectrum that can be utilized to identify it. Sometimes, gravitational and magnetic fields are also employed for remote sensing applications. One of the potential advantages that this technology offers is that it enables observations to be made, measurements to be taken and images of phenomena to be produced beyond the limits of normal perception. Remote sensing is widely used by biologists, geologists, geographers, agriculturists, foresters and engineers to generate information on objects on the Earth's land surface, oceans and atmosphere. Applications include monitoring natural and agricultural resources, assessing crop inventory and yield, locating forest fires and assessing the damage caused, mapping and monitoring of vegetation, air and water quality, and so on.

A brief look into the history of remote sensing suggests that this technology dates back to the early 19th century, when photographs of Earth were taken from the ground. The idea of aerial remote sensing emerged in the early 1840s, when pictures were taken from cameras mounted on balloons, pigeons, and so on. During both World Wars, cameras mounted on airplanes and rockets were used to provide aerial views of fairly large surface areas that were invaluable for military reconnaissance. Until the introduction of satellites for remote sensing, aerial remote sensing was the only way of gathering information about Earth. The idea of using satellites for remote sensing applications evolved after observing the images taken by the TIROS weather forecasting satellite. These images showed details of the Earth's surface wherever clouds were not present. The first dedicated remote sensing satellite, the Earth Resources Technology Satellite (ERTS-1, later renamed Landsat-1), was launched in the year 1972. Today satellites have become the main platform for remote sensing applications as they offer significant advantages over other platforms, which include radio-controlled aeroplanes and balloon kits for low altitude applications as well as ladder trucks for ground-based applications. In the paragraphs to follow, a brief comparison of aerial and satellite remote sensing is given.

11.1.1 Aerial Remote Sensing

In aerial remote sensing, as mentioned before, sensors are mounted on aircraft, balloons, rockets and helicopters. Cameras mounted on aircraft have been used to monitor land use practices, locate forest fires and produce detailed and accurate maps of remote and inaccessible locations of the planet. Weather balloons and rockets are used for obtaining direct measurements of the properties of the upper atmosphere. Aerial systems are less expensive and more accessible options as compared to satellite systems. Their main advantage over satellite-based systems is a higher spatial resolution, of around 20 cm or less. However, they have a smaller coverage area and a higher cost per unit area of ground coverage, and they are generally carried out as one-time operations, whereas satellites offer the possibility of continuous monitoring of Earth.

11.1.2 Satellite Remote Sensing

Satellites are the main remote sensing platforms used today and are sometimes referred to as ‘eyes in the sky'. The use of satellites for remote sensing applications has brought a revolution in this field, as they can provide information about vast areas of the Earth's surface on a continuous basis, day and night. Constellations of satellites continuously monitor Earth and provide even minute details of the Earth's surface. This ever-expanding range of geographical and geophysical data helps people to understand their planet better, monitor various parameters more minutely and hence manage and solve problems related to Earth more efficiently. Satellites have become the main platform for carrying out remote sensing activities as they offer a number of advantages over other platforms. Some of these advantages include:

  1. Continuous acquisition of data
  2. Frequent and regular re-visit capabilities resulting in up-to-date information
  3. Broad coverage area
  4. Good spectral resolution
  5. Semi-automated/computerized processing and analysis
  6. Ability to manipulate/enhance data for better image interpretation
  7. Accurate data mapping

A point worthy of mention here is that aerial and satellite-based remote sensing systems use those regions of the electromagnetic spectrum that are not blocked by the atmosphere. These regions are referred to as ‘atmospheric transmission windows'. In this chapter satellite remote sensing will be discussed; hereafter, whenever remote sensing is mentioned, it will refer to satellite remote sensing unless otherwise stated.

11.2 Classification of Satellite Remote Sensing Systems

The principles covered in this section apply equally well to ground-based and aerial platforms but here they will be described in conjunction with satellites. Remote sensing systems can be classified on the basis of (a) the source of radiation and (b) the spectral regions used for data acquisition.

As mentioned before, satellite remote sensing is the science of acquiring information about the Earth's surface by sensing and recording the energy reflected or emitted by the Earth's surface with the help of sensors on board the satellite. Based on the source of radiation, they can be classified as:

  1. Passive remote sensing systems
  2. Active remote sensing systems

Passive remote sensing systems either detect the solar radiation reflected by the objects on the surface of the Earth or detect the thermal or microwave radiation emitted by them. Active remote sensing systems make use of active artificial sources of radiation generally mounted on the remote sensing platform. These sources illuminate the objects on the ground and the energy reflected or scattered by these objects is utilized here. Examples of active remote sensing systems include microwave and laser-based systems. Depending on the spectral regions used for data acquisition, they can be classified as:

  1. Optical remote sensing systems (including visible, near IR and shortwave IR systems)
  2. Thermal infrared remote sensing systems
  3. Microwave remote sensing systems

11.2.1 Optical Remote Sensing Systems

Optical remote sensing systems mostly make use of the visible (0.3–0.7 μm), near IR (0.72–1.30 μm) and shortwave IR (1.3–3.0 μm) wavelength bands to form images of the Earth's surface. The images are formed by detecting the solar radiation reflected by objects on the ground (Figure 11.1) and resemble the photographs taken by a camera. However, some laser-based optical remote sensing systems are also employed, in which a laser beam is emitted from active sources mounted on the remote sensing platform. The target properties are analysed by studying the reflectance and scattering characteristics of the objects to the laser radiation. Optical remote sensing systems employing solar energy come under the category of passive remote sensing systems, while laser-based remote sensing systems belong to the category of active remote sensing systems. Passive optical remote sensing systems work only during the day as they rely on sensing reflected sunlight. They are also weather dependent, because on cloudy days sunlight is not able to reach the Earth's surface.

img

Figure 11.1 Optical remote sensing

Solar energy based optical remote sensing systems work on the principle that different materials reflect and absorb differently at different wavelengths in the optical band; hence the objects on the ground can be differentiated by their spectral reflectance signatures (Figure 11.2) in the remotely sensed images. As an example, vegetation has a very strong reflectance in the green and the near IR bands and strong absorption in the red and the blue spectral bands. Moreover, each species of vegetation has its own characteristic spectral reflectance curve, and vegetation under stress from disease or drought has a different spectral reflectance curve from that of healthy vegetation. Hence, vegetation studies are carried out in the visible and the near IR bands. Minerals, rocks and soil have high reflectance over the whole optical band, with the reflectance increasing at longer wavelengths; their studies are carried out in the 1.3 to 3.0 μm band. Clear water has no reflectance in the IR region but has very high reflectance in the blue band and hence appears blue in colour. Water with vegetation has very high absorption in the blue band and high reflectance in the green band and hence appears greenish in colour. Muddy water appears yellowish-brown due to strong reflectance at those wavelengths. Hence, water studies are made in the visible band. Table 11.1 enumerates the optical bands employed for various applications.

img

Figure 11.2 Spectral reflectance signatures of different objects

Table 11.1 Optical bands employed for various applications

0.45–0.52 μm (blue): sensitive to sedimentation; deciduous/coniferous forest colour discrimination; soil–vegetation differentiation
0.52–0.59 μm (green): green reflectance by heavy vegetation; vegetation vigour; rock–soil discrimination; turbidity and bathymetry in shallow water
0.62–0.68 μm (red): sensitive to chlorophyll absorption; plant species discrimination; differentiation of soil and geological boundaries
0.77–0.86 μm (near IR): sensitive to green biomass and moisture in vegetation; land and water studies; geomorphic studies
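The strong near IR reflectance and strong red absorption of healthy vegetation noted above are commonly exploited through simple band arithmetic. The snippet below is a minimal sketch (with illustrative reflectance values, not from any particular sensor) of the widely used normalized difference vegetation index (NDVI), computed from the red and near IR bands listed in Table 11.1:

```python
import numpy as np

# NDVI = (NIR - red) / (NIR + red): healthy vegetation (high NIR, low red)
# yields values approaching +1; water and bare surfaces give much lower values.
def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero

red = np.array([[0.05, 0.30], [0.04, 0.25]])   # red-band reflectance (illustrative)
nir = np.array([[0.50, 0.35], [0.45, 0.30]])   # near-IR-band reflectance (illustrative)
print(ndvi(nir, red))  # left column (vegetation) ~0.8; right column (bare soil) ~0.1
```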

In the optical band, panchromatic or black and white images can also be taken, where different shades of grey indicate different levels of reflectivity. The most reflective surfaces are light or nearly white in colour while the least reflective surfaces are represented as black.

11.2.2 Thermal Infrared Remote Sensing Systems

Thermal infrared remote sensing systems employ the mid wave IR (3–5 μm) and the long wave IR (8–14 μm) wavelength bands. The imagery here is derived from the thermal radiation emitted by the Earth's surface and objects. As different portions of the Earth's surface are at different temperatures, thermal images provide information on the temperature of the ground and water surfaces and the objects on them (Figure 11.3). As thermal infrared remote sensing systems detect the thermal radiation emitted from the Earth's surface, they come under the category of passive remote sensing systems. The 10 μm band is commonly employed for thermal remote sensing applications as most objects on the surface of the Earth have temperatures around 300 K, and the spectral radiance for a temperature of 300 K peaks at a wavelength of around 10 μm. Another commonly used thermal band is the 3.8 μm band, used for detecting forest fires and other hot objects having temperatures between 500 K and 1000 K.
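These band choices follow directly from Wien's displacement law, which gives the wavelength of peak blackbody emission as a function of temperature. A quick check, as a minimal sketch not tied to any particular sensor:

```python
# Wien's displacement law: the wavelength (in micrometres) at which blackbody
# spectral radiance peaks, for a body at absolute temperature T (kelvin).
WIEN_CONSTANT_UM_K = 2898.0  # Wien's displacement constant, um*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak emission wavelength in micrometres for a blackbody at T kelvin."""
    return WIEN_CONSTANT_UM_K / temperature_k

print(peak_wavelength_um(300.0))  # ~9.7 um: typical Earth surface -> the 10 um band
print(peak_wavelength_um(750.0))  # ~3.9 um: forest fires -> the 3.8 um band
```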

img

Figure 11.3 Thermal remote sensing

Colder surfaces appear darker in raw IR thermal images, but the general remote sensing convention for IR images is to invert the relationship between brightness and temperature so that colder objects appear brighter than hotter ones. Hence, clouds that are colder than the Earth's surface appear in darker shades against the light background of the Earth in raw thermal images, but appear in lighter shades against the darker background in processed thermal images. Also, low lying clouds appear darker than clouds at high altitude in the processed image, as the temperature normally decreases with height. Thermal systems work both during the day and at night as they do not use solar radiation, but they suffer from the disadvantage of being weather dependent. Other than remote sensing satellites, weather forecasting satellites also make extensive use of the thermal IR bands.

11.2.3 Microwave Remote Sensing Systems

Microwave remote sensing systems generally operate in the 1 cm to 1 m wavelength band. Microwave radiation can penetrate through clouds, haze and dust, making microwave remote sensing a weather independent technique. This feature makes microwave remote sensing systems quite attractive as compared to optical and thermal systems, which are weather dependent. Microwave remote sensing systems work both during the day as well as at night as they are independent of the solar illumination conditions. Another advantage that a microwave remote sensing system offers is that it provides unique information on sea wind and wave direction that cannot be provided by visible and infrared remote sensing systems. However, the need for sophisticated data analysis and poorer resolution due to the use of longer wavelength bands are the disadvantages of microwave remote sensing systems.

Shorter microwave wavelength bands are utilized for the analysis of hidden mineral resources as they penetrate through the Earth's surface and the vegetation, whereas longer wavelength bands are utilized for determining the roughness of the various features on the Earth's surface. Microwave remote sensing systems can be both passive as well as active. Passive microwave remote sensing systems work on a concept similar to that of thermal remote sensing systems and detect the microwave radiation emitted from the objects. The characteristics of the objects are then inferred from the received microwave power, which is related to properties such as their temperature, moisture content and physical structure.

Active microwave remote sensing systems provide their own source of microwave radiation to illuminate the target object (Figure 11.4). Images of Earth are formed by measuring the microwave energy scattered by the objects on the surface of the Earth. The brightness of every point on the surface of the Earth is determined by the intensity of the microwave energy scattered back to the radar receiver on the satellite from that point. The intensity of this backscatter depends on physical properties of the surface, such as slope, roughness and the dielectric constant of the surface materials (the dielectric constant depends strongly on the moisture content), on geometric factors, such as the orientation of the objects relative to the radar beam direction, and on the type of land cover (soil, vegetation or man-made objects). The backscattered radiation also depends on the parameters of the incident microwave beam, namely its frequency, polarization and angle of incidence.

img

Figure 11.4 Active microwave remote sensing

Radiometers are examples of passive microwave systems. Real aperture radar, synthetic aperture radar, altimeters and scatterometers are active microwave remote sensing systems.

11.3 Remote Sensing Satellite Orbits

Remote sensing satellites have sun-synchronous subrecurrent orbits at altitudes of 700–900 km, allowing them to observe the same area periodically, typically every two to three weeks. They cover a particular area on the surface of the Earth at the same local time, thus observing it under the same illumination conditions. This is an important factor for monitoring changes between images taken on different dates or for combining images together, as they need not be corrected for different illumination conditions. Generally the atmosphere is clearest in the mornings; hence, to take clear pictures under sufficient solar illumination, remote sensing satellites observe a particular place during the morning (around 10 a.m. local time). As an example, the SPOT satellite has a sun-synchronous orbit with an altitude of 820 km and an inclination of 98.7°. The satellite crosses the equator at 10:30 a.m. local solar time.
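The quoted inclination follows from the requirement that the J2-induced precession of the orbit plane match the Sun's apparent motion of about 0.9856° per day. The sketch below (a standard two-body-plus-J2 approximation, circular orbit assumed) reproduces the SPOT figures:

```python
import math

# J2 nodal-regression model (circular orbit): the orbit plane precesses at
# dOmega/dt = -1.5 * n * J2 * (Re/a)^2 * cos(i). A sun-synchronous orbit needs
# this precession to equal the Sun's apparent motion of ~0.9856 deg/day eastward.
MU = 398600.4418        # Earth's gravitational parameter, km^3/s^2
RE = 6378.137           # Earth's equatorial radius, km
J2 = 1.08263e-3         # Earth's second zonal harmonic
SUN_RATE_DEG_S = 360.0 / 365.2422 / 86400.0  # Sun's apparent motion, deg/s

def sun_sync_inclination_deg(altitude_km: float) -> float:
    a = RE + altitude_km                 # semi-major axis of a circular orbit, km
    n = math.sqrt(MU / a**3)             # mean motion, rad/s
    # Solve -1.5 * n * J2 * (Re/a)^2 * cos(i) = sun rate (converted to rad/s).
    cos_i = -math.radians(SUN_RATE_DEG_S) / (1.5 * n * J2 * (RE / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(sun_sync_inclination_deg(820.0))   # ~98.7 deg, matching the SPOT orbit
```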

Satellite orbits are matched to the capabilities and objectives of the sensor(s) they carry. Remote sensing satellites generally provide information either at the regional level or at the local area level. Regional level remote sensing satellite systems have a resolution of 10 m to 100 m and are used for cartography and terrestrial resources surveying applications, whereas local area level remote sensing satellite systems offer higher resolution and are used for precision agricultural applications, like monitoring the type, health, moisture status and maturity of crops, and for coastal management applications, like monitoring phytoplankton, determining pollution levels, observing bathymetry changes, and so on.

As the satellite revolves around the Earth, the sensors on board see a certain portion of the Earth's surface. The area imaged on the surface is referred to as the swath. The swath width for space-borne sensors generally varies from tens of kilometres to hundreds of kilometres. The satellite's orbit and the rotation of Earth work together to allow the satellite to have complete coverage of the Earth's surface.

11.4 Remote Sensing Satellite Payloads

11.4.1 Classification of Sensors

The main payloads on board a remote sensing satellite system are sensors that measure the electromagnetic radiation emanating or reflected from a geometrically defined field on the surface of the Earth. Sensor systems on board a remote sensing satellite can be broadly classified as:

  1. Passive sensors
  2. Active sensors

A passive system generally consists of an array of sensors or detectors that record the amount of electromagnetic radiation reflected and/or emitted from the Earth's surface. An active system, on the other hand, emits electromagnetic radiation and measures the intensity of the return signal. Both passive and active sensors can be further classified as:

  1. Scanning sensors
  2. Non-scanning sensors

This mode of classification is based on whether the entire field to be imaged is explored in one take as in the case of non-scanning sensors or is scanned sequentially with the complete image being a superposition of the individual images, as in the case of scanning sensors. Figure 11.5 enumerates the various types of sensors on board remote sensing satellites.

img

Figure 11.5 Various types of sensors on board remote sensing satellites

Scanning sensors have a narrow field of view and they scan a small area at any particular time. These sensors sweep over the terrain to build up and produce a two-dimensional image of the surface. Hence they take measurements in the instantaneous field-of-view (IFOV) as they move across the scan lines. The succession of scan lines is obtained due to the motion of the satellite along its orbit. It may be mentioned here that the surfaces are scanned sequentially due to the combination of the satellite movement as well as that of the scanner itself (Figure 11.6). The scanning sensors can be classified as image plane scanning sensors and object plane scanning sensors depending upon where the rays are converged by the lens in the optical system.

img

Figure 11.6 Scanning satellite remote sensing system

A non-scanning sensor views the entire field in one go. While the sensor's overall field-of-view corresponds to the continuous movement of the instantaneous field-of-view in the case of scanning sensors, for non-scanning sensors the overall field-of-view coincides with its instantaneous field-of-view. Figure 11.7 shows the conceptual diagram of a non-scanning satellite remote sensing system.

img

Figure 11.7 Non-scanning satellite remote sensing system

The sensors can be further classified as imaging or non-imaging sensors. Imaging sensors build up images of Earth using substances like silver in the film or by driving an image producing device like a TV or a computer monitor, whereas non-imaging sensors do not build images of Earth in any literal sense as they measure the radiation received from all points in the sensed target area and integrate it.

Before the working principle of various types of sensors is discussed, various sensor parameters will be described to give a better understanding of the working fundamentals of sensors.

11.4.2 Sensor Parameters

Sensor parameters briefly described in the following paragraphs include:

  1. Instantaneous field-of-view (IFOV)
  2. Overall field-of-view
  3. S/N ratio
  4. Linearity
  5. Wavelength band
  6. Swath width
  7. Dwell time
  8. Resolution
  1. Instantaneous field-of-view (IFOV). This is defined as the solid angle from which the electromagnetic radiation measured by the sensor at a given point of time emanates.
  2. Overall field-of-view. This corresponds to the total size of the geographical area selected for observation. In the case of non-scanning sensors, the instantaneous and the total field-of-view are equal and coincide with one another, whereas for scanning sensors, the overall field-of-view is a whole number multiple of the instantaneous field-of-view.
  3. S/N ratio. This defines the minimum power level required by the sensor to identify an object in the presence of noise.
  4. Linearity. Linearity refers to the sensor's response to the varying levels of radiation intensity. The linearity is generally specified in terms of the slope of the sensor's response curve and is referred to as ‘gamma'. A gamma of one corresponds to a sensor with a linear response to radiation. A gamma that is less than one corresponds to a sensor that compresses the dark end of the range, while a gamma greater than one compresses the bright end. Sensors based on solid state circuitry like CCDs are linear over a wide range as compared to other sensors like vidicon cameras.
  5. Wavelength band. Sensors employ three wavelength bands for remote sensing applications: the optical band, the thermal band and the microwave band.
  6. Swath width. The swath width of the sensor is the width of the strip of the Earth's surface imaged by it as the satellite moves along its orbit.
  7. Dwell time. The sensor's dwell time is defined as the discrete amount of time required by it to generate a strong enough signal to be detected by the detector against the noise.
  8. Resolution. Resolution is defined as the ability of the entire remote sensing system (including the lens, antenna, display, exposure, processing, etc.) to render a sharply defined image. Resolution of any remote sensing system is specified in terms of spectral resolution, radiometric resolution, spatial resolution and temporal resolution. These are briefly described as follows:
    1. Spectral resolution. This is determined by the bandwidth of the electromagnetic radiation used during the process. The narrower the bandwidth used, the higher is the spectral resolution achieved. On the basis of the spectral resolution, the systems may be classified as panchromatic, multispectral and hyperspectral systems. Panchromatic systems use a single wavelength band with a large bandwidth, multispectral systems use several narrow bandwidth bands having different wavelengths and hyperspectral systems take measurements in hundreds of very narrow bandwidth bands. Hyperspectral systems are the ones that map the finest spectral characteristics of Earth.
    2. Radiometric resolution. Radiometric resolution refers to the smallest change in intensity level that can be detected by the sensing system. It is determined by the number of discrete quantization levels into which the signal is digitized. The larger the number of bits used for quantization, the better is the radiometric resolution of the system.
    3. Spatial resolution. Spatial resolution is defined as the minimum distance two point features on the ground should have in order to be distinguished as separate objects. In other words, it refers to the size of the smallest object on the Earth's surface that can be resolved by the sensor. Spatial resolution depends upon the instantaneous field-of-view of the sensor and its distance from Earth, as illustrated in the sketch following this list. In terms of spatial resolution, satellite imaging systems can be classified as: low resolution systems (1 km or more), medium resolution systems (100 m to 1 km), high resolution systems (5 m to 100 m) and very high resolution systems (5 m or less). It should be mentioned here that higher resolution systems generally have smaller coverage areas.
    4. Temporal resolution. This is related to the repetitive coverage of the ground by the remote sensing system. It is specified as the number of days in which the satellite revisits a particular place again. Absolute temporal resolution of the satellite is equal to the time taken by the satellite to complete one orbital cycle. (The orbital cycle is the whole number of orbital revolutions that a satellite must describe in order to be flying once again over the same point on the Earth's surface in the same direction.) However, because of some degree of overlap in the imaging swaths of adjacent orbits for most satellites and the increase in this overlap with increasing latitude, some areas of Earth tend to be re-imaged more frequently. Hence the temporal resolution depends on a variety of factors, including the satellite/sensor capabilities, the swath overlap and latitude.
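As a rough illustration of the spatial and radiometric resolution definitions above, the following minimal sketch (with hypothetical sensor numbers) converts an IFOV and orbital altitude into a ground cell size, and a quantization bit depth into a number of grey levels:

```python
def ground_resolution_m(ifov_mrad: float, altitude_km: float) -> float:
    """Ground cell size (m) seen at nadir by a sensor with the given IFOV."""
    # Small-angle approximation: cell size = altitude * IFOV (in radians).
    return altitude_km * 1000.0 * ifov_mrad * 1e-3

def quantization_levels(bits: int) -> int:
    """Number of discrete intensity levels for an n-bit quantizer."""
    return 2 ** bits

print(ground_resolution_m(0.1, 800.0))  # 0.1 mrad IFOV at 800 km -> 80 m cells
print(quantization_levels(8))           # 8-bit data -> 256 grey levels
```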

11.5 Passive Sensors

Passive sensors, as described in earlier paragraphs, record the amount of electromagnetic radiation reflected and/or emitted from the Earth's surface. They do not emit any electromagnetic radiation and hence are referred to as passive sensors. Passive sensors can be further classified as scanning and non-scanning sensors.

11.5.1 Passive Scanning Sensors

The multispectral scanner (MSS) is the most commonly used passive scanning sensor. It operates in a number of different ways and can be categorized into three basic types depending upon the mechanism used to view each pixel. These include optical mechanical scanners, push broom scanners and central perspective scanners.

11.5.1.1 Optical Mechanical Scanner

This is a multispectral radiometer (a radiometer is a device that measures the intensity of the radiation emanating from the Earth's surface) where the scanning is done in a series of lines oriented perpendicular to the direction of the motion of the satellite using a rotating or an oscillating mirror (Figure 11.8). They are also referred to as across-track scanners. As the platform moves forward over the Earth, successive scans build up a two-dimensional image of the Earth's surface. Hence optical mechanical scanners record two-dimensional imagery using a combination of the motion of the satellite and a rotating or oscillating mirror scanning perpendicular to the direction in which the satellite is moving.

img

Figure 11.8 Optical mechanical scanner

These scanners comprise the following subsystems:

  1. An optical system for collecting the radiation from the ground. It comprises reflective telescope systems such as Newtonian and Cassegrain telescopes.
  2. A spectrographic system comprising a dichroic mirror, gratings, prisms or filters for separating the incoming radiation into various wavelength bands.
  3. A scanning system comprising a rotating or an oscillating mirror for scanning the ground.
  4. A detector system comprising various kinds of photodetectors for converting the optical radiation into an electrical signal.
  5. A reference system for the calibration of electrical signals generated by the detectors. It comprises light sources or thermal sources with a constant intensity or temperature.

After passing through the optical system, the incoming reflected or emitted radiation is separated into various wavelength bands with the help of gratings, prisms or filters. Each of the separated bands is then fed to a bank of internal detectors, with each detector sensitive to a specific wavelength band. These detectors detect and convert the energy in each spectral band into an electrical signal. This electrical signal is then converted to digital data and recorded for subsequent computer processing.

Referring to Figure 11.8, the IFOV (C) of the sensor and the altitude of the platform determine the ground resolution cell (D), and thus the spatial resolution. The swath width (F) is determined by the sweep of the mirror (specified as the angular field of view, E) and the altitude of the satellite. Remote sensing satellites orbiting at a height of approximately 1000 km, for instance, have a sweep angle of the order of 10° to 20° to cover a broad region. One of the problems with this technique is that as the distance between the sensor and the target increases towards the edges of the swath, the resolution cells also become larger, introducing geometric distortions in the image. Moreover, the dwell time possible for these sensors is small as they employ a rotating mirror, and this affects the spatial, spectral and radiometric resolution of the sensor.
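To make this geometry concrete, the sketch below (flat-Earth approximation with illustrative numbers) estimates the swath width from the sweep angle and altitude, and shows how the slant range grows towards the swath edge, which is what enlarges the edge cells:

```python
import math

# Swath width from the angular field of view (sweep angle) and altitude, using
# flat-Earth geometry: swath ~ 2 * h * tan(FOV/2). The slant range at the swath
# edge exceeds the nadir altitude, which is why edge cells grow larger.
def swath_width_km(altitude_km: float, fov_deg: float) -> float:
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg / 2.0))

altitude = 1000.0   # km, as in the example above
fov = 15.0          # deg, in the middle of the 10-20 deg range quoted
print(swath_width_km(altitude, fov))                 # ~263 km swath
print(altitude / math.cos(math.radians(fov / 2.0)))  # ~1008.6 km slant range at the edge
```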

The multispectral scanner (MSS), the thematic mapper (TM) and the enhanced thematic mapper (ETM) of Landsat satellites and the advanced very high resolution radiometer (AVHRR) of the NOAA satellites are examples of optical mechanical scanners.

11.5.1.2 Push Broom Scanners

A push broom scanner (also referred to as a linear array sensor or along-track scanner) is a scanner without any mechanical scanning mirror but with a linear array of semiconductor elements located at the focal plane of the lens system, which enables it to record one line of an image simultaneously (Figure 11.9). It has an optical lens through which a line image is detected simultaneously, perpendicular to the direction of motion of the satellite. Each individual detector measures the energy for a single ground resolution cell (D). Thus the distance of the satellite from the Earth's surface and the IFOV of the detectors determine the spatial resolution of the system. A separate linear array is required to measure each spectral band or channel. For every scan line, the energy detected by the individual detectors of the linear array is sampled electronically and then digitally recorded. Charge-coupled devices (CCDs) are the most commonly used detector elements for linear array sensors. The HRV (high resolution visible) sensor of the SPOT satellites and the MESSR (multispectral electronic self-scanning radiometer) of the MOS-1 (marine observation satellite) are examples of linear CCD sensors. The MESSR of MOS-1 has 2048 elements with an interval of 14 μm.

img

Figure 11.9 Push broom scanner

As compared to push broom scanners, optical mechanical scanners offer narrower view angles and small band-to-band registration error. However, push broom scanners have a longer dwell time, and this feature allows more energy to be detected, which improves the radiometric resolution. The increased dwell time also facilitates a smaller IFOV and a narrower bandwidth for each detector. Thus, finer spatial and spectral resolution can be achieved in the case of push broom scanners without impacting the radiometric resolution. Moreover, they are cheaper, lighter and more reliable as they do not have any moving parts, and they also require less power. However, calibration of the detectors is very crucial in the case of push broom sensors to avoid vertical striping in the images.

11.5.1.3 Central Perspective Scanners

These scanners employ either the electromechanical or linear array technology to form image lines, but images in each line form a perspective at the centre of the image rather than at the centre of each line. In this case, during image formation, the sensing device does not actually move relative to the object being sensed. Thus all the pixels are viewed from the same central position in a manner similar to a photographic camera. This results in geometric distortions in the image similar to those that occur in photographic data. In satellite-derived imagery, however, radial displacement effects are barely noticeable because of the much smaller field-of-view relative to the orbital altitude. The early frame sensors used in vidicon cameras such as the return beam vidicon in Landsat-1, 2 and 3 satellites operated from a central perspective.

It may be mentioned here that some scanners have the capability of inclining their viewing axes to either side of the nadir, which is referred to as oblique viewing. In the case of oblique viewing, the median line corresponding to the centre of the field (referred to as the instrumentation track) is offset from the satellite ground track. This concept is further explained in Figure 11.10.

img

Figure 11.10 Oblique viewing

11.5.2 Passive Non-scanning Sensors

These sensors are further subdivided into imaging and non-imaging sensors depending upon the methodology by which the image is acquired.

Passive non-scanning non-imaging sensors include microwave radiometers, magnetic sensors, gravimeters, Fourier spectrometers, etc. A microwave radiometer is a passive device that records the natural microwave emission from Earth. It can be used to measure the total water content of the atmosphere within its field-of-view.

Passive non-scanning imaging sensors include still multispectral and panchromatic cameras and television cameras. Camera systems employ passive optical sensors, a lens system comprising a number of lenses to form an image at the focal plane and a processing unit. The ground coverage of the image taken by them depends on several factors, including the focal length of the lens system, the altitude of the satellite and the format and size of the film. Some of the well-known examples of space cameras are the metric camera flown on board the Space Shuttle by ESA (European Space Agency), the large format camera (LFC) also flown on board the Space Shuttle by NASA (National Aeronautics and Space Administration) and the KFA 1000 on board the Cosmos satellites of Russia. Figure 11.11 shows the LFC camera. It comprises a film magazine, a device for fixing the camera level and for forward motion compensation, a lens and filter assembly and a thermal exposure system.

img

Figure 11.11 LFC camera

Multispectral cameras take images in the visible and the reflective IR bands on separate films and are mainly used for photointerpretation of land surface covers. Panchromatic cameras have a wide field-of-view and hence are used for reconnaissance surveys, surveillance of electrical transmission lines and supplementary photography with thermal imagery.

11.6 Active Sensors

Active sensor systems comprise both a transmitter and a receiver. The transmitter emits electromagnetic radiation of a particular wavelength band, depending upon the intended application, and the receiver senses the same electromagnetic radiation reflected or scattered by the ground. Like passive sensors, active sensors can be further categorized into two types, namely scanning and non-scanning sensors.

11.6.1 Active Non-scanning Sensors

Active non-scanning sensor systems include microwave altimeters, microwave scatterometers, laser distance meters and laser water depth meters. Microwave altimeters or radar altimeters are used to measure the distance between the satellite and the ground surface by measuring the time delay between the transmission of microwave pulses and the reception of the signals scattered back from the Earth's surface. Applications of microwave altimetry include studies of ocean dynamics (such as sea surface currents), the geoid and sea ice. Microwave scatterometers are used to measure wind speed and direction over the ocean surface by sending out microwave pulses along several directions and recording the magnitude of the signals backscattered from the ocean surface. The magnitude of the backscattered signal is related to the ocean surface roughness, which in turn is dependent on the surface wind condition. Some of the satellites having scatterometers as their payloads include Seasat, ERS-1 and 2 and QuikSCAT. Laser distance meters have the same principle of operation as microwave altimeters except that they send laser pulses in the visible or the IR region instead of microwave pulses.
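The altimeter range measurement itself is a simple time-of-flight calculation; the sketch below shows the idea (real altimeters apply atmospheric and instrument corrections, which are omitted here), and laser distance meters work the same way with light pulses:

```python
# Radar altimetry: range from the two-way travel time of a microwave pulse.
C = 299_792_458.0  # speed of light, m/s

def altimeter_range_m(two_way_delay_s: float) -> float:
    """Satellite-to-surface distance from the pulse's round-trip delay."""
    return C * two_way_delay_s / 2.0

# A pulse returning after ~5.34 ms corresponds to a ~800 km orbital altitude.
print(altimeter_range_m(5.34e-3))  # ~800,000 m
```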

11.6.2 Active Scanning Sensors

The most common active scanning sensor used is the synthetic aperture radar (SAR). In synthetic aperture radar imaging, microwave pulses are transmitted by an antenna towards the Earth's surface and the energy scattered back to the sensor is measured (Figure 11.12). SAR makes use of the radar principle to form an image by utilizing the time delay of the backscattered signals.

img

Figure 11.12 Principle of operation of synthetic aperture radar

Generally, oblique or side-looking viewing is used in the case of SAR and is often restricted to one side of the satellite trajectory, either left or right (Figure 11.13). The microwave beam is transmitted obliquely at right angles to the direction of flight and hence the swath (C) covered by the satellite is offset from the nadir. The SAR produces a two-dimensional image of the object. The range (D) refers to the across-track dimension perpendicular to the flight direction, while the azimuth (E) refers to the along-track dimension parallel to the flight direction. The across-track resolution or ground swath resolution depends upon the length of the microwave pulse and the angle of incidence, and hence the slant range. Two distinct targets on the surface will be resolved in the range dimension if their separation is greater than half the pulse length. The azimuth or along-track resolution is determined by the angular width of the radiated microwave beam and the slant range distance. The narrower the beam, the better is the ground resolution. As the beam width is inversely proportional to the size of the antenna, the longer the antenna the narrower is the beam. It is not feasible for a spacecraft to carry the very long antenna required for high resolution imaging of the Earth's surface. To overcome this limitation, the SAR capitalizes on the motion of the satellite to emulate a large antenna (1–5 km) from the small antenna (2–10 m) it actually carries on board. Some of the satellites having SAR sensors include Radarsat, ERS and Seasat.

img

Figure 11.13 Oblique viewing for synthetic aperture radar
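The resolution rules described above can be put into numbers with a minimal sketch (a simplified model with illustrative pulse length, incidence angle and antenna size; the L/2 azimuth rule applies to a fully focused SAR):

```python
import math

# Rule-of-thumb SAR resolutions:
# - ground range resolution: c * tau / (2 * sin(incidence angle))
# - azimuth resolution of a focused SAR: antenna length / 2, independent of range
C = 3.0e8  # speed of light, m/s

def ground_range_resolution_m(pulse_s: float, incidence_deg: float) -> float:
    return C * pulse_s / (2.0 * math.sin(math.radians(incidence_deg)))

def sar_azimuth_resolution_m(antenna_length_m: float) -> float:
    return antenna_length_m / 2.0

print(ground_range_resolution_m(50e-9, 23.0))  # ~19 m for a 50 ns pulse at 23 deg
print(sar_azimuth_resolution_m(10.0))          # ~5 m for a 10 m antenna
```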

SAR can make use of vertical (V) or horizontal (H) polarization for emission and reception. Four types of images can thus be obtained, two with parallel polarization (HH and VV) and two with cross-polarization (HV and VH). It is also possible to use the phase difference between two radar echoes returned by the same terrain and received along two trajectories very close together for reconstructing terrain relief. This technique is referred to as interferometry.

11.7 Types of Images

The data acquired by satellite sensors are digitized, transmitted to ground stations and then processed to construct an image of the Earth's surface. Depending upon the kind of processing used, satellite images can be classified into two types, namely primary and secondary images. Secondary images can be further subcategorized into various types. In this section, various types of satellite images will be discussed in detail.

11.7.1 Primary Images

The raw images taken from the satellite are referred to as primary images. These raw images are seldom utilized directly for remote sensing applications but are corrected, processed and restored in order to remove geometric distortion, blurring and degradation by other factors and to extract useful information from them.

11.7.2 Secondary Images

As mentioned above, the primary images are processed so as to enhance their features for better and precise interpretation. These processed images are referred to as secondary images. Secondary images are further classified as monogenic images and polygenic images, depending upon whether one or more primary images have been used to produce the secondary image.

11.7.2.1 Monogenic Secondary Images

A monogenic image, also referred to as a panchromatic image, is produced from a single primary image by applying operations to it such as enlargement, reduction, error correction and contrast adjustment. Some of the techniques used for producing monogenic images replace each pixel value in the primary image either with the difference between adjacent pixel values (gradient images) or with the most commonly occurring value in its neighbourhood (smoothed images). Monogenic images are mostly displayed as grey scale images or sometimes as single colour images.
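The gradient and smoothing operations mentioned above are easy to sketch; the following minimal example (a synthetic 4 × 4 grey-level array, with a mean filter standing in for the mode filter described in the text) illustrates both:

```python
import numpy as np

# A small synthetic "primary image": two flat regions with an edge between them.
image = np.array([[10, 10, 50, 50],
                  [10, 10, 50, 50],
                  [10, 10, 50, 50],
                  [10, 10, 50, 50]], dtype=float)

# Gradient image: each pixel replaced by the difference with its right
# neighbour, highlighting the boundary between the two grey levels.
gradient = np.abs(np.diff(image, axis=1))

# Smoothed image: each pixel replaced by its 3x3 neighbourhood mean.
neighbourhood_sum = sum(np.roll(np.roll(image, dr, axis=0), dc, axis=1)
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1))
smoothed = neighbourhood_sum / 9.0

print(gradient)
print(smoothed[1:-1, 1:-1])  # interior pixels, where the 3x3 window is valid
```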

In the case of optical images, the brightness of a particular pixel in the image is proportional to the solar radiation reflected by the target. For the thermal and microwave images, the brightness of the pixel in the image is proportional to the thermal characteristics of the object and the microwave radiation scattered or emitted by the object respectively. Figures 11.14 (a) and (b) respectively show an image and its corresponding monogenic secondary image in which the grey scale adjustment has been made in such a way that the darkest grey is seen as black and the lightest grey is seen as white.

img

Figure 11.14 (a) Primary image and (b) corresponding monogenic secondary image

11.7.2.2 Polygenic Secondary Images

Polygenic secondary images are composite images formed by combining two or three primary images in order to make the extraction of information from the image easier and more meaningful. The images used can be either optical, thermal or microwave images. Polygenic images are further classified as multispectral images or multitemporal images depending upon whether the primary images used for generating the polygenic image are acquired simultaneously or at different times.

In multispectral images, three images taken in different spectral bands are each assigned a separate primary colour. Depending upon the assignment of the primary colours to these images, they can be further classified as natural colour composite images, false colour composite images and true colour composite images.

In a true colour composite image, the spectral bands correspond to the three primary colours and each is assigned a display colour the same as its own colour: the R colour of the display is assigned to the red band, the G colour to the green band and the B colour to the blue band, resulting in images similar to those seen by the human eye. Hence, a true colour composite image always uses the red, green and blue spectral bands and assigns each the same display colour as the spectral band. For example, the image in Figure 11.15 (a) is a true colour composite image of San Francisco Bay taken by the Landsat satellite. This image is made by assigning the R, G and B colours of the display to the red, green and blue wavelength bands (bands 3, 2 and 1) respectively of the Landsat thematic mapper. Hence, the colours of the image resemble what is observed by the human eye. The salt ponds appear green or orange-red due to the colour of the sediments they contain. The urban areas in the image, Palo Alto and San Jose, appear grey and the vegetation is either dark green (trees) or light brown (dry grass).

img

Figure 11.15 (a) True colour composite and (b) False colour composite images of San Francisco Bay taken by the Landsat satellite (Reproduced by permission of © NPA Ltd., www.npagroup.com). (c) Natural colour composite image taken by the SPOT satellite (Reproduced by permission of SPOT Image – © CNES). The images shown in Figure 11.15 are the grey scale versions of the original images and therefore various colours have been reproduced in corresponding grey shades. Original images are available on the companion website at www.wiley.com/go/maini

If the spectral bands in the image do not correspond to the three primary colours, the resulting image is called a false colour composite image. Hence, the colour of an object in the displayed image has no resemblance to its actual colour. It may be mentioned that microwave radar images can also be used in such composites. There are many schemes for producing false colour composite images, with each one best suited for a particular application. Figure 11.15 (b) shows a false colour composite image corresponding to the true colour composite image shown in Figure 11.15 (a). Here the red colour is assigned to the near IR band, the green colour to the red band and the blue colour to the green band. This combination is best suited for the detection of various types of vegetation. Here the vegetation appears in different shades of red (as it has a very strong reflectance in the near IR band) depending upon its type and condition. This image also provides a better discrimination between the greener residential areas and the other urban areas.

Some sensors do not have one or more of the three visible bands. In this case, the available spectral bands may be combined in such a way that the appearance of the displayed image resembles a visible colour photograph. Such an image is termed a natural colour composite image. As an example, the SPOT HRV multispectral sensor does not have a blue band; its three bands correspond to the green, red and near IR bands. These bands are combined in various proportions and assigned the red, green and blue display colours to produce natural colour images, as shown in Figure 11.15 (c).
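Since the composite types above differ only in which band is routed to which display channel, a single helper covers them all. A minimal sketch, where the band arrays and their values are illustrative placeholders for co-registered, 0–1 scaled band data:

```python
import numpy as np

def composite(red_channel, green_channel, blue_channel):
    """Stack three band arrays into an RGB image of shape (rows, cols, 3)."""
    return np.dstack([red_channel, green_channel, blue_channel])

# Placeholder band arrays standing in for real co-registered sensor bands:
band_blue = np.random.rand(4, 4)
band_green = np.random.rand(4, 4)
band_red = np.random.rand(4, 4)
band_nir = np.random.rand(4, 4)

true_colour = composite(band_red, band_green, band_blue)   # R<-red, G<-green, B<-blue
false_colour = composite(band_nir, band_red, band_green)   # R<-near IR: vegetation shows red
```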

Multitemporal images, as mentioned earlier, are secondary images produced by combining two or more primary images taken at different times. Each primary image taken at a particular time is assigned one colour and the various primary images are then combined together. The resultant image is referred to as a multitemporal image. Multitemporal images are particularly helpful in detecting land cover changes over a period of time. Figure 11.16 shows a multitemporal image formed from two images of the River Oder near Frankfurt (Oder) taken by the SAR sensor on the ERS satellite on 21 July 1997 and 6 August 1997. The picture highlights the areas flooded on 6 August 1997 as compared to those on 21 July 1997.

img

Figure 11.16 Multitemporal image (Reproduced by permission of © ESA 1997. Processed by Eurimage ESA ESRIN Earth Watching Team). The image shown in Figure 11.16 is the greyscale version of the original image in colour. The original image is available on the companion website at www.wiley.com/go/maini

11.8 Image Classification

The processed satellite images are classified using various techniques in order to categorize the pixels in the digital image into one of several land cover classes. The categorized data are then used to produce thematic maps of the land cover present in the image. Normally, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for categorization. The objective of image classification is to identify and portray, as a unique grey level (or colour), the features occurring in an image in terms of the object or type of land cover they actually represent on the ground. There are two main classification methods, namely:

  1. Supervised Classification
  2. Unsupervised Classification

With supervised classification, the land cover types of interest (referred to as training sites or information classes) in the image are identified. The image processing software system is then used to develop a statistical characterization of the reflectance for each information class. Once a statistical characterization has been achieved for each information class, the image is then classified by examining the reflectance for each pixel and making a decision about which of the signatures it resembles the most.
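One simple way to implement the 'most resembled signature' decision is a minimum-distance-to-means classifier; the sketch below (with illustrative class-mean reflectances, not taken from any particular sensor) assigns each pixel to the nearest training-class mean:

```python
import numpy as np

def classify(pixels, class_means):
    """pixels: (n, bands); class_means: (k, bands). Returns a class index per pixel."""
    # Euclidean distance from every pixel vector to every class mean signature.
    distances = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return distances.argmin(axis=1)

# Mean reflectances per band, as might be derived from training sites:
means = np.array([[0.05, 0.08, 0.04],    # class 0: water
                  [0.07, 0.15, 0.45],    # class 1: vegetation
                  [0.30, 0.32, 0.35]])   # class 2: bare soil
pixels = np.array([[0.06, 0.09, 0.05], [0.08, 0.14, 0.50]])
print(classify(pixels, means))  # -> [0 1]
```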

Unsupervised classification is a method that examines a large number of unknown pixels and divides them into a number of classes on the basis of natural groupings present in the image values. Unlike supervised classification, unsupervised classification does not require analyst-specified training data. The basic principle here is that data values within a given class should be close together in the measurement space (that is have similar grey levels), whereas for different classes these values should be comparatively well separated (that is have very different grey levels). Unsupervised classification is becoming increasingly popular with agencies involved in long term GIS (geographic information system) database maintenance. Unsupervised classification is useful for exploring what cover types can be detected using the available imagery. However, the analyst has no control over the nature of the classes. The final classes will be relatively homogeneous but may not correspond to any useful land cover classes.
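The natural-grouping idea behind unsupervised classification is commonly implemented with clustering algorithms such as k-means; the following is a bare-bones sketch, where the number of classes k and the sample pixel values are arbitrary analyst choices:

```python
import numpy as np

def kmeans(pixels, k, iterations=20, seed=0):
    """Cluster pixel vectors (n, bands) into k spectral classes."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iterations):
        # Assign every pixel to its nearest cluster centre...
        labels = np.linalg.norm(pixels[:, None, :] - centres[None, :, :],
                                axis=2).argmin(axis=1)
        # ...then move each centre to the mean of its assigned pixels.
        centres = np.array([pixels[labels == j].mean(axis=0)
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels, centres

labels, centres = kmeans(np.random.rand(100, 4), k=3)  # 100 pixels, 4 spectral bands
```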

11.9 Image Interpretation

Extraction of useful information from the images is referred to as image interpretation. Interpretation of optical and thermal images is more or less similar. However, interpretation of microwave images is quite different. In this section various techniques used to interpret different types of images are described.

11.9.1 Interpreting Optical and Thermal Remote Sensing Images

These images mainly provide four types of information:

  1. Radiometric information
  2. Spectral information
  3. Textural information
  4. Geometric and contextual information

Radiometric information corresponds to the brightness, intensity and tone of the images. Panchromatic optical images are generally interpreted to provide radiometric information. Multispectral or colour composite images are the main sources of spectral information. The interpretation of these images requires an understanding of the spectral reflectance signatures of the objects of interest. Different bands of multispectral images may be combined to accentuate a particular object of interest. Textural information, provided by high resolution imagery, is an important aid in visual image interpretation. The texture of the image may be used to classify various kinds of vegetation cover or forest cover: although all of them appear green in colour, they have different textures. Geometric and contextual information is provided by very high resolution images and makes the interpretation of the image quite straightforward. Extraction of this information, however, requires prior information about the area (like the shape, size, pattern, etc.) in the image.

11.9.2 Interpreting Microwave Remote Sensing Images

Interpretation of microwave images is quite different from that of optical and thermal images. Images from active microwave remote sensing systems suffer from a lot of noise, referred to as speckle noise, and may require special filtering before they can be used for interpretation and analysis. Single microwave images are usually displayed as greyscale images, where the intensity of each pixel represents the proportion of the microwave radiation backscattered from that area on the ground in the case of active microwave systems and the microwave radiation emitted from that area in the case of passive microwave systems. The pixel intensity values are often converted to a physical quantity called the backscattering coefficient, measured in decibel (dB) units, with values ranging from +5 dB for very bright objects to −40 dB for very dark surfaces. The higher the value of the backscattering coefficient, the rougher is the surface being imaged. Flat surfaces such as paved roads, runways or calm water normally appear as dark areas in a radar image, since most of the incident radar pulses are specularly reflected away. Trees and other vegetation are usually moderately rough on the wavelength scale and hence appear as moderately bright features in the image. Ships at sea, high rise buildings and regular metallic objects such as cargo containers, built-up areas and many other man-made features appear as very bright objects in the image. The brightness of areas covered by bare soil may vary from very dark to very bright depending on their roughness and moisture content. Typically, rough soil appears bright in the image. For similar soil roughness, the surface with a higher moisture content will appear brighter.
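The dB conversion and the brightness interpretation above can be sketched in a few lines (the thresholds used below are illustrative, not standard values):

```python
import math

# Convert a linear backscatter value to decibels and apply the rough
# brightness interpretation described in the text.
def to_db(sigma0_linear: float) -> float:
    return 10.0 * math.log10(sigma0_linear)

def describe(sigma0_db: float) -> str:
    if sigma0_db > 0:
        return "very bright (ships, buildings, metallic objects)"
    if sigma0_db > -15:
        return "moderately bright (vegetation, rough soil)"
    return "dark (calm water, paved surfaces)"

for value in (2.0, 0.05, 1e-3):
    print(round(to_db(value), 1), describe(to_db(value)))
```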

Multitemporal microwave images are used for detecting land cover changes over the period of image acquisition. The areas where no change in land cover occurs will appear in grey while areas with land cover changes will appear as colourful patches in the image.

11.9.3 GIS in Remote Sensing

The geographic information system (GIS) is a computer-based information system used to digitally represent and analyse the geographic features present on the Earth's surface. Figure 11.17 shows the block diagram of a typical GIS system. The GIS is used to integrate remote sensing data with geographic data, as this helps to give a better understanding and interpretation of remote sensing images. It also assists in automated interpretation, in detecting the changes occurring in an area and in map revision processes. For example, it is not enough to detect land cover change in an area, as the final goal is to analyse the cause of the change or to evaluate the impact of the change. Hence, the remote sensing data should be overlaid on maps, such as those of transportation facilities and land use zoning, in order to extract this information. In addition, the classification of remote sensing imagery becomes more accurate if the auxiliary data contained in the maps are combined with the image data.

img

Figure 11.17 Block diagram of a typical GIS system

The history of the GIS dates back to the late 1950s, but the first GIS software came in the late 1970s from the laboratory of the Environmental Systems Research Institute (ESRI) in the USA. Evolution of the GIS has transformed and revolutionized the ways in which planners, engineers, managers, and so on, conduct database management and analysis.

The GIS performs the following three main functions:

  1. To store and manage geographic information comprehensively and effectively.
  2. To display geographic information depending on the purpose of use.
  3. To execute query, analysis and evaluation of geographic information effectively.

The GIS uses remote sensing data either as classified data or as image data. Land cover maps or vegetation maps classified from remote sensing data can be overlaid on to other geographic data, which enables analysis for environmental monitoring and change detection. Remote sensing data can also be classified or analysed together with other geographic data to obtain a higher classification accuracy. Using information available in maps, such as ground height and slope gradient, it becomes easier to extract relevant information from remote sensing images.

11.10 Applications of Remote Sensing Satellites

Data from remote sensing satellites is used to provide timely and detailed information about the Earth's surface, especially in relation to the management of renewable and non-renewable resources. Some of the major application areas for which satellite remote sensing is of great use are the assessment and monitoring of vegetation types and their status, soil surveys, mineral exploration, map making and revision, production of thematic maps, planning and monitoring of water resources, urban planning, agricultural property management planning, crop yield assessment, natural disaster assessment, and so on. Some of the applications are described in detail in this section.

11.10.1 Land Cover Classification

Land cover mapping and classification corresponds to identifying the physical condition of the Earth's surface and then dividing the surface area into various classes, such as forest, grassland, snow, water bodies, and so on, depending upon their physical condition. Land cover classification helps in identifying the location of natural resources. Figure 11.18 (a) is a digital satellite image showing the land cover map of Onslow Bay in North Carolina taken by Landsat's thematic mapper (TM) in February 1996. Figure 11.18 (b) is the land cover classification map derived from the satellite image shown in Figure 11.18 (a) using a variety of techniques and tools, dividing the area into 15 land cover classes.

img

Figure 11.18 (a) Digital satellite image showing the land cover map of Onslow Bay in North Carolina taken by Landsat's thematic mapper (TM) in February 1996 and (b) the land cover classification map derived from the satellite image in Figure 11.18 (a) (Courtesy: National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, www.csc.noaa.gov, USA). These images are greyscale versions of original colour images. Original images are available on the companion website at www.wiley.com/go/maini
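
One simple starting point for deriving such a classification map is unsupervised clustering of the per-pixel spectra, sketched below in Python with scikit-learn. The scene here is random synthetic data; operational products such as Figure 11.18 (b) rely on far more elaborate, usually supervised, methods.

import numpy as np
from sklearn.cluster import KMeans

def classify(image, n_classes=5, seed=0):
    # Treat each pixel's spectrum as a feature vector and cluster them;
    # every cluster index then acts as a land cover class label.
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(pixels)
    return labels.reshape(rows, cols)

rng = np.random.default_rng(1)
fake_scene = rng.normal(size=(64, 64, 4))   # four synthetic spectral bands
class_map = classify(fake_scene)
print(np.unique(class_map))                 # class indices 0..4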

11.10.2 Land Cover Change Detection

Land cover change refers to seasonal or permanent changes in the land cover types. Seasonal changes may be due to agricultural cycles or changes in forest cover, while permanent changes may be due to land use changes such as deforestation or newly built towns, and so on. Detection of permanent land cover changes is necessary for updating land cover maps and for management of natural resources. Satellites detect these permanent land cover changes by comparing an old image with an updated image, with both images taken during the same season to eliminate the effects of seasonal change.
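
The comparison step can be reduced to simple image differencing, as in the hedged Python sketch below. The threshold is arbitrary and the images synthetic; real workflows would first apply radiometric and atmospheric corrections so that the two dates are comparable.

import numpy as np

def change_mask(old_img, new_img, threshold=30):
    # Flag pixels whose brightness changed by more than the threshold;
    # everything else is treated as unchanged land cover.
    diff = new_img.astype(int) - old_img.astype(int)
    return np.abs(diff) > threshold

old = np.full((50, 50), 100, dtype=np.uint8)
new = old.copy()
new[10:20, 10:20] = 180              # a simulated clearing
mask = change_mask(old, new)
print(mask.sum(), "changed pixels")  # 100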

Figure 11.19 shows three photographs of Kuwait taken by the Landsat satellite. Figures 11.19 (a), (b) and (c) show Kuwait City before, during and after the Gulf War respectively. The red parts of the images show vegetation and the bright areas are deserts. Clear, deep water looks almost black, but shallow or silty water looks lighter. The Landsat image taken during the war shows the city obscured by smoke plumes from burning oil wells; around 600 oil wells were set on fire during the war. The third image was acquired after the fires had been extinguished. It shows that the landscape had been severely affected by the war. The dark grey patched areas in the third image are due to the formation of a layer of hardened ‘tarcrete’ formed by the mixing of the sand and gravel on the land's surface with oil and soot. Black pools within the dark grey tarcrete are oil lakes that formed after the war; it was detected from the satellite images that some 300 such oil lakes had formed. Satellite images provided a cost effective method of identifying these pools and other changes to quantify the amount of damage due to the war, in order to launch an appropriate clean-up programme.

img

Figure 11.19 Image of Kuwait City taken by Landsat satellite (a) before the Gulf War in August 1990, (b) during the Gulf War in February 1991 and (c) after the Gulf War in November 1991 (Data available from US Geological Survey). These images are greyscale versions of original colour images. Original images are available on the companion website at www.wiley.com/go/maini

11.10.3 Water Quality Monitoring and Management

Satellite imagery helps in locating, monitoring and managing water resources over large areas. Water resources are mapped in the optical and the microwave bands. Water pollution can be determined by observing the colour of water bodies in the images obtained from the satellite. Clear water is bluish in colour, water with vegetation appears greenish-yellow, while turbid water appears reddish-brown. Structural geographical interpretation of the imagery also aids in locating underground water resources. The changing state of many of the world's water bodies is monitored accurately over long periods of time using satellite imagery. Figure 11.20 shows a false colour composite image taken from the IKONOS satellite, displaying the water clarity of the lakes in Eagan, Minnesota. Scientists measured the water quality by observing the ratio of blue to red light in the satellite data. Water quality was found to be high when the amount of blue light reflected off the lakes was high and that of red light was low. Lakes loaded with algae and sediments, on the other hand, reflect less blue light and more red light. Using images like this, scientists created a comprehensive water quality map for the water bodies in the region.

img

Figure 11.20 False colour composite image taken by the IKONOS satellite displaying the water clarity of the lakes in Eagan, Minnesota (Reproduced by permission of the University of Minnesota, Remote Sensing and Geospatial Analysis Laboratory). The image is the greyscale version of the original colour image. Original image is available on the companion website at www.wiley.com/go/maini
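
The blue-to-red ratio described above amounts to a one-line band calculation. The Python sketch below uses made-up reflectance values for a clear lake and a turbid lake, with a small epsilon assumed to guard against division by zero.

import numpy as np

def clarity_index(blue, red, eps=1e-6):
    # High blue/red ratios indicate clearer water; low ratios suggest
    # algae or suspended sediment, which reflect relatively more red.
    return blue.astype(float) / (red.astype(float) + eps)

blue = np.array([80.0, 20.0])     # [clear lake, turbid lake]
red  = np.array([20.0, 60.0])
print(clarity_index(blue, red))   # ~4.0 for the clear lake, ~0.33 turbid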

11.10.4 Flood Monitoring

Satellite images provide a cost effective and potentially rapid means to monitor and map the devastating effects of floods. Figures 11.21 (a) and (b) show false colour composite images of the Pareechu River in Tibet, where a natural dam had formed an artificial lake, taken by the advanced space-borne thermal emission and reflection radiometer (ASTER) on NASA's Terra satellite on 1 September 2004 and 15 July 2004 respectively. From the two images it is evident that the water levels were visibly higher on 1 September 2004 than they were on 15 July 2004. The lake posed a threat to communities downstream in northern India, which would have been flooded had the dam burst.

img

Figure 11.21 Flood monitoring using remote sensing satellites (Courtesy: NASA). These images are greyscale versions of original colour images. Original images are available on the companion website at www.wiley.com/go/maini

11.10.5 Urban Monitoring and Development

Satellite images are an important tool for monitoring as well as planning urban development activities. Time-difference images can be used to monitor changes due to various forms of natural disaster, military conflict or urban development. Figures 11.22 (a) and (b) show Manhattan before and after the 11 September 2001 attacks on the World Trade Center. These images have a resolution of 1 m and were taken by the IKONOS satellite. Remote sensing data along with the GIS are used for preparing precise digital basemaps of an area, for formulating proposals and for monitoring during the development phase. They are also used for updating these basemaps from time to time.

img

Figure 11.22 Images of Manhattan (a) before and (b) after the 11 September 2001 attacks, taken by the IKONOS satellite (Satellite imagery courtesy of GeoEye). These images are greyscale versions of original colour images. Original images are available on the companion website at www.wiley.com/go/maini

11.10.6 Measurement of Sea Surface Temperature

The surface temperature of the sea is an important index for ocean observation, as it provides significant information regarding the behaviour of water, including ocean currents, the formation of fisheries, and the inflow and diffusion of water from rivers and factories. Satellites provide very accurate information on sea surface temperatures. Temperature measurement by remote sensing satellites is based on the principle that all objects emit electromagnetic radiation at wavelengths and intensities corresponding to their temperature and emissivity. Sea surface temperature measurements are made in the thermal infrared bands. Figure 11.23 shows the sea surface temperature map derived from the thermal IR image taken by the GMS-5 satellite.

img

Figure 11.23 Sea surface temperature map derived from the thermal IR image taken by the GMS-5 satellite (Reproduced by permission of © Japan Meteorological Agency). The image is the greyscale version of the original colour image. Original image is available on the companion website at www.wiley.com/go/maini
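
The physics behind such retrievals can be illustrated by inverting Planck's law at a thermal-IR wavelength, as in the Python sketch below. The 11 μm band and the round-trip test value are illustrative assumptions; operational sea surface temperature products add emissivity and atmospheric corrections on top of this basic step.

import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(T, wl=11e-6):
    # Spectral radiance (W m^-2 sr^-1 m^-1) of a blackbody at T kelvin
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * K * T))

def brightness_temperature(radiance, wl=11e-6):
    # Invert Planck's law: T = c2 / ln(1 + c1 / L)
    c1 = 2 * H * C**2 / wl**5
    c2 = H * C / (wl * K)
    return c2 / np.log1p(c1 / radiance)

# Round trip: the radiance of a 290 K sea surface maps back to ~290 K
print(brightness_temperature(planck_radiance(290.0)))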

11.10.7 Deforestation

Remote sensing satellites help in detecting, identifying and quantifying the forest cover areas. This data is used by scientists to observe and assess the decline in forest cover over a period of several years. The images in Figure 11.24 show a portion of the state of Rondônia, Brazil, in which tropical deforestation has occurred. Figures 11.24 (a) and (b) are the images taken by the multispectral scanners of the Landsat-2 and -5 satellites in the years 1975 and 1986 respectively. Figure 11.24 (c) shows the image taken by the thematic mapper of the Landsat-4 satellite in the year 1992. It is evident from the images that the forest cover has reduced drastically.

img

Figure 11.24 Images taken by the multispectral scanners of (a) Landsat-2 satellite in the year 1975 and (b) Landsat-5 satellite in the year 1986. (c) Image taken by the thematic mapper of the Landsat-4 satellite in the year 1992 (Data available from US Geological Survey). These images are greyscale versions of original colour images. Original images are available on the companion website at www.wiley.com/go/maini

11.10.8 Global Monitoring

Remote sensing satellites can be used for global monitoring of various factors like vegetation, ozone layer distribution, gravitational fields, glacial ice movement and so on. Figure 11.25 shows the vegetation distribution map of the world formed by processing and calibrating 400 images from the NOAA remote sensing satellite's AVHRR sensor. This image provides an unbiased means to analyse and monitor the effects of droughts and long term changes arising from possible regional and global climate change. Figure 11.26 shows the global ozone distribution taken by the global ozone monitoring experiment (GOME) sensor on the ERS-2 satellite. Measurements of ozone distribution can be put to various uses, such as raising public awareness that depletion of the ozone layer poses serious health risks and guiding measures to prevent further depletion. The figure shows that ozone levels are decreasing with time. Satellites also help us to measure variations in the gravitational field precisely, which in turn gives a better understanding of the geological structure of the sea floor. Gravitational measurements are made using active microwave sensors.

img

Figure 11.25 Vegetation distribution map of the world (Reproduced by permission of © NOAA/NPA). The image is the greyscale version of the original colour image. Original image is available on the companion website at www.wiley.com/go/maini
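
Vegetation maps like Figure 11.25 are commonly built on the normalized difference vegetation index (NDVI), computed from red and near-IR channels such as those of the AVHRR sensor. The Python sketch below uses two made-up reflectance pixels to show the contrast between vegetated and bare ground.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    # Healthy vegetation reflects strongly in the near-IR and absorbs
    # red light, pushing NDVI towards +1; bare ground sits near zero.
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

red = np.array([0.08, 0.30])   # [vegetated pixel, bare pixel]
nir = np.array([0.50, 0.35])
print(ndvi(nir, red))          # ~0.72 (vegetation) vs. ~0.08 (bare)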

img

Figure 11.26 Global ozone distribution (Reproduced by permission of © DLR/NPA). The image is the greyscale version of the original colour image. Original image is available on the companion website at www.wiley.com/go/maini

11.10.9 Predicting Disasters

Remote sensing satellites give early warning of various natural disasters like earthquakes, volcanic eruptions, hurricanes, storms, and so on, thus enabling evasive measures to be taken in time and preventing loss of life and property. Geomatics, a conglomerate of measurement, mapping, geodesy, satellite positioning, photogrammetry, computer systems and computer graphics, remote sensing, geographic information systems (GIS) and environmental visualization, is a modern technology that plays a vital role in the mitigation of natural disasters.

11.10.9.1 Predicting Earthquakes

Remote sensing satellites help in predicting the time of an earthquake by sensing precursory signals that earthquake faults produce. These signals include changes in the tilt of the ground, magnetic anomalies, swarms of micro-earthquakes, surface temperature changes and a variety of electric field changes prior to the occurrence of earthquakes. As an example, the French micro-satellite Demeter detects electromagnetic emissions from Earth that can be used for earthquake prediction. The NOAA/AVHRR series of satellites take thermal images that can be used to predict the occurrence of earthquakes. It is observed that the surface temperature of a region where an earthquake is about to happen increases by 2–3 °C some 7–24 days before the earthquake and fades out within a day or two after the earthquake has occurred. As an example, Figure 11.27 shows the infrared data images taken by the moderate resolution imaging spectroradiometer (MODIS) on board NASA's Terra satellite of the region surrounding Gujarat, India. These images show a ‘thermal anomaly’ appearing on 21 January 2001 [Figure 11.27 (b)] prior to the earthquake on 26 January 2001. The anomaly disappears shortly after the earthquake [Figure 11.27 (c)]. The anomaly area appears yellow-orange. The boxed star in the images indicates the earthquake's epicentre. The region of thermal anomaly is southeast of the Bhuj region, close to the earthquake's epicentre. Multitemporal radar images of Earth can also be used to predict the occurrence of earthquakes by detecting changes in ground movement.

img

Figure 11.27 Predicting earthquakes using infrared data images from NASA's Terra satellite (Courtesy: NASA). The images are greyscale versions of original colour images. Original images are available on the companion website at www.wiley.com/go/maini
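
A toy version of this anomaly test, comparing a current land surface temperature image against a multi-date baseline, is sketched below in Python. The 2 K threshold, the synthetic kelvin values and the baseline construction are all assumptions for illustration; real precursor studies are considerably more involved.

import numpy as np

def thermal_anomaly(current, baseline_stack, delta=2.0):
    # Flag pixels warmer than the mean of earlier acquisitions by more
    # than delta kelvin, mimicking the 2-3 degC pre-event rise noted above.
    baseline = baseline_stack.mean(axis=0)
    return (current - baseline) > delta

baseline = np.full((3, 40, 40), 288.0)         # three prior dates (kelvin)
today = baseline[0] + 0.5
today[15:25, 15:25] += 3.0                     # a localized warm patch
print(thermal_anomaly(today, baseline).sum())  # 100 anomalous pixels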

In addition to predicting the time of occurrence of an earthquake, remote sensing satellites are also used for earthquake disaster management. High resolution satellites, with a resolution better than 1 m, have revolutionized damage assessment, which was earlier carried out by sending a team of specialists into the field to assess the damage visually. Satellite-based assessment speeds up the process and can provide invaluable information to rescue or assessment teams while they are en route to a disaster site, in a cost effective and unbiased fashion. This information also helps in planning evacuation routes and in siting centres for emergency operations. Figure 11.28 shows the use of remote sensing images for mapping damage due to an earthquake, with Figure 11.28 (a) showing badly damaged buildings (red mark) and Figure 11.28 (b) showing less damaged buildings (yellow mark). It may be mentioned here that global positioning system (GPS) satellites are also used for predicting earthquake occurrences by making precise measurements of movement along fault lines.

img

Figure 11.28 Use of images from remote sensing satellites for mapping building damage due to earthquake (Image source: GIS Development Private Limited). The images are the greyscale versions of the original colour images. Original images are available on the companion website at www.wiley.com/go/maini

11.10.9.2 Volcanic Eruptions

Satellites are a more accurate, cost effective and efficient means of predicting volcanic eruptions than ground-based methods. Remote sensing satellites and global positioning satellites are used for predicting the occurrence of volcanic eruptions. Remote sensing satellites use the thermal, optical and microwave bands for this purpose.

Before a volcano erupts, it usually shows increased thermal activity, which appears as elevated surface temperature (hot spots) around the volcano's crater. Early detection of hot spots and their monitoring is a key factor in predicting possible volcanic eruptions. Hence, by taking infrared images, a medium term warning can be issued about an eruption that may be several days to some weeks away. The method is particularly useful for monitoring remote volcanoes. Active microwave techniques can be used to detect how mountains inflate as hot rock is injected beneath them. As an example, the TOMS (total ozone mapping spectrometer) sensor produced images of the Pinatubo volcanic cloud (emitted during the eruption in 1991) over a nine day period, showing the regional dispersal of the sulfur dioxide plume.

Precision surveying with GPS satellites can detect the bulging of volcanoes months to years before an eruption, but is not useful for short term eruption forecasting. Two or more satellite images can be combined to make digital elevation models, which also help in predicting volcanic eruptions. Satellite images taken after a volcanic eruption help in protecting the populations living near the volcano by predicting the amount and direction of the flow of lava.

11.10.10 Other Applications

Other important applications of remote sensing satellites include the computation of digital elevation models (DEM) using two satellite images viewing the same area of Earth's surface from different orbits. Yet another application of remote sensing satellites is in the making of topographic maps that are used for viewing the Earth's surface in three dimensions by drawing landforms using contour lines of equal elevation. Remote sensing satellites are also used for making snow maps and for providing data on snow cover, sea ice content, river flow and so on.

11.11 Major Remote Sensing Missions

In this section three major remote sensing satellite missions will be described, namely the Landsat, SPOT and Radarsat satellite systems. The Landsat and SPOT systems operate in the optical and the thermal bands while Radarsat is a microwave remote sensing satellite system.

11.11.1 Landsat Satellite System

Landsat is a remote sensing satellite programme of the USA, launched with the objective of observing Earth on a global basis. It is the longest running enterprise for the acquisition of imagery of Earth from space. The Landsat programme comprises a series of optical/thermal remote sensing satellites for land observation purposes. Landsat imagery is used for global change research and applications in agriculture, water resources, urban growth, geology, forestry, regional planning, education and national security. Scientists use Landsat satellites to gather images of the land surface and surrounding coastal regions for global change research, regional environmental change studies and other civil and commercial purposes.

Eight Landsat satellites have been launched to date. All these satellites are three-axis stabilized and orbit in near-polar sun-synchronous orbits. The first generation of Landsat satellites [Figure 11.29 (a)] comprised three satellites, Landsat-1, -2 and -3, also referred to as Earth resource technology satellites ERTS-1, -2 and -3 respectively. Landsat-1 was launched on 23 July 1972 with a design life of one year, but it remained in operation for six years until January 1978. Landsat-1 carried on board two Earth viewing sensors – a return beam vidicon (RBV) and a multispectral scanner (MSS). Landsat-2 and -3, launched in 1975 and 1978 respectively, had similar configurations.

img

Figure 11.29 (a) First generation Landsat satellites, (b) second generation Landsat satellites and (c) Landsat-7 satellite (Courtesy: NASA)

The second generation of Landsat satellites [Figure 11.29 (b)] comprised Landsat-4 and -5, launched in 1982 and 1984 respectively. The Landsat-4 satellite carried an MSS and a thematic mapper (TM). The Landsat-5 satellite was a duplicate of Landsat-4. The Landsat-6 satellite was launched in October 1993 but failed to reach its final orbit. It had an enhanced thematic mapper (ETM) payload. Landsat-7 was launched in 1999 to make up for the loss of Landsat-6. The Landsat-7 satellite [Figure 11.29 (c)] carries an advanced ETM payload referred to as the enhanced thematic mapper plus (ETM+). Currently, the Landsat-7 and Landsat-8 satellites are operational. The Landsat-8 satellite was launched on 11 February 2013. It acquires about 400 images every day, which are used for various applications. Table 11.2 enumerates the salient features of the Landsat satellites.

Table 11.2 Salient features of Landsat satellites

Satellites Orbit Altitude (km) Orbital period (min) Inclination (degrees) Temporal resolution (days) Equatorial crossing (a.m.) Sensors
Landsat-1 Sun-synchronous 917 103 99.1 18 9:30 RBV, MSS
Landsat-2 Sun-synchronous 917 103 99.1 18 9:30 RBV, MSS
Landsat-3 Sun-synchronous 917 103 99.1 18 9:30 RBV, MSS
Landsat-4 Sun-synchronous 705 99 98.2 16 9:30 MSS, TM
Landsat-5 Sun-synchronous 705 99 98.2 16 9:30 MSS, TM
Landsat-6 Sun-synchronous 705 99 98.2 16 10:00 ETM
Landsat-7 Sun-synchronous 705 99 98.2 16 10:00 ETM+
Landsat-8 Sun-synchronous 705 99 98.2 16 10:00 OLI, TIRS

11.11.1.1 Payloads on Landsat Satellites

  1. Return beam vidicon (RBV). Landsat-1, -2 and -3 satellites had the RBV payload. RBV is a passive optical sensor comprising an optical camera system. The sensor comprises three independent cameras operating simultaneously in three different spectral bands from blue-green (0.47–0.575 μm) through yellow-red (0.58–0.68 μm) to near IR (0.69–0.83 μm) to sense the reflected solar energy from the ground. Each camera contained an optical lens, a 5.08 cm RBV, a thermoelectric cooler, deflection and focus coils, a mechanical shutter, erase lamps and sensor electronics (Figure 11.30). The cameras were similar except for the spectral filters contained in the lens assemblies that provided separate spectral viewing regions. The RBV of Landsat-1 satellite had a resolution of 80 m and that of Landsat-2 and -3 satellites had a resolution of 40 m.
  2. Multispectral scanner (MSS). Landsat-1 to -5 satellites had the MSS payload. The resolution of the MSS sensor was approximately 80 m, with radiometric coverage in four spectral bands of 0.5 to 0.6 μm (green), 0.6 to 0.7 μm (red), 0.7 to 0.8 μm (near IR) and 0.8 to 1.1 μm (near IR). Only the MSS sensor on the Landsat-3 satellite had a fifth band in the thermal IR. The MSS is a whisk broom type of sensor in which an oscillating mirror scans from west to east across the Earth's surface, the collected energy being carried to the detectors through a 24-element fibre optic array, while the orbital motion of the spacecraft provides the north-to-south scanning motion. A separate binary number array is then generated for each spectral band. Each number corresponds to the amount of energy reflected into that band from a specific ground location. In the ground processing system, the binary number arrays are either directly interpreted by image classification software or reconstructed into images.
  3. Thematic mapper (TM). Landsat-4 and -5 satellites had this payload. TM sensors primarily detect reflected radiation from the Earth's surface in the visible and near IR wavelengths like the MSS, but the TM sensor provides more radiometric information than the MSS sensor. The wavelength range for the TM sensor is from 0.45 to 0.53 μm (blue band 1), 0.52 to 0.60 μm (green band 2), 0.63 to 0.69 μm (red band 3), 0.76 to 0.90 μm (near IR band 4), 1.55 to 1.75 μm (shortwave IR band 5) through 2.08 to 2.35 μm (shortwave IR band 7) to 10.40 to 12.50 μm (thermal IR band 6) portion of the electromagnetic spectrum. Sixteen detectors for the visible and mid IR wavelength bands in the TM sensor provide 16 scan lines on each active scan. Four detectors for the thermal IR band provide four scan lines on each active scan. The TM sensor has a spatial resolution of 30 m for the visible, near IR and mid IR wavelengths and a spatial resolution of 120 m for the thermal IR band.
  4. Enhanced thematic mapper (ETM). This instrument was carried on the Landsat-6 satellite which failed to reach its orbit. ETM operated in seven spectral channels similar to the TM (six with a ground resolution of 30 metres and one, thermal IR, with a ground resolution of 120 metres). It also had a panchromatic channel providing a ground resolution of 15 metres. ETM is an optical mechanical scanner where the mirror assembly scans in the west-to-east and east-to-west directions, whereas the satellite revolves in the north–south direction, hence providing two-dimensional coverage.
  5. Enhanced thematic mapper plus (ETM+). This instrument was carried on board the Landsat-7 satellite. The ETM+ instrument is an eight-band multispectral scanning radiometer capable of providing high resolution image information of the Earth's surface. Its spectral bands are similar to those of the TM, except that the thermal IR band (band 6) has an improved resolution of 60 m (versus 120 m in the TM). There is also an additional panchromatic band operating at 0.5 to 0.9 μm with a 15 m resolution.
  6. Operational land imager (OLI). OLI uses a push broom sensor instead of the whisk broom sensors that were used in earlier Landsat satellites. It has over 7000 detectors per spectral band, which gives it significantly enhanced sensitivity, fewer moving parts and improved land surface information.
  7. Thermal infrared sensor (TIRS). The TIRS is used for thermal imaging. Its focal plane uses gallium arsenide quantum well infrared photodetector arrays for the detection of infrared radiation. Like the OLI, it employs a push broom sensor design and has a 185 km cross-track field of view.
img

Figure 11.30 Return beam vidicon (RBV)

11.11.2 SPOT Satellite System

SPOT (satellite pour l'observation de la terre) is a high resolution, optical imaging Earth observation satellite system run by the Spot Image company of France. SPOT satellites provide Earth observation images for diverse applications such as agriculture, cartography, cadastral mapping, environmental studies, urban planning, telecommunications, surveillance, forestry, land use/land cover mapping, natural hazard assessment, flood risk management, oil and gas exploration, geology and civil engineering. The SPOT programme was initiated in the 1970s by the Centre national d'études spatiales (CNES), the French space agency, and was developed in association with Belgian and Swedish partners.

Since the launch of the first SPOT satellite, SPOT-1, in 1986, the SPOT system has steadily improved the quality of its Earth observation images. Each of SPOT-1, -2 and -3 [Figure 11.31 (a)], launched in 1986, 1990 and 1993 respectively, carried two identical HRV (high resolution visible) imaging instruments and two tape-recorders for imaging data. They had a design life of three years and are out of service now. Currently, two of the SPOT satellites, SPOT-5 and SPOT-6, launched in 2002 and 2012 respectively, are operational. The SPOT-4 satellite [Figure 11.31 (b)], functional till July 2013, carried two high resolution visible infrared (HRVIR) imaging instruments and a vegetation instrument. The SPOT-5 satellite [Figure 11.31 (c)] has two high resolution stereoscopic (HRS) instruments and a vegetation instrument. SPOT satellites move in a circular, sun-synchronous orbit at an altitude of 832 km. SPOT-6, the latest operational satellite in the family, was launched on 9 September 2012 on board PSLV-C21. SPOT-7 is scheduled for launch in 2014. Both these satellites are designed to provide continuity of high-resolution, wide-swath Earth imaging data up to 2024. Table 11.3 enumerates the salient features of these satellites.

img

Figure 11.31 (a) SPOT-1, -2 and -3 satellites (Reproduced by permission of © CNES), (b) SPOT-4 satellite (Reproduced by permission of © CNES/ill. D. DUCROS, 1998) and (c) SPOT-5 satellite (Reproduced by permission of © CNES/ill. D. DUCROS, 2002)

Table 11.3 Salient features of SPOT satellites

Satellites Orbit Altitude (km) Orbital period (min) Inclination (degrees) Temporal resolution (days) Equatorial crossing (a.m.) Sensors
SPOT-1 Sun-synchronous 832 101 98.7 26 10:30 2 HRV
SPOT-2 Sun-synchronous 832 101 98.7 26 10:30 2 HRV
SPOT-3 Sun-synchronous 832 101 98.7 26 10:30 2 HRV
SPOT-4 Sun-synchronous 832 101 98.7 26 10:30 2 HRVIR, vegetation instrument
SPOT-5 Sun-synchronous 832 101 98.7 26 10:30 2 HRS, vegetation instrument
SPOT-6 Sun-synchronous (in quadrature phase with the Pleiades satellites) 695 98.79 98.2 26 10:00 2 NAOMI

11.11.2.1 Payloads on Board SPOT Satellites

  1. High resolution visible (HRV) instrument. SPOT-1, -2 and -3 satellites carried the HRV push broom linear array sensor (Figure 11.32). The HRV operates in two modes, namely the panchromatic mode and the multiband mode. In the panchromatic mode, the operational wavelength band is quite broad, from 0.51 to 0.73 μm, with a resolution of 10 m. The multiband mode operates in three narrow spectral bands of 0.50 to 0.59 μm (XS1 band, green), 0.61 to 0.68 μm (XS2 band, red) and 0.79 to 0.89 μm (XS3 band, near IR), with a resolution of 20 m per pixel. Data acquired in the two modes can also be combined to form multispectral images. These sensors also have the capability of oblique viewing (with a viewing angle of up to 27° relative to the vertical) on either side of the satellite nadir, offering more flexibility in observation and enabling the acquisition of stereoscopic images.
  2. High resolution visible infrared (HRVIR) instrument. The SPOT-4 satellite carried this instrument. The instrument has a resolution of 20 m and operates in four spectral bands of 0.50 to 0.59 μm (B1 band green), 0.61 to 0.68 μm (B2 band red), 0.78 to 0.89 μm (B3 band near IR) and 1.58 to 1.75 μm (B4 band shortwave IR). In addition to these bands, there is a monospectral band (M band) operating in the same spectral region as the B2 band but having a resolution of 10 m.
  3. High resolution stereoscopic (HRS) instrument. The HRS payload flown on the SPOT-5 satellite is dedicated to taking simultaneous stereo pair images (Figure 11.33). It operates in the same multispectral bands (B1, B2, B3 and B4) as the HRVIR instrument on the SPOT-4 satellite, but has a resolution of 10 m in the B1, B2 and B3 bands and 20 m in the B4 band. It also has a panchromatic mode of operation in the spectral band of 0.48 to 0.71 μm with a resolution of 2.5 to 5 m.
  4. Vegetation instrument. The vegetation instruments were flown on board SPOT-4 and SPOT-5 satellites with the instrument on SPOT-4 satellite referred to as Vegetation-1 and the instrument on the SPOT-5 satellite referred to as Vegetation-2. These instruments are four channel instruments with three channels having the same spectral band as the B2, B3 and B4 bands of the HRVIR instrument and the fourth channel referred to as the B0 channel operating in the 0.43 to 0.47 μm band for oceanographic applications and atmospheric corrections.
  5. New AstroSat optical modular instrument (NAOMI). The NAOMI sensor on board the SPOT-6 satellite is a high resolution optical imager and operates in five spectral bands of 0.45 to 0.745 μm (band PAN VIS), 0.45 to 0.52 μm (band 1, blue), 0.53 to 0.59 μm (band 2, green), 0.625 to 0.695 μm (band 3, red) and 0.76 to 0.89 μm (band 4, NIR).
img

Figure 11.32 HRV instrument (Reproduced by permission of Spot Image -© CNES)

img

Figure 11.33 High resolution stereoscopic (HRS) instrument (Reproduced by permission of Spot Image -© CNES)

11.11.3 Radarsat Satellite System

Radarsat is a Canadian remote sensing satellite system with two operational satellites, namely Radarsat-1 and Radarsat-2. Both satellites carry on-board SAR sensors and move in sun-synchronous orbits with an altitude of 798 km and an inclination of 98.6°. Radarsat-1 (Figure 11.34), the first satellite of the system, was launched on 4 November 1995 with the aim of studying the polar regions and aiding maritime navigation, natural resource identification, management of agricultural and water resources, and monitoring of environmental changes. Radarsat-2, the second satellite of the Radarsat series, was launched on 14 December 2007. It is used for a variety of applications, including sea ice mapping and ship routing, iceberg detection, agricultural crop monitoring, marine surveillance for ship and pollution detection, terrestrial defence surveillance and target identification, geological mapping, land use mapping, wetlands mapping and topographic mapping. The Radarsat constellation mission is a follow-on project to Radarsat-2. It will comprise three satellites and is proposed to be functional by 2018.

img

Figure 11.34 Radarsat-1 satellite (Reproduced by permission of © Canadian Space Agency, 2006)

11.11.3.1 Radarsat Satellite Payloads

Both Radarsat satellites carry on-board SAR sensors operating in the C band. The SAR sensor on the Radarsat-1 satellite has the unique capability to acquire data in any one of seven imaging modes. Each mode differs with respect to swath width, resolution, incidence angle and number of looks. Because different applications require different imaging modes, the satellite gives users tremendous flexibility in choosing the type of SAR data most suitable for their application. It operates in the C band at a frequency of 5.3 GHz with HH polarization. The ground resolution varies from 8 m to 100 m and the swath width varies from 50 km to 500 km for the different imaging modes.

The Radarsat-2 SAR payload ensures continuity of all existing Radarsat-1 modes and offers an extensive range of additional features, from improved resolution and full flexibility in the selection of polarization options to the ability to select all beam modes in both left- and right-looking directions. The polarization modes offered are HH, HV, VV and VH. The ground resolution varies from 3 m to 100 m and the swath width is selectable from 20 km to 500 km. Other salient features include high downlink power, secure data and telemetry, solid-state recorders, an on-board GPS receiver and the use of a high-precision attitude control system. The enhanced capabilities are provided by a significant improvement in instrument design, employing a state-of-the-art phased array antenna composed of hundreds of miniature transmit-receive modules. The antenna can be steered electronically over the full range of the swath and can switch between different operating modes virtually instantaneously.
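
The electronic steering works by applying a progressive phase shift across the antenna elements. The short Python calculation below illustrates the standard relation for this; the half-wavelength element spacing and the 20° steering angle are chosen purely for illustration and are not Radarsat design parameters.

import numpy as np

c = 3.0e8                  # speed of light (m/s)
f = 5.3e9                  # C-band frequency, as quoted above for Radarsat-1
lam = c / f                # wavelength, ~5.7 cm
d = lam / 2                # assumed half-wavelength element spacing
theta = np.radians(20.0)   # assumed steering angle off boresight

# Progressive phase shift between adjacent elements:
# dphi = 2 * pi * d * sin(theta) / lambda
dphi = 2 * np.pi * d * np.sin(theta) / lam
print(np.degrees(dphi))    # ~61.6 degrees of phase shift per element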

11.11.4 Indian Remote Sensing Satellite System

The Indian remote sensing (IRS) satellite system comprises one of the largest constellations of Earth observation satellites in use for a wide variety of remote sensing and Earth observation applications at both national and international level. These satellites are designed, launched and maintained by the Indian Space Research Organization (ISRO). Some of the application areas these remote sensing satellites are designed for include crop area assessment and production estimation of major crops, flood risk zone mapping and flood damage assessment, snow melt run-off estimates for planning use of water resources, drought monitoring and assessment, urban planning, forest survey, mineral prospecting, coastal studies, land use and land cover mapping, and so on. Different payloads on the remote sensing spacecraft provide day and night high resolution imaging in several spectral bands.

The different remote sensing satellite missions of the IRS satellite system include IRS-1A, IRS-1B, IRS-P1, IRS-P2, IRS-1C, IRS-P3, IRS-1D, IRS-P4 (also called Oceansat-1), Oceansat-2, Oceansat-3, Technology Experiment Satellite (TES), IRS-P5 (also called Cartosat-1), Cartosat-2, Cartosat-2A, Cartosat-2B, Cartosat-3, IRS-P6 (also called Resourcesat-1), Resourcesat-2, Resourcesat-3, IMS-1, Risat-1, Saral and Megha-Tropiques. IRS-P1 was a launch failure. Of the remaining satellites, some have successfully completed their intended missions and others are still in service. Those that have completed their missions include IRS-1A, IRS-1B, IRS-P2, IRS-1C, IRS-P3, IRS-1D, IRS-P4 and TES. Some of the more recent remote sensing satellites are briefly described in the following paragraphs. The descriptions are mainly confined to spacecraft launch details and payloads on board the spacecraft.

Resourcesat-1, also known as IRS-P6, was launched on board the indigenously developed polar satellite launch vehicle PSLV-C5 on 17 October 2003. It was placed in an 817 km high polar sun-synchronous orbit. The payload comprises three cameras. The first is a high resolution LISS-4 (linear imaging self scanner) camera operating in three spectral bands in the visible and near infrared region with 5.8 m spatial resolution, steerable up to ±26° across track to obtain stereoscopic images and achieve a revisit time of 5 days. The second is a medium resolution LISS-3 camera operating in three spectral bands in the visible and near infrared region and one in the shortwave infrared (SWIR) band, with 23.5 m spatial resolution, a 142 km swath and a revisit period of 24 days. The third is an advanced wide field sensor (AWiFS) operating in three spectral bands in the visible and near infrared region and one band in the shortwave infrared, with 56 m spatial resolution; two such cameras (AWiFS-A and AWiFS-B) together achieve a swath of 730 km. The satellite was intended to continue the remote sensing services earlier provided by IRS-1C and IRS-1D, with enhanced resolution and data quality.

Resourcesat-2 is a follow-up satellite to Resourcesat-1 and is intended to continue providing the remote sensing data services of Resourcesat-1 to global users. Resourcesat-2 cameras have enhanced performance as compared with those on board Resourcesat-1. For example, LISS-3 has improved radiometric accuracy, from 7 to 10 bits; LISS-4 has an enhanced multispectral swath, from 23 km to 70 km, and improved radiometric accuracy from 7 to 10 bits; and AWiFS has improved radiometric accuracy from 10 to 12 bits. Resourcesat-3 is a follow-up satellite to Resourcesat-2 and is scheduled to be launched in 2015. The LISS-3 camera on board the satellite is the LISS-3-WS, an advanced version of LISS-3 having a wider swath equal to that of the AWiFS sensor. The LISS-3-WS therefore has the spatial resolution of the LISS-3 and the swath width of the AWiFS. It also carries an atmospheric correction sensor.

Cartosat-1 was launched on 5 May 2005 on board PSLV-C6 into a 617 km polar sun synchronous orbit. Two panchromatic cameras, namely PAN (fore) and PAN (aft), with 2.5 m resolution and a swath coverage of 30 km constitute the payload. The data from Cartosat-1 is used for the preparation of cartographic maps. A solid state recorder on board the satellite provides global data storage of areas not visible to the ground station. Cartosat-2 (Figure 11.35) is a follow-up satellite to Cartosat-1 and was launched on 10 January 2007. It is an advanced remote sensing satellite capable of providing scene-specific spot imagery. The payload is a single panchromatic camera capable of providing better than 1 m spatial resolution with a swath of 9.6 km. Cartosat-3 is a further advanced version of Cartosat-2. The camera on board the satellite will have a resolution of 30 cm and a swath of 6 km. It is scheduled to be launched during 2014 and will be used for cartography, weather mapping and some strategic applications.

img

Figure 11.35 Cartosat-2 remote sensing satellite (Courtesy: ISRO)

Oceansat-1 was launched on 26 May 1999. The payload of Oceansat-1 includes an ocean colour monitor operating in the 402–422, 433–453, 480–500, 500–520, 545–565, 660–689, 745–785 and 845–885 nm wavelength bands with a spatial resolution of 360 m, and a multi-frequency scanning microwave radiometer. The ocean colour monitor has a swath of 1420 km. The mission is intended to study physical and biological aspects of oceanography. Oceansat-2 was launched on 23 September 2009. In addition to providing service continuity to the users of the ocean colour monitor of Oceansat-1, it has potential applications in other areas too. The payload of Oceansat-2 includes an ocean colour monitor, a Ku-band pencil beam scatterometer and a radio occultation sounder for the atmosphere. Oceansat-3, the latest in the series of Oceansat remote sensing satellites, is scheduled for launch during 2014. It is mainly intended for ocean biology and sea state applications. Oceansat-3's planned payload includes a 12-channel ocean colour monitor, a thermal infrared sensor, a scatterometer and a passive microwave radiometer.

Risat-1 (Radar Satellite-1) is a state-of-the-art microwave remote sensing satellite with imaging capability during both day and night and under all weather conditions. It was launched on 26 April 2012 into a circular polar sun-synchronous orbit at 536 km altitude. A microwave synthetic aperture radar operating in the C band is the payload. Active microwave remote sensing provides cloud penetration and day/night imaging capability. The all-weather, day/night imaging capability of Risat-1 is put to use in agriculture, forestry, soil moisture estimation, geology, sea ice and coastal monitoring, object identification and flood monitoring. Figure 11.36 shows a photograph of Risat-1. Risat-2 was built by Israel Aerospace Industries and was launched before Risat-1, on 20 April 2009. It was India's first radar imaging satellite. The payload is an X-band synthetic aperture radar used to monitor Indian borders for counter-insurgency and counter-terrorism purposes.

img

Figure 11.36 Risat-1 remote sensing satellite (Courtesy: ISRO)

Saral is a joint Indo-French mission of ISRO and the Centre National d'Etudes Spatiales (CNES). It is intended for ocean studies such as marine meteorology and sea state forecasting, continental ice studies, management and protection of marine ecosystems, environmental monitoring, protection of biodiversity and so on. It was launched on 25 February 2013 on board PSLV-C20 into a 781 km polar sun-synchronous orbit.

11.12 Future Trends

Since the launch of the first remote sensing satellite, Landsat-1, in the early 1970s, remarkable growth has taken place in the field of remote sensing satellites, both in terms of technological development and of potential applications. The aim of today's remote sensing satellite missions is to provide highly reliable and accurate data, and to launch satellites with long lifetimes having a high level of redundancy and sensor stability, in order to cater to the needs of critical remote sensing applications. These applications mostly require long-term observation with precise and accurate information. To minimize the risk of mission failure, a maximum redundancy approach is being pursued. In a nutshell, the focus is on launching satellites with longer lifetimes, carrying sophisticated sensors with the maximum redundancy possible, while keeping the satellite mass to the minimum possible.

Technological advances have led to the development of new sensors, improvement in the resolution of the sensors, an increase in observation area and a reduction in access time, that is, the time taken between a user's request for an image and its delivery. Future trends are to further improve each of these parameters so as to provide more accurate and precise remote sensing data. Future missions will make use of new measurement technologies such as cloud radars, lidars and polarimetric sensors that will provide new insights into key parameters such as atmospheric temperature and moisture, soil moisture and ocean salinity. Recent developments in the field of lidar technology and laser terrain mapping systems will drastically reduce the time and effort needed to prepare digital elevation models. Improvements in sensor technology, especially in the resolution of the sensors, have led to the development of hyper-spectral and ultra-spectral systems. These systems image the scene over a large number of discrete and contiguous spectral bands, resulting in images covering the complete reflectance spectrum. Several new gravity field missions aimed at more precise determination of the marine geoid will also be launched in the future. These missions will also focus on disaster management and studies of key Earth system processes – the water cycle, the carbon cycle, the cryosphere, the role of clouds and aerosols in global climate change and sea level rise.

Other than improvements in sensor technology, great advances have been made in image compression and image analysis techniques. Image compression techniques have made it possible to transfer voluminous image data; the latest techniques include image pyramids, fractal compression and wavelet compression. The image analysis techniques that will be used extensively in the future include image fusion, interferometry, decision support systems and so on. Image fusion refers to merging data from a large number of sensors, as in hyper-spectral and ultra-spectral systems, to improve system performance: to generate sharpened images, improve geometric corrections, provide stereo-viewing capabilities for stereo-photogrammetry, enhance certain features not visible in single data images, detect changes using multitemporal data, replace defective data and substitute missing information in one image with information from another image. Fusion of image data is done at three processing levels, namely the pixel level, the feature level and the decision level. Radar interferometry is a rapidly developing field in which two or more images of the same location are processed together to derive a digital elevation model. A decision support system is an interactive, flexible and adaptable computer-based information system that aids in storing and processing image data and supports the decision-making process.
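
Pixel-level fusion, the simplest of the three levels, can be as basic as a weighted blend of two co-registered images. The Python sketch below fuses a synthetic high-resolution panchromatic image with a coarse band; the 0.6 weight is chosen arbitrarily, whereas operational fusion typically relies on PCA or wavelet methods.

import numpy as np

def fuse(pan, band, weight=0.6):
    # Normalize both images and blend them; the panchromatic image
    # contributes spatial detail, the band contributes radiometry.
    p = pan.astype(float) / pan.max()
    b = band.astype(float) / band.max()
    return weight * p + (1 - weight) * b

rng = np.random.default_rng(2)
pan = rng.random((128, 128))         # synthetic high-resolution pan image
band = np.round(pan * 4) / 4         # coarse, quantized stand-in band
fused = fuse(pan, band)
print(fused.shape, float(fused.min()), float(fused.max()))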

Further Readings

  1. Allan, T.D. (1983) Satellite Microwave Remote Sensing, John Wiley & Sons, Inc., New York.
  2. Berman, A.E. (1999) Exploring the Universe through Satellite Imagery, Tri-Space, Inc.
  3. Bromberg, J.L. (1999) NASA and the Space Industry, Johns Hopkins University Press, Baltimore, Maryland.
  4. Clareton, A.M. (1991) Satellite Remote Sensing in Climatology, CRC Press, Florida.
  5. Conway, E.D. (1997) Introduction to Satellite Image Interpretation, Johns Hopkins University Press, Baltimore, Maryland.
  6. Denegre, J. (1994) Thematic Mapping from Satellite Imagery: Guide Book, Pergamon Press, Oxford.
  7. Gatland, K. (1990) Illustrated Encyclopedia of Space Technology, Crown, New York.
  8. Gurney, R.J., Foster, J.L. and Parkinson, C.L. (1993) Atlas of Satellite Observations Related to Global Change, Cambridge University Press, Cambridge.
  9. Hiroyuki, F. (2001) Sensor Systems and Next Generation Satellites IV, SPIE – The International Society for Optical Engineering, Bellingham, Washington.
  10. Lillesand, T.M., Kiefer, R.W. and Chipman, J.W. (2004) Remote Sensing and Image Interpretation, John Wiley & Sons, Inc., New York.
  11. Sanchez, J. and Canton, M.P. (1999) Space Image Processing, CRC Press, Boca Raton, Florida.
  12. Verger, F., Sourbes-Verger, I., Ghirardi, R., Pasco, X., Lyle, S. and Reilly, P. (2003) The Cambridge Encyclopedia of Space, Cambridge University Press, Cambridge.

Internet Sites

  1. www.gisdevelopment.net
  2. www.astronautix.com
  3. http://www.crisp.nus.edu.sg/~research/tutorial/spacebrn.htm
  4. http://www.crisp.nus.edu.sg/~research/tutorial/image.htm
  5. http://www.crisp.nus.edu.sg/~research/tutorial/optical.htm
  6. http://www.crisp.nus.edu.sg/~research/tutorial/opt_int.htm
  7. http://www.crisp.nus.edu.sg/~research/tutorial/infrared.htm
  8. http://www.crisp.nus.edu.sg/~research/tutorial/mw.htm
  9. http://www.crisp.nus.edu.sg/~research/tutorial/sar_int.htm
  10. http://www.crisp.nus.edu.sg/~research/tutorial/process.htm
  11. http://www.gisdevelopment.net/tutorials/tuman008.htm
  12. http://rst.gsfc.nasa.gov/Front/tofc.html
  13. www.isro.org
  14. www.nasda.go.jp
  15. www.noaa.gov
  16. www.orbiimage.com
  17. www.spot4.cnes.fr
  18. www.spotimage.fr
  19. www.spaceimaging.com
  20. www.skyrocket.de

Glossary

Active remote sensing:
Active remote sensing involves active artificial sources of radiation generally mounted on the remote sensing platform that are used for illuminating the objects. The energy reflected or scattered by the objects is recorded in this case
Aerial remote sensing:
Aerial remote sensing uses platforms like aircraft, balloons, rockets, helicopters, etc., for remote sensing
Central perspective scanners:
Central perspective scanners utilize either electromechanical or linear array technology to form image lines, but the image lines are formed with a perspective at the centre of the image rather than at the centre of each line
False colour composite image:
If the spectral bands in the image do not correspond to the three primary colours, the resulting image is called a false colour composite image. Hence, the colour of an object in the displayed image has no resemblance to its actual colour
Geographic Information System (GIS):
The Geographic Information System (GIS) is a computer-based information system used to digitally represent and analyse the geographic features present on Earth's surface and the events taking place on it
Indian remote sensing satellite (IRS) system:
This is one of the largest constellations of Earth observation satellites and is used for a wide variety of remote sensing and Earth observation applications at both national and international levels. These satellites are designed, launched and maintained by the Indian Space Research Organization (ISRO).
Instantaneous field-of-view (IFOV):
This is defined as the solid angle from which the electromagnetic radiation measured by the sensor at a given point of time emanates
Landsat satellite system:
The Landsat satellite system is the USA's remote sensing satellite programme launched with the aim of observing Earth on a global basis. It comprises a series of optical/thermal remote sensing satellites for land observation purposes
Microwave radiometer:
The microwave radiometer is a passive device that records the natural microwave emission from Earth
Microwave remote sensing systems:
Microwave remote sensing systems utilize the microwave band, generally from 1 cm to 1 m for remote sensing applications
Monogenic images (panchromatic images):
Monogenic images are produced from a single primary image by applying some changes to it, like enlargement, reduction, error correction, contrast adjustments, in order to extract maximum information from the primary image
Multispectral images:
In multispectral images, the final image is produced from three images taken in different spectral bands and by assigning a separate primary colour to each image
Multitemporal images:
Multitemporal images are secondary images produced by combining two or more primary images taken at different times
Natural colour composite image:
Natural colour composite images are those images in which the spectral bands are combined in such a way that the appearance of the displayed image resembles a visible colour photograph
Non-scanning systems:
Non-scanning systems explore the entire field in one take
Optical mechanical scanner:
An optical mechanical scanner is a multispectral radiometer where the scanning is done in a series of lines oriented perpendicular to the direction of the motion of the satellite using a rotating or an oscillating mirror
Optical remote sensing systems:
Optical remote sensing systems mainly make use of the visible (0.3–0.7 μm), near-IR (0.72–1.30 μm) and shortwave-IR (1.30–3.00 μm) bands to form images of the Earth's surface. Some optical remote sensing systems also use laser radars, laser distance meters, etc.
Passive remote sensing:
Passive remote sensing refers to the detection of reflected or emitted radiations from natural sources like the sun, etc., or the detection of thermal radiation or the microwave radiation emitted by objects
Polygenic secondary images:
These are composite images formed by combining two or three primary images in order to make the extraction of information from the image easier and more meaningful
Push broom scanners:
A push broom scanner is a scanner without any mechanical scanning mirror but with a linear array of solid semiconductor elements located at the focal plane of the lens system, which enables it to record one line of an image at one time
Radarsat satellite system:
Radarsat is a Canadian remote sensing satellite system
Radiometric resolution:
Radiometric resolution refers to the smallest change in the intensity level that can be detected by the remote sensing system
Resolution:
Resolution is defined as the ability of the entire remote sensing system (including the lens, antennas, display, exposure, processing, etc.) to render a sharply defined image
Spatial resolution:
Spatial resolution is defined as the minimum distance the two point features on the ground should have in order to be distinguished as separate objects
Spectral resolution:
This is determined by the bandwidths of the electromagnetic radiation of the channels. The narrower the bandwidth used, the higher is the spectral resolution achieved
SPOT satellites:
SPOT is the French satellite programme, with Belgium and Sweden as minority partners. The system is designed by the French Space Agency (CNES) and is operated by its subsidiary, Spot Image
Synthetic aperture radar:
Synthetic aperture radar (SAR) uses a technique of synthesizing a very large array antenna over a finite period of time by using a series of returns from a relatively much smaller physical antenna that is moving with respect to the target
Temporal resolution:
Temporal resolution is specified as the number of days after which the satellite revisits a particular place
Thermal remote sensing systems:
Thermal remote sensing systems sense the thermal radiations emitted by objects in the mid-IR (3–5 μm) and the long IR (8–14 μm) bands
True colour composite image:
In the true colour composite image, the three primary colours are assigned to the same coloured spectral bands, resulting in images similar to that seen by a human eye
Wind scatterometer:
The wind scatterometer is used to measure wind speed and direction over the ocean surface by sending out microwave pulses along several directions and recording the magnitude of the signals backscattered from the ocean surface