Environmental Sustainability in Practice

Remote Sensing

Remote sensing offers a bird's-eye view of the Earth and is typically used to collect electromagnetic energy carrying important information about objects or areas at or near the Earth's surface. The electromagnetic spectrum is a continuum of electromagnetic energy arranged by wavelength, from the shortest (gamma) to the longest (radio) wavelengths. These wavelengths can be measured in micrometres or nanometres. When using remote-sensing technologies to address contemporary issues in environmental sustainability, we are most interested in the visible (blue, green, red), infrared (near, mid, and thermal), and microwave portions of the spectrum.
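For orientation, the short Python sketch below lists approximate wavelength ranges for these portions of the spectrum. The boundaries are illustrative only (they loosely follow the band definitions used by older Landsat sensors); exact ranges vary from sensor to sensor.

```python
# Approximate wavelength ranges (in micrometres) for the spectral regions
# most often used in environmental remote sensing. Illustrative values only.
SPECTRAL_REGIONS_UM = {
    "blue":             (0.45, 0.52),
    "green":            (0.52, 0.60),
    "red":              (0.63, 0.69),
    "near-infrared":    (0.76, 0.90),
    "mid-infrared":     (1.55, 2.35),
    "thermal-infrared": (10.4, 12.5),
    "microwave":        (1_000, 1_000_000),  # roughly 1 mm to 1 m
}

for region, (low, high) in SPECTRAL_REGIONS_UM.items():
    print(f"{region:>16}: {low} - {high} micrometres")
```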


Features on the Earth's surface react differently to the electromagnetic energy emitted by the sun. For example, water strongly absorbs near-infrared (NIR) energy, while healthy, photosynthesizing vegetation strongly reflects it. As vegetation becomes stressed, less NIR energy is reflected. Thus, remote-sensing devices can detect changes in the health of vegetation before our eyes can notice such change.

An Earth-surface feature's response to the different types of electromagnetic energy can be graphed (as shown in the figure below), producing that feature's spectral signature, or spectral reflectance curve. The percentage of reflectance is plotted on the y-axis, while wavelength is represented on the x-axis. Just as you have a signature that is unique to you, surface features also have unique spectral signatures. These signatures can be used to identify unknown features in an image and to differentiate between types of plants and animals.
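The sketch below plots hypothetical spectral signatures of this kind. The reflectance numbers are invented to mimic the general shapes of real curves (water absorbing NIR energy, healthy vegetation reflecting it strongly); they are not measurements.

```python
import matplotlib.pyplot as plt

# Hypothetical reflectance curves: percent reflectance vs wavelength.
wavelengths_um = [0.45, 0.55, 0.65, 0.85, 1.65, 2.20]   # blue .. mid-infrared

signatures = {
    "clear water":         [6, 5, 3, 1, 0.5, 0.3],
    "healthy vegetation":  [5, 12, 6, 48, 28, 15],
    "stressed vegetation": [6, 10, 9, 30, 22, 14],
}

for feature, reflectance in signatures.items():
    plt.plot(wavelengths_um, reflectance, marker="o", label=feature)

plt.xlabel("Wavelength (micrometres)")
plt.ylabel("Reflectance (%)")
plt.title("Hypothetical spectral signatures")
plt.legend()
plt.show()
```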

How are the data collected?

Remotely sensed data are collected using either passive or active sensors. These sensors may be found on hand-held devices, drones, aircraft, and satellites. Passive sensors record electromagnetic energy that is reflected or emitted from the Earth; this energy originates from the sun. Active sensors (such as RADARSAT) have their own energy source, sending signals to the Earth and recording the backscattered signals at the sensor. The remote-sensing data are subsequently delivered to a "client," who imports them into remote-sensing software for further image processing.

The data are typically delivered as "bands" of imagery, each covering a different part of the electromagnetic spectrum. The data may consist of a single band (as illustrated in the diagram below) or of multi-spectral data, that is, multiple bands of imagery (as shown below).
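As a minimal sketch of what the "client" step might look like, the example below reads a delivered multi-band image into memory. It assumes the rasterio package is installed and that the file name "scene.tif" stands in for whatever image is actually delivered.

```python
import rasterio  # assumes the rasterio package is available

# "scene.tif" is a hypothetical multi-spectral image delivered as a single
# file in which each band covers a different part of the spectrum.
with rasterio.open("scene.tif") as src:
    print("number of bands:", src.count)
    print("rows, columns:  ", src.height, src.width)
    bands = src.read()  # NumPy array with shape (bands, rows, cols)

print("array shape:", bands.shape)
```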



Using a hypothetical image, the figure below shows the typical format of one band of 8-bit image data. On the far right of the figure is a greyscale indicating that pixels with light image tones represent areas of higher reflectance than pixels with darker tones. The image in the middle of the diagram shows the numeric data (or brightness values, BVs) attached to each pixel: light tones have much higher BVs (e.g., 221) than darker tones (e.g., 17).
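You can reproduce a small version of this kind of display yourself. The sketch below builds a tiny, made-up 8-bit band and shows it in greyscale; the BVs (including 221 and 17) are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# A tiny, hypothetical band of 8-bit data: each cell is a brightness value (BV).
# High BVs (e.g., 221) display as light tones; low BVs (e.g., 17) as dark tones.
band = np.array([[221, 180,  90],
                 [150,  60,  17],
                 [200, 120,  40]], dtype=np.uint8)

plt.imshow(band, cmap="gray", vmin=0, vmax=255)
plt.colorbar(label="Brightness value (BV)")
plt.title("Hypothetical 8-bit image band")
plt.show()
```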


Remote-sensing images are composed of thousands to millions of square grid cells called pixels. Each pixel in an image contains a numeric value representing the amount of reflectance of an object on the Earth's surface. These numeric values are called brightness values (BVs) and are also known as digital numbers. The range of possible BVs for an image pixel depends on the image's radiometric resolution (see below). For example, an image with a radiometric resolution of 8 bits will have BVs ranging from 0 to 255. As demonstrated in the figure above, pixels with a BV of 0 appear black, while pixels with BVs closer to 255 appear brighter.
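The relationship between radiometric resolution and the BV range follows directly from the number of bits: an n-bit image can store 2**n discrete levels, from 0 up to 2**n - 1. A one-line check:

```python
# Possible brightness-value range for a given radiometric resolution.
for bits in (8, 11, 12, 16):
    print(f"{bits:2d}-bit data: BVs range from 0 to {2**bits - 1}")
```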

A satellite sensor acquiring remote-sensing imagery has four characteristics that define its capabilities as a sensor: spatial resolution, spectral resolution, radiometric resolution, and temporal resolution. Terms like "high" and "low" are typically used to describe these different types of image resolution. For example, the phrase "high spatial resolution imagery" refers to images with very small pixel sizes (e.g., 10 cm).
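As a quick way to keep the four terms straight, here is a description of a purely hypothetical sensor; the values are invented and do not describe any real satellite.

```python
# A hypothetical sensor illustrating the four types of resolution.
sensor = {
    "spatial resolution":     "10 m pixels",
    "spectral resolution":    "4 bands (blue, green, red, near-infrared)",
    "radiometric resolution": "12-bit (BVs 0-4095)",
    "temporal resolution":    "revisits the same area every 5 days",
}

for characteristic, value in sensor.items():
    print(f"{characteristic}: {value}")
```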
 

Remote Sensing Images

We can view remote-sensing images using RGB (red, green, blue) computer monitors. In an RGB monitor, three bands of black-and-white image data are combined into one new colour image: one band is displayed through the red filter (or "gun"), another through the green filter, and the third through the blue filter. Assigning different bands to the colour filters (or guns) can produce either a true-colour or a standard false-colour composite image.

True-colour composite images: To produce a true-colour composite, the red band is assigned to the red "gun," the green band to the green "gun," and the blue band to the blue "gun." True-colour composite images are easy to identify visually because they show the Earth's surface the same way we would see it using our own remote-sensing devices (i.e., our eyes).


Standard false-colour composite images: To produce a standard false-colour composite image, the near-infrared band is assigned to the red "gun," the red band to the green "gun," and the green band to the blue "gun." In the standard false-colour composite image below, vegetation appears in different shades of red depending on the health and density of the vegetation present.
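Both band-to-gun assignments can be sketched in a few lines of array code. The example below uses randomly generated 8-bit arrays as stand-ins for real blue, green, red, and near-infrared bands; with real imagery, the bands would be read from file instead.

```python
import numpy as np

# Hypothetical co-registered bands, each a 2-D array of 8-bit brightness values.
rows, cols = 100, 100
blue  = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
green = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
red   = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
nir   = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)

# True-colour composite: red band -> red gun, green -> green gun, blue -> blue gun.
true_colour = np.dstack([red, green, blue])

# Standard false-colour composite: NIR -> red gun, red -> green gun, green -> blue gun.
false_colour = np.dstack([nir, red, green])

# Each composite is a (rows, cols, 3) array that matplotlib's imshow() can
# display as an RGB colour image.
```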


GPS data can also be overlaid onto existing remote-sensing data to show the precise location of objects (e.g., individual vines) on the Earth's surface, as shown in the true-colour composite image below.
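A minimal sketch of how such an overlay works is shown below: a point's map coordinates are converted to an image row and column. It assumes the GPS points and the image share the same coordinate reference system and that the image has no rotation; all coordinate values are hypothetical.

```python
# Hypothetical georeferencing for the image (same units as the GPS points).
x_origin, y_origin = 500000.0, 4650000.0   # map coordinates of the top-left corner
pixel_width, pixel_height = 10.0, 10.0     # ground size of one pixel (metres)

def map_to_pixel(x, y):
    """Convert a map coordinate (x, y) to an image (row, col) position."""
    col = int((x - x_origin) / pixel_width)
    row = int((y_origin - y) / pixel_height)   # rows increase downward
    return row, col

# Example: a GPS reading for an individual vine (hypothetical coordinates).
print(map_to_pixel(500523.7, 4649488.2))   # -> (51, 52)
```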


Click here to read about some remote-sensing applications. 

The following videos provide a succinct summary of the main concepts in remote sensing. 


 
