ITC analysis of aerial images
- Introduction
- Aerial Sensor Technology
- Image needs for computer image analysis
- Synergy with LiDAR
- BRDF Correction and Normalization of Aerial Data
- Tests of feature-based BRDF correction curves on two flight lines
- Conclusion
Aerial Sensor Technology
The availability of digital aerial images is sharply on the rise, as numerous good-quality sensors exist and imagery is often acquired to systematically populate databases from which it is later purchased (e.g., NorthWest Geomatics' ADS40 acquisitions). In addition, the forest inventory community (i.e., the provincial governments and the forest companies) is moving to a soft-copy interpretation approach, discovering the added benefits of direct GIS compatibility and the pleasure of multispectral image interpretation, while retaining one of the main benefits of the past: controlled acquisition conditions via specific contracts. However, the use of the semi-automatic ITC approach to forest inventories over large areas covered by aerial images (or strips) has not yet been fully demonstrated.
In the field of digital remote sensing (DRS), there are three well-known types of multispectral satellite sensors. First, sensors such as those on the Landsat satellite series gather data one pixel at a time, using an oscillating mirror that sweeps across the field of view to create the image lines. Second, the so-called ‘push-broom’ sensors, à la SPOT, gather one image line at a time with a linear array (i.e., a line) of charge-coupled device (CCD) photosites. Third, the ‘full frame’ sensors such as IKONOS, QuickBird, etc. gather a full two-dimensional (2D) scene with one 2D CCD sensor per spectral band. All of these use prisms or diffraction gratings to separate the incoming light into the various spectral bands.
A common feature of satellite images is their acquisition from a high altitude (600-800 km), which allows them to cover substantial areas of the ground while using lenses with very small view angles (2–3°). Compared to aerial images, this substantially simplifies the pre-processing needed before an ITC analysis can be carried out, as objects (trees) are essentially all seen from above (i.e., no leaning to the side) and, most importantly, are all part of the same scene. It should be noted, however, that some satellite sensors can be rotated to look to the side, thus increasing the probability of an acquisition but creating images in which objects are seen at significant view angles.
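The geometric contrast between the two platforms can be illustrated with a back-of-the-envelope swath calculation. The satellite altitude and view angle are the representative values quoted above; the aerial altitude and field of view are illustrative assumptions, as is the function name.

```python
import math

def swath_width_km(altitude_km: float, fov_deg: float) -> float:
    """Ground swath covered by a nadir-looking sensor with a given
    total field of view (simple flat-Earth approximation)."""
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg) / 2.0)

# Satellite case: ~700 km altitude, ~2.5 degree field of view.
# Wide coverage, yet every pixel is viewed nearly straight down.
sat = swath_width_km(700.0, 2.5)    # ~30.5 km swath

# Aerial case (illustrative): ~3 km altitude, ~60 degree field of view.
# Narrow coverage, with strong off-nadir viewing at the strip edges,
# so trees near the edges lean visibly to the side.
air = swath_width_km(3.0, 60.0)     # ~3.5 km swath
```

The satellite thus covers roughly ten times the ground width while keeping every object within a couple of degrees of vertical, which is why satellite scenes need far less view-angle pre-processing before ITC analysis.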
The satellite sensors described above all sprang from developments made first on aerial sensors. This trend continues, albeit now mostly outside of Canada. For example, the Daedalus series of airborne sensors followed a Landsat-type scanning approach, while the MEIS, the DMZ, or the Leica sensors, to name but a few, followed a push-broom approach. In addition to using multiple linear arrays to gather the multispectral data at nadir, many of these sensors also have similar acquisition capabilities looking forward and backward, thus creating multispectral stereo pairs for the more ‘traditional’ interpretive approaches (now done on computers, in the so-called soft-copy environment). The CASI is essentially a hyperspectral sensor that can be used in multispectral mode, typically producing more spectral bands than the ubiquitous four (nIR, R, G, and B). Sensors such as those from Kodak or Applanix use the 2D frame-camera approach, with or without stereo-pair capabilities. Other sensors are more sophisticated still. Example systems and types are presented in Table 1.
Table 1. Example sensor types and systems.

| Type of sensor | Examples |
|---|---|
| Mechanical scanner (oscillating mirror, one pixel at a time) | Landsat, Daedalus |
| Push-broom sensors (one line at a time) | MEIS, ADS40 (v1, v2), SPOT |
| Frame cameras (two dimensions) | Kodak, DMC, Applanix, ALTM |
| Frame cameras with pan-sharpening | IKONOS, QuickBird, … |
| Wide push-broom sensors (more than one line) | M7VI (v1, v2, v3) sensors |
| Others (time synchronization and pan-sharpening) | Vexcel UltraCam |
The Vexcel UltraCam sensor uses a sophisticated system that gathers the panchromatic image in parts through time, while getting its multispectral channels from single 2D arrays. It thus has to reconstruct the panchromatic image from all these frames, relying on correlation within overlapping image regions to assemble a single high-definition grey-level scene. Multiple lenses are involved, so corrections such as those for vignetting become more complicated. Sections of what is often considered a single panchromatic image can be spectrally offset relative to one another. Since a pan-sharpening technique is used to produce the high-definition versions of the multispectral images, such anomalies are bound to carry through.
The M7VI sensor, on the other hand, uses an approach that sits between the 2D frame camera capturing a full scene and the linear array capturing a single image line at a time. It can be considered a wide push-broom approach: it gathers data a few hundred lines at a time, yet delivers data in long strips like those of a push-broom sensor. In addition, up to five cameras (thus five lenses) are used in parallel to create a wider strip, leading to numerous artefacts in both image-strip directions.
A final warning: pan-sharpening, whether done with satellite or airborne sensors, on the fly on board the sensor or on the ground after the acquisition, can often be misleading. It is important to keep in mind that the multispectral data were acquired at a cruder resolution (often by a factor of four) and that this will affect, to a certain extent, the delineation of tree crowns and, more importantly, their species recognition. Indeed, most multispectral pixels that appear to be well within a tree crown will have had their value affected by what surrounds that crown, be it shade, another crown, or background vegetation or material.
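The contamination effect can be made concrete with a toy example. The reflectance values below are illustrative assumptions; the point is only the arithmetic of the 4x resolution factor mentioned above.

```python
import numpy as np

# One multispectral (MS) pixel covers a 4x4 block of pan-resolution pixels.
# Suppose the block holds three columns of sunlit crown (reflectance ~0.30)
# and one column of adjacent shadow (~0.05).
fine_red = np.array([[0.30, 0.30, 0.30, 0.05],
                     [0.30, 0.30, 0.30, 0.05],
                     [0.30, 0.30, 0.30, 0.05],
                     [0.30, 0.30, 0.30, 0.05]])

# The sensor records a single mixed MS value for the whole block.
coarse_value = fine_red.mean()            # 0.2375, not 0.30

# Pan-sharpening redistributes that one value over the fine grid, so even
# pixels that look well inside the crown carry the shadow contamination.
upsampled = np.full((4, 4), coarse_value)
bias = upsampled[0, 0] - fine_red[0, 0]   # -0.0625 shift on crown pixels
```

A roughly 20% relative drop in the apparent crown reflectance of this band, caused entirely by one shaded column, is the kind of spectral distortion that can tip a species classifier toward the wrong class.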
Project status
- On-going