Lab 2: Radiometric and Atmospheric Correction

Goals and Background:


The goal of this lab was to familiarize students with methods of absolute and relative atmospheric correction of remotely sensed images that do not rely on radiative transfer code (RTC). More specifically, the Empirical Line Calibration and enhanced image-based Dark Object Subtraction absolute correction methods were performed in parts 1 and 2 of this lab, and Multidate Image Normalization, a relative method, was performed in part 3. RTC methods were not employed because of the difficulty of obtaining the required in-situ data, such as atmospheric moisture content, temperature, pressure, aerosol content, and the vertical distribution of water vapor. All work was performed in ERDAS Imagine.

Atmospheric correction is a necessary step before many major remote sensing operations. It must be performed before measuring any biophysical properties from an image, including soil properties (such as a ferrous mineral index), vegetation characteristics (such as an NDVI), or mineral characteristics. It is also required when creating image mosaics and when integrating multi-sensor data.

Atmospheric correction can remove the effects of scattering of all types (Rayleigh, Mie, and non-selective scattering), atmospheric absorption in different bands (by gases such as N2, O2, and CO2), and reflection from areas other than the target caused by topography.


Methods:


Part 1 Overview:

Empirical line calibration (ELC) works by pairing the spectral signatures of sample pixels in an image with field-collected spectral radiance measurements. For the purposes of this lab, spectral reference samples from libraries were used in place of field measurements. After a number of these pairs are created, a regression equation is derived for each band of the sensor, and each band is then adjusted per its regression equation, pixel by pixel, to create the output image. The equation is as follows:

CRk = DNk * Mk + Lk

Where:
CRk = the corrected digital output pixel values for band k.
DNk = the image band to be corrected.
Mk = a multiplicative term affecting the brightness values of the image (the gain).
Lk = an additive term (the offset).
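
Conceptually, this per-band calibration amounts to a simple linear fit. The sketch below shows the idea in Python with NumPy; the sample values and the random array standing in for an image band are hypothetical placeholders, not values from the lab.

import numpy as np

# Hypothetical paired samples for one band: DN values of the sample
# pixels and the matching reference reflectances from a spectral library.
dn_samples = np.array([23.0, 41.0, 78.0, 120.0, 190.0])
ref_samples = np.array([0.04, 0.09, 0.21, 0.35, 0.55])

# A least-squares fit yields the gain (Mk) and offset (Lk) for band k.
gain, offset = np.polyfit(dn_samples, ref_samples, 1)

# Apply CRk = DNk * Mk + Lk pixel by pixel to the whole band
# (a random array stands in for a real image band here).
band_dn = np.random.randint(0, 256, size=(512, 512)).astype(float)
band_corrected = band_dn * gain + offset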

Part 1 Section 1:

To begin, the image to be corrected was opened. The image in this case was a Landsat 5 image of Eau Claire and the surrounding area from August 3rd, 2011. The Spectral Analysis Workstation was then opened and the sensor was set to Landsat 5 TM (6 bands), viewing in the preset false color IR (4, 3, 2) band combination. Subsequently, the Atmospheric Adjustment Tool (Figure 1) was opened and the method was set to Empirical Line.

Part 1 Section 2:

In the Atmospheric Adjustment Tool, samples were taken and assigned to spectral reference signatures from the libraries provided. From the ASTER library, sample pixels were assigned to the asphaltic concrete, pinewood, grass, and tap water reference samples. From the USGS V4 library, an aluminum roof was assigned to the Alunite AL706Na sample.

Figure 1
Part 1 Section 3:

After saving the atmospheric adjustment file, the ELC was executed from the Spectral Analysis Workstation on the image.


Part 2 Overview:

Enhanced image-based dark object subtraction is a much more complex method that is nearly as effective as RTC atmospheric correction methods. The method takes into consideration sensor gain, offset, solar irradiance, solar zenith angle, atmospheric scattering and absorption, and path radiance. It follows two steps: the first is conversion of the digital number (DN) to at-satellite spectral radiance, and the second is conversion of that value to true surface reflectance. The equation for the first step is shown in Figure 3, and the equation for the second step is shown in Figure 4.

Step 1

Figure 3
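
For reference, the equation in the figure is the standard Landsat DN-to-radiance conversion, reconstructed here from the symbol definitions below:

Lλ = ((LMAXλ - LMINλ) / (Qcalmax - Qcalmin)) * (Qcal - Qcalmin) + LMINλ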

Where:

Lλ = At-sensor spectral radiance in [W/(m² sr μm)]
Qcal = Landsat image (digital number DN)
Qcalmin = Minimum quantized calibrated pixel value corresponding to LMINλ
Qcalmax = Maximum quantized calibrated pixel value corresponding to LMAXλ
LMINλ = Spectral at-sensor radiance scaled to Qcalmin [W/(m² sr μm)]
LMAXλ = Spectral at-sensor radiance scaled to Qcalmax [W/(m² sr μm)]

Step 2

Figure 4
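
For reference, the equation in the figure follows the standard dark object subtraction form, reconstructed here from the symbol definitions below:

Rλ = (π * D² * (Lλ - Lλhaze)) / (TAUv * Esunλ * cos(θs) * TAUz)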

Where:

Rλ = True surface reflectance.
D = Distance between Earth and sun [astronomical units].
Lλ = At-sensor spectral radiance image.
Lλhaze = Path radiance.
TAUv = Atmospheric transmittance from ground to sensor.
Esunλ = Mean solar exoatmospheric spectral irradiance [W/(m² μm)].
θs = Sun zenith angle (90° minus the sun elevation angle).
TAUz = Atmospheric transmittance from sun to ground.

Section 1:

In section 1, the first step of the process was carried out. Using Model Builder, each band was run through the equation with parameters specific to that band, taken from the image metadata. The equation for this first step is shown as part of the completed model in Figure 5; step 1 was executed on its own first, however, because the histograms of its output images were needed to find the parameters for step 2. Figure 5 shows the complete model simply to illustrate the whole process.

Section 2:

The parameters for step 2 were found in various places, including published tables, the histograms of the at-satellite spectral radiance images, and general research. Lλhaze was estimated for each band as the distance along the x-axis between 0 and the beginning of the image histogram. TAUv and TAUz are normally derived from the atmospheric optical thickness at the time of the image; since this data was not available, TAUv was approximated as 1 and published satellite- and band-specific TAUz averages were used. The sun elevation angle was found in the metadata and used to compute the sun zenith angle. The second equation, through which the data for each band was then run, appears as the second step of the model in Figure 5.
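
Conceptually, the per-band computation the model performs could be sketched as follows in Python with NumPy. All parameter values below are illustrative placeholders (including the assumed 58° sun elevation), not the values actually used in the lab.

import numpy as np

# Placeholder parameters for a single band; real values come from the
# image metadata and published tables.
qcal_min, qcal_max = 1.0, 255.0       # quantized calibrated pixel range
lmin, lmax = -1.52, 193.0             # LMINλ / LMAXλ for the band
esun = 1957.0                         # Esunλ, mean solar exoatmospheric irradiance
d = 1.0146                            # Earth-sun distance in AU on the image date
theta_s = np.deg2rad(90.0 - 58.0)     # zenith angle = 90° minus sun elevation
tau_v, tau_z = 1.0, 0.70              # transmittance approximations, as in the lab
l_haze = 40.0                         # path radiance read from the step 1 histogram

# A random array stands in for one band of the Landsat image (Qcal).
qcal = np.random.randint(1, 256, size=(512, 512)).astype(float)

# Step 1: DN to at-sensor spectral radiance (Lλ).
l_rad = (lmax - lmin) / (qcal_max - qcal_min) * (qcal - qcal_min) + lmin

# Step 2: radiance to true surface reflectance (Rλ).
refl = (np.pi * d**2 * (l_rad - l_haze)) / (tau_v * esun * np.cos(theta_s) * tau_z)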

Figure 5
After both sections of the model were run, the output layers were combined into a single image containing all of the bands using the Layer Stack tool (Figure 6).

Figure 6

Part 3 Overview:

In this part, two images of Chicago and the surrounding area, one from 2000 and one from 2009, were used to perform a multidate image normalization. Fifteen feature points with spectral signatures were collected from areas in both images whose signatures should not have changed over the time period. Regression equations were then created for each band using the 15 points from each image. The 2009 image was plotted on the X axis, as it was the image being normalized to the 2000 image.

The regression equation used is: Lλsensor = Gainλ * DN + Biasλ

Where:

Lλsensor = the normalized image value
Gainλ = the regression coefficient (slope)
DN = X, the value from the uncorrected 2009 image
Biasλ = the y-axis intercept

This is a last-resort process for when no in-situ data can be collected or found.
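
A minimal sketch of what this per-band normalization amounts to, in Python with NumPy, is shown below; the sample values and the random stand-in array are hypothetical, not the lab's actual data.

import numpy as np

# Hypothetical values of one band at a few of the pseudo-invariant points
# (in practice all 15 points would be used for each band).
x_2009 = np.array([12.0, 30.0, 55.0, 80.0, 110.0])  # uncorrected 2009 image (X)
y_2000 = np.array([15.0, 34.0, 60.0, 88.0, 118.0])  # reference 2000 image (Y)

# The regression gives the band's gain and bias: Lλsensor = Gainλ * DN + Biasλ.
gain, bias = np.polyfit(x_2009, y_2000, 1)

# Apply the equation to the whole 2009 band (random stand-in array).
band_2009 = np.random.randint(0, 256, size=(512, 512)).astype(float)
band_normalized = gain * band_2009 + bias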

Part 3 Section 1


Figure 7
With the images open in two separate viewers, the Spectral Profile window was opened for each image. Spectral profile points were then created on each image, with matching point numbers marking the same areas in both images. Points were collected from the same areas on the lake, rooftops, smaller lakes, and rivers. As stated before, a total of 15 points were collected. The complete sets of points on the profile windows are shown in Figure 7. The data from these graphs was then copied to an Excel spreadsheet, where graphs and regression equations were created for each band, with the values of the 2009 image on the X axis and the values of the 2000 image on the Y axis. These graphs can be seen in Figures 8-13.
Figure 8
Figure 9
Figure 10
Figure 11
Figure 12
Figure 13
Part 3 Section 2:

The equations from these graphs were finally used in Model Builder to process all bands of the image (Figure 14). The bands were then stacked using the same method as in part 2.
Figure 14
Results:

Part 1:


Figure 15
Figure 15 shows the spectral profiles of an identical place before and after ELC processing. It is evident that changes were made, as the two profiles show the same pixel, although because the profiles use two different ranges of tonal values, direct numeric comparisons cannot be made. This was an unexpected problem with this lab, and file formatting changes may need to be made to resolve it. Comparative observations could, however, be made between the shapes of the profiles, and tonal changes could also be observed after processing.

One issue that could have produced poor results with this process was the use of library reference data instead of field-collected in-situ data. Some reference data did not match the sample pixels correctly; for example, turbulent river water was assigned the tap water reference, and the two would give different signatures.

Figure 16 shows the image before (left) and after (right) the ELC process. Some tonal differences can be seen.

Figure 16

Part 2:


Figure 17 shows the 2011 Eau Claire image before (left) and after (right) enhanced image-based dark object subtraction. Tones are generally darker in the corrected image, which is also much clearer and shows more contrast between dark and bright areas. Reflectance other than surface reflectance must have brightened the original image in all areas, reducing clarity and contrast. Dark purple and red areas of vegetation appear darker, with more vivid and pronounced color. Pink areas of grass appear darker. Roads have a more vivid light blue color. Water appears darker, and the spectral profile of the corrected image shows a significant decrease in band 1.

Figure 17
This method seems to have worked better than ELC. The values entered into the equations appear to be more accurate than those used in the ELC (which relied on imperfect library profiles), and the method itself is much more rigorous. The DOS method also resulted in a darker image, leading me to believe that it removed more reflectance other than target reflectance than the ELC method did.

Part 3:

After normalizing, profiles of areas of the same land cover were chosen to compare between the two images. The profiles revealed that the normalization had all but equalized the signatures of matching land cover areas between the 2000 image and the normalized 2009 image, showing that the process worked relatively well. There are also visible tonal differences, as seen in Figure 18, which shows the Chicago image before (left) and after (right) normalization.

Figure 18

Sources:

Lab instruction from Dr. Cyril Wilson. Landsat imagery is from the Earth Resources Observation and Science Center of the US Geological Survey. Spectral signatures were used from the ASTER and USGS V4 libraries.
