05-29-2019 10:12 AM
I have used ERDAS IMAGINE to perform land cover classifications on optical imagery; however, I am performing my first land cover classification on radar imagery and I am uncertain about the procedure. I have not been able to find any information or instructions on this specific topic anywhere online.
The data is a single scene of Sentinel-1 dual-polarisation (VV+VH) data taken in Interferometric Wide swath (IW) mode. I have pre-processed the data in ESA’s SNAP (Sentinels Application Platform) software using the Sentinel-1 Toolbox (S1TBX) module, and I have exported the product (containing the VV and VH bands as well as a couple of derived bands) as a GeoTIFF to open in ERDAS. In ERDAS I have a signature file that I created by reading a shapefile that contained my sampling polygons.

Where I start to run into difficulty is understanding how to actually perform a supervised classification for Sentinel-1 data. Do I need to use the Radar Toolbox in any way? Or, because I have the Sentinel-1 scene in TIFF format, do I proceed the exact same way as for optical data?

When I evaluate my signature file, identical values appear in the columns for Red, Green, and Blue (the values are different from one sample to the next, but within a single sample the values are identical across the RGB columns). I know from performing supervised classifications on optical data that these values would normally be unique. What band combination should I use in the colour guns for visualizing and performing land cover classifications on Sentinel-1 scenes in ERDAS? My study area contains a mixture of forest, other vegetation, and water.
I am grateful for any tips or advice on how to perform a supervised land cover classification in ERDAS using a Sentinel-1 scene. Thank you very much.
06-02-2019 05:56 AM
Attempts to classify and map vegetation types with optical images run into a basic impasse: optical sensors respond to target chemistry, and all vegetation is mostly cellulose and chlorophyll. Hyperspectral sensing looks for subtle colour variation in the green vegetation. The next step was to look for changes in vegetation density by matching the plant growth cycle (phenology) with changes in NDVI.
Vegetation mapping with radar is based on phenological monitoring. A radar sensor responds to target structure and dielectric constant. The response from a simple stem or branch is very different from the response of a leafy or blossomed structure, and not only in intensity. The radar reflection from a simple structure, like a stem or trunk, is modeled as a single or double bounce. In this sort of radar-target interaction, the return signal largely retains the polarization of the transmitted signal. This is the co-polar (HH or VV) signal. But the interaction with a complex target, such as a vegetation canopy, is via multiple bounces, and this creates some amount of cross-polar signal (HV or VH). Thus, changes in the cross-polar signal, or in the cross-polar to co-polar ratio, quantify the change in vegetation structure. As most vegetation monitoring is directed toward crops, and these are generally planted on a known schedule, it is possible to know when the crops of interest go through major structural changes (blossoming) and plan data acquisition accordingly.
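Because the cross-polar to co-polar ratio becomes a simple difference once backscatter is expressed in decibels, the comparison above can be sketched in a few lines. This is a minimal sketch with hypothetical pixel values, assuming calibrated sigma0 backscatter in linear power units:

```python
import math

def to_db(linear_power):
    """Convert linear backscatter (sigma0) to decibels."""
    return 10.0 * math.log10(linear_power)

def cross_co_ratio_db(vh_linear, vv_linear):
    """Cross-polar to co-polar ratio; in dB this is a simple difference."""
    return to_db(vh_linear) - to_db(vv_linear)

# Hypothetical pixel values: a leafy canopy returns relatively more
# cross-polar signal than a simple stem/trunk (double-bounce) target.
canopy = cross_co_ratio_db(vh_linear=0.02, vv_linear=0.10)   # ~ -7 dB
simple = cross_co_ratio_db(vh_linear=0.002, vv_linear=0.10)  # ~ -17 dB
```

A change of several dB in this ratio between acquisition dates would indicate the kind of structural change (e.g. blossoming) the post describes.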
Attempts to use the traditional optical classification tools probably won’t help much; for one thing, you only have two bands. If you had three wavelengths (X, C, L), each with two bands, then maybe. In addition, there is no reason to be using the SNAP (or any other) toolbox; IMAGINE has all the Sentinel-1 capability you need to process either bursts or swaths. And don’t convert the format to TIFF. In IMAGINE you can use the native S-1 format and so retain the full radiometric range and associated metadata (including the data’s spatial and temporal information). The S-1 tools and functionality are all there in IMAGINE.
To develop your classification regimen, use the Spatial Modeler to prototype a few potential classification algorithms. With the S-1 VV, VH dataset you could create VH/VV, VH/(VV+VH), and (VV-VH)/(VV+VH) as possible analogs to the optical NDVI algorithm. What works best for your problem? Moving forward, you need to develop a model (algorithm) for your vegetation mapping utilizing S-2 imagery, perhaps via the same phenological cycle. Note that I don’t try to combine data from optical and radar sensors; I do a classification on each sensor type separately, based on the imaging phenomenology of that sensor, and then combine the two (radar and optical) classification products using a logic tree.
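The three dual-pol indices named above are straightforward band math. A minimal Python sketch, assuming VV and VH are calibrated backscatter values in linear power units and using hypothetical numbers, might look like:

```python
def dual_pol_indices(vv, vh):
    """Three simple dual-pol indices (inputs in linear power units),
    as possible analogs to the optical NDVI. Names here are informal."""
    ratio       = vh / vv               # cross/co-polar ratio
    vh_fraction = vh / (vv + vh)        # cross-polar fraction
    norm_diff   = (vv - vh) / (vv + vh) # normalized difference
    return ratio, vh_fraction, norm_diff

# Hypothetical pixel: VV = 0.10, VH = 0.02 (linear sigma0)
r, f, n = dual_pol_indices(vv=0.10, vh=0.02)
# r ≈ 0.2, f ≈ 0.167, n ≈ 0.667
```

In the Spatial Modeler these would be single expression operators applied per pixel; the sketch only shows the arithmetic to compare against NDVI-style normalized differences.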
I will append a PPT presentation demonstrating how I do this.
Advanced Sensor Software
06-10-2019 05:58 AM - edited 06-10-2019 09:22 AM
I want to do Sentinel-1 and Sentinel-2 time series classification in ERDAS IMAGINE. I have 16 Sentinel-1 images with one band (VH/VV in dB scale), 16 GLCM images (each with 20 texture bands), 16 NDVI images, and 16 Sentinel-2 images (each with 8 bands).
First, I have to create a layer stack and then run the classification.
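As an illustration of what a single-date stack looks like numerically, here is a minimal numpy sketch (array shapes and values are hypothetical; in practice the stacking would be done in ERDAS or with a raster I/O library):

```python
import numpy as np

# Hypothetical single-date example: each band is a 2-D array on the same grid.
rows, cols = 4, 5
s1_vhvv  = np.zeros((1, rows, cols))  # 1 Sentinel-1 band (VH/VV, dB)
s2_bands = np.zeros((8, rows, cols))  # 8 Sentinel-2 bands

# A layer stack is just a concatenation along the band axis.
stack = np.concatenate([s1_vhvv, s2_bands], axis=0)
print(stack.shape)  # (9, 4, 5): a 9-band stack ready for classification
```

The signature file must then be generated against this same stacked image, since the signatures record which image (and band count) they belong to.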
Just to try it:
First, I exported one S1 and one S2 image in SNAP and opened them in ERDAS IMAGINE. I couldn't open the Sentinel images directly because it cannot read the BEAM-DIMAP format.
Then I created a layer stack of the S1 image and the S2 image.
And then I created a .sig file on the stacked image from my vector files (I did this from the Signature Editor).
When I try to run a maximum likelihood classification, it gives an error like this: Input image does not match signature's image association
How can I solve this problem?
EDIT: I solved the problem by defining my image via "Image Association" under the "Edit" option in the Signature Editor tool. However, I don't understand what you mean by using Sentinel-1 or Sentinel-2 images directly without exporting to GeoTIFF format. If you mean bands like "B2.img" or "VV.img" inside the image folders, it will take a very long time to stack around 500 bands (for time series analysis). Also, I am not sure whether it will practically be possible without any error.
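For reference, the band count implied by the datasets listed in the earlier post works out as follows (simple arithmetic, no assumptions beyond the counts given there):

```python
# Total band count for the 16-date time series described above
dates = 16
total = (dates * 1     # Sentinel-1 VH/VV (1 band per date)
         + dates * 20  # GLCM textures (20 per date)
         + dates * 1   # NDVI (1 per date)
         + dates * 8)  # Sentinel-2 (8 bands per date)
print(total)  # 480 bands, i.e. "around 500"
```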