09-15-2020 09:50 AM - last edited on 09-25-2020 11:48 AM by ian.anderson
Apologies, I have posted this also in a comment.
When importing Sentinel-2, for example, in the Sub-images option, can anyone indicate what method ERDAS uses for downscaling all the non-10m bands to 10m to obtain "AllBandsAt10Meters"?
See images below
I would like to know please if the downscaling involves a simple spatial resample method (and if yes, which one), or whether a pan-sharpening method is applied. Is there any documentation online about this option in ERDAS, and could anyone please provide it? I have looked it up in ERDAS Help but found no mention of it.
09-15-2020 11:16 AM
I'm pretty sure it just upsamples using Nearest Neighbor to retain spectral integrity.
09-16-2020 02:11 AM
Thank you for this and sorry for the hassle
But just to make sure I got it right: by "upsamples" you mean it upsamples from 20 m or 60 m to 10 m? It's just that in the literature "downscaling" and "upsampling" can cause confusion, sorry about that.
I'm looking at this paper here https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W16/95/2019/isprs-archives-XL... and they mention: "Several studies have been carried out on the upsampling of 20m and 60m Sentinel-2 bands to 10 meters resolution taking advantage of 10m bands."
I just want to make sure I'm not interpreting the terms wrongly
09-16-2020 02:20 AM
Just to explain my confusion - this paper here https://eprints.lancs.ac.uk/id/eprint/125240/1/Manuscript.pdf mentions in the abstract:
" Landsat 8 bands are downscaled to 10 m using 10 m Sentinel-2 bands "
09-16-2020 08:21 AM
In my mind at least, "up-sampling" would mean increasing the spatial resolution of a dataset, such as going from 30m pixels to 10m pixels. Downsampling or degrading would indicate the reverse - a decrease in spatial resolution, e.g. from 10m to 30m.
Down-scaling isn't a term I've really heard before. Let's think: a 1m pixel image displayed at 1:1 might have a display scale on-screen of 1:4,000. The same image with 4m pixels at 1:1 would have a scale of 1:16,000. The latter I would say is a smaller scale (the fraction is smaller), so I (personally) would consider it a "down-scaling", which is at odds with the usage you quote. But that's just my opinion :-)
It would certainly be an interesting project to implement a Spatial Model which pan-sharpens the 20 and 60m bands of Sentinel-2 using the 10m bands that have wavelength overlap.
But now that I think about it, there's virtually no wavelength overlap between the Sentinel-2 bands (except 8 and 8a). Normally a pan-sharpening technique relies on there being some wavelength contribution between the higher resolution band and the lower resolution band it's being merged with. This is often true for Pan bands because the Pan response is over a very broad spectrum that encompasses the visible wavelengths and up into the NIR.
So it's feasible to use the Landsat-8 Pan band to merge with at least some of the Sentinel-2 bands (perhaps b2 - b8), but not really to combine the Sentinel-2 bands themselves. Not in anything other than a visually interpretable manner anyway (i.e. it "looks nice").
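The overlap argument above can be sanity-checked numerically. The sketch below uses approximate published band ranges in micrometers for the Landsat-8 pan band and a few Sentinel-2 bands; the exact figures vary slightly by source, so treat them as illustrative only:

```python
# Approximate spectral ranges in micrometers (illustrative values only;
# consult the official mission handbooks for exact figures).
L8_PAN = (0.500, 0.680)

S2_BANDS = {
    "B2":  (0.458, 0.523),   # blue
    "B3":  (0.543, 0.578),   # green
    "B4":  (0.650, 0.680),   # red
    "B8":  (0.785, 0.900),   # broad NIR
    "B8A": (0.855, 0.875),   # narrow NIR
    "B11": (1.565, 1.655),   # SWIR 1
    "B12": (2.100, 2.280),   # SWIR 2
}

def overlaps(a, b):
    """True when the two (low, high) wavelength intervals intersect."""
    return max(a[0], b[0]) < min(a[1], b[1])

overlapping = [name for name, rng in S2_BANDS.items() if overlaps(L8_PAN, rng)]
print(overlapping)  # → ['B2', 'B3', 'B4']
```

With these particular numbers only the visible bands intersect the pan range, while B8 and the SWIR bands do not, which is why any merge beyond the visible bands leans on correlation rather than true spectral overlap.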
09-17-2020 07:09 AM
Thank you for this, it really helped clarify things. Yes, I definitely agree with you; it was just to make sure I got it right. I really appreciate all the information, very useful, and I appreciate the time you put into this!
As you mentioned there is no panchromatic band for Sentinel-2, but there is recent research out there that uses either the Sentinel-2 10 m bands or other high-resolution bands from other sensors (provided there is sufficient radiometric/spatial overlap), etc.
Please first and foremost consider the main conclusions (originally highlighted in red); the rest is just the information those conclusions are based on.
I have looked into the matter and there seems to be another solution that makes sense, and I would really appreciate your/the community's opinion on this: I averaged bands 2, 3, 4 and 8 and used the result as the "pan" band in the NNDiffuse Pan Sharpening tool for the red-edge and SWIR bands (5, 6, 7, 8A, 11 and 12). It might seem "forced", but it's in line with the research cited below, and these seem to be quite valid sources.
Please find them attached or referenced via URL:
"Although Sentinel-2 provides a high range of multispectral bands, the lack of panchromatic band disables the production of a set of fine-resolution (10 m) bands. However, few methods have been developed for increasing the spatial resolution of the 20 m bands up to 10 m."
However, the attached papers show up-to-date workarounds that have been in place for years now. The main conclusions and solutions after a quick literature review follow below; please let me know what you think (apologies, I tried to keep it short - there's a lot of info to summarize)!
The MDPI proceedings article “Sentinel-2 Pan Sharpening—Comparative Analysis” (2018) compares three methods and concludes:
a. For the red-edge bands, the method that performed best was producing the panchromatic band by averaging all the 10 m bands (Bands 2–4 and Band 8).
The Remote Sensing (MDPI, 2016) paper “Water Bodies’ Mapping from Sentinel-2 Imagery with Modified Normalized Difference Water Index at 10-m Spatial Resolution Produced by Sharpening the SWIR Band” https://www.mdpi.com/2072-4292/8/4/354 concludes:
b. For the SWIR bands, pan-sharpening using the 10 m NIR band as the PAN-like band - in line with the article “PANSHARPENING ON THE NARROW VNIR AND SWIR SPECTRAL BANDS OF SENTINEL-2” (2016).
c. The pan-sharpening can be done using the NNDiffuse Resolution Merge Pan Sharpening tool in ERDAS (as far as I tested, this seems the most appropriate; see the homonymous PDF attached), but also manually: https://community.hexagongeospatial.com/t5/Spatial-Modeler-eTraining/Basic-Pan-Sharpening-using-Band...
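As a rough illustration of method (a) above (averaging the four 10 m bands as a synthetic pan), here is a minimal pure-Python sketch that builds the synthetic pan and then applies a simple ratio-based modulation to a nearest-neighbour-upsampled 20 m band. This is not what NNDiffuse does internally; it is only a toy version of the idea, with made-up band values:

```python
def average_pan(b2, b3, b4, b8):
    """Synthetic pan band: pixel-wise mean of the four 10 m bands."""
    return [[(a + b + c + d) / 4.0 for a, b, c, d in zip(r2, r3, r4, r8)]
            for r2, r3, r4, r8 in zip(b2, b3, b4, b8)]

def nn_upsample(band, f):
    """Nearest-neighbour upsample by integer factor f (e.g. 20 m -> 10 m, f=2)."""
    return [[v for v in row for _ in range(f)] for row in band for _ in range(f)]

def block_mean(band, f):
    """Degrade a 10 m band to the 20 m grid by f x f block averaging,
    then expand back so it aligns pixel-for-pixel with the 10 m grid."""
    n, m = len(band), len(band[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(0, n, f):
        for j in range(0, m, f):
            mean = sum(band[i + di][j + dj]
                       for di in range(f) for dj in range(f)) / (f * f)
            for di in range(f):
                for dj in range(f):
                    out[i + di][j + dj] = mean
    return out

def ratio_sharpen(band20, pan10, f=2):
    """Inject pan spatial detail: upsampled band * (pan / block-averaged pan)."""
    up = nn_upsample(band20, f)
    low = block_mean(pan10, f)
    return [[b * p / lp if lp else b for b, p, lp in zip(rb, rp, rl)]
            for rb, rp, rl in zip(up, pan10, low)]

# One 20 m pixel (DN 100) sharpened with a 2x2 patch of a 10 m pan band.
sharp = ratio_sharpen([[100]], [[10, 10], [10, 30]], f=2)
```

A nice property of this ratio scheme is that the mean of each sharpened 2x2 block stays (up to rounding) equal to the original 20 m DN, so the coarse radiometry is preserved while the pan's spatial variation is injected.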
“Fusion of Landsat 8 OLI and Sentinel-2 MSI data” (“Manuscript.pdf”) mentions that, for each of the 20 m bands 11 and 12, the 10 m band with the greatest correlation with it (quantified by the correlation coefficient, CC) was selected from the 10 m bands 2, 3, 4 and 8 and used as the covariate in ATPRK (the advanced area-to-point regression kriging approach).
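That correlation-based selection step is easy to prototype. The sketch below (plain Python, hypothetical sample values) assumes each candidate 10 m band has already been averaged onto the 20 m grid and flattened to a pixel list, then picks the candidate with the highest Pearson correlation coefficient:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def best_covariate(band20, candidates):
    """Pick the 10 m band (pre-aggregated to 20 m) most correlated with band20."""
    return max(candidates, key=lambda name: pearson(band20, candidates[name]))

# Hypothetical flattened pixel samples for a 20 m SWIR band and two candidates.
b11 = [120, 140, 180, 220, 260]
candidates = {
    "B8": [100, 118, 160, 205, 240],   # tracks b11 closely
    "B2": [240, 200, 170, 130, 100],   # anti-correlated
}
print(best_covariate(b11, candidates))  # → B8
```

In this toy example B8 wins, which matches the paper's finding that the NIR band is usually the best covariate for the SWIR bands.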
The article “PANSHARPENING ON THE NARROW VNIR AND SWIR SPECTRAL BANDS OF SENTINEL-2” (2016) https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B7/723/2016/isprs-archives-XLI-B... mentions: “The equivalent procedure is followed for the Sentinel-2 20 m SWIR bands B11 and B12. Spectrally, the closest candidate higher resolution band is B8 and thus this one was employed and regarded as the panchromatic one during pansharpening. In this case, the spectral sensitivity between the high resolution band (i.e., B8) and the two SWIR bands was significant.”
Also, these ones set the basis earlier on:
Vivone, G., R. Restaino, M. Dalla Mura, G. Licciardi, and J. Chanussot, “Contrast and error-based fusion schemes for multispectral image pansharpening,” IEEE Geosci. Remote Sens. Lett., vol. 11, no. 5, pp. 930–934, May 2014.
Garzelli, A., F. Nencini, and L. Capobianco, “Optimal MMSE pan sharpening of very high resolution multispectral images,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 1, pp. 228–236, Jan. 2008.
Please let me know what you think
09-17-2020 12:33 PM
Hi @AGsilver,
Interesting reading, and it all makes sense. If you are going to merge one of the non-overlapping high-res bands with one or more of the lower-res bands, then you'd want to use the ones which have the highest correlation with those bands.
But you do have to decide whether the merging you might be doing is appropriate to the purpose to which you intend to put the sharpened data. For example, if you are sharpening the SWIR bands using the NIR band, the sharpened band(s) are no longer representative of the SWIR wavelength response of a feature at that pixel location. It's a mixture of the two. So you should be very cautious in how you use that sharpened band.
Why are you wanting to sharpen the data?
09-18-2020 01:49 AM - edited 09-18-2020 02:10 AM
Thank you, and very good point re the wavelength of the original band to sharpen; I should have thought of it. I noticed it myself after I sharpened the red-edge bands with the average of the 2, 3, 4 and 8 VNIR bands, when I was comparing (just visually) one of the sharpened outputs with its original corresponding band.
Re your question of why we want to sharpen the data:
Our land cover/use classification is mainly carried out on a different imagery source: 1 m airborne imagery that covers only the visible and NIR. Sentinel-2 is just ancillary data, used to help extract "trickier", more subtle features that are in fact nuances of one main class, such as types of grassland, types of forestry, or different wetlands. The purpose was to sharpen the red-edge and SWIR bands to get as high a spatial resolution as possible and avoid blurry edges, so we can exploit those bands too and map out classes that are quite spectrally similar (the VNIR obviously didn't suffice) but can be separated with the red-edge and SWIR bands.
But yes, it is a problem that the sharpened bands will be a mixture, though I don't know to what extent. We'll take it into account and see what decision we reach.
Thank you again for all this, for all the time and effort and I hope that this can be used in the future by other users as this is a common issue, I think.
09-18-2020 02:10 AM
Sorry, I'm going back to this just to let you know that I checked the data imported, as mentioned, through direct read, choosing all bands at 10 m... and it doesn't up-sample; it just creates a layer stack with all the bands maintaining their original resolutions.
So I confirm that the same layer stack contains the 10 m, 20 m and 60 m bands together, each at its original spatial resolution.
09-18-2020 08:57 AM
How did you check the spatial resolution of the converted/stacked data? Just visually? Because I'm pretty sure everything will have been resampled to the finest common pixel size (i.e. 10m) by the importer in order to create a single file with all bands in it.
It's the nature of nearest neighbor sampling that the four 10m pixels resulting from upsampling a 20m pixel will all have the exact same DN values as the original 20m pixel. So when displayed, the group of four pixels will look identical to the original single pixel, which makes it difficult to "see" what the pixel resolution is. It's caught me out a few times over the years.
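That behaviour is easy to demonstrate: nearest-neighbour upsampling only duplicates DN values, so the 10 m result is visually indistinguishable from the 20 m original. A minimal sketch with made-up DNs:

```python
def nn_upsample(band, f):
    """Nearest-neighbour upsample a 2-D band by integer factor f:
    every pixel becomes an f x f block of identical DN values."""
    return [[v for v in row for _ in range(f)] for row in band for _ in range(f)]

band_20m = [[100, 200],
            [300, 400]]

band_10m = nn_upsample(band_20m, 2)
for row in band_10m:
    print(row)
# [100, 100, 200, 200]
# [100, 100, 200, 200]
# [300, 300, 400, 400]
# [300, 300, 400, 400]
# On screen these 2x2 blocks look exactly like the original single pixels,
# so the resolution change is invisible without checking the metadata.
```

In practice the stacked file's cell size is best confirmed from the image metadata (e.g. in the ImageInfo dialog, or with a tool like gdalinfo) rather than visually.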
It sounds like what I should do at some stage is build a Spatial Model which ingests the different resolution bands separately and upsamples the coarser resolution bands using a resampling technique other than just Nearest Neighbor. Certainly, going from 20m to 10m, applying a Lagrange resampler will probably produce nicely "sharpened" results for just those bands, without needing to introduce the influence of other wavelengths. The 60m might be trickier. Perhaps a "subtle" pan-sharpening of those bands using the closest (most correlated) band would be appropriate.
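To illustrate why an interpolating kernel looks "sharper" than Nearest Neighbor at band edges, here is a 1-D toy comparison of NN versus linear interpolation at 2x. Higher-order kernels (Lagrange, bicubic) behave similarly with smoother polynomials; this is only a sketch, not what the Spatial Modeler resampler does internally:

```python
def upsample_nn(vals, f):
    """Nearest neighbour: each sample is repeated f times (blocky result)."""
    return [v for v in vals for _ in range(f)]

def upsample_linear(vals, f):
    """Linear interpolation between neighbouring samples (smooth ramps)."""
    out = []
    for i in range(len(vals) - 1):
        for k in range(f):
            t = k / f
            out.append(vals[i] * (1 - t) + vals[i + 1] * t)
    out.extend([float(vals[-1])] * f)  # pad past the final sample
    return out

profile_20m = [100, 100, 300, 300]      # a brightness edge along a 20 m profile
print(upsample_nn(profile_20m, 2))      # [100, 100, 100, 100, 300, 300, 300, 300]
print(upsample_linear(profile_20m, 2))  # [100.0, 100.0, 100.0, 200.0, 300.0, 300.0, 300.0, 300.0]
```

NN keeps the hard 100-to-300 step, while interpolation inserts intermediate values (200 here), which reads as a smoother, less blocky edge at 10 m without borrowing information from any other wavelength.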
Definitely an area for further discussion.