09-18-2020 12:06 PM
I literally measured the pixel side length with the measure tool. The metadata (properties) says 10m. I actually had no idea a layer stack could contain layers of different spatial resolutions...
No, I didn't think of the DN value aspect, but as I was taking the attached screenshots I think I realized what you mean! I cannot see any edge of the resampled 10m pixels because they carry the same spectral value, so my eyes see the same colour. I understand. Sorry for the rookie question, but how do I check it then, to be sure of the spatial resolution of the output? Maybe, just to test it, I should subset a single band (one of the original 20m ones) from the stack and look at its metadata, which would definitely relate to that band only.
RE Nearest Neighbour: correct me if I'm wrong, but I've always noticed it's used when one wants to maintain spatial (and, as you said, spectral) consistency; in other words, when you want the image to stay as unmodified and as close to the original as possible, such as when reprojecting, or as in https://youtu.be/QJRL0aGC-BI?t=21 here at min 3:31. Please correct me if I'm wrong.
For this purpose (a sharper look, as you said) I read notes on resampling methods other than NN, from bicubic up to the strongest ones, like the LaGrange method you mentioned.
Anyway, I am really grateful for all this, learned a lot, definitely very useful and appreciated!
09-18-2020 12:57 PM
Yes, the visual measuring is going to catch you out every time. It's definitely caught me out in the past.
There's not really much you can do to visually confirm that a 60m pixel has been turned into 36 10m pixels with the same DN values if you used Nearest Neighbor. You just have to believe what the software is telling you. Even then, the software itself may be "interpreting" values, so it's a tricky business.
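If you'd rather convince yourself numerically than visually, here's a minimal sketch (with a made-up 2x2 block of 60m DNs) showing that Nearest Neighbour upsampling from 60m to 10m simply replicates each DN into a 6x6 block of identical 10m pixels:

```python
import numpy as np

# Hypothetical 2x2 block of 60 m DNs (e.g. a Sentinel-2 Band 1 subset)
band_60m = np.array([[1200, 1350],
                     [1180, 1410]], dtype=np.uint16)

# Nearest Neighbour upsampling 60 m -> 10 m: each pixel becomes a 6x6 block
band_10m = np.repeat(np.repeat(band_60m, 6, axis=0), 6, axis=1)

# Every 6x6 block carries the original DN unchanged - no new values appear
assert band_10m.shape == (12, 12)
print(band_10m[0, 0], band_10m[5, 5])  # both 1200, the original top-left DN
```

Comparing the set of unique DNs before and after is a quick sanity check that no new values were interpolated in.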
And yes, NN is usually used to preserve spectral integrity when reprojecting or otherwise needing to resample an output pixel value from an input set of data. But if, say, the resampling was happening because you were rotating the image, using NN might result in a "stair-stepped" effect on what were previously nice straight edges. So we use what are effectively distance-weighted techniques to determine what the output pixel value should be. As with the pan-sharpening discussion, which resampling technique you use comes down to the purpose you will eventually put the data to. If the purpose is supervised classification, you might use NN. If the purpose is a pretty-looking backdrop for vector GIS data, CC might be appropriate. If the purpose is visual interpretation, then LaGrange might be appropriate. Etc.
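To illustrate the stair-stepping point with a toy example (a synthetic image with one straight edge, using scipy's rotate as a stand-in for IMAGINE's resamplers): NN keeps only DNs that existed in the input, so a rotated edge comes out jagged, while a distance-weighted method such as bilinear creates new intermediate DNs that smooth the edge:

```python
import numpy as np
from scipy.ndimage import rotate

# Synthetic image: a sharp vertical edge between DN 0 and DN 1000
img = np.zeros((40, 40))
img[:, 20:] = 1000.0

nn = rotate(img, 15, order=0, reshape=False)  # Nearest Neighbour: only original DNs survive
bl = rotate(img, 15, order=1, reshape=False)  # bilinear: new, distance-weighted DNs appear

print(len(np.unique(nn)), len(np.unique(bl)))  # 2 unique DNs vs. many
```

So NN preserves the spectral values exactly, at the cost of a jagged edge; bilinear trades spectral purity for a smoother-looking result.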
Have a great weekend!
09-25-2020 11:46 AM - edited 09-28-2020 06:05 AM
Still not sure I entirely agree with the approach, but I built a Spatial Model which ingests Sentinel-2 L1 SAFE-formatted datasets and intelligently upsamples the 20m and 60m bands to 10m. I say L1 because it's designed to push all 13 bands through. I may have a go at the L2 12-band variants in a separate model. Really it's more appropriate for atmospherically corrected data anyway, since it's using NNDiffuse.
The 10m bands are left alone, i.e. they should have exactly the same DN values as the input data.
The 20m bands are NNDiffuse merged with an average of the 10m bands.
The 60m bands were tricky. Band 1, the Coastal Blue (or Aerosol) band at 60m, sits way down at the bottom of the wavelengths on its own. I basically upsampled it using LaGrange and injected a very small amount of Band 2 (Blue) to sharpen it to 10m. You could play with the percentage contribution of the Blue band to inject more of it, but injecting too much of Band 2 seems counterproductive: you might as well just use Band 2 if that's what you do.
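As a rough sketch of that blend (with made-up arrays, cubic interpolation as a stand-in for LaGrange, and a 5% weight that is purely illustrative, not the model's actual contribution):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
band1_60m = rng.integers(1000, 2000, (4, 4)).astype(np.float64)    # Coastal/Aerosol, 60 m
band2_10m = rng.integers(1000, 2000, (24, 24)).astype(np.float64)  # Blue, 10 m

# Upsample Band 1 by 6x with cubic interpolation (stand-in for LaGrange)
band1_up = zoom(band1_60m, 6, order=3)

# Inject a small, weighted contribution of Band 2 detail to sharpen it
alpha = 0.05  # illustrative weight only
band1_sharp = (1 - alpha) * band1_up + alpha * band2_10m
```

Pushing `alpha` toward 1 just reproduces Band 2, which is the "might as well use Band 2" point above.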
Band 9 (Water Vapor) and Band 10 (Cirrus) don't seem like they should be merged with anything so I simply upsampled using LaGrange to 10m.
And that's pretty much it. It has a Preview mode, or you can spit out an upsampled file. If you pick JPEG 2000 as the output format, I've set it to default to Lossless; if you pick TIFF, it should use Packbits.
Hope this helps.
09-25-2020 01:01 PM
Here's the variant of the model that works with 12-band Level 2 Sentinel-2 datasets.
Basically there's no Band 10 (Cirrus) information in the atmospherically corrected L2 data (because there's no surface reflectance) so there's no processing of that band. Otherwise it's the same approach as for the 13-band data covered above.
Please note that both these Spatial Models were developed using a pre-release of ERDAS IMAGINE 2020 Update 2, so there's a possibility they won't open fully in Update 1. If that proves to be the case, let me know and I can probably tweak them.
09-26-2020 02:05 AM - edited 09-26-2020 02:18 AM
This is great, thank you for all your effort, really appreciated and I hope I'll have the chance to test it.
Unfortunately, I no longer have access to ERDAS 2020 (I'm starting a new role on Monday, changing jobs).
But I can say that I tried the pansharpening with NNDiffuse and the results were unacceptable: as you said, it produced a mixture, and the result was actually far away from the true spectral values (I tested it with the red-edge bands and then also on the SWIR ones).
After further tests, the only method maintaining the closest spectral values is Wavelet Resolution Merge (WRM). It's excellent, and I was so happy to see the results: the sharpening worked perfectly while maintaining spectral consistency with the original bands and their native spectral values.
So I might suggest using WRM. As you mentioned, pansharpening can generate a mixture of spectral values; I am glad to report that WRM is the only one that actually does not do that. I output the pansharpened result at the same radiometric resolution (unsigned 16-bit) to make the comparison possible. Then I checked the pixel values, pansharpened vs. original bands, for both the four red-edge bands (well, one of which is NIR edge) and the two SWIR bands, and... SUCCESS! I'm sorry I don't have the screenshots, but I'm sure anyone who would like to test it will get the same result, compared to NNDiffuse or any other pansharpening method available in ERDAS.
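For anyone who wants to repeat that spectral check numerically rather than by eye, one common approach is to block-average the pansharpened 10m result back to 20m and compare DNs with the original band. A sketch with synthetic stand-in data (the helper name, array values, and noise level are made up for illustration):

```python
import numpy as np

def block_mean(img, factor):
    """Aggregate a fine-resolution band back to a coarser grid by block averaging."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Synthetic stand-ins: an "original" 20 m band and a hypothetical 10 m pansharpened result
rng = np.random.default_rng(1)
orig_20m = rng.uniform(1000, 4000, (50, 50))
pansh_10m = (np.repeat(np.repeat(orig_20m, 2, axis=0), 2, axis=1)
             + rng.normal(0, 5, (100, 100)))  # small injected detail

# Degrade the pansharpened band back to 20 m and compare DNs with the original
degraded = block_mean(pansh_10m, 2)
rmse = np.sqrt(np.mean((degraded - orig_20m) ** 2))
print(f"RMSE vs original 20 m band: {rmse:.2f} DN")
```

A small RMSE relative to the band's DN range indicates the sharpening preserved the original spectral values; a spectrally distorting method would show a much larger discrepancy.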
I'm sure my enthusiasm and satisfaction are transparent.
So I really hope this will benefit other users, I'm sure there are others aiming to do the same thing and now they have the solution in this conversation
So, using Wavelet Resolution Merge:
(1) for the red-edge bands, I used the Vis-NIR average as the pan band (I also tested a red-NIR average as the "pan band" in WRM, and then NIR alone, both with unsatisfactory results; the best pan band is the average of all four visible and NIR bands, i.e. Sentinel-2 bands 2, 3, 4 and 8);
(2) for the SWIR bands, NIR only;
and applying WRM, there you have it. The valid and correct method.
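In code terms, building those two synthetic "pan" bands is just a band average (the array names are hypothetical; in ERDAS you'd do this with a layer-average operator before feeding WRM):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 10 m Sentinel-2 bands: 2 (Blue), 3 (Green), 4 (Red), 8 (NIR)
b2, b3, b4, b8 = (rng.integers(500, 4000, (100, 100)).astype(np.float64)
                  for _ in range(4))

# Synthetic "pan" band for sharpening the red-edge bands: mean of the four 10 m bands
pan_rededge = np.stack([b2, b3, b4, b8]).mean(axis=0)

# Synthetic "pan" band for the SWIR bands: NIR alone
pan_swir = b8
```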
Honestly, I have not considered the 60m bands because we did not need them, and it's too long a way from 60 to 10... as you said, there's no "pan" band to come up with... but surely your solution is very useful, thanks!
I look forward to testing your models as soon as I get the chance.
Thank you again and I am sure this will be useful for the community here and for any user