02-24-2018 05:21 PM
Yes, the max value is 65535, and the portion I need (the black area) has three-digit values (e.g. 255), as I see it with the inquiry tool. However, I don't have a clue how to set/recode those pixels to NoData or 0. Is that done with Spatial Modeler or some other tool? Thanks!
02-24-2018 07:07 AM
ImageInfo. I think it's on the Home tab, near the left side. It's probably called Image Metadata or something these days (I don't have the software with me right now). You can use it both to set the NoData value and to recalculate the statistics.
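Conceptually, what that tool does can be sketched in a few lines of NumPy (an illustration only; the actual workflow happens in the ImageInfo GUI, and the array here is a stand-in for a band read from the raster):

```python
import numpy as np

NODATA = 65535  # fill value in the 16-bit WV-2 product

# Illustrative stand-in for one band of the image; in practice this
# would be read from the raster file (e.g. with GDAL).
band = np.array([[120, 255, NODATA],
                 [310, NODATA, 980]], dtype=np.uint16)

# Masking the NoData cells excludes them from the statistics,
# which is what "set NoData + recompute stats" achieves:
valid = np.ma.masked_equal(band, NODATA)
print(valid.min(), valid.max(), valid.mean())  # stats over valid pixels only
```

With the 65535 fill pixels masked out, the display stretch is computed from the real data range, which is why the image becomes visible.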
02-24-2018 10:17 AM - edited 02-24-2018 10:19 AM
Thanks so much!
It is solved! After I set the NoData value to 65535 and computed the statistics, the output is visible (image at the right). The left image is the output obtained in the order orthorectify, pansharpen, subset of the WV-02 imagery, while the right one was obtained in the order orthorectify, subset, pansharpen (NoData value set to 65535) on the same imagery. In the image below, the right output has better color than the left one, despite its blurring. Is there any way to improve this blur effect? If not, I will use the left output to classify my project area by tree species. Thanks again!
02-24-2018 01:56 PM
The difference is potentially down to the fact that pansharpening is a color-space transform. Since the full image and the subset have different color spaces, you get different results.
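For context, the color-space (IHS-style) pan-sharpening referred to here can be sketched per pixel with the standard library's colorsys module. This is a toy illustration of the idea, not what ERDAS actually computes, and the pixel values are made up and scaled to 0–1:

```python
import colorsys

# One multispectral RGB pixel and the co-located panchromatic value,
# both scaled to 0..1 (illustrative numbers).
r, g, b = 0.4, 0.5, 0.3
pan = 0.8

# Transform to a hue/saturation/intensity-like space, swap the intensity
# channel for the sharper pan value, then transform back.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, pan)

# Hue and saturation are preserved; only the brightness detail changes.
# Because the transform depends on the band statistics of its input,
# a subset and the full scene can come out looking different.
```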
Go with whichever you think will give you better results. If you're using traditional classifiers (maximum likelihood, etc.) I'd personally be tempted to go with the "blurrier" image (especially since it also appears to have richer "colors"). If you're going to use the Signature Editor, you could also apply the same training AOIs to both images and see which gives you better separability between classes before deciding which one to use in the final classification. Please report back on how you get on!
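If it helps, the kind of separability figure the Signature Editor reports can also be approximated outside the software. Here is a minimal NumPy sketch of the Jeffries-Matusita distance between two Gaussian class signatures; the means and covariances are made-up toy numbers, not values from the post:

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """Jeffries-Matusita separability between two Gaussian class
    signatures (means m1, m2; covariances c1, c2). Ranges 0..2;
    values near 2 indicate well-separated classes."""
    c = (c1 + c2) / 2.0          # pooled covariance
    d = m1 - m2                  # mean difference
    # Bhattacharyya distance between the two Gaussians
    b = (d @ np.linalg.solve(c, d)) / 8.0 \
        + 0.5 * np.log(np.linalg.det(c)
                       / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))

# Toy two-band signatures for two hypothetical tree-species classes
m1, c1 = np.array([100.0, 50.0]), np.eye(2) * 25.0
m2, c2 = np.array([140.0, 90.0]), np.eye(2) * 25.0
print(jeffries_matusita(m1, c1, m2, c2))  # close to 2 -> well separated
```

Running the same pairwise comparison on signatures trained from each image would give a quantitative basis for choosing between them.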
03-03-2018 06:01 PM
I have tried to classify the two images using a supervised classification method. I used the same training samples (AOIs) for both and chose 4 classes, as shown in the images below.
The area coverage of each class differs between the two images, while the overall classification accuracy for both was 73.17% (some literature says 85% or more is needed to call a classification "good"). For the accuracy check I used 41 reference points. I hope there is no significant difference between the two classified images; in other words, the blur effect does not have much impact on the supervised classification.
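For what it's worth, 73.17% is exactly what an error matrix gives when 30 of 41 reference points are classified correctly (the 30-correct split is inferred from the percentage, not stated in the post):

```python
# Overall accuracy = correctly classified reference points / total points.
correct, total = 30, 41   # 30 is inferred from 30/41 ~= 73.17%
overall_accuracy = 100.0 * correct / total
print(round(overall_accuracy, 2))  # 73.17
```

With only 41 samples the confidence interval on that figure is wide, so identical percentages for both images do not by themselves prove the blur had no effect; a per-class (producer's/user's accuracy) comparison would be more telling.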