03-17-2019 07:55 PM
I think I am almost there.
Running the model on a heavily vegetated area, it excludes nearly all healthy grass and includes nearly all tree pixels, but on a built-up commercial area it picks up shadows and some cars. I think the problem is
I tried creating a centrepoint after the Variance Texture per Feature, but the results aren't right. Is there a variance filter I need to include, or is there a way to use the original point in the machine learning but still plug the buffer into the Variance Texture?
03-18-2019 11:09 AM
Oh. I hadn't thought that through, had I? You're right. If you use Variance Texture per Feature when the Features are point geometries, the Variance is effectively going to be 0 (because only 1 pixel is going to be considered; the rest are NoData).
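To illustrate the single-pixel effect outside the software, here's a minimal NumPy sketch (the values are hypothetical): a 3x3 window around a point feature where only the centre pixel is valid always yields a variance of 0.

```python
import numpy as np

# 3x3 moving window around a point feature: only the centre
# pixel holds data, the rest are NoData (represented as NaN).
window = np.array([[np.nan, np.nan, np.nan],
                   [np.nan, 0.42,   np.nan],
                   [np.nan, np.nan, np.nan]])

valid = window[~np.isnan(window)]   # just the one centre value
variance = valid.var()              # variance of a single value
print(variance)                     # 0.0
```

Whatever the centre value is, a one-element sample has zero variance, so the texture measure carries no information for point features.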
So you addressed that by buffering the points? How big a buffer did you specify? Just enough to cover the 3x3 moving window extent?
Even if you kept it small like that, I wonder whether the "low variance" that you'll get when processing the pixels just inside the buffer extent might unduly bias the (averaged) texture measure for the buffer feature.
So I wonder if it would actually produce better values if you use the regular Variance Texture (not Variance Texture per Feature) operator (across the entire image extent), scale the variance values to the desired range, and then use Add Attributes by Location to pick up the Variance values at the point locations. Avoid using a buffer at all. That might be worth a try.
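The whole-image-then-sample idea can be sketched in NumPy/SciPy terms (these are stand-ins for the actual operators, and the image, window size, and point coordinates are all hypothetical): compute a moving-window variance across the full raster, scale it, then read the scaled values at the point pixel locations.

```python
import numpy as np
from scipy import ndimage

def focal_variance(img, size):
    """Moving-window variance via E[x^2] - (E[x])^2."""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, size)
    sq_mean = ndimage.uniform_filter(img ** 2, size)
    return np.clip(sq_mean - mean ** 2, 0, None)  # clip tiny negatives

rng = np.random.default_rng(0)
img = rng.random((100, 100))            # stand-in for an aerial band
var = focal_variance(img, 3)            # variance across the whole extent

# Scale the variance values to a 0-255 range.
scaled = 255 * (var - var.min()) / (var.max() - var.min())

# "Add Attributes by Location" equivalent: pick up the scaled value
# at each point's pixel row/column - no buffer involved.
points = [(10, 20), (50, 60)]           # hypothetical point pixel coords
values = [scaled[r, c] for r, c in points]
```

Because every pixel in each window is real data, no artificial "low variance" from NoData padding leaks into the values sampled at the points.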
04-03-2019 08:32 PM
I have had luck with the 2012 model by having the Variance Texture in both the machine learning and output models.
I seem to have it excluding the right amount of grass for the 2012 aerial imagery, but I found I had to use a 57*57 matrix in both models, which takes quite a while to run on each image - and there are 40 of them. I tried 37*37, but grass crept in; I haven't tried anything in between. I then ran this model on the 2018 imagery, but it doesn't show enough pixels - when zoomed out it has clearly picked up the trees, just not in enough detail. I thought that if only the gaps between the close pixels were filled, that would probably cover the canopy, but I'm not sure if that's possible.
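Filling small gaps between nearby detections is a standard morphological operation (a "closing"); whether your software exposes it I can't say, but here's what it does, sketched with SciPy on a tiny hypothetical tree mask:

```python
import numpy as np
from scipy import ndimage

# Hypothetical classified mask: True = tree pixel.
# Two detections separated by a one-pixel gap.
mask = np.zeros((7, 7), dtype=bool)
mask[3, 2] = mask[3, 4] = True

# Closing (dilation then erosion) with a 3x3 structure fills small
# gaps between close detections without growing the canopy outward.
closed = ndimage.binary_closing(mask, structure=np.ones((3, 3), bool))
```

After closing, the gap pixel between the two detections is filled in, which is roughly the "join the close pixels into canopy" behaviour you describe.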
I tried including RGB to Intensity and, on a separate attempt, Apply Index to output NDVI, but the results vary and are incorrect regardless.
At least I'm slowly making progress. Just need consistent results between the years. Not sure if you have any other ideas but you have been very helpful.
04-04-2019 10:53 PM
Since the Variance Texture uses the matrix, what if I use a more manageable size, say 17*17, and then include the Focal Density operator to return the number of times a pixel is found using the same matrix? Then, where that count is high (where it might be picking up a lot of grass), I could somehow remove those pixels from the variance raster output before running the miz file over it. I'm not sure how to actually subtract pixels this way - how could this be done? I also don't know whether Focal Density produces an actual attribute I could filter on.
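The subtract-by-density idea can be expressed as a raster mask rather than an attribute filter. This NumPy/SciPy sketch stands in for the operators (the rasters, detection threshold, and density cutoff are all hypothetical): count detections in a 17*17 window, then null out the variance raster wherever the count is high.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
var_raster = rng.random((50, 50))       # stand-in variance texture output

# Hypothetical detection mask: a large solid patch (grass-like)
# plus one isolated detection (tree-like).
detected = np.zeros((50, 50), dtype=bool)
detected[10:40, 10:40] = True           # big contiguous patch
detected[5, 45] = True                  # lone detection

# Focal density: count of detected pixels in a 17x17 window
# (mean of the 0/1 mask times the window area).
density = ndimage.uniform_filter(detected.astype(float), 17) * 17 * 17

# Where the count is high (likely a grassy patch), null the variance
# raster before the classification runs over it; keep sparse detections.
threshold = 250                         # hypothetical cutoff (max is 289)
cleaned = np.where(density > threshold, np.nan, var_raster)
```

Inside the solid patch the density approaches 289 and those pixels become NoData; the isolated detection sits in a low-density window and survives.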
07-09-2019 05:44 PM
Just wondering if there are any other operators I should try apart from Variance Texture, as I've still been struggling to get it to a point where it removes most of the grass. I can't even filter those pixels out, as they are more than just specks.
07-29-2019 06:51 AM
Sure - remember that you can include any measure to train and classify with when using the Machine Learning approaches. You just need to know what might be a pertinent measure for differentiating the features you are trying to classify. There are lots of texture measures, but you could also use vegetation indices, tasselled cap bands, texture of the vegetation index, etc.
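As a sketch of what "feed several measures to the classifier" looks like outside the software (the bands, window size, and band maths are hypothetical stand-ins): compute NDVI from red and NIR bands, take a moving-window variance of the NDVI as a texture-of-index measure, and stack everything into one per-pixel feature matrix any classifier can train on.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
red = rng.random((60, 60))              # stand-in red band
nir = rng.random((60, 60))              # stand-in near-infrared band

# Vegetation index (NDVI), guarding against divide-by-zero.
ndvi = (nir - red) / np.maximum(nir + red, 1e-9)

# Texture of the vegetation index: 9x9 moving-window variance of NDVI.
mean = ndimage.uniform_filter(ndvi, 9)
ndvi_var = np.clip(ndimage.uniform_filter(ndvi ** 2, 9) - mean ** 2,
                   0, None)

# Stack the per-pixel measures into an (n_pixels, n_features) matrix
# to train and classify with, alongside the raw bands.
features = np.column_stack([red.ravel(), nir.ravel(),
                            ndvi.ravel(), ndvi_var.ravel()])
```

Each extra column is one more measure the classifier can use to separate trees from grass; the same stacking works for tasselled cap bands or any other texture output.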