12-19-2018 03:07 PM
Thanks for the response.
I do have multiple classes; I'm just outputting trees. Perhaps I should output trees after machine learning if I want this to work on other townships.
I'll try Johnnie's procedure on the other towns and maybe tweak the machine learning step so it doesn't exclude other classes too early in the process.
Thanks again.
Josh
12-28-2018 03:40 AM - edited 12-29-2018 02:16 AM
I have successfully used Machine Intellect for the 2012 CIR imagery, but it is difficult getting the balance right with the sample points, and the result is excluding some treed areas.
Some very green (red) grass is darker than some trees, and even with my many sample points for trees, grass, shadows and water, some sections of trees aren't being picked up, probably where they are lighter than some of the grassed areas.
Is the solution to add a lot more sample points in these missing areas and classify them as trees? I am quite happy with the grassed areas being picked up as grass, but there are holes in the tree areas for 2012, which means I am getting less tree canopy than with the NDVI spatial model I used for the 2018 CIR imagery, when in reality there should be more.
Any help appreciated.
Thanks in advance,
Josh
01-01-2019 08:09 PM
Sorry me again (and happy new year).
I'm not sure I will be able to compare the 2012 and 2018 CIR imagery accurately with spatial models. It seems the results are too inconsistent between the two years. I have used machine learning for 2012, and even with a good few thousand sample points there are still too many holes. It still picks up edges of shadows even though I have sample points for shadows. Perhaps trees with a similar pixel value are being picked up.
What would be the best solution in this case? Do I add many more sample points for 2012? And since I am not doing machine learning for 2018, the results are quite different. I tested the machine learning on the 2018 imagery with the same miz file, but it isn't capturing enough tree canopy.
I don't know if it helps but I have attached the miz and spatial models for you.
ndvi_with_color_table_2018.gmdx is what I used for the 2018 CIR imagery
machine_learning_tree.gmdx uses the shapefile of sample points for 2012, which consists of:
- trees (1776)
- buildings (5)
- grass (1406)
- shadow (443)
- water (33)
as well as a few other points.
I then ran the spatial model tree_output_2012.gmdx which uses the miz from the previous model.
Regards,
Josh
01-07-2019 01:32 PM
Hi Josh,
Are you sure your tree training sample points don't include shadows within the canopy? If you've placed a point on a tree, but the point happens to fall on a dark pixel in that crown, it may be causing confusion with the shadow class.
You may also want to broaden the attributes you are training on beyond just the Mean DN value. Texture measures, for example, are generally useful for discriminating trees from other vegetation. Calculating the NDVI might be useful as well (perhaps as an alternative to using the original band values). Using, say, Intensity, NDVI and Variance Texture as attributes rather than RGB values (i.e. "color") might also make things less susceptible to changes between dates of imagery.
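To make the attribute recipe concrete, here is a minimal Python sketch of the three suggested attributes. This is not Spatial Modeler code: numpy/scipy stand in for the operators, and all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard against 0/0

def intensity(r, g, b):
    """Simple intensity: the mean of the three colour bands."""
    return (r.astype(float) + g + b) / 3.0

def variance_texture(band, size=3):
    """Local variance in a size x size moving window (var = E[x^2] - E[x]^2)."""
    band = band.astype(float)
    mean = uniform_filter(band, size)
    return uniform_filter(band * band, size) - mean * mean

def stack_attributes(r, g, b, nir):
    """Stack the three attributes into a (rows, cols, 3) feature image."""
    return np.dstack([intensity(r, g, b),
                      ndvi(nir, red=r),
                      variance_texture(nir)])
```

Training the classifier on a stack like this, rather than on raw RGB values, is the kind of thing that should be less sensitive to illumination differences between the 2012 and 2018 acquisitions.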
Cheers
01-16-2019 04:29 PM
Thanks Ian,
I will look at my sample data again as I must have picked up some shadows and classified as trees.
A couple of questions. With Intensity, I can't simply point the raster input to the R,G,B inputs can I?
Do I need to extract these bands first?
Also, with variance texture, what do I input into the focus?
Regards,
Josh
01-17-2019 05:44 AM
Hi Josh,
By intensity do you mean the colorspace Intensity when converting RGB to Intensity, Hue, Saturation? If so, then yes: you'll need to use the RGB to IHS operator to convert three input bands (representing the R, G and B wavelength bands) into IHS and then take just the I. Actually, I think there's a dedicated RGB to Intensity operator if all you want is Intensity.
You'll need to use the Band Selection operator (three of them) to select the R, G and B bands to send to the appropriate ports on the Intensity operators.
On any moving-window operator such as Variance Texture, the Focus input port wants a matrix to define the moving window extent. E.g. a 3x3 matrix populated with all 1s would be a 3x3 moving window around each pixel being analysed. So use a Custom Matrix Input operator and hook it up to the Focus port. Double-click the Custom Matrix Input operator to bring up the Matrix Source dialog. This lets you define the size, shape and other characteristics of the moving window. Larger extents let you start doing useful things like clipping the corners to form circular windows.
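The custom matrix idea maps directly onto the general notion of a filter footprint: a matrix whose 1s pick which neighbours take part in the statistic. A hedged Python sketch (scipy's generic_filter plays the role of the Variance Texture operator; nothing here is Spatial Modeler API):

```python
import numpy as np
from scipy.ndimage import generic_filter

# A 3x3 "custom matrix" of all 1s -- every neighbour participates.
square_3x3 = np.ones((3, 3), dtype=bool)

# A 5x5 window with the corners clipped, approximating a circular window.
circular_5x5 = np.array([
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=bool)

def variance_texture(band, footprint):
    """Variance of the pixels selected by the footprint, centred on each pixel."""
    return generic_filter(band.astype(float), np.var, footprint=footprint)
```

The clipped-corner 5x5 footprint is the "circular window" case; pixels at the image edge are handled by reflection by default.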
Cheers
01-17-2019 05:21 PM
Thanks Ian.
In terms of using the band selection, would RGB be 1:1, 2:2 and 3:3 for each band selection?
01-18-2019 07:51 AM
That's usually (but not always) correct if the input is a color airphoto or other "true color" 3-band image.
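As a sanity check on what "1:1, 2:2, 3:3" means: output band 1 takes input band 1, and so on. In array terms (a hedged numpy sketch assuming a (bands, rows, cols) layout; the function name is illustrative, not Spatial Modeler API):

```python
import numpy as np

def select_band(image, band):
    """Pick one band from a (bands, rows, cols) stack; band is 1-based,
    matching the numbering in the Band Selection dialog."""
    return image[band - 1]

# Toy 3-band "true color" image: bands 1, 2, 3 hold R, G, B.
img = np.arange(3 * 2 * 2).reshape(3, 2, 2)
r, g, b = (select_band(img, i) for i in (1, 2, 3))  # the 1:1, 2:2, 3:3 case
```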
3 weeks ago
Sorry, just following up on this. Do I use the variance texture before creating the miz file/before the Initialize Random Forest operator, or is it done when classifying the machine learning against the raster input?
Currently, after I classify using machine learning, there is an attribute lookup to find "Tree" pixels before outputting to another raster. Is this too late in the process to be using the texture? When does the variance texture take place?
Sorry about the questions. I'm still learning.
Regards,
Josh