12-25-2019 07:20 PM
01-03-2020 06:13 AM
It looks like you haven't defined your classification grid at a similar scale to your training chip sizes.
01-30-2020 07:05 PM
01-31-2020 08:01 AM
Classic Inception-based CNN Deep Learning classifies pictures. For example, you train it on pictures of flowers and then give it a bunch of pictures and it will pick out the pictures which contain flowers. It does not tell you where in the picture the flower is, just that the picture contains a flower.
So if you train Inception with, say, 256 x 256 chips (pictures) of "wheat field" you get a trained classifier that wants to look at 256 x 256 pictures to see if they are similar to wheat field.
I.e. you don't want to simply pass the classifier a 50,000 x 50,000 satellite image and ask it to tell you whether the image contains wheat fields or not.
You instead break the satellite image into 256 x 256 chunks and have the deep learning algorithm look at each chunk in turn and flag if that chunk has wheat field in it or not. That's the grid you are getting.
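The chunk-by-chunk approach above can be sketched in plain Python outside Spatial Modeler. `classify_chip` here is a hypothetical stand-in for the trained classifier (a real Inception model would return a class per chip; this toy just labels by brightness):

```python
import numpy as np

def classify_chip(chip):
    # Hypothetical stand-in for a trained Inception-style classifier:
    # returns one class label for one chip. Here we just use mean brightness.
    return "wheat" if chip.mean() > 127 else "other"

def classify_by_grid(image, chip_size=256):
    """Slide a non-overlapping grid over the image, classify each chip in turn."""
    rows, cols = image.shape[:2]
    labels = {}
    for r in range(0, rows - chip_size + 1, chip_size):
        for c in range(0, cols - chip_size + 1, chip_size):
            chip = image[r:r + chip_size, c:c + chip_size]
            labels[(r // chip_size, c // chip_size)] = classify_chip(chip)
    return labels

# A 512 x 512 image yields a 2 x 2 grid of 256 x 256 chips.
img = np.zeros((512, 512), dtype=np.uint8)
img[:256, :256] = 200  # make the top-left chip bright
grid = classify_by_grid(img)
print(grid[(0, 0)], grid[(1, 1)])
```

The output is exactly the per-grid-cell flagging described: one label per cell, with no information about where inside the cell the feature sits.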
So the "FeaturesIn" port on Classify Using Deep Learning takes the data that defines the grid used to break up the "Raster" input. I usually use Create Dice Boundaries to generate the grid I want because it allows me to generate a grid with overlap.
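For intuition, here is a minimal sketch of what an overlapping grid amounts to in one dimension (the Create Dice Boundaries operator is configured inside Spatial Modeler; this just illustrates the tile origins that an overlap setting produces):

```python
def tile_origins(extent, tile, overlap):
    """Upper-left origins of tiles of size `tile`, sharing `overlap` pixels
    with each neighbour, covering a 1-D extent of `extent` pixels."""
    step = tile - overlap
    origins = list(range(0, max(extent - tile, 0) + 1, step))
    # Ensure the far edge is covered even when it doesn't divide evenly.
    if origins[-1] + tile < extent:
        origins.append(extent - tile)
    return origins

# 1000-pixel extent, 256-pixel tiles, 32-pixel overlap -> a new tile every 224 px
print(tile_origins(1000, 256, 32))
```

Overlap is useful because features that straddle a tile boundary in a non-overlapping grid fall wholly inside at least one tile of an overlapping grid.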
If you want to know where the wheat field is more accurately than just "in this grid cell", then you are better off training an Initialize Object Detection deep learning algorithm. But "wheat field" is a bad example for this, because Object Detectors look for consistent objects (similar size, shape, color, edges, etc.), and fields don't tend to be regular. Trying to map locations of, say, oil palm trees would be a better application of Object Detection.
But really it sounds like you are looking for a per-pixel classifier? Those currently fall more into the category of Machine Learning (not Deep Learning) - SVM, CART, Random forest, etc., which are also available in Spatial Modeler.
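To illustrate the per-pixel idea, here is a toy nearest-centroid classifier in numpy. It is only a stand-in for SVM/CART/Random Forest (which are more sophisticated but work the same per-pixel way: each pixel's band values are classified independently); the centroid values are made up for the example:

```python
import numpy as np

# Hypothetical class centroids in 3-band space (e.g. mean band values per class).
centroids = np.array([[200, 60, 40],    # class 0
                      [30, 180, 50],    # class 1
                      [40, 60, 190]])   # class 2

def classify_pixels(image):
    """Assign every pixel the class of its nearest centroid.

    A stand-in for per-pixel Machine Learning classifiers, which likewise
    label each pixel from its band values rather than labelling whole chips."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    # Distance from every pixel to every centroid, then pick the closest.
    d = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(image.shape[:-1])

img = np.array([[[210, 55, 45], [35, 175, 55]],
                [[45, 65, 185], [200, 60, 40]]])
print(classify_pixels(img))
```

Note the contrast with the grid approach above: the output here has one label per pixel, not one label per 256 x 256 cell.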
Or, if you want an Inception-style output but with more than just the most likely (highest probability) class being tagged in the output attributes, then that might be a good idea to post for future development.
Hope this helps.
02-18-2020 11:27 AM
Thank you for your input. This certainly helps.
I have tried to run a DL test for image classification on a small image using an Inception Machine Intellect. I got a classified vector grid, but there are some dummy entries in the classified vector files, and a few training classes were not included in the classified results.
Then I used a larger image. The process ran, but I was not able to get the classified grid. My GPU was disabled (because it only has 2 GB of memory). Is this related to limited computer resources? I am using a fast computer, a DELL T3620 with a quad-core i7-7700K @ 4.2 GHz (8 logical processors) and 16 GB of memory.
I look forward to hearing from you.
02-19-2020 01:16 PM
Size of image shouldn't really be an issue for Inception, assuming you are applying the same size (spacing) of grid to the larger image as you did for the smaller one, because the process occurs grid cell by grid cell.
So if your grid is defined as, say, 512 rows by 512 columns of pixels, it doesn't matter whether you then classify an image of 10k by 10k or one of 100k by 100k - in both cases the processing should occur in chunks of data that are 512 x 512. There are just more of those chunks to process in the latter case. So the latter case will take longer to complete, but it really shouldn't require more resources than the smaller image case.
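To put numbers on that: with a fixed 512 x 512 grid, the chunk count (and hence runtime) scales with image area, while the per-chunk memory footprint stays constant. A quick back-of-the-envelope calculation:

```python
def chunk_count(rows, cols, chip=512):
    """Number of whole chip-sized chunks in an image (edge remainders ignored)."""
    return (rows // chip) * (cols // chip)

small = chunk_count(10_000, 10_000)    # 19 x 19 whole chunks
large = chunk_count(100_000, 100_000)  # 195 x 195 whole chunks
print(small, large, large / small)     # ~100x more chunks, same chunk size
```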
Did you generate similar spaced grids for the second image?
Were there error messages shown in the Session Log?
a month ago
I am attaching some images that describe the workflow I used to generate a classified vector grid using a DL model.
The raster is a 3-band image and I created a vector grid for that image (12 col x 19 rows). A pretty small data set.
The training areas (chips) consist of 3 classes (Rock1, Rock2 and Rock3).
I used the attached model to generate the classified vector grid. The model ran successfully, but the results look weird to me. Only 2 classes (Rock1 and Rock3) are represented in the classified shapefile; the third class (Rock2) is not represented.
Then, I used a bigger image and I got nothing. There was no error message, though.
I was wondering if this is really related to the limited computing resources.
Hope you can help with this.
a month ago
The validation accuracy on your Machine Intellect is only 0.33, which is very low, probably because you have only provided 3 - 9 training samples per class. Deep Learning usually requires hundreds of training samples per class to adequately train the classifier on what the patterns are that it is meant to be looking for.
I suspect you'll need to supply more training before you start to see meaningful results.
a month ago
Many thanks for your prompt reply.
That test was done on a small data set. I then used a bigger data set with more training chips per class (around 20), but I got nothing.
The classified vector grid was empty.
a month ago
20 samples per class is still pretty low. What Validation Accuracy were you getting with that many?
If it's still low, you might want to increase the Training Steps value. But really, the only answer may be more training samples. Another thing to try would be the Augment Training Data operator.
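For context, augmentation just derives extra training chips from the ones you already have (the Augment Training Data operator does this inside Spatial Modeler; this numpy sketch only illustrates the idea with simple flips and rotations):

```python
import numpy as np

def augment(chip):
    """Return flipped/rotated variants of one chip - an 8x sample multiplier."""
    variants = []
    for k in range(4):                   # 0, 90, 180, 270 degree rotations
        rot = np.rot90(chip, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))  # mirrored copy of each rotation
    return variants

chip = np.arange(9).reshape(3, 3)
print(len(augment(chip)))  # 8 variants from a single chip
```

So 20 chips per class could become 160, which helps, though genuinely new samples are still more valuable than transformed copies of the same ones.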