03-27-2019 02:32 AM
I am working for Planetek Italia and I am testing "Initialize Inception" with a particular dataset.
The dataset has the following characteristics:
The training runs successfully with an accuracy of 75% (after 100 epochs) but, when I apply the .miz file to a selected Sentinel-2 image (640x640 pixels, with a shapefile that contains a 10x10 grid), the results make no sense.
This means that each box in the predicted grid has problems (as shown in the attached file).
I read the paper on GoogLeNet, the neural network that Inception uses, and probably the input data needs to be a different size and number of bands...
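The size/band concern above can be sketched as a quick check (a minimal illustration, not the vendor's tool; it assumes the GoogLeNet/Inception-v1 design input of 224x224 RGB and a 13-band Sentinel-2 patch, and the `matches_input` helper is hypothetical):

```python
import numpy as np

# GoogLeNet/Inception-v1 was designed for 224x224 RGB (3-band) inputs,
# while a Sentinel-2 patch may have a different size and more bands,
# so patches usually need resampling and band selection first.
EXPECTED_SHAPE = (224, 224, 3)

def matches_input(patch, expected=EXPECTED_SHAPE):
    """Return True if the patch already fits the network's input shape."""
    return patch.shape == expected

# A 10x10 grid over a 640x640 image gives 64x64 patches;
# here the patch also carries all 13 Sentinel-2 bands.
patch = np.zeros((64, 64, 13))
print(matches_input(patch))  # False: needs resizing and band selection
```

If the check fails, the patch would have to be resampled to the expected size and reduced (or projected) to the expected number of bands before inference.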
So my questions are:
PS. Here (https://towardsdatascience.com/land-use-land-cover-classification-with-deep-learning-9a5041095ddb) there are more information about the dataset
03-27-2019 10:35 AM
Please check the validation accuracy also.
The training accuracy tells you the accuracy when predicting the images the model was trained on. Since the model is trained on this data, the training accuracy is expected to always be good.
The validation accuracy, on the other hand, is the accuracy you get when classifying independent data. By default we withhold 10% of the training data to use as validation data.
When your training accuracy is close to the validation accuracy, we say the model is not overfitted to the training data and can classify independent data as well as the training data.
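The holdout described above can be sketched like this (a minimal illustration using scikit-learn on synthetic data and a simple classifier, not the actual Inception training pipeline):

```python
# Hold out 10% of the data as a validation set and compare
# training vs. validation accuracy, mirroring the default split above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# test_size=0.1 withholds 10% of the samples for validation.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
train_acc = accuracy_score(y_train, clf.predict(X_train))
val_acc = accuracy_score(y_val, clf.predict(X_val))
print(f"train={train_acc:.2f} val={val_acc:.2f}")
```

A large gap between `train_acc` and `val_acc` is the usual sign of overfitting; when the two are close, the model generalizes to independent data.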
What was the validation accuracy you got for the model?
03-28-2019 01:59 AM
The validation accuracy is 70%; for this reason the results should be more accurate when I apply the .miz to a new image.