Hexagon Geospatial

ERDAS IMAGINE

Discuss and share topics of interest using ERDAS IMAGINE, the world’s leading geospatial data authoring system.
Occasional Contributor
Posts: 6
Registered: 03-27-2019

[Deep Learning] - Initialize Inception using Sentinel2 Data: Results with problems

Hello everyone,

I am working for Planetek Italia and I am testing "Initialize Inception" with a particular dataset.

The dataset has the following characteristics:

  1. Sentinel-2 data (10 m resolution for each band);
  2. 9 classes;
  3. 64x64 pixel chips with 13 bands.

The training runs successfully with an accuracy of 75% (after 100 epochs), but when I apply the .miz file to a selected Sentinel-2 image (640x640 pixels, with a shapefile containing a 10x10 grid), the results make no sense.

This means that each box in the predicted grid has problems (as shown in the attached file).
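A quick way to rule out an input mismatch (different band count, band order, data type or value range between the training chips and the image I predict on) would be something like the minimal Python sketch below. Here rasterio and the file paths are my own assumptions for illustration, not part of the ERDAS IMAGINE workflow itself:

# Quick consistency check between one training chip and the target image.
# Assumes rasterio is installed; the file paths are hypothetical placeholders.
import rasterio

def describe(path):
    """Print band count, size, dtype and per-band value range of a raster."""
    with rasterio.open(path) as src:
        data = src.read()  # shape: (bands, rows, cols)
        print(f"{path}: {src.count} bands, {src.width}x{src.height}, dtype={src.dtypes[0]}")
        for b in range(src.count):
            print(f"  band {b + 1}: min={data[b].min()}, max={data[b].max()}")

# Hypothetical paths: one 64x64 training chip and the 640x640 image used for prediction.
describe("training_chips/chip_0001.tif")
describe("sentinel2_subset_640x640.tif")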

 

I read the paper related to GoogLeNet, the neural network that Inception uses, and probably the input data has to have a different size and number of bands...
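To make that concrete: the GoogLeNet paper describes a network designed for 224x224 RGB (3-band) inputs, while my chips are 64x64 with 13 bands. I do not know whether Initialize Inception adapts the chips internally, but purely as an illustration (numpy/scipy, assuming the band stack is ordered so that B04/B03/B02 sit at 0-based indices 3/2/1 and that reflectances are scaled by 10000), adapting one chip to that canonical shape could look like this:

# Illustrative only: adapt a 64x64x13 Sentinel-2 chip to the canonical GoogLeNet
# input of 224x224x3. Band ordering and scaling below are assumptions.
import numpy as np
from scipy.ndimage import zoom

chip = np.random.randint(0, 4000, size=(13, 64, 64)).astype(np.float32)  # placeholder chip

rgb = chip[[3, 2, 1], :, :]                            # pick red, green, blue bands (assumed order)
rgb = rgb / 10000.0                                    # Sentinel-2 reflectance scaling (assumption)
resized = zoom(rgb, (1, 224 / 64, 224 / 64), order=1)  # bilinear resample to 224x224

print(resized.shape)  # (3, 224, 224)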

 

So my questions are:

  1. How can I improve my results?
  2. Are there any problems in my input dataset?
  3. Are there particular input characteristics that I have to respect in order to run the algorithm in the best way?

Thanks,

 

Nicolò Taggio

 

PS. There is more information about the dataset here: https://towardsdatascience.com/land-use-land-cover-classification-with-deep-learning-9a5041095ddb

Staff
Posts: 130
Registered: 06-30-2016

Re: [Deep Learning] - Initialize Inception using Sentinel2 Data: Results with problems

Hi Nicolò

 

Please also check the validation accuracy.

 

The training accuracy gives you the accuracy when predicting the images the model was trained on. Since the model is trained on this data, it is expected that the training accuracy will always be good.

 

The validation accuracy, on the other hand, is the accuracy you get when classifying independent data. By default we withhold 10% of the training data to use as validation data.

 

When your training accuracy is close to the validation accuracy, we say the model is not overfitted to the training data and can classify independent data as well as the training data.
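If it helps to picture the split, here is a minimal, generic Keras sketch on synthetic data (this is only an illustration of the idea, not the internal ERDAS IMAGINE implementation) that withholds 10% of the samples for validation and reports both accuracies:

# Generic illustration of a 10% validation split; synthetic data, not ERDAS internals.
import numpy as np
from tensorflow import keras

# Fake dataset standing in for 64x64 chips with 13 bands and 9 classes.
x = np.random.rand(500, 64, 64, 13).astype("float32")
y = np.random.randint(0, 9, size=500)

model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 13)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(9, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# validation_split=0.1 withholds 10% of the samples, mirroring the default described above.
history = model.fit(x, y, epochs=5, validation_split=0.1, verbose=0)
print("training accuracy:  ", history.history["accuracy"][-1])
print("validation accuracy:", history.history["val_accuracy"][-1])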

 

What was the validation accuracy you got for the model?

 

Regards, Sam

Occasional Contributor
Posts: 6
Registered: 03-27-2019

Re: [Deep Learning] - Initialize Inception using Sentinel2 Data: Results with problems

Hi Sam,

the validation accuracy is 70%; for this reason the results should be more accurate when I apply the .miz to a new image.

 

Thanks,

 

Nicolò Taggio
