
Autogrid

by Technical Evangelist on 12-18-2019 01:14 PM - edited on 02-21-2020 06:51 AM by Community Manager

Download model

Description:

The primary purpose of this spatial model is to show how to use the new (to ERDAS IMAGINE / SMSDK 2020) Create Dice Boundaries operator to create a regular grid of area geometries over an image. This is useful for purposes such as Zonal Change Detection and Deep Learning feature extraction, especially when you need a grid those applications cannot produce themselves, such as one where the polygons overlap.

 

Overlapping square polygons created over a satellite image
Grid_Output.PNG

The primary use for Create Dice Boundaries was to replicate the functionality provided by the old Dice Image dialog. The Dice Image dialog takes as input an image file and a set of parameters that define how to break that image up into many equal-sized, potentially overlapping chips. The Create Dice Boundaries operator was therefore implemented to create the boundary definitions that you might wish to use for purposes such as subsetting an image. You can find the Spatial Model which performs the Dice Image function in $IMAGINE_HOME\etc\models\diceimage.gmdx

However, in the Spatial Model presented in this article, the Create Dice Boundaries operator is instead used to create boundaries which are themselves turned into area geometries in a single output vector layer (usually a Shapefile).
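To make the dicing behavior concrete, here is a minimal plain-Python sketch of the arithmetic (not the Spatial Modeler API — the function name and row-major, top-left traversal are illustrative assumptions): equal-sized square cells whose start positions advance by a step that shrinks as overlap grows.

```python
# Illustrative sketch of the dicing arithmetic -- NOT the Create Dice
# Boundaries operator itself. Cell traversal order here is top-left,
# row-major, purely for illustration.

def dice_boundaries(width, height, cell_size, percent_overlap=0):
    """Return (min_col, min_row, max_col, max_row) pixel boundaries,
    one tuple per grid cell, for an image of width x height pixels."""
    # With overlap, successive cells start every `step` pixels
    # instead of every `cell_size` pixels.
    step = int(cell_size * (1 - percent_overlap / 100.0))
    if step <= 0:
        raise ValueError("percent_overlap must be less than 100")
    cells = []
    for row in range(0, height - cell_size + 1, step):
        for col in range(0, width - cell_size + 1, step):
            cells.append((col, row, col + cell_size, row + cell_size))
    return cells
```

For a 512 x 512 image with 256-pixel cells, this yields a 2 x 2 grid of butt-joined cells at 0% overlap, and a 3 x 3 grid (step 128) at 50% overlap.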

 

autogrid_v16_6_3.gmdx
Autogrid_Model.PNG

 

The Create Dice Boundaries operator generates a List of Boundaries, not a Features stream. So the first task is to read the List of Boundaries through a Features Input operator to turn the Boundaries into a Features stream.

The Create Dice Boundaries operator also creates Boundaries (and subsequently area geometries via the Features Input) in an "image space" coordinate system (grid coordinates). To create Features which can be overlaid with the original data, or other geospatial data, the feature geometries have to be re-associated with the Coordinate Reference System (CRS) of the input image. The Dictionary Item operator is therefore used to mine the Metadata of the input raster for the value associated with its Boundary.CRS key. That value is then used as the TargetCRS input to the Coordinate Transformation operator, which transforms the "image space" coordinates into the same CRS as the input image. The area geometries can then be successfully overlaid onto the input image in a Preview or written to an output vector file for use in other tools.
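The re-association step amounts to applying the image's affine geotransform to each grid corner. The sketch below illustrates the idea in plain Python for a simple north-up image (assumed helper, not the Coordinate Transformation operator; origin and pixel-size values are hypothetical):

```python
# Illustrative sketch of mapping "image space" (col, row) coordinates
# into the image's CRS for a north-up image with no rotation.
# origin_x / origin_y are the map coordinates of the upper-left corner;
# pixel_height is typically negative because rows increase downward.

def pixel_to_map(col, row, origin_x, origin_y, pixel_width, pixel_height):
    """Map a (col, row) grid coordinate to (x, y) map coordinates."""
    x = origin_x + col * pixel_width
    y = origin_y + row * pixel_height
    return x, y
```

For example, with a hypothetical 10 m image whose upper-left corner sits at (400000, 5000000), grid corner (256, 256) maps to (402560, 4997440).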

Assumptions

You must have Spatial Modeler 2020, or later, installed to use this Spatial Model.

 

Input parameters: 

Input Raster Filename: Name of the raster image to be used as the spatial extent reference from which the tile/grid area geometries will be generated.

Grid Size in Pixels: Size of the grid cells to create, measured in pixels of the Input Raster. For example, a value of 256 will create a grid of (by default) butt-joined area polygons, each covering 256 x 256 pixels, starting at the lower-left corner of the input image.

Percent Overlap: Defaults to 0, but a value greater than 0 and less than 100 can be entered to define an overlap (in both vertical and horizontal directions) between successive area geometries. For example, entering 50 will create square area geometries which overlap 50% across lines and columns.

Output Grid Filename: Name of the output Shapefile to create which contains the area geometries defining the grid.
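A quick worked example of how Grid Size and Percent Overlap interact (the 1024-pixel image dimension is hypothetical): the overlap shortens the step between successive cells, which in turn increases the number of cells along each axis.

```python
# Worked example of the parameters above, for a hypothetical
# 1024 x 1024 input raster.
grid_size = 256        # "Grid Size in Pixels"
percent_overlap = 50   # "Percent Overlap"

# Cells advance by this many pixels along each axis.
step = int(grid_size * (1 - percent_overlap / 100.0))   # 128

# Number of full cells that fit along one 1024-pixel axis.
cells_per_axis = (1024 - grid_size) // step + 1          # 7
```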

 

Autogrid_GUI.PNG

   

 

Comments
by Lennart_I. on 11-03-2020 01:54 AM

Hi @ian.anderson ,

 

I just saw this article by searching for a way to automatically create a grid for my Deep Learning based classification. You mentioned a possible use case for Deep Learning feature extraction, so I have a question about the possible overlap.

 

What could be the advantage of that overlap in a DL-based classification? And what happens to it, if the overlapping grid-cells are assigned to different classes?

 

Kind regards,

Lennart

by Technical Evangelist on 11-03-2020 08:31 AM

Hi @Lennart_I. ,

 

Deep Learning classification (or certainly the Initialize Inception approach) takes in large images as tiles and checks each of those tiles to see if the feature of interest exists within the tile. This works fine if the feature of interest is contained entirely within the tile (and you trained the classifier using whole images of the feature of interest).

 

Easiest to think of an example - you've trained Inception to look for airplanes. You did this by training on examples of pictures of aircraft and those pictures were of whole aircraft. So Inception knows what a whole aircraft looks like. Now feed in a satellite image which has aircraft somewhere within it. That image is broken up into tiles to be analyzed for the existence of aircraft. In one tile there's an entire aircraft - that tile is flagged as containing aircraft. But the next aircraft has been split in half by the tiling process. Inception does not know what half an aircraft is supposed to look like, so neither tile is flagged as containing an aircraft.

 

Now if you had used overlapping tiles it's less likely that there will be a tile that does not entirely contain the whole aircraft.

 

Overlapping has a number of disadvantages though, primarily that it means you have many more tiles to process, which takes longer. The other is the one you mention: you have to deal with interpreting overlapping tiles. How you approach that really depends on what your end goal is.
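The processing-cost point is easy to quantify. A rough plain-Python illustration (the 10240-pixel image size is hypothetical): 50% overlap roughly quadruples the tile count, since both axes gain tiles.

```python
# Rough illustration of the tile-count cost of overlap.

def tile_count(image_size, tile_size, percent_overlap):
    """Number of square tiles covering a square image of image_size pixels."""
    step = int(tile_size * (1 - percent_overlap / 100.0))
    per_axis = (image_size - tile_size) // step + 1
    return per_axis * per_axis

# Hypothetical 10240 x 10240 image, 256-pixel tiles:
no_overlap = tile_count(10240, 256, 0)     # 40 x 40 = 1600 tiles
half_overlap = tile_count(10240, 256, 50)  # 79 x 79 = 6241 tiles
```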

 

You could use other approaches, such as training using "bits of aircraft", but that also has disadvantages in that things that aren't aircraft, but which look like the "bits of aircraft", are then more likely to get classified as aircraft.

 

Hope that answers the question?

 

Cheers

 
