
Raster Generalization (Degrade, Downsample, etc)

by Technical Evangelist on 08-16-2018 09:46 AM - edited on 02-24-2020 04:45 AM by Community Manager

Download model



This model reduces the resolution of an image by an integer factor in the X and Y directions. As built, the model uses a Focal Mean operator and so averages all of the original "small" pixels that make up the new "big" pixels. If the X and Y factors are large, this method takes more of the original pixels into account in the computation than a bilinear interpolation or cubic convolution resample would, since these resampling methods use only a small window for computation. By using Focal Mean the model emulates the functionality of the Degrade dialog. But it can be easily modified to perform other generalization tasks.
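The net effect of the model can be sketched in a few lines of plain Python (a hypothetical `degrade_mean` function, not part of the model itself): each output pixel takes the mean of the factor x factor block of input pixels it covers.

```python
def degrade_mean(raster, factor):
    """Downsample a 2D raster (list of lists) by an integer factor,
    setting each output pixel to the mean of the factor x factor
    block of input pixels it covers (emulating Focal Mean + Degrade)."""
    rows, cols = len(raster), len(raster[0])
    out = []
    for r in range(0, rows - rows % factor, factor):
        out_row = []
        for c in range(0, cols - cols % factor, factor):
            block = [raster[rr][cc]
                     for rr in range(r, r + factor)
                     for cc in range(c, c + factor)]
            out_row.append(sum(block) / len(block))
        out.append(out_row)
    return out

# A 4 x 4 raster degraded by a factor of 2 gives a 2 x 2 result
src = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(degrade_mean(src, 2))  # [[3.5, 5.5], [11.5, 13.5]]
```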


A multispectral input image (on the left) downgraded using Focal Mean by a factor of 10 (output to the right)


Replace Focal Mean with a different Focal operator to generalize in other ways. For example, use Focal Max to perform a Max Pixel Decimation, or use Focal Majority to generalize thematic data.


Thematic Landcover raster degraded using Focal Majority by a factor of 10
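In sketch form, the variants keep the same block structure and only swap the reducer applied to each block (hypothetical helper names, assuming the raster is a plain Python list of lists):

```python
from collections import Counter

def degrade_with(raster, factor, reduce_fn):
    """Downsample by an integer factor, applying reduce_fn to each
    factor x factor block (mirrors swapping the Focal operator)."""
    rows, cols = len(raster), len(raster[0])
    return [[reduce_fn([raster[rr][cc]
                        for rr in range(r, r + factor)
                        for cc in range(c, c + factor)])
             for c in range(0, cols - cols % factor, factor)]
            for r in range(0, rows - rows % factor, factor)]

def majority(values):
    """Most frequent class value in the block, akin to Focal Majority."""
    return Counter(values).most_common(1)[0][0]

landcover = [[1, 1, 3, 3],
             [1, 2, 3, 2],
             [1, 1, 2, 2],
             [1, 2, 2, 2]]
print(degrade_with(landcover, 2, max))       # Max Pixel Decimation
print(degrade_with(landcover, 2, majority))  # thematic generalization
```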


The original Community support posting which triggered creation of this model actually used a Focal Sum to calculate the total population inhabiting larger mapping units. In this instance you would want to alter the way the output raster's data type is defined, since a Focal Sum might produce values outside the data range of the input raster.
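A quick sketch of why the data type matters for a Sum (illustrative `degrade_sum` name, not the model's): summing even a small block of 8-bit values can exceed 255, so the output raster needs a wider type than the input.

```python
def degrade_sum(raster, factor):
    """Downsample by summing each factor x factor block, e.g. to total
    a population count over larger mapping units."""
    rows, cols = len(raster), len(raster[0])
    return [[sum(raster[rr][cc]
                 for rr in range(r, r + factor)
                 for cc in range(c, c + factor))
             for c in range(0, cols - cols % factor, factor)]
            for r in range(0, rows - rows % factor, factor)]

# Four 8-bit pixels of value 200 sum to 800, which is outside the 0-255
# range of the input, so the output needs e.g. an unsigned 16-bit type.
people = [[200, 200],
          [200, 200]]
print(degrade_sum(people, 2))  # [[800]]
```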


In other words this is a very useful general model that can be tailored to your specific requirements with a minimum of editing.




Speaking of which, as built, the model identifies whether the input raster is a single-band thematic dataset and, if it is, copies across all the attribute fields from the input raster. However, this isn't really appropriate when using the Focal Mean operator, since an averaged class value won't generally be valid (averaging class 1, water, and class 3, urban, to get class 2, vegetation, obviously isn't correct). But it is included in the model for when you use a Focal operation that is more pertinent to thematic data. Do bear in mind, though, that some attributes (e.g. an area value) may no longer be valid or accurate in the results.


How does it do that?


It's the combination of a Warp operator and Define Processing Area (DPA), with pixel size set to DegradationFactor times the input pixel size, that does the heavy lifting of the downsampling for you.


Conceptually here's what's happening:


Say you have 1m input data and want to downgrade to 20m pixels. You enter a Degradation Factor of 20, which defines a Kernel that's 20 rows by 20 columns (actually, even-sized factors get bumped by 1 in the "Even Matrix" sub-model, so the kernel becomes 21 x 21). The Focal Mean operator performs its function at every 1m pixel location, averaging the surrounding 21 x 21 pixels. Then the Warp and Define Processing Area specify a downsample from 1m to 20m. Since the Warp specifies Nearest Neighbor resampling, and the DPA requests 20m pixels, only every 20th row and 20th column value is retained as the pixel value for the "bigger" output pixel. Hence you have a 20 x 20m pixel which represents the Mean of all 400 1m pixels that fell inside its extent.
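The walkthrough above can be checked numerically in plain Python (using an odd factor of 5 so the kernel needs no padding; the function names are illustrative, not Spatial Modeler operators): a focal mean sampled at the centre of every 5 x 5 block equals the straight block mean.

```python
import random

def focal_mean(raster, r, c, size):
    """Mean of the size x size window centred on pixel (r, c)."""
    half = size // 2
    vals = [raster[rr][cc]
            for rr in range(r - half, r + half + 1)
            for cc in range(c - half, c + half + 1)]
    return sum(vals) / len(vals)

def block_mean(raster, r0, c0, factor):
    """Mean of the factor x factor block whose top-left is (r0, c0)."""
    vals = [raster[rr][cc]
            for rr in range(r0, r0 + factor)
            for cc in range(c0, c0 + factor)]
    return sum(vals) / len(vals)

random.seed(0)
factor = 5
raster = [[random.randint(0, 255) for _ in range(10)] for _ in range(10)]

# Sampling the focal mean at the centre of each block (offset factor//2,
# stride factor) retains exactly the block means - this is the
# nearest-neighbour decimation performed by the Warp + DPA step.
for br in range(2):
    for bc in range(2):
        r, c = br * factor + factor // 2, bc * factor + factor // 2
        assert focal_mean(raster, r, c, factor) == block_mean(
            raster, br * factor, bc * factor, factor)
print("focal mean sampled per block == block mean")
```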


That makes it sound like Spatial Modeler is performing a lot of unnecessary calculations and then throwing away 399 out of every 400 of them. But because of the Pull architecture used by Spatial Modeler, that is not the case. The DPA operator requests data from upstream at the downgraded pixel resolution, so the Warp supplies data at the downgraded resolution. In turn, Warp requests data from its upstream operators only at the nearest-neighbor sample locations corresponding to that downgraded resolution, so the Focal Mean only performs its calculations at the spacing that Warp requests of it. Hence no unnecessary calculations are performed.
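The pull principle can be mimicked with a tiny lazy pipeline (purely illustrative; Spatial Modeler's actual operator interfaces differ): each downstream request drives exactly one upstream focal computation, so nothing is computed only to be discarded.

```python
class FocalMeanNode:
    """Computes a focal mean only when a pixel is pulled from it."""
    def __init__(self, source, size):
        self.source = source  # callable (row, col) -> pixel value
        self.size = size
        self.pulls = 0        # how many focal means were computed
    def get(self, r, c):
        self.pulls += 1
        half = self.size // 2
        vals = [self.source(rr, cc)
                for rr in range(r - half, r + half + 1)
                for cc in range(c - half, c + half + 1)]
        return sum(vals) / len(vals)

# A synthetic 100 x 100 source; the Warp/DPA stage pulls only one
# focal-mean value per output pixel (nearest-neighbour decimation),
# so a 3 x 3 output request runs the focal mean just 9 times,
# not once per input pixel.
source = lambda r, c: (r * 31 + c * 17) % 256
node = FocalMeanNode(source, 9)
factor = 10
output = [[node.get(r * factor + factor // 2, c * factor + factor // 2)
           for c in range(3)] for r in range(3)]
print(node.pulls)  # 9
```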


It's important to note that for this to work correctly, the Warp operator's UsePyramids port has been changed in this model from its default of True to False. If Warp were allowed to use pyramids supplied by its upstream operators (if available on the Raster Input), the pyramid level closest to the requested degraded pixel resolution would be used, and downsampling would therefore already have been applied (since building pyramids is a sequential downsampling process) - but that downsampling would not be the one you wanted from the Focal operator. So the Warp operator has been set to not use pyramids, even if they are available. In conjunction with the Nearest Neighbor resampling, this means the desired Focal operation is effectively performed.


There is a downside to not allowing Warp to use pyramids. When performing a Preview and zooming out beyond 1:1 pixel scaling, Spatial Modeler will take longer to generate the Preview than if pyramids were used. Even when Running this model, the fact that data is being downgraded means that pyramids would normally be used. In this instance, though, we always want the full input-resolution values to be used for the calculations (even if many pixel locations are being "skipped"), so use of pyramids must be turned off. It's obviously better here to get the correct result than to be able to roam and zoom a Preview in real time.


Even Matrix sub-model


Now - why do even-numbered Degradation Factors need to be turned into a kernel that's 1 larger than the specified value? Here's why. Moving windows work by centering on the (input) pixel being processed and looking at the surrounding pixels masked by the remainder of the matrix. That's easy if the matrix has odd dimensions (e.g. 3 x 3). But if you have an even number, there's no "center pixel" in the kernel window, so the window that is considered is effectively shifted half a pixel from the output pixel location. Normally this doesn't matter - you asked for an even-numbered window and that's the cost of doing so. It's the nature of raster processing.


But here we want an exact "downsampling" from, say, 1m pixels to 20m pixels. In order to get an exact output 20m pixel that lines up (to the exact edge) with 20 x 20 input 1m pixels, I had to increase the matrix by 1 and fill the last row and column with zeros - i.e. a 21 x 21 matrix - and set the Define Processing Area parameters the way they are set. This gets things to line up.
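The kernel construction described above can be sketched as follows (a hypothetical `build_kernel` helper, not the sub-model itself): odd factors get a uniform kernel, even factors get a kernel one larger with the last row and column zeroed.

```python
def build_kernel(factor):
    """Return the focal kernel for a given Degradation Factor: a uniform
    factor x factor matrix for odd factors; for even factors, padded to
    (factor + 1) x (factor + 1) with a zero last row and column so the
    window lines up exactly with the output pixel blocks."""
    size = factor if factor % 2 else factor + 1
    return [[1 if r < factor and c < factor else 0
             for c in range(size)]
            for r in range(size)]

for row in build_kernel(4):  # even factor 4 -> 5 x 5, zero-padded
    print(row)
```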


The "Even Matrix" sub-model that constructs the necessary Kernel is based on a modified version of this Recipe:


Input parameters: 


Raster Input: Filename of the input raster dataset to be processed. 

Degradation Factor: An integer value of 2 or more denoting the factor by which to downsample the raster. For example, a Degradation Factor of 4 would produce an output image where each output pixel covers the extent of 16 (i.e. 4 x 4) input pixels.

Raster Output: Filename of the output raster dataset to be created.