Due to binning of statistics (and other attribute tables), copying the contrast table from one 16-bit image (such as DigitalGlobe's WorldView imagery or ESA's Sentinel-2) to another is not straightforward. This model simplifies that process.
Why should I be interested in transferring Contrast Tables using a Spatial Model? The basic issue is that if you are trying to visually compare two (or more) images, whether they are of the same location on different dates or of neighboring locations, the differing statistical distributions in each image will result in differing display characteristics when default statistical stretches are applied. For example, these two DigitalGlobe WorldView-2 images (8-band, unsigned 16-bit, displayed using an 8,6,1 band combination) were displayed using the default 2.5% / 1% Percentage stretches:
Two WorldView-2 images displayed side-by-side in a single 2D View |
The image on the right has a large percentage of its pixels in the ocean and so has a markedly different histogram from that of the predominantly land-based image on the left. Consequently the "same" statistical stretch applied to the two images results in markedly different contrast tables and markedly different visual appearance.
What we need to do is literally use the same Contrast Table to display both images. I.e., if DN value 389 in the left image is being mapped to screen brightness 125, we want the same mapping to be applied to the image on the right.
So first we can manipulate the radiometry of the left-hand image (which we will now refer to as the From image). Alter the display parameters to get the appearance you want applied to all other images. For the purposes of this demonstration I applied a 2 SD stretch and then clicked the Save icon on the Quick Access Toolbar so that the stretch parameters were saved to the Contrast Tables of the From image (for just the displayed bands 8, 6 and 1, since only those bands were displayed).
Left (From) image has had 2 SD stretch applied |
This saved Contrast Table is what we want to transfer to the right image (which we will now refer to as the To image).
For 8-bit (or lower) imagery this is a relatively easy task. You can Save Breakpoints to a file from the From image, then Load Breakpoints for the To image, and you're done. But even for 8-bit data this is a manual process that has to be performed using the GUI, whereas you may want to automate the process using a Spatial Model (and the Batch tool). More importantly, with imagery using greater than 8-bit pixel depths (such as this u16 WorldView-2 imagery) the statistics (and other attribute information associated with the DN values) are binned (usually using Direct Binning). One consequence of Direct Binning is that if, for example, a band has no pixels with DN values from 0 to 39, the attribute information starts at value 40 (which is then considered Bin 0). A second image may have no values between 0 and 79, so its binned attributes start at 80 (as Bin 0). This makes transferring a binned Contrast Table from one 16-bit image to another tricky because the relative offsets of the start of the table need to be taken into account.
For example, here's the start of the Contrast Table for a u16 image whose minimum DN value is 318. In other words, Bin[0] corresponds to DN value 318 in this image, Bin[1] to DN 319, and so on.
HfaView showing the Contrast Table saved in an image header |
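The bin/DN relationship under Direct Binning can be sketched in a few lines of Python. This is purely illustrative (the helper names are hypothetical; the model handles this internally):

```python
def dn_to_bin(dn, band_min):
    """Map a DN value to its bin index under Direct Binning,
    where Bin 0 corresponds to the band's minimum DN value."""
    return dn - band_min

def bin_to_dn(bin_index, band_min):
    """Map a bin index back to the DN value it represents."""
    return bin_index + band_min

# For the image above, whose minimum DN is 318:
print(dn_to_bin(318, 318))  # 0 -> Bin[0]
print(dn_to_bin(319, 318))  # 1 -> Bin[1]

# The same DN lands in different bins in two differently-binned images
# (one with no pixels below DN 40, one with none below DN 80):
print(dn_to_bin(100, 40))   # Bin 60 in the first image
print(dn_to_bin(100, 80))   # Bin 20 in the second image
```

This difference in bin origins is exactly why a binned Contrast Table cannot simply be copied between two 16-bit images.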
The model transfer_lut_v16_1_8.gmdx takes care of these relative offsets of the start of the Table for you.
transfer_lut_v16_1_8.gmdx |
Remove both images from the 2D View, run the spatial model transfer_lut_v16_1_8.gmdx and then re-display both images and the result looks like this (because a common contrast table is now being used to stretch both images):
After running transfer_lut_v16_1_8.gmdx and redisplaying both images |
With this example the two images now appear seamless (because they are actually tiles from the same data collect). The images you apply this technique to may still appear different after transferring a common contrast table between them. But you will now be sure that any visual difference you are seeing is truly because of differences in DN values, not an arbitrary difference imposed by using different stretches. This makes visual change detection, for example, a much more objective analysis technique.
How does the model work?
The main model shown above, outside of the Iterator, first looks at the input From image to determine if the bit depth is 8 or less. If it is, then the subsequent operations within the Iterator are much simpler since the From contrast table(s) can be simply copied into the To contrast table(s) without modification.
For data greater than 8-bit, further analyses are performed on the input From and To images to determine the binning differences between the two. The Statistics of each image are interrogated, primarily to determine whether the binned contrast table(s) of the From image start at a DN value greater than, or less than, that of the corresponding binned contrast table(s) of the To image. A From contrast table that starts later needs to be padded on the front, whereas one that starts earlier needs to be trimmed to match the To attribute tables.
The amount of trimming or padding necessary is also determined by looking at the differences (ranges) between the Mins and between the Maxs.
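That offset arithmetic can be sketched as follows. This is a hypothetical Python illustration with made-up Min/Max values, not the model's actual operators:

```python
def alignment_offsets(from_min, from_max, to_min, to_max):
    """Per-band adjustments needed to align the From contrast table
    with the To image's binning.

    front > 0: pad the front of the From table by that many bins
    front < 0: trim that many bins from the front
    back  > 0: pad the end; back < 0: trim the end
    """
    front = from_min - to_min
    back = to_max - from_max
    return front, back

# e.g. From band min/max = 318/2000, To band min/max = 280/2047:
front, back = alignment_offsets(318, 2000, 280, 2047)
print(front, back)  # 38 47 -> pad 38 bins on the front, 47 on the end
```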
The Metadata of the input To image is also interrogated and a Dictionary Item operator is used to read the BandName(s) of the input To layers.
All of the results at this stage, bar the "is it 8-bit or less" boolean, produce Tables with n rows, where n is the number of bands in the input images. For example, if the input images each have 8 bands, Tables with 8 rows (and appropriate values) will be created. Since all the Tables being input to the Iterator operator have the same number of rows, the Iterator will run n times, each time using the nth row value from every table as the inputs to that iteration.
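The row-wise behavior of the Iterator can be sketched like this (hypothetical names and values; the real Iterator is a Spatial Modeler operator, not Python):

```python
def iterator_rows(*tables):
    """Yield one tuple per band, mimicking how the Iterator feeds the
    nth row of every input table into the nth iteration."""
    assert len({len(t) for t in tables}) == 1, "all tables must have the same row count"
    return list(zip(*tables))

# Three hypothetical per-band tables for a 3-band input:
band_names = ["Band_8", "Band_6", "Band_1"]
front_offsets = [38, 0, -12]

for name, front in iterator_rows(band_names, front_offsets):
    print(f"iteration for {name}: front adjustment {front}")
```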
Inside the Iterator it gets a bit more complicated.
Iterator sub-model |
The Iterator sub-model iterates over each band based on the input values for that band. The Contrast Attribute Table is read from band n of the From image. Based on the relative positioning and size of that Contrast Table compared to the To image's Attribute Tables, various If Else branches control whether it is padded or trimmed, by how much, and what values are used for any padding. Two further If Else operators check whether the particular From image band being processed has no Contrast Table, in which case no transfer is attempted. Otherwise the adjusted Contrast Table is attached to the To image using the Raster Attribute Output operator. This process is repeated until all bands of the input image have been processed.
When padding of the Contrast Table is required, the Append Table sub-model is used to stack two Tables (the current Contrast Table and the padding Table) together. The operation of this is described in the Table Stack Snippet article.
When padding the end of the Contrast Table, the Get Last Value in Table sub-model is used to determine the last value present in the Contrast Table (it might not be 1.0!) and a padding Table is created, populated with that value (and with the number of rows necessary to pad the Contrast Table by the appropriate amount).
Get Last Value in Table sub-model |
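The padding step can be roughly sketched in Python. This is an illustration only, not the model's implementation: padding the end with the table's last value follows the sub-model described above, while padding the front with the first value is an assumption made here for the sketch:

```python
def pad_contrast_table(table, front_pad, end_pad):
    """Pad a contrast table (a list of brightness values in 0.0-1.0).

    The end is padded with the table's last value (which may not be 1.0),
    and the front with its first value (an assumption in this sketch).
    """
    return [table[0]] * front_pad + table + [table[-1]] * end_pad

lut = [0.0, 0.25, 0.5, 0.9]
print(pad_contrast_table(lut, 2, 3))
# [0.0, 0.0, 0.0, 0.25, 0.5, 0.9, 0.9, 0.9, 0.9]
```

Note how the last value 0.9, not 1.0, is repeated at the end, which is exactly why the Get Last Value in Table sub-model is needed.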
From: Name of an input image file containing saved Contrast Table(s) to be transferred to another image. The image can be 1 band or more.
To: Name of an existing image to which the Contrast Table(s) will be attached. The To image must have the same number of bands, and the same bit depth, as the From image.