
[Hotfix] How to configure Apollo CZS (Clip zip ship) to avoid 0 byte output (errors.txt)

by Technical Evangelist ‎05-12-2017 11:31 AM - edited ‎02-05-2018 09:35 AM (1,154 Views)

Customers may experience the following issue in the CZS workflow: the output zip file contains only an errors.txt ASCII file.

Message: null
Timestamp:Fri May 16 08:03:38 CDT 2014
Context: provisiong for coverage: CN28642, output format: IMG


Users may also find an out-of-memory error in the JBOSS or TOMCAT log:
"java.lang.OutOfMemoryError: Java heap space".


There are several moving parts and limitations in CZS, and the customer may have to tweak a few settings to get their system working correctly with a dataset of this size.


[1] TIFF format as intermediate format for CZS
The APOLLO team introduced BigTIFF as the intermediate data format for CZS in APOLLO 2016.03.

APOLLO versions before 2016.03 internally use classic TIFF as the intermediate data format for CZS. Due to the 2GB size limit of classic TIFF, any CZS request beyond this limit will fail.
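As a rough sanity check, the uncompressed intermediate size can be estimated from the request's pixel dimensions and band count and compared against the 2GB limit. This is a minimal sketch (the class and method names are illustrative, not part of APOLLO's API), assuming the intermediate TIFF is roughly width * height * bands * bytes-per-sample:

```java
// Sketch: estimate whether a CZS request's intermediate TIFF would exceed
// the 2GB classic-TIFF limit noted above (pre-2016.03 APOLLO releases).
// Assumes an approximately uncompressed intermediate file.
public class TiffLimitCheck {
    static final long CLASSIC_TIFF_LIMIT = 2L * 1024 * 1024 * 1024; // 2GB

    static boolean exceedsClassicTiff(long widthPx, long heightPx,
                                      int bands, int bytesPerSample) {
        long estimatedBytes = widthPx * heightPx * bands * bytesPerSample;
        return estimatedBytes > CLASSIC_TIFF_LIMIT;
    }

    public static void main(String[] args) {
        // A 30000 x 30000 px, 3-band, 8-bit request is ~2.7GB uncompressed,
        // so it would fail on a pre-2016.03 release.
        System.out.println(exceedsClassicTiff(30000, 30000, 3, 1)); // true
    }
}
```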


[2] Default Download Maximum request size

Go to Apollo data manager, Configuration->Clip/Zip/Ship->Download Maximum Request Size(MB)

By default the maximum request size is 100MB; users need to increase this value in order to clip/zip/ship a large polygon.

This is an approximation of the output size (pixel width * pixel height * 8 * number-of-bands) for each image in the request; it does not take the output file type into account (e.g., compressed ECW or JP2).
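The approximation above can be sketched in a few lines. The names here are illustrative (this is not APOLLO's internal code), but the arithmetic follows the formula as stated:

```java
// Sketch of the Download Maximum Request Size check described above:
// pixel width * pixel height * 8 * number-of-bands per image, summed
// over the request, regardless of the output format's compression.
public class RequestSizeEstimate {
    static long estimateBytes(long widthPx, long heightPx, int bands) {
        return widthPx * heightPx * 8L * bands; // 8 bytes per band per pixel
    }

    static boolean withinLimit(long estimatedBytes, long maxRequestMb) {
        return estimatedBytes <= maxRequestMb * 1024L * 1024L;
    }

    public static void main(String[] args) {
        // A 4000 x 4000 px, 3-band image estimates at ~366MB,
        // well over the default 100MB limit.
        long est = estimateBytes(4000, 4000, 3);
        System.out.println(withinLimit(est, 100)); // false
    }
}
```

This also shows why the default limit is hit quickly: even a modest 4000 x 4000, 3-band image estimates at several times 100MB under this formula.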


This setting currently only applies to the "download" workflow, not the "CZS" workflow, so the maximum request size is not honored by CZS. This issue is fixed by the attached hotfix (APOLLO 2016.03 and 2016.04).




[3] Large output trigger (LOT):
This setting can be changed in the APOLLO Data Manager. Go to Services→Rasters→EAIM, right-click and choose "Edit Provider". On the "Data Source" tab, there is a pixel-dimension threshold that determines when APOLLO uses its "Very Large Coverage Manager" functionality to tile image requests instead of trying to handle the entire image request in memory at once.
The default value is 7500, and it has been since the APOLLO 2014 release. However, we believe this value is now significantly too large compared to the other default settings and should be moved back down to 2500 (the APOLLO development team will change this default back to 2500 in APOLLO 2016.05).


Image 1.jpg


Note: 2500 is just a starting point. The customer may see improved performance with a larger value, but that would require higher memory limits in both the Tomcat and RDS processes. For now, they should try 2500 and go from there.

Normally this is the only change required to allow CZS operations to succeed, even with the default RDS and TOMCAT memory settings.
Customers can try lowering this below 2500 if they are still having problems. Lower values cause APOLLO to create smaller and smaller tiles, which reduces the memory requirements of the RDS and TOMCAT processes but results in slower WMS processing. It is a trade-off, so 2500 is the recommended value for the LOT.



[4] RDS process memory/performance:

This can also be set in the APOLLO Data Manager. Go to Configuration->Performance and find the (32-bit|64-bit) RDS Java tune-up parameters setting.


Image 2.jpg


This value is a command line passed to the Java VM for each RDS process. The main value that users often need to change is "-Xmx", which controls the maximum heap size of the Java process. As mentioned above, this value usually does not need to change to get CZS working once the Large Output Trigger has been lowered, but the customer's usage may vary and they should be aware of this setting.

There are two RDS processes: one for ERDAS IMAGINE (32-bit) and one for ImageX (i.e., the APOLLO Core libraries, 64-bit). For the 32-bit process, the overall maximum memory of the process is capped by the Windows 32-bit process limit, namely 2GB. Since the heap is only part of the memory used by a process, -Xmx cannot be set to 2GB; it must be set lower. In QA testing, we can usually set it up to about 1300m. The 64-bit process also has memory limits, but they are well beyond any amount of memory a server would actually have, so for all intents and purposes it is unlimited.


If the user still has trouble after lowering the Large Output Trigger, a good starting point is bumping the 32-bit -Xmx to 256m and the 64-bit -Xmx to 1024m. By default, both -Xmx values are set to 128m.


NOTE: Take special caution with the 32-bit RDS Xmx. The 32-bit RDS Java process internally calls IMAGINE 32-bit native C/C++ code, and both the Java heap and the native code share the same 2GB address space of a 32-bit application. RDS is responsible for communicating between Tomcat and the IMAGINE native code, while the IMAGINE native code does the actual work. The RDS Java process needs enough memory to hold the image bytes returned from the IMAGINE native code, but not much more.

So if the user allocates more memory to the JVM, there is less memory left for the IMAGINE 32-bit native code. The error message shows APOLLO running out of memory in the IMAGINE native code, so giving the RDS Java heap (Xmx) a lot of memory actually takes memory away from the IMAGINE native code. The APOLLO team suggests dropping the 32-bit RDS Xmx back down to 256m, or even the default 128m; 256m is a good starting point for the 32-bit RDS Xmx.




BE EXTREMELY CAUTIOUS: this will cause the java.exe processes to consume much more memory, so the customer's production server should be adequately resourced to accomplish this. If the settings are too high for the server's resources, the server could become unresponsive.



[5] TOMCAT memory settings:
This can be set using the tomcat configuration utility at <APOLLO_Installation_Folder>/tomcat/bin/apollotomcatw.exe. Run it as Administrator. On the Java tab, there are settings for Initial memory pool and Maximum memory pool. The default for the Maximum memory pool is 4096 (MB). If the customer is still having problems after tuning steps [3] and [4], they can try increasing the TOMCAT memory size to 8GB, or perhaps 12-16GB depending on their server memory.

Image 4.jpg


[6] Disk space:
APOLLO also makes heavy use of the disk in these CZS operations. Temp files are created in two locations, and the output is written to a third:
(1) the TOMCAT temp folder at <APOLLO_Installation_Folder>/tomcat/temp
(2) the CZS temp folder at <APOLLO_storage>/czs/tmp
(3) the final location of the CZS output at <APOLLO_storage>/htdocs/provisioning
APOLLO can potentially require many GB of free space on the disks that contain these folders. The APOLLO storage location is configurable through the APOLLO configuration wizard.
The TOMCAT temp folder can be changed using the tomcat configuration utility mentioned in [5] above. On the Java tab, under Java Options, there is a line pointing the Java temp directory at <APOLLO_Installation_Folder>\tomcat\temp. The path can be changed if the user is running out of space on the drive that contains the default path.



In summary, the customer will be able to complete most of their CZS operations by lowering the Large Output Trigger setting to 2500, perhaps increasing the RDS and/or TOMCAT memory limits, and ensuring there is plenty of free disk space for APOLLO to operate with.