07-03-2017 10:10 PM
I'm interested in how others handle the WMTS cache folder location when using a load-balanced farm.
In our scenario we believe the network storage is about 30% slower than local VM disk storage (rough benchmark).
(high level architecture can be seen at http://community.hexagongeospatial.com/t5/Support-ERDAS-APOLLO/Load-balanced-farm-fast-storage-ideas...)
Thanks for any ideas or insights.
07-04-2017 01:33 AM
I recently worked with a customer to implement a load-balanced GeoMedia WebMap solution running a WMTS that stores its tile cache on a network share. The servers are VMware virtual machines; I do not know anything about the storage setup.
There are currently two servers in the server pool accessing tiles from the same network share. I decided to store the tile cache on a network share to avoid the administration overhead of maintaining the tile cache on each server, as the tile cache is deleted weekly and certain levels are pregenerated using Cache Filler Manager. The WMTS is then left in 'on demand' mode to generate tiles on the fly for the remaining levels as users request them.
I did no formal testing of the network share performance and this system has been in production since June 2015 with no reported performance issues with the WMTS.
11-15-2018 12:30 AM
One of our customers is also interested in setting up a load-balanced farm to reduce the time spent generating the tile cache for their different WMTS instances. Is there any documentation available on how to implement such a load-balanced server architecture?
Thank you for your answer.
11-15-2018 12:15 PM
There is no single load-balanced farm configuration - it depends on many factors outside of WebMap considerations.
In our particular experience the load balancing environment was already in place - provided by the IT infrastructure.
However we did test it and found storage throughput problems.
So first up - test your environment.
e.g. do some file storage benchmarking using (say) diskspd. A brief example of using diskspd can be found at https://community.hexagongeospatial.com/t5/ERDAS-APOLLO-ECW-JP2/Load-balanced-farm-fast-storage-idea... - it is not authoritative, just what I did in my first play with diskspd.
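As a rough illustration, a diskspd run along the following lines lets you compare the shared network storage against a local disk. The file paths and parameter values here are only placeholders - point them at your own cache locations and tune them to resemble your workload.

# Random 64K I/O, 70% read / 30% write, 60 seconds, caching disabled, latency stats captured.
# First against the network share...
diskspd.exe -c1G -b64K -d60 -o8 -t4 -r -w30 -Sh -L \\fileserver\GWMCache\testfile.dat
# ...then against a local drive on the WebMap VM, and compare the MB/s and latency figures.
diskspd.exe -c1G -b64K -d60 -o8 -t4 -r -w30 -Sh -L H:\GWMCache\testfile.dat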
In our case we found the shared network storage to be significantly slower than local server storage.
I don't know the full architecture diagram in our case as our customer's IT put it together, but it turned out to be something like this: the storage sharing was a fairly standard setup, but what we didn't know was that there were low-bandwidth firewalls between the WebMap servers and that storage. That resulted in slow throughput, which reduced WMTS performance - particularly when getting to the shared storage, as that required going through two low-bandwidth firewalls that were saturated. The IT team had not realised that geospatial applications - in particular the likes of WMTS or ECW serving - have a very high storage throughput requirement, and that with all the other enterprise apps running, the firewalls were saturated. Throughput/storage performance is one of the core bottlenecks of geospatial applications.
Please note that the above solution - with high-bandwidth firewalls and high storage performance - is generally the most desirable, as long as throughput is maintained.
In our case they couldn't fix the firewalls at the time, so we went with a different arrangement. Given the local drive performance was so much better than the shared server storage, we created a local share on one of the WebMap VMs and put the WMTS cache on it - after verifying that the throughput of three servers hitting the same local share did not degrade.
So there was only one low-bandwidth firewall to pass through, and performance was reasonably decent. (At least I think there was a firewall there - I didn't get all the details.)
We also put the GWMCache on each individual server. I believe that normally in a cluster the GWMCache would also go on shared storage to ensure each server can access all cache files generated by the other servers. (With WMS/WMTS/WFS that is not as big a concern, as the GWMCache can be 'private' to each server; other, generally older, applications that use the shared GWMCache to serve up files via URL do need a commonly accessible cache.)
To ensure that all caches were accessible from all servers, we used an old sharing scheme:
a. Create a subfolder under the h:\GWMCache folder on each server:
called CacheA on ServerA, shared as \\ServerA\GWMCache\CacheA
called CacheB on ServerB, shared as \\ServerB\GWMCache\CacheB
called CacheC on ServerC, shared as \\ServerC\GWMCache\CacheC
i. Properties for each CacheX
1. Security – add read access for ServerAServiceAccount; ServerBServiceAccount; ServerCServiceAccount
2. Share CacheX and grant read access to ServerAServiceAccount; ServerBServiceAccount; ServerCServiceAccount
(A scripted sketch of this step follows below.)
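If you would rather script this than click through the folder properties dialogs, a minimal PowerShell sketch for ServerA might look like the following. The DOMAIN\ prefix and the service account names are placeholders for whatever accounts your WebMap application pools actually run as; repeat the same pattern on ServerB and ServerC with their own CacheX folder.

# Run on ServerA: create the local cache subfolder.
New-Item -ItemType Directory -Path 'H:\GWMCache\CacheA' -Force

# NTFS security: grant read access to the three service accounts ((OI)(CI) = inherit to files and subfolders).
icacls 'H:\GWMCache\CacheA' /grant 'DOMAIN\ServerAServiceAccount:(OI)(CI)R'
icacls 'H:\GWMCache\CacheA' /grant 'DOMAIN\ServerBServiceAccount:(OI)(CI)R'
icacls 'H:\GWMCache\CacheA' /grant 'DOMAIN\ServerCServiceAccount:(OI)(CI)R'

# Share the folder and grant read access on the share as well.
New-SmbShare -Name 'CacheA' -Path 'H:\GWMCache\CacheA' -ReadAccess 'DOMAIN\ServerAServiceAccount','DOMAIN\ServerBServiceAccount','DOMAIN\ServerCServiceAccount'

Note that the steps above mention both \\ServerA\GWMCache\CacheA and \\ServerA\CacheA as the share path; the sketch assumes the share is simply named CacheA, so adjust the -Name value to match whichever convention you actually use.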
b. In IIS Manager, expand GWMCache
i. Create 3 virtual directories under GWMCache, with the physical path depending on which server you are configuring:

Alias     Physical Path on Server A    Physical Path on Server B    Physical Path on Server C
CacheA    H:\GWMCache\CacheA           \\ServerA\CacheA             \\ServerA\CacheA
CacheB    \\ServerB\CacheB             H:\GWMCache\CacheB           \\ServerB\CacheB
CacheC    \\ServerC\CacheC             \\ServerC\CacheC             H:\GWMCache\CacheC

Each server should end up with the same three virtual directories under GWMCache in IIS, with each of CacheA, CacheB and CacheC resolving to the appropriate server. (A scripted sketch of this step follows below.)
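Purely as a sketch, the virtual directories can also be created from PowerShell with the WebAdministration module. This assumes GWMCache is configured as an IIS application under the Default Web Site - if in your install it is only a virtual directory, the site/application names below will need adjusting. The example shows ServerA; ServerB and ServerC are the same with their own local H:\ path swapped in.

# Run elevated on ServerA: the local cache is served from H:, the other two caches via UNC.
Import-Module WebAdministration
New-WebVirtualDirectory -Site 'Default Web Site' -Application 'GWMCache' -Name 'CacheA' -PhysicalPath 'H:\GWMCache\CacheA'
New-WebVirtualDirectory -Site 'Default Web Site' -Application 'GWMCache' -Name 'CacheB' -PhysicalPath '\\ServerB\CacheB'
New-WebVirtualDirectory -Site 'Default Web Site' -Application 'GWMCache' -Name 'CacheC' -PhysicalPath '\\ServerC\CacheC'
# For the UNC paths the application pool identity (or a 'connect as' account) must have read access to the remote share.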
c. Update GeoMedia WebMap cache setting
i. Launch adminconsole
ii. GeoMedia WebMap>System Settings
1. Change the Virtual Directory setting to:
for server A: GWMCache/CacheA
for server B: GWMCache/CacheB
for server C: GWMCache/CacheC
2. Use Configuration Test to verify cache works on each server.
Again - only do the above if your shared network storage is significantly slower than local storage and you're doing something other than WMS/WFS/WMTS.