Hexagon Geospatial

WebGIS

Need a push in the right direction when configuring WebMap, Portal or SDI services? Looking for hints and tips, or just looking for ideas and information? The WebGIS discussion board is where you can start those discussions, connect and share information.
Super Contributor
Posts: 334
Registered: 10-12-2015

WMTS Cache in cluster (load balanced farm)

I'm interested in how others handle the WMTS Cache folder location when using a load-balanced farm.

 

In our scenario we believe the network storage is about 30% slower than local VM disk storage (rough benchmark).

(high level architecture can be seen at http://community.hexagongeospatial.com/t5/Support-ERDAS-APOLLO/Load-balanced-farm-fast-storage-ideas...)

 

I'm interested in

  • Where do others store the WMTS Cache in a similar multi-node environment? e.g. on the network storage or on the local VMs.
    • If on network storage only, what storage architecture is used to keep performance up? Or do you just accept the loss in performance of retrieving from network storage instead of local storage?
  • Does anyone hold the WMTS Cache on all the nodes in the farm? e.g. write to one node or to network storage and then sync to local storage across all nodes? If so, how do you accomplish the sync process? (A rough sketch of the kind of sync I have in mind is below.)
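For illustration, the kind of sync I have in mind is a scheduled robocopy mirror from the 'master' copy down to each node - a rough PowerShell sketch only, with placeholder paths:

# Scheduled on each node; the UNC and local paths are placeholders.
# Mirror the master WMTS cache from the network share down to local disk:
# /MIR mirrors (including deletions), /MT copies multithreaded for lots of small tiles.
robocopy '\\storage\WMTSCache' 'D:\WMTSCache' /MIR /MT:16 /R:2 /W:5 /LOG:C:\Logs\wmts-sync.log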

Thanks for any ideas or insights.

Shaun

Regular Contributor
Posts: 246
Registered: 10-26-2015

Re: WMTS Cache in cluster (load balanced farm)

Hi Shaun,

I recently worked with a customer to implement a load-balanced GeoMedia WebMap solution running a WMTS that stores its tile cache on a network share. The servers are VMware virtual machines; I don't know anything about the underlying storage setup.

 

There are currently two servers in the server pool accessing tiles from the same network share. I decided to store the tile cache on a network share to reduce the administration overhead of maintaining it on each server, as the tile cache is deleted weekly and certain levels are pregenerated using Cache Filler Manager. The WMTS is then left in 'on demand' mode to generate tiles on the fly for the remaining levels as users request them.
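In case it is useful, a weekly deletion like that can be as simple as a scheduled PowerShell job; a minimal sketch (the share path below is illustrative, not the customer's actual path):

# Runs weekly outside peak hours; the UNC path is illustrative only.
# Empty the WMTS tile cache so Cache Filler Manager can pregenerate the lower levels afresh.
Remove-Item -Path '\\fileserver\WMTSCache\*' -Recurse -Force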

 

I did no formal testing of the network share performance, but this system has been in production since June 2015 with no reported performance issues with the WMTS.

 

HTH

 

Colin

LST
Occasional Contributor
Posts: 7
Registered: 06-13-2016

Re: WMTS Cache in cluster (load balanced farm)

Hi Colin

 

One of our customers is also interested in setting up a load-balanced farm to reduce the time taken to generate the tile cache for his different WMTS instances. Is there any documentation available on how to implement such a load-balanced server architecture?

 

Thank you for your answer.

 

Best regards,

Laura 

Super Contributor
Posts: 334
Registered: 10-12-2015

Re: WMTS Cache in cluster (load balanced farm)

There is no 'one' load balanced farm config - that would depend on many factors outside of WebMap considerations.

 

In our particular experience the load balancing environment was already in place - provided by the IT infrastructure.

However we did test it and found storage throughput problems.

So first up - test your environment. 

e.g. do some file storage benchmarking using (say) diskspd. A brief example of using diskspd can be found at https://community.hexagongeospatial.com/t5/ERDAS-APOLLO-ECW-JP2/Load-balanced-farm-fast-storage-idea... - it is not authoritative, just what I did in my first play with diskspd.
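For a rough idea of the sort of run I mean, something like the following (PowerShell, with illustrative test-file paths and sizes - not our actual figures) lets you compare local disk against the network share:

# 60-second random-read test: 1 GB test file, 4 threads, 8 outstanding I/Os each,
# 64 KB blocks, software/hardware caching disabled (-Sh), latency stats (-L).
diskspd.exe -c1G -d60 -r -b64K -t4 -o8 -Sh -L 'D:\temp\testfile.dat'
diskspd.exe -c1G -d60 -r -b64K -t4 -o8 -Sh -L '\\fileserver\share\testfile.dat'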

 

In our case we found the shared network storage to be significantly slower than local server storage.

 

I don't have the full architecture diagram in our case, as our customer's IT put it together, but it turned out to be something like:

Storage-LowBandwidthFirewall.png

i.e. the storage sharing was a fairly standard setup, but what we didn't know was that there were low-bandwidth firewalls between the WebMap servers and that storage. That resulted in slow throughput, which reduced the performance of the WMTS - particularly getting to the shared storage, as that required going through two low-bandwidth firewalls that were saturated. The IT team had not realised that geospatial applications - in particular the likes of WMTS or ECW sharing - have a very high storage throughput requirement, and that with all the other enterprise apps running the firewalls were saturated. Throughput/storage performance is one of the core bottlenecks of geospatial applications.

 

Please note that the above solution - with high-bandwidth firewalls and high storage performance - is generally the most desirable, as long as throughput is maintained.

 

In our case they couldn't fix the firewalls at the time, so we went with a different arrangement. Given that local drive performance was so much better than the shared server storage, we created a local share on one of the WebMap VMs and put the WMTS cache on it - after verifying that throughput did not degrade with 3 servers hitting the same local share.

something like:

WMTS-HeldLocal.png

So there was only one low-bandwidth firewall to pass through, and performance was reasonably decent. (At least I think there was a firewall there - I didn't get all the details.)

 

We also put the GWMCache on each individual server. I believe that normally in a cluster the GWMCache would also go on shared storage to ensure each server can access all the cache files generated by the other servers. (With WMS/WMTS/WFS that is not as big a concern, as the GWMCache can be 'private' to each server; for other, generally older, applications that use a shared GWMCache to serve up files via URL, there is a need for a commonly accessible cache.)

To ensure that all caches were accessible from all servers, we used an old sharing scheme:


a. Create a subfolder under the H:\GWMCache folder:
called CacheA on ServerA, shared as \\ServerA\GWMCache\CacheA
called CacheB on ServerB, shared as \\ServerB\GWMCache\CacheB
called CacheC on ServerC, shared as \\ServerC\GWMCache\CacheC


i. Properties for each CacheX
1. Security – Add read access to
ServerAServiceAccount;ServerBServiceAccount;ServerCServiceAccount

2. Share CacheX and grant read access to ServerAServiceAccount;ServerBServiceAccount;ServerCServiceAccount
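As a rough PowerShell sketch of step a on ServerA (run elevated; the DOMAIN\...ServiceAccount names are placeholders for whatever accounts your WebMap app pools run as, and it assumes a Windows Server version with the SmbShare module):

# Create the cache subfolder on ServerA
New-Item -ItemType Directory -Path 'H:\GWMCache\CacheA' -Force

# NTFS: grant read access to each server's WebMap service account (placeholder names)
$accounts = 'DOMAIN\ServerAServiceAccount','DOMAIN\ServerBServiceAccount','DOMAIN\ServerCServiceAccount'
foreach ($acct in $accounts) {
    icacls 'H:\GWMCache\CacheA' /grant "${acct}:(OI)(CI)R"
}

# Share the folder read-only to the same accounts (reachable as \\ServerA\CacheA)
New-SmbShare -Name 'CacheA' -Path 'H:\GWMCache\CacheA' -ReadAccess $accounts

Repeat on ServerB and ServerC for CacheB and CacheC respectively.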

b. In IIS Manager, expand GWMCache
i. Create 3 virtual directories under GWMCache:

 

Alias      Physical Path (Server A)    Physical Path (Server B)    Physical Path (Server C)

CacheA     H:\GWMCache\CacheA          \\ServerA\CacheA            \\ServerA\CacheA

CacheB     \\ServerB\CacheB            H:\GWMCache\CacheB          \\ServerB\CacheB

CacheC     \\ServerC\CacheC            \\ServerC\CacheC            H:\GWMCache\CacheC

 

Each server should end up with IIS looking like:
IIS-GWMCache.png
With each of CacheA, CacheB, CacheC resolving to the appropriate server.
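If you prefer scripting to clicking through IIS Manager, step b for Server B might look something like this in PowerShell (it assumes GWMCache is an application under 'Default Web Site' - adjust the site and application names to match your install):

# Run on ServerB; requires the WebAdministration module.
Import-Module WebAdministration

# Local cache as a local path, the other servers' caches via their shares
New-WebVirtualDirectory -Site 'Default Web Site' -Application 'GWMCache' -Name 'CacheA' -PhysicalPath '\\ServerA\CacheA'
New-WebVirtualDirectory -Site 'Default Web Site' -Application 'GWMCache' -Name 'CacheB' -PhysicalPath 'H:\GWMCache\CacheB'
New-WebVirtualDirectory -Site 'Default Web Site' -Application 'GWMCache' -Name 'CacheC' -PhysicalPath '\\ServerC\CacheC'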


c. Update the GeoMedia WebMap cache setting
i. Launch the adminconsole
ii. GeoMedia WebMap > System Settings
iii. Cache
1. Change the Virtual Directory to:
for Server A: GWMCache/CacheA
for Server B: GWMCache/CacheB
for Server C: GWMCache/CacheC
e.g. for Server B:

GWMShareSetting.png

2. Use Configuration Test to verify cache works on each server.

 

Again - only do the above if your shared network storage is significantly slower than local storage and you're doing something other than WMS/WFS/WMTS.

 

 

 
