
ERDAS APOLLO & ECW/JP2

Regular Contributor
Posts: 299
Registered: 10-12-2015

Load balanced farm - fast storage ideas

Client has set up storage for a new load-balanced farm.

Going from: a single VM with all content (ECW, WMTS, config, etc.) on local VM disk.

Going to: 2-3 VMs behind a NetScaler load balancer, with shared storage (ECW, WMTS, config, etc.) on a central file server. See diagram below.

 

Following a good suggestion (thanks Chris), I ran some benchmark testing on their new storage and found that access from the new file server was significantly slower than the current setup of accessing local VM disk.

 

The client is wondering what storage configurations others might be using when introducing load-balanced farms or similar scenarios.

 

[Diagram: Arch-StorageOverview2.png]

Software: Apollo Professional (core and catalog in use), GeoMedia WebMap, Geospatial SDI. Actively using ECW from Apollo and WMTS generated by WebMap.

 

Tests I ran, in case there are better ways of testing:

 

  1. Large file read throughput (simulate Apollo ECW read)
    diskspd -d30 -o4 -t8 -h -r -w0 -L -Z1G -c14G  \\DataStore.xxx.xxx\GISData\Imagery\DiskspdTest\iotestr.dat > H:\Diskspd\DiskSpeedLargeReadResults_Network3.txt
    (-w0 means 100% read, no write; -c14G means a 14 GB test file; -d30 runs for 30 seconds; -o4 means 4 outstanding I/O requests per target per thread; -t8 means 8 threads per target)

 2. Small file write throughput (WebMap output / WMTS generation)

diskspd -d1 -o4 -t24 -h -r -w100 -L -Z1G -c75K  \\DataStore.xxx.xxx\GISData\Imagery\DiskspdTest\iotestw.dat > H:\Diskspd\DiskSpeedSmallWriteResults_Network3.txt
(-d1 is a 1-second run (short); -t24 is 24 threads (hopefully simulating 24 map servers); -w100 is 100% write; -c75K is a 75 KB file)

  

3. Small file read throughput (WMTS read by end users)

diskspd -d1 -o4 -t32 -h -r -w0 -L -Z1G -c75K  \\DataStore.xxx.xxx\GISData\Imagery\DiskspdTest\iotestw.dat > H:\Diskspd\DiskSpeedSmallReadResults_Network3.txt

(-d1 is a 1-second run (short); -t32 is 32 threads (hopefully simulating 32 simultaneous WMTS tile requests); -w0 is 100% read, no write; -c75K is a 75 KB file)
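In case it helps anyone repeat this, below is a rough Python wrapper (just a sketch, not something I've hardened) that runs the same three tests against both the local disk and the network share so the two sets of results come out directly comparable. The local path and output folder are placeholders to substitute with your own, and it assumes diskspd.exe is on the PATH.

import subprocess
from pathlib import Path

# Placeholder paths -- substitute your own local test folder and results folder.
TARGETS = {
    "LocalVM": r"D:\DiskspdTest",                                   # hypothetical local path
    "Network": r"\\DataStore.xxx.xxx\GISData\Imagery\DiskspdTest",  # UNC share from the tests above
}
OUT_DIR = Path(r"H:\Diskspd")

# (name, diskspd arguments, test file) copied from the three tests above.
TESTS = [
    ("LargeRead",  ["-d30", "-o4", "-t8",  "-h", "-r", "-w0",   "-L", "-Z1G", "-c14G"], "iotestr.dat"),
    ("SmallWrite", ["-d1",  "-o4", "-t24", "-h", "-r", "-w100", "-L", "-Z1G", "-c75K"], "iotestw.dat"),
    ("SmallRead",  ["-d1",  "-o4", "-t32", "-h", "-r", "-w0",   "-L", "-Z1G", "-c75K"], "iotestw.dat"),
]

OUT_DIR.mkdir(parents=True, exist_ok=True)
for target_name, target_path in TARGETS.items():
    for test_name, args, data_file in TESTS:
        cmd = ["diskspd"] + args + [rf"{target_path}\{data_file}"]
        result = subprocess.run(cmd, capture_output=True, text=True)
        report = OUT_DIR / f"{test_name}_{target_name}.txt"
        report.write_text(result.stdout)
        print(f"{test_name} on {target_name} -> {report}")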

 

 

 

Regular Contributor
Posts: 227
Registered: 10-26-2015

Re: Load balanced farm - fast storage ideas

Hi Shaun,

Thanks for sharing this information. It has been interesting to learn about using the Diskspd utility.

 

I have run one of your tests on an environment to compare performance between virtual machine hard disk and network share.

Could you tell me which particular parts of the report you compared?

Was it "AvgLat" timings and/or Total (ms) values?

 

Are you able to share any of your results?

 

For the 'Small file write throughput' test I see Total (ms) timings of 5-18 for the virtual machine hard disk and 110-120 for the network share.

 

Thanks,

Colin

Regular Contributor
Posts: 299
Registered: 10-12-2015

Re: Load balanced farm - fast storage ideas

Hi Colin,

 

For the 'Small file write throughput' test my results were:

Target          | Total (ms) [99th percentile] | I/O per s [total] | MB/s (total) | AvgLat (total) | LatStdDev | Read (ms) max | CPU Usage (Avg)
Local VM Drive  | 40.143                       | 4879.93           | 305          | 26.214         | 4.992     | 46.891        | 2.34%
Network Storage | 75.568                       | 1737.06           | 108.57       | 73.687         | 0.712     | 77.249        | 3.69%
(Total (ms) and AvgLat: smaller is better; I/O per s: bigger is better)

My understanding is you want Total (ms) 99th percentile small and AvgLat (latency) small.
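In case it helps, here's a rough Python sketch I'd use to pull those two figures (the 99th percentile Total (ms) from the -L latency table and the AvgLat from the "Total IO" summary) out of the diskspd text reports. The parsing assumes the report layout produced by the commands above, so treat the patterns as a starting point and adjust them if your output is laid out differently.

import re
import sys

def summarise(report_path):
    text = open(report_path).read()

    # 99th percentile row of the latency table, e.g. "   99th |  38.11 |    N/A |  40.143";
    # the last number on the line is the Total (ms) column.
    pct99 = None
    m = re.search(r"^\s*99th\s*\|(.*)$", text, re.MULTILINE)
    if m:
        values = re.findall(r"[\d.]+", m.group(1))
        if values:
            pct99 = float(values[-1])

    # "total:" row of the "Total IO" section; with -L the fifth value should be AvgLat (ms)
    # (bytes, I/Os, MB/s, I/O per s, AvgLat, LatStdDev) -- check this against your own report.
    avg_lat = None
    m = re.search(r"^Total IO.*?^total:\s*([^\n]*)", text, re.MULTILINE | re.DOTALL)
    if m:
        values = re.findall(r"[\d.]+", m.group(1))
        if len(values) >= 5:
            avg_lat = float(values[4])

    return pct99, avg_lat

if __name__ == "__main__":
    for path in sys.argv[1:]:
        pct99, avg_lat = summarise(path)
        print(f"{path}: 99th percentile Total (ms) = {pct99}, AvgLat (ms) = {avg_lat}")

Save it as, say, diskspd_summary.py and pass the report files as arguments (the *_Network3.txt files and their local-disk equivalents).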

 

Large file read over 30 seconds to simulate Apollo ECW reads

Target          | Total (ms) [99th percentile] | I/O per s [total] | MB/s (total) | AvgLat (total) | LatStdDev | Read (ms) max | CPU Usage (Avg)
Local VM Drive  | 0.55                         | 138451            | 8653.19      | 0.229          | 0.156     | 47.857        | 34.61%
Network Storage | 22.142                       | 1759.7            | 109.98       | 18.18          | 1.138     | 43.263        | 5.93%
(Total (ms) and AvgLat: smaller is better; I/O per s: bigger is better)

 

 

Small file write to simulate WebMap or WMTS writes

Target          | Total (ms) [99th percentile] | I/O per s [total] | MB/s (total) | AvgLat (total) | LatStdDev | Write (ms) max | CPU Usage (Avg)
Local VM Drive  | 76.837                       | 2378.92           | 148.68       | 40.368         | 15.066    | 83.366         | 0.62%
Network Storage | 118.137                      | 1075.3            | 67.21        | 86.454         | 16.369    | 119.903        | 2.77%
(Total (ms) and AvgLat: smaller is better; I/O per s: bigger is better)

 

This is the first time I've used Diskspd as well, and I'm still trying to decide whether I'm using it reasonably.
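To put the gap in rough terms, here's a quick calculation using only the throughput and average latency figures from the large-read and small-write tables above; the ratios are just those numbers divided out, nothing new measured.

# Figures taken straight from the tables above: (MB/s total, AvgLat ms) per target.
results = {
    "Large file read":  {"local": (8653.19, 0.229), "network": (109.98, 18.18)},
    "Small file write": {"local": (148.68, 40.368), "network": (67.21, 86.454)},
}

for test, r in results.items():
    local_mbs, local_lat = r["local"]
    net_mbs, net_lat = r["network"]
    print(f"{test}: network storage has ~{local_mbs / net_mbs:.0f}x lower throughput "
          f"and ~{net_lat / local_lat:.0f}x higher average latency than the local VM drive")

That works out to roughly 79x on the large file read and about 2x on the small file write.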

 

Shaun
