a month ago
I am loading a large MDB file into my software using GeoMedia Objects. I have noticed that sometimes this process takes about 15 seconds and other times it can take over an hour!! I have been looking at this problem a lot and recently used Process Monitor to see what is going on during the load. I noticed that when it takes a long time the data is being read in 4K chunks, and when it is quick it is always read in 32K chunks. Here is an image taken while the process was running very slowly through our software using GeoMedia Objects. This is not always the case; sometimes it runs fast, and then the blocks are 32K in size:
And this is the same file being loaded directly in GeoMedia. It only took a few seconds.
Why would this be the case? Something is making the objects read in 4K chunks sometimes and 32K chunks other times.
Any ideas on ways to force it to always be 32K???
4 weeks ago
I have no answer or solution.
I guess when GeoMedia opens the connection, it "tests" the access time to the storage medium (hard drive, network drive etc.) and uses that test result to determine the size of the chunk/block (or a similar method). If the medium's first response is slow, access will be slow ...
Is your MDB on a network drive? We often have performance problems with MDBs on network drives. Whenever possible we always try to save MDBs locally.
Beyond that, I don't know.
Maybe HxGN can say more about the internals.
4 weeks ago
Thank you for your response.
It is running from a Samsung SSD. The strange thing is that this happens intermittently for me. However, we have a customer who has the problem all the time.
Do you know what coding language the GeoMedia objects are written in?
Could it be a heap fragmentation issue? I have seen this before when working on C++ projects and had to write a memory manager to protect the heap when reading large data blocks (a rough sketch of the idea is below).
This isn't as much of a problem in .NET, where the managed heap expands as needed. Unfortunately a 32-bit program is still limited to 2 GB of address space unless the LARGEADDRESSAWARE flag is used to increase it (to 3 GB on 32-bit Windows, or 4 GB when the 32-bit process runs on 64-bit Windows). Even so, fragmentation can still occur.
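The idea behind that memory manager, very roughly, was to reuse one long-lived buffer instead of allocating a fresh block for every read. This is only an illustrative sketch; none of the names come from GeoMedia or from the actual code:

```cpp
#include <cstddef>
#include <vector>

// Keep one reusable buffer that only ever grows, so repeated large reads
// do not churn the heap with allocate/free cycles of varying sizes.
class ReusableReadBuffer
{
public:
    // Returns a pointer to at least `bytes` of storage, growing only when
    // a larger request than any seen before comes in.
    char* acquire(std::size_t bytes)
    {
        if (bytes > m_storage.size())
            m_storage.resize(bytes);
        return m_storage.data();
    }

private:
    std::vector<char> m_storage;
};
```

Because the buffer is reused and never shrinks, the allocator only sees a handful of growing allocations instead of thousands of differently sized ones, which is what tends to fragment the heap.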
I think you are right: something is "testing" the access and picking a data block size accordingly, whether because of some physical SSD parameter, memory heap allocation, or the phase of the moon?
I did read that the SSD block size is 4K. Another interesting thing is that I did some read tests from the SSD using 4K blocks and it is obviously very quick (a simplified version of that kind of test is sketched below). When I see the problem with my MDB there is a "large" pause between each 4K read, which is where the over-an-hour delay comes from. To put that in perspective, a 1 GB file, for example, read in 4K chunks is roughly 260,000 reads, so a pause of only 15 ms per read already adds up to more than an hour. So perhaps it is not the reading that is the problem but the processing of the data?
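For reference, a raw read test along these lines (simplified, with a placeholder file path rather than the real MDB) is enough to compare block sizes:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Time one sequential pass over a file, issuing one read per block so the
// OS sees a request pattern similar to what Process Monitor shows.
static double timeSequentialRead(const char* path, std::size_t blockSize)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return -1.0;

    std::vector<char> buffer(blockSize);
    auto start = std::chrono::steady_clock::now();

    while (std::fread(buffer.data(), 1, blockSize, f) == blockSize)
    {
        // Keep reading until the final (partial or empty) block.
    }

    std::fclose(f);
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

int main()
{
    // "test.mdb" is a placeholder; point it at any large local file.
    std::printf("4K blocks : %.2f s\n", timeSequentialRead("test.mdb", 4 * 1024));
    std::printf("32K blocks: %.2f s\n", timeSequentialRead("test.mdb", 32 * 1024));
    return 0;
}
```

On an SSD both runs finish quickly, which is the point: the raw 4K reads are not slow by themselves, so the per-read pause must come from whatever work happens between reads.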
Many thanks for your help!
4 weeks ago
Difficult to say without seeing the GeoMedia code. It is all speculation ...
And as a user you can't do anything (even if you can prove the heap fragmentation through memory monitoring) if the developers don't manage the memory properly.
The "old" parts of GeoMedia are mainly written in C/C++ (x86 COM) as far as I know. Newer components are based on .NET technology.
The reason for the problem may be found in GeoMedia, in Access, or in a middleware component. Hard to say "from the outside".
And mixing COM and .NET through the interoperability layer is not necessarily good for performance or stability ...
4 weeks ago
Hi Hesrah and Giulio,
Many thanks for your time in responding to my question. I will contact GeoMedia directly and see if they can do anything about this problem that I am seeing. I am sure other people must be seeing it too.
One of our customers is still using GeoMedia 6.1 so perhaps now is a good time for them to upgrade!