
M.App Enterprise

Discuss topics with other M.App Enterprise Product pioneers and experts to get the most out of it.
Frequent Contributor
Posts: 115
Registered: 04-05-2016

Limiting the vectorset that is loaded

Hi,

 

I would like to find a way to limit the amount of data that is loaded as a vector set in an Analyzer view without changing and republishing the (whole) vector set in M.App Studio.

 

We have a very large vector set that is published and cached in M.App Enterprise and that we use in an Analyzer view. The loading time and performance are very poor, so we need to limit the amount of data loaded and shown in the Analyzer view by dynamically creating a subset of the data via spatial or attribute filtering and loading only that subset in the App.

 

Any ideas on how to accomplish this? I haven't really understood the purpose and use case of a boundary dataset in the Feature Analyzer. Could it help here in any way?

 

Thanks and kind regards,

Sven

Frequent Contributor
Posts: 134
Registered: 04-11-2016

Re: Limiting the vectorset that is loaded

Hi Sven

 

A boundary dataset can be used as a dimension to query your data; more specifically, it allows the user to click on the boundary (polygon) features and return the results that are related to that boundary. It's my understanding that, at the time of writing, you need to define a join between the boundary and the feature data to make it work; a spatial query is not supported yet (Product Centre, correct me if I am wrong).

For example, say you have a feature class representing incident points and a boundary feature class representing city boundaries. You need a foreign key column in the incident point feature class that can be joined to the primary key in the city boundary feature class. Then, if you click on a city boundary feature, Analyzer will return the incidents that are joined to it. As mentioned earlier, this is not a spatial query; personally, I think it would be helpful if the Product Centre could make boundary spatial querying available in Analyzer. I have submitted this idea on the ideation board:

http://community.hexagongeospatial.com/t5/Product-Ideas/Mapp-Enterprise-Analyzer-Geometry-based-boun...
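
To make the join idea concrete, here is a minimal sketch of that primary/foreign key relationship outside of M.App Enterprise; the table and column names (incidents, cities, city_id, id) are made up for illustration:

# Incident points carry a foreign key (city_id) that references the
# city boundary's primary key (id); "clicking" a boundary amounts to
# selecting its key and returning the joined incidents.
import pandas as pd

incidents = pd.DataFrame({
    "incident_id": [1, 2, 3, 4],
    "city_id":     [10, 10, 20, 30],
    "severity":    ["low", "high", "low", "medium"],
})
cities = pd.DataFrame({
    "id":   [10, 20, 30],
    "name": ["Springfield", "Shelbyville", "Ogdenville"],
})

clicked_city_id = 10  # the boundary feature the user clicked
joined = incidents.merge(cities, left_on="city_id", right_on="id")
print(joined[joined["city_id"] == clicked_city_id])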

 

 

Regarding performance, I have played with an online CSV file that contains 130,000 point features. The initial loading took about 7-8 seconds (which is pretty good), and once the initial loading is complete, the performance in Analyzer is very good.

 

If you want to filter your data at the vector data setup level, I see there is a filter option in Vector Data; I wonder if this could help you, although I haven't played with it yet.

 

 

Cheers

Yuan

Frequent Contributor
Posts: 115
Registered: 04-05-2016

Re: Limiting the vectorset that is loaded

Hi Yuan,

 

Thank you for your well-written explanation of how the boundary dataset works and how to set it up. I also like your suggestion on the ideation board and gave it a bump with some Hexpoints ;-)

 

Unfortunately, if this is how boundaries work, they can't help me with my task of filtering the dataset when the Analyzer view is loaded. In my opinion the performance is very good (we have had good experience with a ~140,000-point dataset), but it is still quite limited: we want to extend this dataset to one of about 22 million points, which would be far too large to load into the view.

 

I have tried the filter expression when defining vector data and it works fine, but it also doesn't really help, because:

 

  1. It is a setting that has to be made in M.App Studio. I don't want the user to have administrative rights; I want them to be able to define the spatial extent of the App before loading it.
  2. After defining the new filtered vector data, we would have to republish the vector set, which would take too long: in our case ~15-20 minutes for 140,000 points.

I am looking for a dynamic solution, which would definitely have to include some customization: for instance, a preconnected map view where the user can draw a geometry and, after confirmation, the Analyzer view opens and loads only the features that intersect this geometry. I haven't been able to try this, since the provided documentation and examples for the Analyzer API are very sparse.
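
To give a rough idea of the pre-filtering step I have in mind, here is a sketch (e.g. with geopandas) that keeps only the points intersecting a drawn geometry; the file names, the x/y column names and the polygon coordinates are placeholders, since I have not been able to wire this to the Analyzer API yet:

# Keep only the points that intersect a user-drawn geometry and write the
# subset that the Analyzer view would load instead of the full dataset.
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon

points = pd.read_csv("all_points.csv")  # hypothetical export of the full point set
gdf = gpd.GeoDataFrame(
    points,
    geometry=gpd.points_from_xy(points["x"], points["y"]),
    crs="EPSG:4326",
)

# Geometry the user drew in the preconnected map view (placeholder coordinates).
drawn = Polygon([(8.0, 47.0), (9.0, 47.0), (9.0, 48.0), (8.0, 48.0)])

subset = gdf[gdf.intersects(drawn)]
subset.drop(columns="geometry").to_csv("analyzer_subset.csv", index=False)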

 

Do you think such a customization would be possible?

 

Thanks again and kind regards,

Sven

Frequent Contributor
Posts: 134
Registered: 04-11-2016

Re: Limiting the vectorset that is loaded

Hi Sven

 

A couple of things came to mind; not sure if they are going to be helpful.

I understand you want to allow the user to define the study area before Analyzer loads. Are these areas based on anything? E.g., do they represent townships, cities, etc.? If they are, what do you think of pre-filtering your 22 million points based on these area geometries and then creating separate Analyzer Mapps? E.g., if you have 22 million water intake points across a country, could you break them down by region? (I think you have already considered this, but it's worth a try.)
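
If it helps, the regional split itself could be done up front with a spatial join, something like the sketch below; the file names, the region_name column and the GeoPackage format are just assumptions for illustration:

# Assign every point to the region polygon containing it, then write one
# extract per region that each Analyzer Mapp can reference.
import geopandas as gpd

points = gpd.read_file("water_intake_points.gpkg")   # hypothetical source
regions = gpd.read_file("regions.gpkg")              # polygons with a 'region_name' column

joined = gpd.sjoin(points, regions[["region_name", "geometry"]],
                   how="inner", predicate="within")

for name, part in joined.groupby("region_name"):
    part.drop(columns="index_right").to_file(f"points_{name}.gpkg", driver="GPKG")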

 

I will leave it to the Product Centre to get back to you on the API that you can use for customisation. As you are aware, in the Analyzer application there is already a tool that allows you to draw a polygon feature and return the features that spatially overlap with it; you can then use the widgets to further query the returned features. But it doesn't quite meet your requirement of "pre-defining" areas before Analyzer launches.

 

 

Cheers

Yuan

Frequent Contributor
Posts: 115
Registered: 04-05-2016

Re: Limiting the vectorset that is loaded

[ Edited ]

Hi Yuan,

 

Thank you for your help and your suggestions.

 

The first approach is certainly viable, but it takes a lot of preprocessing time and it's rather difficult to manage all the regionally extracted datasets and Mapps. Also, we don't only need fixed regions as a filter; we also need filtering by attributes and by arbitrary geometries.

Your second suggestion doesn't help in my case, because the Analyzer view would still attempt to load the whole gigantic dataset during startup.

 

In the meantime I have found a solution where we statically reference an online CSV file as the dataset. We then produce this dataset dynamically as needed and overwrite the file.
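
Roughly, the regeneration step looks like the sketch below; the paths, column names and filter values are placeholders, and in practice the master data would be queried from a database rather than re-read from a flat file each time:

# Rebuild the filtered subset on demand and atomically overwrite the CSV
# that the Analyzer view statically references.
import os
import pandas as pd

MASTER = "master_points.csv"                     # full dataset (placeholder path)
PUBLISHED = "/var/www/data/analyzer_points.csv"  # file behind the static URL (placeholder)

def rebuild_subset(region, min_x, max_x, min_y, max_y):
    df = pd.read_csv(MASTER)
    subset = df[
        (df["region"] == region)
        & df["x"].between(min_x, max_x)
        & df["y"].between(min_y, max_y)
    ]
    # Write to a temp file first, then replace, so the view never reads a half-written CSV.
    tmp = PUBLISHED + ".tmp"
    subset.to_csv(tmp, index=False)
    os.replace(tmp, PUBLISHED)

rebuild_subset("north", 8.0, 9.0, 47.0, 48.0)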

 

Regards,

Sven
