Trying to gather all client history information so we can load it into MongoDB and do some crunching on it.
According to the API docs for CMX, there is nothing to prevent using /api/location/v1/history/clients to get history on all clients, whether there is one client or several hundred. However, the CMX engineers suggest using it with a MAC address and query conditions such as a date.
Try something like this for CMX 10 and 8, respectively:
|Parameter|Description|
|---|---|
|id|MAC address (all), IP address (client), or username (client); optional|
|mapHierarchy|String containing campus>building>floor|
|currentlyTracked|Boolean: whether the client is currently tracked by the MSE|
|pageSize|Integer (default 5000)|
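As a minimal sketch of assembling such a query, using the parameter names from the table above (the `page` parameter and the exact URL scheme are assumptions; verify them against the API docs for your CMX version):

```python
from urllib.parse import urlencode

def build_history_url(host, device_id=None, map_hierarchy=None,
                      currently_tracked=None, page_size=10, page=1):
    """Build a CMX client-history query URL.

    Parameter names (id, mapHierarchy, currentlyTracked, pageSize)
    come from the API parameter table; 'page' is an assumption.
    """
    params = {"pageSize": page_size, "page": page}
    if device_id is not None:
        params["id"] = device_id
    if map_hierarchy is not None:
        params["mapHierarchy"] = map_hierarchy
    if currently_tracked is not None:
        params["currentlyTracked"] = str(currently_tracked).lower()
    return ("https://%s/api/location/v1/history/clients?" % host
            + urlencode(params))

# Example: query one floor with a small page size.
url = build_history_url("cmx.example.com",
                        map_hierarchy="Campus>Building>Floor1",
                        page_size=10)
```

The resulting URL would then be fetched with basic-auth credentials for the CMX API user.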
Is it possible to get credentials set up to query any database underneath MSE, or is the API the only way to get data out? I am in a chicken-and-egg situation: they don't yet have clients going in, but they want to analyze all client movement. We'd like to dump everything into MongoDB rather than navigate the data through the API, which is very tedious. We have yet to decide on the best approach, even if it means writing a program that uses the API to pull all the information we want.
Regarding CMX 8.0, the customer would like to know whether there is a process for querying any database underneath MSE, or another (non-API) way to get data out. He is running into timeouts when pulling client history information.
With MSE 8.0 there is an underlying Oracle database, and the customer could theoretically query it directly, but this is not something the BU supports or recommends.
The API is the recommended method; if they are getting too much data, they should request records for only one specific day at a time.
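Querying one day at a time can be done by stepping through the date range and issuing one request per day. A small helper for generating those daily windows (the name of the date query parameter on the CMX side is an assumption to verify in the docs):

```python
from datetime import date, timedelta

def daily_windows(start, end):
    """Yield one ISO date string per day from start to end inclusive,
    so each history query covers a single day."""
    d = start
    while d <= end:
        yield d.isoformat()
        d += timedelta(days=1)

# Example: three one-day queries instead of one three-day query.
days = list(daily_windows(date(2016, 1, 1), date(2016, 1, 3)))
```

Each yielded date would be passed as the date condition on a separate history request, keeping every response small enough to avoid the timeouts.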
After trying a variety of things, it appears that pageSize is the key to not overwhelming anything. A pageSize of 10, for example, works consistently; paring it down that far reliably returns results. Narrowing mapHierarchy also helps when an entire area is not required. So I think we can proceed by writing code that is fed map information and then fetches the history pages in a loop.