Why would a normal person build their own spatiotemporal database?

Miljen Mikić - 06/11/2019 | 4 min read

“Storing and querying datasets that contain objects in a geometric space have always required special treatment. The choice of data structures and query algorithms can easily make the difference between a query that runs in seconds or in days.”

– Werner Vogels, Amazon CTO

Mireo has been in the vehicle tracking business since 2001, and during that time we have learned many valuable lessons. One of the most important is that these systems usually don't scale. What does this mean? Once the number of records exceeds a few billion, processing the data starts to take disproportionately more time. This holds for any system that processes data produced by moving objects but is not designed to be natively horizontally scalable.

In the late summer of 2014, we received an interesting and challenging business proposal: to design and develop a system capable of live-tracking 2 million trucks and storing their log history for a period of 5 years. Can you imagine the amount of spatiotemporal data we would need to store and process? To perform map-matching correctly (i.e. to detect the road a vehicle is currently on, based on its imprecise GPS position), a vehicle needs to send its GPS positions relatively frequently. For the sake of simplicity, let's approximate this with “1 position every 3 seconds” while a vehicle is driving. Now, let's do the math:

2,000,000 vehicles

× 1,200 GPS positions per vehicle per hour

× 43,800 hours in 5 years

= 105,120,000,000,000 positions
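
As a quick sanity check, here is the same back-of-envelope arithmetic in Python (assuming, simplistically, that every vehicle reports around the clock):

```python
vehicles = 2_000_000
positions_per_hour = 3600 // 3    # one GPS position every 3 seconds
hours_in_5_years = 5 * 365 * 24   # 43,800 hours (ignoring leap days)

total = vehicles * positions_per_hour * hours_in_5_years
print(f"{total:,}")               # 105,120,000,000,000
```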

Yes, that's more than 105 trillion. Needless to say, our old vehicle tracking system, which relied on a combination of relational and NoSQL databases, would fall apart under such an amount of data. Additionally, the new system had to resolve queries such as “fetch the total number of truck visits to the specified factory”.

There were several difficult problems to solve in order to fulfill this kind of business proposal, such as load distribution, map-matching, and networking (tracking 2 million vehicles means 2 million permanently open TCP connections). However, the most difficult one was definitely choosing the right storage. Where should we store the data? In a file system? In a relational database? In a NoSQL database? In general-purpose distributed storage like Hadoop? Each of these options presented problems of its own: traditional relational SQL databases are not designed to be distributed and horizontally scalable; NoSQL key-value databases scatter records with the same index all over the disk (a performance killer!); and with clustered file systems (Ceph, GlusterFS, HDFS, ...) the level of abstraction they introduce makes it impossible to use the maximal disk bandwidth.

Storing the data had to be efficient, but the most important requirement was fast searching. For example, the execution time of the query “fetch all positions and timestamps at which 4 specified trucks met each other” could not exceed a few seconds even in the worst-case scenario. Additionally, we wanted the possibility of OLAP-like queries, grouping, joining... In other words, some special kind of relational-like database would be the best choice. But which one? We had to do some serious brainstorming.
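
To make the semantics of such a query concrete, here is a deliberately naive, in-memory Python sketch; the function, record layout, and tolerances are our illustrative assumptions, not an actual SpaceTime API:

```python
from collections import defaultdict

def meetings(records, truck_ids, eps_sec=60, eps_deg=0.001):
    """records: iterable of (truck_id, unix_ts, lat, lon) tuples.
    Trucks 'meet' when they fall into the same coarse time/space cell."""
    wanted = set(truck_ids)
    cells = defaultdict(lambda: (set(), []))
    for tid, ts, lat, lon in records:
        if tid in wanted:
            cell = (ts // eps_sec, round(lat / eps_deg), round(lon / eps_deg))
            cells[cell][0].add(tid)
            cells[cell][1].append((tid, ts, lat, lon))
    # Cells where all requested trucks appear are the candidate meetings.
    # (A correct engine would also probe neighboring cells for boundary cases.)
    return [recs for ids, recs in cells.values() if ids == wanted]
```

A real engine would have to run something like this over trillions of records spread across a cluster, which is exactly where indexing and partitioning become critical.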

A mantra in the software engineering world is “don't reinvent the wheel”. Naturally, we started by analyzing the current state-of-the-art solutions for processing massive spatiotemporal data. Besides the other obvious properties of such a system, we quickly figured out that the most critical part was the way the system indexes multidimensional data. It turned out that three major multidimensional index types were used in major spatial systems: the R-tree family (PostgreSQL/PostGIS, Oracle Spatial, MemSQL, ...), the kd-tree family (Oracle Spatial quadtree index), and Geohash (GeoMesa and various product variants found on Google Cloud). Guess what? None of these solutions is suitable for indexing a huge amount of real-world spatiotemporal data! Let's see why.

Real-world traffic data is highly non-uniformly distributed: vehicles cluster in cities and on major roads, leaving most of the map nearly empty. Because of this, the R-tree family of indexes is practically useless at a large scale, since index traversal over high-density areas effectively degenerates into a linear scan. A similar, but much worse, effect occurs with fixed-grid space/time partition indexes like Geohash: traversal over high-density areas again becomes effectively linear, with no possibility of more granular partitioning. To make things worse, Geohash-based indexes can in theory yield good record filtering only if range queries are finitely bounded in all dimensions. For example, in the context of a vehicle tracking system, it is natural to store and index triplets of vehicle ID, timestamp, and location. Then a simple query of the form “fetch the full history of a single vehicle for a period of one year” executed over a Geohash index effectively scans the complete database linearly, because the query is bounded by vehicle ID and timestamp, but not by location!
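
For intuition, a Geohash-style key is essentially the bit-interleaving of quantized coordinates. A minimal sketch (our own simplification of the idea, not GeoMesa's actual encoding):

```python
def grid_key(lat, lon, bits=16):
    """Geohash-style key: interleave the bits of quantized lat/lon so that
    points close in space mostly share a common key prefix."""
    y = min(int((lat + 90.0) / 180.0 * (1 << bits)), (1 << bits) - 1)
    x = min(int((lon + 180.0) / 360.0 * (1 << bits)), (1 << bits) - 1)
    key = 0
    for i in reversed(range(bits)):   # most significant bits first
        key = (key << 2) | (((x >> i) & 1) << 1) | ((y >> i) & 1)
    return key
```

All of the filtering power lives in the key prefix, so a query that leaves location unbounded constrains no prefix at all, and every key range has to be scanned.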

The kd-tree family of indexes is the most efficient way of indexing n-dimensional interval data, but it has one huge drawback: the trees are built on top of static data sets. It is possible to efficiently index even a very large interval data set with a kd-tree type of index, but only if the underlying record set never changes afterwards. There have been attempts in the scientific literature to loosen this requirement, but to the best of our knowledge, no commercially available system uses any of those approaches. In our scenario, new data keeps arriving all the time, so static data sets were not an option.
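
The classic construction makes this static assumption obvious: each level splits the remaining points at the median, which requires the entire data set up front. A minimal sketch:

```python
def build_kdtree(points, depth=0):
    """Classic kd-tree construction: split on one axis per level, at the
    median point. The median split keeps the tree balanced, and it is
    precisely what assumes the whole data set is known in advance."""
    if not points:
        return None
    axis = depth % len(points[0])                  # cycle through dimensions
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left":  build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

# e.g. (x, y, t) triplets; inserting new points later would skew the tree
tree = build_kdtree([(3, 1, 7), (0, 5, 2), (9, 6, 1), (4, 4, 4)])
```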

To wrap up, a system capable of storing and analyzing large amounts of real-time spatiotemporal data had to have at least the following properties:

  1. it had to be designed from the ground up as a clustered, distributed system, with data-processing modules deployed as close as possible to the physical data storage (in other words, traditional RDBMSs are useless for this purpose)
  2. it had to be highly available, fault-tolerant and natively horizontally scalable
  3. it had to provide an index on spatiotemporal records, by far the most important ingredient of an efficient spatiotemporal processing system; as we saw, none of the current state-of-the-art spatiotemporal indexes was suitable for dealing with the large number of records coming from moving objects
  4. the partitioning scheme had to exploit the spatiotemporal properties of the underlying data: records that are close (in terms of vehicle ID, position, and timestamp) needed to end up in the same or adjacent shards (see the sketch after this list)
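
Purely to illustrate the last point (this is a hypothetical scheme of ours, not SpaceTime's actual partitioning), a shard function could derive the shard from a coarse spatiotemporal cell, so that nearby records land together:

```python
def shard_for(vehicle_id, ts, lat, lon, n_shards=64,
              t_window=3600, cell_deg=0.5):
    """Hypothetical sketch: all records of one vehicle within the same hour
    and the same ~50 km grid cell map to the same shard. A production scheme
    would use an order-preserving mapping rather than a plain hash, so that
    neighboring cells also land in neighboring shards."""
    cell = (vehicle_id, ts // t_window,
            int(lat // cell_deg), int(lon // cell_deg))
    return hash(cell) % n_shards
```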

We had to come up with something new, something absurdly fast and massively scalable. After 5 years of hard work by a group of extremely talented people, we are proud to present the Mireo SpaceTime Cluster. For details about SpaceTime, read our next blog post here.

On the other hand, if you want to see the system in action immediately, you can book a demo.

If you want more detailed information about the SpaceTime solution, visit our definitive guide to an unusual and fundamentally new approach to using spatiotemporal data in our everyday routine.
