The Race for Spatial Database Innovation

Paul Nalos
November 11, 2009 · 3 min
Using technology to map, explore, and reason about our world spatially is a powerful concept. Part of the challenge is efficiently storing and analyzing large volumes of information, and spatial databases have evolved to fill that role. However, the lines are blurry; not everyone agrees which functionality should be in the database and which should be elsewhere. For example, what kinds of spatial data should be natively understood by the database? Just points, lines, and polygons? Or should we add curves, rasters, 3D models, networks, TINs, point clouds, and more? Also, what kinds of transformation and analysis should be done inside the database, and what should be done in the application layer, e.g. with GIS software?
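
To make that question concrete, here is a minimal sketch, in Python, of the same buffer operation done two ways: pushed down to a PostGIS-enabled database, and computed in the application layer with Shapely. The connection string, the "parcels" table, and the 100-unit radius are placeholders for illustration, not details from this post.

    # Sketch: the same analysis in the database vs. in the application layer.
    # Assumes a PostGIS-enabled PostgreSQL instance and a hypothetical "parcels"
    # table; connection details are placeholders.
    import psycopg2
    from shapely import wkt

    conn = psycopg2.connect("dbname=gis user=gis")  # placeholder credentials
    cur = conn.cursor()

    # Option 1: the database does the work (ST_Buffer runs server-side).
    cur.execute(
        "SELECT ST_AsText(ST_Buffer(geom, 100)) FROM parcels WHERE id = %s", (42,))
    buffered_in_db = wkt.loads(cur.fetchone()[0])

    # Option 2: fetch the raw geometry and buffer it in the application layer.
    cur.execute("SELECT ST_AsText(geom) FROM parcels WHERE id = %s", (42,))
    buffered_in_app = wkt.loads(cur.fetchone()[0]).buffer(100)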

In one view, it makes sense to define a minimal set of geometric primitives and functions that provide value in many situations, and to expand them incrementally as clear value is shown. This is great for interoperability and is exactly what the OGC, ISO, and other standards bodies have done. Being conservative also reduces the chance of committing to immature models when new types of data emerge (e.g., raster imagery, 3D building models, LIDAR). Last week, Dale highlighted the benefits of straightforward, easy-to-implement standards, and while not all geospatial standards fall into this category, it is a worthy goal.

On the other hand, what I see in the market is a rapid expansion of expressive and analytical power. SQL Server and PostGIS join Oracle, Informix, and DB2 with the ability to reason geodetically. Netezza and Teradata combine spatial with their high-performance data warehouse technologies. Oracle continues to innovate in 3D, with support for 3D models, TINs, and point clouds. The debate over the value of storing rasters in the database appears to be ending: PostGIS raster support is on the horizon, and the newly introduced RasterLite joins the mature support already found in Oracle and ESRI’s Geodatabase.
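
As a small illustration of what reasoning geodetically means in practice, the sketch below uses the PostGIS geography type so that the database measures distance on the spheroid rather than on a flat plane. The connection details, coordinates, and query are illustrative assumptions, not taken from any of the products above.

    # Sketch: a geodetic distance computed inside the database. With PostGIS's
    # geography type, ST_Distance returns metres measured on the spheroid.
    import psycopg2

    conn = psycopg2.connect("dbname=gis user=gis")  # placeholder credentials
    cur = conn.cursor()
    cur.execute("""
        SELECT ST_Distance(
            ST_GeographyFromText('POINT(-123.1 49.3)'),  -- roughly Vancouver
            ST_GeographyFromText('POINT(-122.3 47.6)'))  -- roughly Seattle
    """)
    metres = cur.fetchone()[0]
    print("%.0f km on the spheroid" % (metres / 1000.0))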

Last month, I had the pleasure of hearing Joel Spolsky speak at StackOverflow DevDays in Seattle. He discussed the trade-off between simplicity and complexity when building great applications. I expected him to say something like, “Expose lots of flexibility in your application, but make it possible to keep simple tasks simple, e.g. using wizards.” What he actually said was much more interesting: to be successful in the market, software has to have all those features, and thus complexity, and the main thing is to always let the user set the agenda and stay in control. Perhaps this explains the explosion of geospatial features in database products; all that expressive and analytical power is simply a hallmark of success.
