Map Tool
A map comparison project for University of Bremen

Map Tool is an online map comparison and exploration tool written in JavaScript (using HTML5, WebGL/GLSL). It allows users to interactively compare different zoom levels of different map providers. Additionally, the tool analyzes the map data and visualizes its features. The inspiration for the tool comes from Justin O’Beirne, who wrote the great study "Cartography Comparison", in which he compares several features of Google Maps and Apple Maps.

Screenshots of Map Tool. First: an OSM map with detected features; second: a Google Maps map with detected features
maptool-openstreetmaps.jpg maptool-googlemaps.jpg
Online Version
Motivation

The motivation behind this tool is similar to Justin O’Beirne's: his question of what the "World's first Universal Map - the one that is used by a majority of the global population" could look like. One interesting question for cartography and its user experience is what is shown and what is not. To answer that question, Justin O’Beirne compared several features in his study, listing and analyzing the statistics visually.
My professor, Johannes Schöning from the HCI Research Group Bremen, and I thought it would be great to have an automated process that could do the same interactively, allowing users to explore and compare different parts of maps on their own. The inspiration here came from Map Compare, which allows comparing four maps and styles at the same time.
However, the idea for the tool was not to compare specialized views, but to compare the original default maps along with their statistics and numbers. The tool should also create a library base that can be expanded with more features in the future.
What is needed

To be able to compare features, you need the data of the maps. In O’Beirne's study, three main feature types were present: colors, icons and labels. He compared them by appearance and style and also categorized their types (e.g. labels for cities, sub-areas, countries, water).
Most providers have APIs that allow embedding their map services into webpages - except Apple Maps (as of September 2017) - but there is no easy way to extract the data shown on the maps. There are possibilities to style layers (Google Maps) or to get very deep into the structural data (OpenStreetMap). The remaining task would still be to extract features from imagery and combine the overlapping data of different APIs. Since my time was limited to two weeks, writing a data retriever for every API was not feasible. That's why I stayed in the domain of images, which is the common denominator of all services.

Screenshots of various API options. First: the OSM bicycle map; second: a filtered Google Maps view with labels removed
maptool-osm-bicycle.jpg maptool-googlemaps-styled.jpg
Getting the maps

First, I experimented with the Google Maps API directly, which worked well (after getting an API key). The second map I tried was OpenStreetMap through the OpenLayers library, which on its own also worked well. But soon a problem arose: used directly, the two APIs would render slightly different views for the same coordinates. Luckily, another library was able to solve the problem: OL3-Google-Maps, an extension to OpenLayers v3 that integrates Google Maps. At this point, I decided to use only OpenLayers - since it integrates many more map providers (including Bing Maps) - and continued with the feature detection part.
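As a rough sketch, the setup looks like this (the olgm calls follow the OL3-Google-Maps examples of that era; the exact API may differ by version, and the coordinates are just placeholders):

// Plain OSM map rendered by OpenLayers.
var osmMap = new ol.Map({
  target: 'osm-map',
  layers: [new ol.layer.Tile({ source: new ol.source.OSM() })],
  view: new ol.View({
    center: ol.proj.fromLonLat([13.4, 52.52]), // placeholder: Berlin
    zoom: 15
  })
});

// Google Maps rendered through the OL3-Google-Maps bridge, using an
// equivalent view so both maps show the same coordinates and zoom.
var googleMap = new ol.Map({
  target: 'google-map',
  interactions: olgm.interaction.defaults(),
  layers: [new olgm.layer.Google()],
  view: new ol.View({
    center: ol.proj.fromLonLat([13.4, 52.52]),
    zoom: 15
  })
});
new olgm.OLGoogleMaps({ map: googleMap }).activate();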
Main App

The focus in the main app was the ability to analyze in real-time. For this reason, I implemented a mechanism that gives each visible map a priority. The map with the highest priority is processed first. Not all possible features are processed at once: after each feature part, the app re-evaluates the priorities and then chooses which map and feature should be processed next. Maps that have already computed some of their features get lower priorities. To achieve high frame rates and responsiveness, the detection of one feature is "pseudo-threaded": large processing steps are chopped into smaller chunks, since a single large step could block the main UI thread.
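A minimal sketch of the pseudo-threading idea (names and the time budget are illustrative, not the tool's actual API):

// Processes work items in time-boxed chunks, yielding back to the event
// loop between chunks so the UI stays responsive.
function processInChunks(items, processItem, onDone, budgetMs) {
  var i = 0;
  function step() {
    var start = performance.now();
    while (i < items.length && performance.now() - start < budgetMs) {
      processItem(items[i++]);
    }
    if (i < items.length) {
      setTimeout(step, 0); // yield to the UI thread, continue later
    } else {
      onDone();
    }
  }
  step();
}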
Additionally, the OpenLayers event for detecting a completely loaded map is not reliable. For that reason, the tool re-checks three times (at one-second intervals after the first complete event) whether the map data changed after the event - if it did, it clears all computed features and restarts the process.
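Sketched out, the re-check could look like this (hashMapPixels is a hypothetical helper that fingerprints the rendered map):

// After the "complete" event, verify three times at one-second intervals
// that the rendered map has not changed; restart detection if it has.
function ensureStableMap(map, onStable, onChanged) {
  var checks = 0;
  var snapshot = hashMapPixels(map); // hypothetical pixel fingerprint
  var timer = setInterval(function () {
    if (hashMapPixels(map) !== snapshot) {
      clearInterval(timer);
      onChanged(); // clear computed features and restart
    } else if (++checks >= 3) {
      clearInterval(timer);
      onStable();
    }
  }, 1000);
}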
Front End

The front end is designed to render different layouts by using "presets". A preset is a JSON-like object that declaratively describes the visible content. It defines the types of maps and features that are shown and how to style them. Feature types can be set up to render differently, according to the needs of the map layout. For example, a preset can declare how many colors should be visible in the color palette and whether to also render the remaining colors accumulated.
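A hypothetical preset could look like this (the structure is illustrative; the exact schema is documented in the appendix):

// Illustrative preset: two sources side by side, a five-color palette with
// an accumulated "rest" entry, and regions enabled.
var preset = {
  layout: { columns: 2, rows: 1 },
  maps: [
    { provider: 'osm', zoom: 15 },
    { provider: 'google', zoom: 15 }
  ],
  features: {
    colors: { paletteSize: 5, showRest: true },
    regions: { enabled: true }
  }
};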

Variations of layouts.
First: 2x9 Zoom Level Layout with 2 sources (with Colors & Regions), Second: 4x4 Grid with a single source (Colors only)
maptool-zooms.jpg maptool-grid.jpg


More details on the specification can be found in the appendix section
→ Preset Details
Feature Types


Colors
For this feature, all map pixels are hashed into a color hash map (the hashing is a dimension reduction in HSL that brings similar colors together). To generate the color palette, averages for the largest color hashes are computed. Finally, a merging algorithm combines very close ones into one color entry (see image below).
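A minimal sketch of the hashing step (the bucket sizes are assumptions; the actual reduction may differ):

// Standard RGB-to-HSL conversion, returning [h, s, l] in 0..1.
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  var max = Math.max(r, g, b), min = Math.min(r, g, b);
  var h = 0, s = 0, l = (max + min) / 2;
  if (max !== min) {
    var d = max - min;
    s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
    if (max === r) h = ((g - b) / d + (g < b ? 6 : 0)) / 6;
    else if (max === g) h = ((b - r) / d + 2) / 6;
    else h = ((r - g) / d + 4) / 6;
  }
  return [h, s, l];
}

// Quantizes a pixel in HSL space so similar colors share a hash key.
// Assumed bucket counts: 18 hue, 5 saturation, 9 lightness buckets.
function colorHash(r, g, b) {
  var hsl = rgbToHsl(r, g, b);
  return Math.round(hsl[0] * 17) + ':' +
         Math.round(hsl[1] * 4) + ':' +
         Math.round(hsl[2] * 8);
}

// Accumulates all pixels (RGBA byte array) into the hash map; the largest
// buckets are later averaged into palette entries.
function buildHistogram(pixels) {
  var hist = {};
  for (var i = 0; i < pixels.length; i += 4) {
    var key = colorHash(pixels[i], pixels[i + 1], pixels[i + 2]);
    var e = hist[key] || (hist[key] = { n: 0, r: 0, g: 0, b: 0 });
    e.n++; e.r += pixels[i]; e.g += pixels[i + 1]; e.b += pixels[i + 2];
  }
  return hist;
}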

Regions
The regions are pixels that have neighbors (+/-X and +/-Y) with the same color. It is a classic flood fill algorithm, slightly modified for optimization. Hovering a region draws an overlay over the related map.
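For illustration, the unoptimized core of such a flood fill (the tool's version is modified for speed):

// Collects all 4-connected pixels that share the start pixel's color key.
function floodFill(keys, width, height, startX, startY, visited) {
  var target = keys[startY * width + startX];
  var stack = [startY * width + startX];
  var region = [];
  while (stack.length) {
    var idx = stack.pop();
    if (visited[idx] || keys[idx] !== target) continue;
    visited[idx] = true;
    region.push(idx);
    var x = idx % width, y = (idx / width) | 0;
    if (x > 0) stack.push(idx - 1);
    if (x < width - 1) stack.push(idx + 1);
    if (y > 0) stack.push(idx - width);
    if (y < height - 1) stack.push(idx + width);
  }
  return region;
}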

Icons
Matching the icons is done fuzzily and is more in an "alpha" state. Hovering icon symbols draws their positions on the map.
The basic algorithm compares sections of the map images with sampled icons and renders the similarity into a result image. Transparent parts in the icon images do not count. In mathematical terms (simplified):

With values for icons computed as:

maptool-formula-iconmatching-alpha.svg
Then a pixel's amount in the map is:

maptool-formula-iconmatching-amount.svg
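As an illustrative sketch of that computation (a simplified per-pixel difference; the actual formulas are shown above):

// Scores one placement of an icon on the map: sums the per-channel
// difference over all opaque icon pixels and normalizes to 0..1
// (1 = identical). Transparent icon pixels do not count.
function matchScore(map, mapW, icon, iconW, iconH, offX, offY) {
  var sum = 0, count = 0;
  for (var y = 0; y < iconH; y++) {
    for (var x = 0; x < iconW; x++) {
      var ii = (y * iconW + x) * 4;
      if (icon[ii + 3] === 0) continue; // transparent: skip
      var mi = ((offY + y) * mapW + offX + x) * 4;
      sum += Math.abs(map[mi] - icon[ii]) +
             Math.abs(map[mi + 1] - icon[ii + 1]) +
             Math.abs(map[mi + 2] - icon[ii + 2]);
      count++;
    }
  }
  return count ? 1 - sum / (count * 3 * 255) : 0;
}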
The icon matching has some problems that were (partially) solved by iterating on match parameters and observing the data. The main tradeoffs were made for performance reasons, accepting more false positives for less execution time. For example: instead of using more than 25 icon matchers for one of the OSM POI categories, synthetic ones were generated (by hand in GIMP) that match less accurately, but are magnitudes faster. Also, Google Maps' icons vary per zoom level: they are 30x30 pixels at level 17 but shrink to 12x12 at level 13. The change of the icons is not trivial, since they are not just scaled versions. Nevertheless, the icon matching uses scaled versions of the level-17 icons to match at all other zoom levels (with accuracy penalties).

Labels
Since the labels are the most complex feature - they involve text recognition and classifying their category - they were excluded (more in the summary).

First: Color Palette Generation (Pixels > Hashes > Merged), Second: Icon Matching (Map > Regions > Filter > POIs)
maptool-colorpalette.svg maptool-icon-matching.jpg
Implementation Details
Further technical information and algorithms can be found in the appendix section
→ Feature Detection Details


Results

For a static comparison, I exported a zoom series (levels 13-17) from Berlin. Although the icon matching has some error, it still gives a good (but rough) assessment of the different approaches to how information is rendered.
OSM only shows public transport stations from levels 13 to 15 and then starts with a lot of POIs at level 16. Google Maps, on the other hand, already shows POIs at level 13, but fewer public transport stations.
The color spectrum also reflects the different rendering styles of the two maps. Especially interesting is the comparison of the distribution of the 5 most frequent colors vs. the rest of the colors. Since OSM has a lot of color detail, the rest of its color palette has a very high portion, while Google Maps is rather simplified and its rest portion is quite low. This is also indicated by the connected regions, which cover more space in Google Maps than in OSM. The differences shrink, though, at higher zoom levels.
Zoom Level 13. First: OSM, Second: Google Maps
maptool-z13-osm.jpg maptool-z13-google.jpg
Zoom Level 14. First: OSM, Second: Google Maps
maptool-z14-osm.jpg maptool-z14-google.jpg
Zoom Level 15. First: OSM, Second: Google Maps
maptool-z15-osm.jpg maptool-z15-google.jpg
Zoom Level 16. First: OSM, Second: Google Maps
maptool-z16-osm.jpg maptool-z16-google.jpg
Zoom Level 17. First: OSM, Second: Google Maps
maptool-z17-osm.jpg maptool-z17-google.jpg
Summary

The base of Map Tool already offers a good range of comparison possibilities. The real-time analysis enables an explorative approach, expressing visual differences in statistics and numbers. One missing part is a summary section that relates the data of different stats in graphs - e.g. the number of icons per zoom level.

On the technical side, improvements and extensions can be made in various fields. Especially the icon matching can be improved by increasing the quality of the icon matcher sets. New icon matcher sets can also be added to compare more maps (Bing Maps, for example).
The icon matching algorithm can be extended to match geometric objects that have clear areas marking special positions - like the left and right sides of the shapes used for highway signs. The resulting POIs just have to be connected (left POIs matched to right POIs by angle and distance), as sketched below.

First: Set of Icons, Second: Combining Matching of POIs to Geometry
maptool-icons.jpg maptool-rectangle-matching.jpg
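A hypothetical sketch of that pairing step (thresholds and the POI shape are illustrative):

// Pairs left-edge matches with right-edge matches when their distance and
// angle are within tolerance, yielding candidate rectangles.
function pairEdges(leftPOIs, rightPOIs, maxDist, maxAngleRad) {
  var pairs = [];
  leftPOIs.forEach(function (l) {
    rightPOIs.forEach(function (r) {
      var dx = r.x - l.x, dy = r.y - l.y;
      var dist = Math.sqrt(dx * dx + dy * dy);
      var angle = Math.abs(Math.atan2(dy, dx)); // 0 = horizontal pair
      if (dx > 0 && dist <= maxDist && angle <= maxAngleRad) {
        pairs.push({ left: l, right: r });
      }
    });
  });
  return pairs;
}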

The labels problem could perhaps be addressed with Tesseract. Experiments with online detection showed that it goes in the right direction: the detection of text fragments is quite good, but the resulting text quality depends highly on the angle of the text baseline. Some research could be invested into figuring out if and how it can be added to the feature detection.
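As a rough sketch with Tesseract.js (API details vary by version; this follows the commonly documented recognize() call, and mapCanvas is a placeholder for a rendered map):

// Runs OCR on a rendered map canvas and logs each detected word with its
// bounding box (the word/bbox fields depend on the Tesseract.js version).
Tesseract.recognize(mapCanvas, 'eng').then(function (result) {
  result.data.words.forEach(function (word) {
    console.log(word.text, word.bbox);
  });
});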

Tesseract Experiments. First: Detection Boxes, Second: Actual Text
maptool-tesseract-test-detection.jpg maptool-tesseract-test-text.jpg