Taxonomy for Visualizing Location-Based Information
Posted: January 19th, 2007
Suomela, R., and Lehikoinen, J. Taxonomy for visualizing location-based information. Virtual Reality 8, 2 (2004), 71–82.
This paper analyzes different visualizations for location-based applications. It studies two factors that affect the visualization of location-based data: the environment model the application uses, ranging from three dimensions (3D) down to none (0D), and the viewpoint, either first-person or third-person. The authors propose a taxonomy of model-views (MV) for visualizing location-based data.
The environment model denotes how many dimensions the application uses to visualize the environment. If no environment model is used, the user gains no specific location information about an object, except that the object might be somewhere close by.
- 3D environment model: these applications have an accurate 3D model of the environment and place the location-based data at its actual location in either a virtual or an augmented view
- 2D environment model: the locations of the virtual objects are accurately projected onto a plane
- 1D environment model: the application shows only one aspect of the location-based data
- No environment model: the application presents the data to the user but shows nothing about its location or relation to the user
The user’s view of the location-based data is one of two:
- First-person view: the user views the location-based data from a user-centric view, and the location-based data is spread around him or her
- Third-person view: the user views both the location-based data and a representation of him- or herself
The first-person views, MV(x, 1), can help the user in wayfinding and provide additional information on objects: it is easy to show where the next waypoint is or the direction to it, and all visible real-world objects can be digitally augmented with additional information. The third-person views, MV(x, 3), on the other hand can show the user a much wider area in all directions, as they are not restricted to the user’s current viewpoint and orientation.
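To make the taxonomy concrete, here is a minimal sketch, in Python, of an MV(model, viewpoint) pair as a data structure. This is my own illustration, not code from the paper; all names are invented.

```python
from dataclasses import dataclass
from enum import Enum

class Viewpoint(Enum):
    FIRST_PERSON = 1  # user-centric: data is spread around the user
    THIRD_PERSON = 3  # the user also sees a representation of him- or herself

@dataclass(frozen=True)
class ModelView:
    """One cell MV(model_dimensions, viewpoint) of the taxonomy."""
    model_dimensions: int  # 0 (no environment model) up to 3 (full 3D model)
    viewpoint: Viewpoint

    def __str__(self) -> str:
        return f"MV({self.model_dimensions}, {self.viewpoint.value})"

# Example: an augmented-reality browser with a full 3D model, first-person view
print(ModelView(3, Viewpoint.FIRST_PERSON))  # -> MV(3, 1)
```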
Navigational tasks and views
For some tasks the egocentric views are better, while for others, other views are preferred. Navigational tasks with digital maps can be defined as search tasks (naïve search and primed search) and exploration tasks; a fourth task can be defined as targeted search. In a targeted search, the target is shown on the map; in a primed search, the target is known but does not appear on the map; in a naïve search, there is no a priori knowledge of the target’s position and the target is not shown on the map; in exploration, no specific target has been set.
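The four task types boil down to two questions: is there a target, and is it known or shown on the map? A tiny Python sketch of this classification (my own encoding, not from the paper):

```python
def classify_task(has_target: bool, target_known: bool, shown_on_map: bool) -> str:
    """Classify a navigational task from the definitions above."""
    if not has_target:
        return "exploration"
    if shown_on_map:
        return "targeted search"
    return "primed search" if target_known else "naive search"

print(classify_task(True, True, True))     # -> targeted search
print(classify_task(True, True, False))    # -> primed search
print(classify_task(True, False, False))   # -> naive search
print(classify_task(False, False, False))  # -> exploration
```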
Alignment
An important aspect of maps and navigation is alignment, which specifies how the map is oriented with respect to the user and the environment. A map may be reader-aligned, in which case the orientation of the map remains constant with regard to the reader’s body. An environment-aligned map, on the contrary, is oriented consistently with regard to the environment; in other words, north on the map always corresponds to north in the environment.
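The difference is easy to express as a transform. In the sketch below (mine, with invented names), map points are in metres east/north of the user; an environment-aligned map draws them as-is, while a reader-aligned (forward-up) map rotates them by the user’s heading so that “straight ahead” points to the top of the screen:

```python
import math

def to_forward_up(x_east: float, y_north: float, heading_deg: float):
    """Rotate a north-up map point so that the user's heading
    (degrees clockwise from north) points to the top of the screen."""
    a = math.radians(heading_deg)  # counter-clockwise screen rotation
    return (x_east * math.cos(a) - y_north * math.sin(a),
            x_east * math.sin(a) + y_north * math.cos(a))

# User faces east (heading 90 degrees): a landmark due east of the user
# moves from the right-hand edge of a north-up map to the top of the screen.
x, y = to_forward_up(1.0, 0.0, 90.0)
print(round(x, 2), round(y, 2))  # -> 0.0 1.0
```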
People’s difficulties in map reading
Even though the 2D map display is a well-known visualization technique, the most severe problem with using traditional 2D maps is the inability to understand the spatial relationships between real-world objects and, therefore, to match the map and the terrain model in one’s mind (a study suggests that up to 64% of the population has difficulties in map reading).
Models and location accuracy
Not all of the models need the same location accuracy. AR applications need to determine the user’s viewpoint very accurately, as they need to know how the real world is aligned with the user. On the other hand, applications that only list the virtual objects do not need to know the location very accurately.
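One could imagine an application degrading gracefully along the taxonomy as sensor quality drops. A hedged sketch (entirely mine; the paper gives no concrete thresholds, so the numbers below are invented):

```python
def choose_model_view(position_error_m: float, has_orientation: bool) -> str:
    """Pick a model-view for the available sensor quality.
    Thresholds are illustrative guesses, not values from the paper."""
    if position_error_m < 1.0 and has_orientation:
        return "MV(3, 1)"  # AR: needs a very accurate viewpoint
    if position_error_m < 20.0:
        return "MV(2, 3)"  # map with a 'You Are Here' marker
    return "MV(0, 3)"      # no environment model: just list nearby objects

print(choose_model_view(0.5, True))     # -> MV(3, 1)
print(choose_model_view(10.0, False))   # -> MV(2, 3)
print(choose_model_view(150.0, False))  # -> MV(0, 3)
```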
Examples from the taxonomy
3D environment model: first person view; MV(3, 1)
3D environment model: third person view; MV(3, 3)
2D environment model: first person view; MV(2, 1)
For example, a car navigation system.
2D environment model: third person view; MV(2, 3)
The application only needs to know the user’s location; other sensor information is not necessary. Increasing the map scale (showing a larger area) can compensate for an inaccurate user location, but if the user’s location is not known accurately, there is no point in showing a “You Are Here” marker. Previous studies have shown that a map is easier to use if it is aligned forward-up.
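The scale argument can be made concrete with a little arithmetic (my own example, not the paper’s): the on-screen error of the marker shrinks as the map covers a larger area.

```python
def marker_error_px(position_error_m: float, map_width_m: float,
                    screen_width_px: int = 320) -> float:
    """On-screen error (in pixels) of a 'You Are Here' marker,
    given a positioning error and the width of terrain the map covers."""
    return position_error_m * screen_width_px / map_width_m

# A 50 m positioning error is glaring on a 200 m-wide map,
# but nearly invisible once the map covers 5 km.
print(marker_error_px(50, 200))   # -> 80.0
print(marker_error_px(50, 5000))  # -> 3.2
```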
1D environment model: third person view; MV(1, 3)
Relation to my thesis: The authors mention the relation between the model and the accuracy needed to position the user’s viewpoint. Yet they suggest that virtual objects do not need to be perfectly located. If the hardware does not have accurate sensors, the third-person views might be more user-friendly. This still has to be studied and proven. Moreover, they mention that “location-based information is, typically, a set of virtual objects in a certain area, and that virtual object have a precise location in the real world”. I do not agree that virtual objects always have a precise location: think, for example, of Flickr geotagged images attached to an area (i.e. a place) rather than to a point position. Often an area does not have clear limits such as walls, and people have different perspectives of an area.
The model also lacks a time dimension, since virtual objects are not necessarily fixed.
Relevant reference:
Aretz, A. J. The design of electronic map displays. Human Factors 33, 1 (1991), 85–101.