Visualizing Trips and Travel Characteristics from GPS Data

Posted: July 15th, 2006 | No Comments »

Stopher P, Bullock P and Jiang Q 2003 ‘Visualising trips and travel characteristics from GPS data’, Road & Transport Research, vol. 12, no. 2, pp. 3-14.

This paper, in the field of automatic mobility surveys, addresses the issue of presenting the information contained in GPS records so that it is understandable both to the survey respondent and to decision makers. It describes how to convert data from car travel into discrete trips, and how to visualize them.

An example of the type of data available from the GPS devices consists of:

* Latitude and longitude in degrees and decimal degrees, with hemispheric (E, W, N,S) designation
* Altitude in meters above sea level
* Heading in degrees from north
* Coordinated Universal Time (UTC Time, or Greenwich Mean Time)
* Coordinated Universal Date (UTC Date)
* Speed in km/h
* Horizontal dilution of precision (HDOP)
* Satellites in view.
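The record fields above can be held in a simple container. This is my own sketch, not the authors' data format; the field names, types, and sign conventions are assumptions:

```python
from dataclasses import dataclass

# Hypothetical container for one GPS record with the fields listed above;
# names, types and sign conventions are my own, not from the paper.
@dataclass
class GpsRecord:
    latitude: float         # decimal degrees, signed (N positive, S negative)
    longitude: float        # decimal degrees, signed (E positive, W negative)
    altitude_m: float       # metres above sea level
    heading_deg: float      # degrees clockwise from north
    utc_time: str           # e.g. "14:05:32"
    utc_date: str           # e.g. "2006-07-15"
    speed_kmh: float        # speed in km/h
    hdop: float             # horizontal dilution of precision
    satellites_in_view: int
```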

The most challenging part of the data manipulation is breaking the record into individual trips. The main difficulty is to detect short stops, such as may occur when a person fills up the car with petrol, stops to post a letter, or performs similar activities (e.g. when the engine is left running):

Such locations are found by removing from the data any data points where the movement between successive data points is less than the accuracy rating of the GPS device. In our case, the GPS devices in use are rated to have an accuracy of within + or -20 metres. However, when stationary, the position rarely changes by more than a few metres, with a speed of 0.0 kph, or nearly so. Therefore, by removing those points where the speed is shown to be zero, and there is little change in position, we can detect when there is a stop lasting two minutes or longer, and define that as a probable trip end.

The authors acknowledge the problems that may arise in the track records, such as signal loss and warm-up time:

The devices in use are rated to acquire signal in 15-45 seconds, and generally succeeded, when stationary, to acquire position within no more than 15-20 seconds. However, if the device is immediately in motion when it is turned on, such as if an in-vehicle device is used and a person gets in the car and drives off immediately, a much longer time may be required to acquire position. This results from the vehicle motion, which requires the device to take longer to fix its position, and may also be exacerbated if there are interruptions to the signal, resulting from tall buildings or heavy tree canopies, while the device is attempting to acquire position. In our experiments, we found that in-motion acquisition time depended on how long it had been since the device was last turned on. For elapsed stop times of less than an hour or two, signal acquisition was still relatively fast and generally about 30-60 seconds. However, if a longer time had elapsed since the last use, the position acquisition could become lengthy, and exceeded, in a few cases, 1 kilometre of travel distance and about 2 minutes or more of time.

To develop an automated procedure to analyse the data, a rules-based algorithm is suggested as follows:

  • The difference in successive latitude and longitude values is less than 0.000 051 degrees; and
  • The heading is unchanged or is zero; and
  • Speed is zero; and
  • Elapsed time during which these conditions hold is equal to or greater than 120 seconds.

If there is a break in the record, meaning that the engine was turned off, of between 30 and 120 seconds, this is also defined as a potential trip end.
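The rules above can be sketched as a small detection routine. This is a minimal interpretation of the stated rules, not the authors' code; the record format (dicts with `lat`, `lon`, `heading`, `speed_kmh` and a `t` timestamp in seconds) and the helper name are my own assumptions:

```python
def is_stationary(prev, curr):
    """One pair of successive points satisfies the per-point stop conditions."""
    return (
        abs(curr["lat"] - prev["lat"]) < 0.000051
        and abs(curr["lon"] - prev["lon"]) < 0.000051
        and (curr["heading"] == prev["heading"] or curr["heading"] == 0)
        and curr["speed_kmh"] == 0
    )

def find_trip_ends(points, min_stop_s=120, gap_lo_s=30, gap_hi_s=120):
    """Return indices of probable trip ends.

    A trip end is flagged when the stop conditions hold for >= min_stop_s
    seconds, or when there is a 30-120 s break in the record (engine off).
    """
    ends = []
    stop_start = None   # timestamp at which the current stop began
    flagged = False     # current stop already reported as a trip end
    for i in range(1, len(points)):
        prev, curr = points[i - 1], points[i]
        gap = curr["t"] - prev["t"]
        # A 30-120 s break in the record is also a potential trip end.
        if gap_lo_s <= gap <= gap_hi_s:
            ends.append(i - 1)
            stop_start, flagged = None, False
            continue
        if is_stationary(prev, curr):
            if stop_start is None:
                stop_start = prev["t"]
            if not flagged and curr["t"] - stop_start >= min_stop_s:
                ends.append(i)
                flagged = True
        else:
            stop_start, flagged = None, False
    return ends
```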

From the analysis of known trips recorded by GPS and the application of the rules discussed in this paper, the authors determine that there is about a five percent error rate both in detecting false trip ends and in failing to detect real trip ends.

Relation to my thesis: With “Learning Significant Locations and Predicting User Movement with GPS”, “Elimination of the Travel Diary: An Experiment to Derive Trip Purpose from GPS Travel Data” and “Exploring the Potentials of Automatically Collected GPS Data for Travel Behaviours Analysis” I explored issues in manipulating user-generated location information in transportation planning research. The main problems with location quality depend on the sensed data (e.g. premature end of the data stream due to urban canyons), the GIS database and the data processing (e.g. coding of trips).


OVD to BCN

Posted: July 15th, 2006 | No Comments »



Exploring the Potentials of Automatically Collected GPS Data for Travel Behaviours Analysis

Posted: July 12th, 2006 | No Comments »

Schoenfelder, S., Axhausen, K.W., Antille, N. and Bierlaire, M. (2002) ‘Exploring the potentials of automatically collected GPS data for travel behaviour analysis: A Swedish data source’, Arbeitsberichte Verkehr- und Raumplanung, 124, IVT, ETH, Zürich.

This paper presents an approach to gathering longitudinal travel behaviour data by means of GPS. The purpose of this work is to fill the absence of mobility surveys that last longer than one week. Besides the problems inherent to longitudinal surveys (limited pool of respondents, fatigue effects), the authors identify the potential technical drawbacks as: transmission problems, warm-up times before getting a fix, and the cost of post-processing the GPS data. They also highlight the importance of taking the user into account:

The level of user interaction is believed to be an important issue for the development of future survey design incorporating GPS data collection elements.

A clustering technique was used to identify unique origins and destinations of travel. Their approach sets a tolerance distance or zone within which different final positions are by definition considered as only one destination.
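The tolerance-zone idea can be sketched as a greedy grouping of trip-end coordinates. This is my own illustration of the concept, not the authors' algorithm; the greedy assignment to running cluster centres and the 200 m default tolerance are assumptions:

```python
import math

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def cluster_destinations(trip_ends, tolerance_m=200):
    """Greedily group trip-end (lat, lon) coordinates into unique destinations.

    A point joins the first existing cluster whose centre lies within
    tolerance_m; otherwise it starts a new cluster (destination).
    """
    clusters = []  # each cluster is a list of (lat, lon) members
    for p in trip_ends:
        for c in clusters:
            centre = (sum(lat for lat, _ in c) / len(c),
                      sum(lon for _, lon in c) / len(c))
            if haversine_m(p, centre) <= tolerance_m:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

The greedy pass is order-dependent, which is a simplification; the point is only to show how a tolerance distance collapses nearby trip ends into one destination.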

Relation to my thesis: Similar to “Elimination of the Travel Diary: An Experiment to Derive Trip Purpose from GPS Travel Data”, this study suggests an approach to analyzing people’s mobility based on sometimes missing sensed data and on GIS issues such as digital land-use data (not including private roads and parking spaces). The inaccuracy of available GPS data requires additional processing work. Indeed, the data post-processing must take this uncertain information into account to predict travel behaviours.


Elimination of the Travel Diary: An Experiment to Derive Trip Purpose from GPS Travel Data

Posted: July 12th, 2006 | No Comments »

Jean Wolf, Randall Guensler, and William Bachman. Elimination of the travel diary: An experiment to derive trip purpose from GPS travel data. Notes from Transportation Research Board, 80th annual meeting, January 7– 11, 2001, Washington, D.C.

This paper presents the results of a proof-of-concept study to obtain trip purposes from GPS data and replace traditional travel diaries. In their findings, the authors mention that the equipment packages deployed for the pilot study proved to have many more problems than anticipated. The off-the-shelf units and cabling used were not optimized for durability (mobile device power problems or application errors). Some participants were dropped because numerous data errors indicated problems with equipment performance, cabling connections, and user operations. Therefore, improved equipment packages would be necessary for commercial deployment.

Another significant issue caused incorrect trip purpose assignment for 7% of the trips. All of these trips were misidentified as a result of inaccurate land-use assignment. These land-use assignment errors resulted from GPS position errors (e.g. uncorrected GPS data or premature termination of the data stream), inaccurate parcel boundaries in the GIS database, inaccurate assignment of the parcel to the GPS trip end, or inaccurate coding of the land use in the parcel database.

Relation to my thesis: An example from the field of transportation research of the use of location information of sometimes poor quality. The quality depends on the sensed data (e.g. premature end of the data stream due to urban canyons), the GIS database and the data processing (e.g. coding of trips).


Local Positioning Systems, The Book

Posted: July 6th, 2006 | No Comments »

I came across Kris Kolodziej’s work while writing the CatchBob postmortem. Back then he had written Indoor Location Technology Opens New Worlds. His book Local Positioning Systems was released in May of this year. The reason for a book on local positioning is that global positioning systems do not work where people are: indoors and in cities. Kolodziej provides an overview of some of the different technologies that can be used to provide positioning solutions in support of location based services.

Relation to my thesis: the kind of condensed industry experience needed to consolidate my technical knowledge of positioning systems (mainly the technologies/methods I do not use). I already own Location-Based Services (The Morgan Kaufmann Series in Data Management Systems).


Live Air Traffic in 3D

Posted: July 6th, 2006 | No Comments »

fboweb.com is known for the delivery of real-time air traffic information. It now provides live flight tracking in 3D via Google Earth for inbound traffic at several US airports (SFO, LAX, O’Hare, JFK) and for specific flights. More morbidly, they provide traces of flight incidents and accidents.

[Images: DAL1489 approach into SFO]

Relation to my thesis: Real-time map visualization of “things”.


Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know

Posted: July 5th, 2006 | No Comments »

Alan M. MacEachren, Anthony Robinson, Susan Hopper, Steven Gardner, Robert Murray, Mark Gahegan and Elisabeth Hetzler. “Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know”. In Cartography and Geographic Information Science, Vol. 32, No. 3, pp. 139–160, July 2005.

The paper reviews and assesses progress toward visual tools and methods to help analysts manage and understand information uncertainty. First, the authors note that there is no comprehensive understanding of the parameters that influence successful uncertainty visualization. In turn, without this understanding, effective approaches to visualizing information uncertainty to support real-world geospatial information analysis remain elusive.

Conceptualizing Uncertainty
Uncertainty is an ill-defined concept, and the distinction between it and related concepts such as data quality, reliability, accuracy, and error often remains ambiguous in the literature. According to Hunter and Goodchild (1993), when inaccuracy is known objectively, it can be expressed as error; when it is not known, the term uncertainty applies. Pang et al. (1997) delineated three types of uncertainty related to stages in a visualization pipeline: collection uncertainty due to measurements and models in the acquisition process, derived uncertainty arising from data transformations, and visualization uncertainty introduced during the process of data-to-display mapping.

Decision Making with Uncertainty
The research seems to take for granted that visual depictions of uncertainty are useful for decision making. Tversky and Kahneman (1974) note a conflict: some experts depend on statistical analyses to incorporate uncertainty into their decisions, but lay users tend to ignore or misinterpret statistical probabilities and instead rely on less accurate heuristics when making decisions. This divergence prompts two questions related to my topic:

  1. Will providing information about data uncertainty in an explicit visual way help a lay or expert map reader make different decisions?
  2. If they do make different decisions, will provision of information about data uncertainty lead to better, more correct decisions, or simply cause analysts to discount the unreliable information?

Cliburn et al. (2002) address the idea of making decisions based on uncertain data with the help of uncertainty representations. They list the depiction of uncertainty as a drawback, because policy makers (the users in their study) typically want issues presented without ambiguity. One participant in their study suggested that a depiction of uncertainty could be used to discredit the models, rather than having the intended effect of signaling unbiased results.

Typology of Uncertainty
The literature makes it clear that there are a variety of kinds of uncertainty and that, to be useful, representations of uncertainty, visual or otherwise, must address this variety. Therefore, there have been efforts to delineate the components of information uncertainty and relate them specifically to visual representation methods. The earliest conceptual framework for geospatial uncertainty separated error components of value, space, time, consistency and completeness. All the approaches have in common the observation that uncertainty itself occurs at different levels of abstraction.

Most of the efforts to formalize an approach to uncertainty visualization within geovisualization (and GIScience more generally) derive from long-term work on the Spatial Data Transfer Standard (SDTS) (Fegeas et al. 1992; Moellering 1994; Morrison 1988). The focus of the initial SDTS effort was on specifying categories of “data quality” which were to be encoded as part of the metadata for cartographic data sets. The categories of data quality defined as part of the SDTS are:

  • Lineage
  • Positional accuracy
  • Attribute accuracy
  • Logical consistency
  • Completeness

Gahegan and Ehlers (2000) focused on modeling uncertainty within the context of fusing activities between GIS and remote sensing. Their approach matched five types of uncertainty—data/value error/precision, space error/precision, time error/precision, consistency, and completeness—against four models of geographic space: field, image, thematic, and object, as shown below:

[Figure: Gahegan and Ehlers’ uncertainty framework]

From an InfoVis rather than a SciVis perspective, Gershon (1998) took a very different approach than Pang, focusing on the kinds of “imperfection” in the information about which an analyst or decision maker might need to know. His argument is that imperfect information, while involving uncertainty, is more complex than typically considered from the viewpoint of uncertainty alone.

[Figure: Gershon’s taxonomy of imperfect knowledge]

Building on these typology efforts, Thomson et al. (2004) propose a typology of uncertainty relevant to geospatial information visualization in the context of intelligence analysis:

[Figure: Thomson et al.’s (2004) typology of uncertainty]

Visual Signification of Uncertain Information
The most basic methods of visually representing uncertainty are available through direct application of Bertin’s (1983) visual variables, following guidelines already used in traditional cartography. The original set of variables includes location, size, color value, grain (often mislabeled as texture), color hue, orientation, and shape. In work focusing specifically on uncertainty visualization, Davis and Keller (1997) asserted that color hue, color value, and “texture” are the “best candidates” for representing uncertain information using static methods. However, most of the uncertainty visualization research includes an implicit assumption that users of uncertainty information are homogeneous.

Testing Use and Usability
Very little has been done to empirically evaluate whether the proposed applications work, or whether the theoretical perspectives lead to supportable hypotheses. In subsequent research, MacEachren et al. (1998) tested three methods of representing the reliability, i.e., certainty, of health data on choropleth maps and again found that color saturation, counter to their prediction, was less effective for signifying uncertainty than the alternatives tested. Results from Schweizer and Goodchild (1992), based on user performance on tasks ranging from simple value look-up to overall map comparisons, indicated that reliability information can be added successfully to choropleth maps without inhibiting users’ map-reading ability. Leitner and Buttenfield (2000) went beyond this to consider the impact of different representation methods on map interpretation for decision making. Map detail was found to have limited impact on results, but the maps that depicted uncertainty led to significantly more correct location decisions than those that did not. Response times were similar with and without uncertainty representation, from which the authors conclude that representation of uncertainty acts to clarify mapped information rather than to make the map cluttered or complex.

There is a need for a more systematic approach to understanding:

  • the use of information uncertainty in information analysis and decision making, and
  • the usability of uncertainty representation methods and manipulable interfaces for using those representations.

Discussion
We cannot yet say definitively whether decisions are better if uncertainty is visualized or suppressed, or under what conditions they are better; nor do we understand the impact of uncertainty visualization on the process of analysis or decision making. There is little agreement in the literature about the best way to represent uncertainty. A great number of these methods seem to have potential for displaying attribute certainty on static and dynamic data representations, but only a few of them have been empirically assessed, and the results have not been studied in depth.

Challenges
The paper concludes by identifying seven key research challenges in visualizing information uncertainty, particularly as it applies to decision making and analysis. The ones that relate to my work are:

  • Understanding the components of uncertainty and their relationships to domains, users, and information needs
  • Understanding how knowledge of information uncertainty influences information analysis, decision making, and decision outcomes
  • Understanding how (or whether) uncertainty visualization aids exploratory analysis
  • Developing methods for capturing and encoding analysts’ or decision makers’ uncertainty
  • Assessing the usability and utility of uncertainty capture, representation, and interaction methods and tools

Key references are:
Gahegan, M., and M. Ehlers. 2000. A framework for the modelling of uncertainty between remote sensing and geographic information. ISPRS Journal of Photogrammetry and Remote Sensing 55(3):176-88.

Gershon, N. D. 1998. Visualization of an imperfect world. Computer Graphics and Applications (IEEE) 18(4): 43-5.

Relation to my thesis: I am interested in methods to help users of location-aware systems manage and understand information uncertainty. Such methods can be to design an adaptable core system or, as in this case, appropriate visualization of uncertain information. This paper first acknowledges that there is no comprehensive understanding of the parameters that influence successful uncertainty visualization, and that the approaches to support real-world geospatial information analysis actually remain elusive. The authors call for a more systematic approach to understanding the usability of uncertainty representation methods and manipulable interfaces for using those representations. This is an area I might want to cover. A research question could be to know whether decisions are better if location uncertainty is visualized or suppressed, and under what conditions they are better. The challenges mentioned by the authors are also part of my research domain, such as “understanding the components of uncertainty and their relationships to domains, users, and information needs” and “assessing the usability and utility of uncertainty capture, representation, and interaction methods and tools”.

Finally, I am very interested in the different typologies of uncertainty, as I have already tried to sketch in the past. The approach by Thomson et al. (2004) is inspiring. Part of my work is to define what (spatial) uncertainty is in the context of ubiquitous computing.


Mobile Proximity-Based Service in Japan

Posted: July 5th, 2006 | No Comments »

The IHT covers a story about a service in Japan to bridge the digital and the physical world. Three Japanese companies and GeoVector offer a mobile phone that displays information from the Internet describing the object the user is looking at (i.e. pointing at with the phone).

The phones combine satellite-based navigation, precise to within no more than 9 meters, or 30 feet, with an electronic compass to provide a new dimension of orientation. Connect the device to the Internet and it is possible to overlay the point-and-click simplicity of a computer screen on top of the real world.

via P&V

Relation to my thesis: A real-world instantiation of a proximity-based service that is part of the quest for context-driven information supply. Beyond the obvious technical issues and the quality of the regular services (tourism, yellow pages, buddy finder), I am wondering in what way such a service could engage users in interacting with their surroundings. What could be the invitations for participation…


Global Airport database in Google Earth

Posted: July 5th, 2006 | No Comments »

I imported into Google Earth the “only” 2782 airports (out of 9300 entries) with an IATA code contained in the Global Airport database. I am now getting used to the trick of converting coordinates into decimal-degree values.
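The conversion trick boils down to combining the degree, minute and second fields and applying the hemisphere sign. A minimal sketch, assuming the coordinates come as separate degrees/minutes/seconds plus an N/S/E/W letter (the exact field layout of the Global Airport database may differ):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds plus an N/S/E/W letter
    into signed decimal degrees (S and W are negative)."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value
```

For example, `dms_to_decimal(41, 17, 49, "N")` gives roughly 41.2969, which Google Earth accepts directly in a KML coordinate.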

[Screenshot: worldwide airports in Google Earth, BCN and GVA]


Mobile Monday Barcelona

Posted: July 4th, 2006 | 1 Comment »

Last night was the launch of the Barcelona chapter of Mobile Monday, co-founded and organized by the unique Rudy de Waele. MobileMonday CEO Jari Tammisto introduced the concept of the event. Jari has been in the mobile industry (in a managerial position at the Finnish national operator) for a long time and admitted he made all the mistakes that could have been made, including believing in pagers as the main device for communication and investing in city-wide networks thinking GSM would fail.

Mobile Monday Barcelona

The evening’s topic was about mobile marketing & advertising.

Ricardo Baeza-Yates, Director of Yahoo! Research Barcelona, presented their open research and their vision called FUSE (for Find, Use, Share, and Expand), that is, using search to fuse a myriad of services and applications, all of which center on knowledge and its application. Yahoo! bases its search strategy on the wisdom of the crowd, delivering relevant content based on links, tags and search queries. However, one challenge is to get people to tag content. One of their experiments is called the ESP Game, a game in which players compete to tag images and thereby introduce knowledge while playing. Ricardo then mentioned the specificities of mobile search: queries have more variety, less time is spent on the results and (surprisingly to me) more words are used in the query.

Russell Buckley (MobHappy and AdMob) drew on the lessons of six years of mobile advertising, looking at successes and failures. His global message is that “Mobile advertising works if done correctly”. Russell went on to analyze the failures of the push-based mobile marketing/advertising model, the so-called “Starbucks myth” (i.e. Bluetooth messages generated when people pass near a coffee place). Indeed, the mobile industry has not yet really started finding relevant ways to use proximity and space as triggers for interaction. Somewhat in response to that, Ana Caralt, CEO of McCann Interactive Spain, described a successful advertising campaign in which a billboard invited people to connect to it via Bluetooth to download content. I tend to believe that “ubiquitous companies” have the opportunity to engage people to interact using their infrastructures.

Relation to my thesis: Russell Buckley’s talk was very relevant in highlighting the difficult balance between pervasiveness and intrusiveness (e.g. being present without being annoying, finding the right timing, the right added value and the right way to interact) that mobile marketing campaigns must deal with. The same goes for the design of ubiquitous environments.