Even Insight Research Doesn’t Always Tell the Truth

Posted: January 29th, 2007 | No Comments »

Extracted from an inspiring talk “Lipstick on a pig” given by Clive Grinyer at the European Market Research Event.

Planning for London Heathrow Airport Terminal 5 forecast that future travellers would be older. Research into older travellers showed that they often go into the toilets, so many new toilets were planned.

However, deeper investigation discovered that they were going into the toilets… to hear the announcements. It was the only place they could find where they could clearly hear the flight calls! So the airport is now putting in audio areas where you can clearly hear your flight call.

Relation to my thesis: A nice example of the limitations and (sometimes) subjective analysis in user studies. It also highlights a very interesting adaptation by some people to a very complex and high-tech infrastructure such as an airport.


Visualizing the Number of Flickr Geotagged Images by Location

Posted: January 28th, 2007 | 1 Comment »

Browsing the Flickr visualization clusters, I stumbled upon two hacks for visualizing in Google Earth the number of pictures geotagged by location.

Daniel Catt

Beau Gunderson

[Screenshots: Flickr geotagged-image density in Google Earth (09.09.2006), with and without labels]
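For the record, here is a minimal sketch of the general idea (my own illustration, not either hack's actual code): take per-location counts of geotagged photos, which would normally come from the Flickr API, and write them out as a KML file that Google Earth can display.

```python
# Minimal sketch: turn per-location photo counts into a KML file for Google Earth.
# The (name, lat, lon, count) tuples below are made up for illustration; in
# practice they would come from querying Flickr for geotagged photos and
# aggregating them by location.

counts = [
    ("Barcelona", 41.3851, 2.1734, 1240),
    ("Lausanne", 46.5197, 6.6323, 310),
]

def to_kml(locations):
    placemarks = []
    for name, lat, lon, count in locations:
        placemarks.append(f"""
  <Placemark>
    <name>{name}: {count} photos</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>""")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks)
            + "\n</Document></kml>")

with open("flickr_counts.kml", "w") as f:
    f.write(to_kml(counts))
```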

Relation to my thesis: Projects similar to my Tracing the visitor’s eye and Granularity Level Used to Geotag Images exploratory experiments.


The Technological Tower of Babel

Posted: January 26th, 2007 | No Comments »

Still on the theme of the messy and heterogeneous vision of ubicomp, a new graphic from Eboy paints the way technologies are playing out, forming some sort of technological Tower of Babel.

via LUCI’s group

[Image: Eboy’s Babel Tower graphic]


Comparing AI's Failures with Ubicomp's Visions

Posted: January 25th, 2007 | 1 Comment »

There seems to be a growing trend to critique the calm and seamless vision of ubiquitous computing that aims at the “fantasy of the perfect” (referring to Lucy Suchman’s Human-Machine Reconfigurations). Matthew Chalmers pioneered this with his notion of seamful design, slightly inspired by Weiser. Recently, Genevieve Bell and Paul Dourish suggested that ubicomp is messy and that seamlessness is a misleading vision. From my own experience, those points of view are a thin minority in a still extremely techno-utopian-driven research field. In contrast, designers seem to acknowledge the messiness of everyday life more readily. For example, in a post on reconfiguring the old future, Ben Kraal mentions that:

seamful design of ubicomp systems not only recognises the impossibility of a completely seamless, invisible, ubicomp infrastructure but embraces the messiness of everyday life.

He also points to the recent thoughts of Larry Irons, who compares the failures of AI with the promises of ubicomp.

I was reminded of the many promises that artificial intelligence made for expert systems in the 1980s as he describes how the designers of context-aware, ubiquitous computing think they can make it work. [...] Having machines act in a sociable manner are credible after all the fiascos and shortfalls of the past 30 years. [...] The challenge is AI-hard. Yet, a fair reading would probably characterize it as AI-impossible.

I particularly enjoy Larry’s answer to a question about the politics of ubiquitous computing and why we should first question the hype around the seamless interface as the default objective:

Good question, but it won’t find a reasonable answer as long as designers are unwilling to ask the obvious questions about claims made by those who hype the need for a seamless interface to ubiquitous computing environments. You can only meaningfully address the question about the politics of ubiquitous computing ethics when a seamful interface is considered the default design objective. In my mind, this is why Greenfield is correct to insist that seamlessness must be the optional mode in such applications

Relation to my thesis: As mentioned earlier, I am really glad to see more thoughts on the perspective of ubicomp as inherently messy. The comparison with the failures of AI is rather relevant for research aiming at a calm and seamless future. This relates to the “are we there yet” question around the definition of ubicomp and to my e-minds paper Getting real with ubiquitous computing: the impact of discrepancies on collaboration, which was my first (immature) attempt to highlight the limitations of the seamless vision of ubicomp. My “disturbed city” Flickr set is also an abstract attempt to reveal the messiness of urban life, to which I do not perceive ubicomp as a solution. My possible talk at LIFT on Embracing the real world’s messiness will be an opportunity to reflect on all this.

This also makes me think of the discussion between Brenda Laurel and Bruce Sterling at Ubicomp 2006 about design for pleasure and technological fairies (design for illusion, for magic).


Reactable Concert at Sala Castellò in Barcelona

Posted: January 25th, 2007 | No Comments »

Reactable concert featuring Sergi Jordá, Günter Geiger, Martin Kaltenbrunner and Marcos Alonso. The reactable is a state-of-the-art multi-user electro-acoustic music instrument with a tabletop tangible user interface, developed by the Music Technology Group within the Audiovisual Institute at Pompeu Fabra University.

[Photos: Reactable concert at Sala Castellò in Barcelona]

The same day, I was watching Jeff Han’s presentation on his touch-driven computer screen at TED.


Catching the Bus: Studying People and Practice at Intel

Posted: January 23rd, 2007 | 2 Comments »

In a talk given at the UCI Laboratory for Ubiquitous Computing and Interaction, the anthropologist Ken Anderson (manager of People and Practices Research at Intel) discusses Intel’s work on understanding mobility and spatiality in urban and transnational settings. A podcast is available.

Relation to my thesis: Relevant thoughts and stories around individuals, collectiveness and productivity in mobile settings (bus, train, tube), the sporadic creation of little “we”s… and a nice quote, “the rubbish is your future”, referring to people building homes over garbage. It made me think of cities like San Francisco, which is partially built over rubbish. It is also interesting to hear an anthropologist reflecting on the past of the field and thinking about how ubiquitous technologies provided by western corporations can be a new form of colonialism.


Model of my Research Focus

Posted: January 21st, 2007 | No Comments »

The current model of my research, in which I integrate user-generated location information and a split between the physical, measured, virtual and social spaces (inspired by Managing Multiple Spaces) that theoretically influence the emergence of uncertainty. The social space still needs further development.

[Diagram: research focus model]


Literature Map

Posted: January 20th, 2007 | No Comments »

I have been playing around trying to sketch a literature map of my research. Here is the current high-level status.

[Diagram: literature map]
If I ever need to theorize my framework further, I still keep situated action, embodied interaction and distributed cognition in the drawer.


Taxonomy for Visualizing Location-Based Information

Posted: January 19th, 2007 | No Comments »

Suomela, R., and Lehikoinen, J. Taxonomy for visualizing location-based information. Virtual Reality 8, 2 (2004), 71–82.

This paper concentrates on analyzing different visualizations for location-based applications. It studies two factors that affect the visualization of location-based data: the environment model the application uses, ranging from three dimensions (3D) to no dimensions (0D) at all, and the viewpoint, whether it is a first-person or a third-person view. The authors suggest a taxonomy of model-views (MV) for visualizing location-based data.

The environment model denotes how many dimensions the application uses in visualizing the environment. If no environment model is used, the user does not gain specific location information about an object, except that the object might be somewhere close by.

  • 3D environment model: these applications have an accurate 3D model of the environment and they place the location-based data onto its actual location in either the virtual or augmented view
  • 2D environment model: the locations of the virtual objects are accurately projected onto a plane
  • 1D environment model: application only shows one aspect of the location-based data
  • No environment model: the applications present the data to the user but nothing about its location or relation to the user

The user’s view of the location-based data is one of two:

  • First person view: the user views the location-based data from a user-centric view, and the location-based data is spread around him or her
  • Third person view: the user views both the location-based data and his or her representation

The first-person views, MV(x, 1), can help the user in wayfinding and provide additional information on objects. It is easy to show where the next waypoint is or the direction to it, and all visible real-world objects can be digitally augmented with additional information. The third-person views on the other hand can show the user a much wider area in all the directions around the user, as they are not restricted to the user’s current viewpoint and orientation.
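As my own shorthand for the taxonomy (a sketch of my reading of the paper, not code from it), a model-view is simply a pair of the environment model’s dimensionality and the viewpoint:

```python
# Sketch of the MV(model, view) taxonomy as a small data type (my reading of
# the paper, not code from it). The environment model is the number of
# dimensions used to visualize the environment (0-3); the view is first- or
# third-person.
from dataclasses import dataclass

FIRST_PERSON = 1   # user-centric view, data spread around the user
THIRD_PERSON = 3   # user sees both the data and a representation of him/herself

@dataclass(frozen=True)
class ModelView:
    model_dims: int  # 0, 1, 2 or 3
    view: int        # FIRST_PERSON or THIRD_PERSON

    def __str__(self):
        return f"MV({self.model_dims}, {self.view})"

# Examples matching the paper's classification:
augmented_reality_view = ModelView(3, FIRST_PERSON)   # MV(3, 1)
car_navigation_system  = ModelView(2, FIRST_PERSON)   # MV(2, 1)
you_are_here_map       = ModelView(2, THIRD_PERSON)   # MV(2, 3)

print(augmented_reality_view)  # -> MV(3, 1)
```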

Navigational tasks and views
For some tasks, the egocentric views are better, while for other tasks, other views would be preferred. Navigational tasks with digital maps can be defined as searching tasks (naïve search and primed search) and exploration tasks. A fourth task can be defined as a targeted search. In a targeted search, the target is shown on the map; in primed search, the target is known, but does not appear on the map; in naïve search, there is no a priori knowledge of the position of the target, and the target is not shown on the map; in exploration, no specific target has been set.
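Restated as a small decision rule (my own rephrasing of the four task types, not code from the paper):

```python
# My restatement of the four navigational task types as a decision function.
# has_target: a specific target has been set; position_known: a priori
# knowledge of the target's position; target_on_map: the target is shown.
def classify_task(has_target: bool, position_known: bool, target_on_map: bool) -> str:
    if not has_target:
        return "exploration"       # no specific target has been set
    if target_on_map:
        return "targeted search"   # the target is shown on the map
    if position_known:
        return "primed search"     # target known, but does not appear on the map
    return "naive search"          # no a priori knowledge of the target's position

assert classify_task(True, True, True) == "targeted search"
assert classify_task(True, True, False) == "primed search"
assert classify_task(True, False, False) == "naive search"
assert classify_task(False, False, False) == "exploration"
```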

Alignment
An important aspect concerning maps and navigation is alignment, that is, how the map is oriented with respect to the user and the environment. A map may be reader-aligned, in which case the orientation of the map remains constant with regard to the reader’s body. An environment-aligned map, by contrast, is oriented consistently with regard to the environment; in other words, north on the map always corresponds to north in the environment.
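In practice the distinction boils down to how much the map is rotated on screen. A minimal sketch of the idea (my own illustration, assuming the user’s compass heading is available):

```python
# Sketch of map alignment (my own illustration, not from the paper).
# heading_deg: the direction the user is facing, in degrees clockwise from north.
def map_rotation(alignment: str, heading_deg: float) -> float:
    """Return how many degrees to rotate the map image on screen."""
    if alignment == "environment":   # north-up: north on the map stays north
        return 0.0
    if alignment == "reader":        # forward-up: map turns with the reader's body
        return -heading_deg          # counter-rotate so "ahead" points up
    raise ValueError(alignment)

print(map_rotation("environment", 90))  # 0.0   -> the map never rotates
print(map_rotation("reader", 90))       # -90.0 -> user faces east, map rotates
```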

People’s difficulties in map reading
Even though a 2D map display is a well-known visualization technique, it has been found that the most severe problem with using traditional 2D maps is the inability to understand the spatial relationships between the real-world objects and, therefore, to match the map and the terrain model in one’s mind (a study suggests that up to 64% of the population have difficulties in map reading).

Models and location accuracy
Not all of the models need the same location accuracy. The AR applications need to determine the user’s viewpoint very accurately, as they need to know how the real world is aligned with respect to the user. On the other hand, applications that only list the virtual objects do not need to know the location very accurately.

Examples from the taxonomy

3D environment model: first person view; MV(3, 1)

3D environment model: third person view; MV(3, 3)

2D environment model: first person view; MV(2, 1)
For example, a car navigation system.
2D environment model: third person view; MV(2, 3)
The application only needs to know the user’s location; other sensor information is not necessary. Increasing the map scale can compensate for an inaccurate location of the user, but if the user’s location is not known accurately, there is no point in showing “You Are Here”. Previous studies have shown that a map is easier to use if it is aligned forward-up.
1D environment model: third person view; MV(1, 3)
Relation to my thesis: The authors mention the relation between the model and the accuracy needed to position the user’s viewpoint. Yet, they suggest that virtual objects do not need to be perfectly located. If the hardware does not have accurate sensors, the third-person views might be more user-friendly. This still has to be studied and proven. Moreover, they mention that “location-based information is, typically, a set of virtual objects in a certain area, and that virtual object have a precise location in the real world”. I do not agree that virtual objects have a precise location when one thinks, for example, of Flickr geotagged images attached to an area (i.e. a place) and not to a position. Many times, an area does not have clear limits such as walls, and people have different perspectives on an area.
The model also lacks a time dimension, since virtual objects are not necessarily fixed.

Relevant reference:
Aretz, A. J. The design of electronic map displays. Human Factors 33, 1 (1991), 85–101.


Delivering Real-World Ubiquitous Location Systems

Posted: January 19th, 2007 | No Comments »

Borriello, G., Chalmers, M., LaMarca, A., and Nixon, P. Delivering real-world ubiquitous location systems. Commun. ACM 48, 3 (2005), 36–41.

This paper emphasizes the practical aspects of getting location-enhanced applications deployed on existing devices without installing special infrastructure. It provides an overview of different types of ubiquitous location systems. Based on two case studies, the authors reveal some interesting issues in the deployment of location-aware systems, such as:

Edinburgh is an old city with many narrow streets and high buildings; its latitude of 55° north—almost as far north as Alaska—accentuates the urban canyon effects that hamper GPS.
[...]
On average, one or more access points were detected 48% of the time, and Place Lab could provide an accurate location. Two or more access points were detected for only 22% of the time. Indeed, the overall detection rate increased from 48% to 69% when excluding period of time visitors appeared to be indoors.
[...]
The game designers were surprised, for example, that rain, snow, and leaves on trees strongly affect WiFi and GPS.
[...]
The transfer of packets to and from access points can show significant asymmetry, and high packet loss can occur despite apparent network access.

While not standing in opposition to research aimed at improving accuracy and broadening availability, the authors suggest that we should offer pragmatic solutions while we continue to improve, adapt and evaluate the underlying technology of ubiquitous location systems.
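To make the access-point numbers above more concrete, here is a toy sketch of the beacon-centroid idea behind WiFi positioning of this kind (a deliberate simplification, not Place Lab’s actual algorithm): with a database of known access-point positions, hearing a single AP already gives a coarse fix, and hearing more tightens it.

```python
# Toy sketch of beacon-centroid positioning (a simplification; not Place Lab's
# exact algorithm). Given a database mapping access-point MAC addresses to
# known (lat, lon) positions, estimate the user's position as the centroid of
# the APs currently heard. Hearing no known AP means no fix at all.
AP_DATABASE = {                      # hypothetical war-driven AP positions
    "00:11:22:33:44:55": (55.9486, -3.1999),
    "66:77:88:99:aa:bb": (55.9490, -3.1985),
}

def estimate_position(heard_macs):
    known = [AP_DATABASE[m] for m in heard_macs if m in AP_DATABASE]
    if not known:
        return None                  # no known beacons in range: no location fix
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

print(estimate_position(["00:11:22:33:44:55"]))                       # one AP: coarse fix
print(estimate_position(["00:11:22:33:44:55", "66:77:88:99:aa:bb"]))  # two APs: better
```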

Relation to my thesis: A reference I can use in a position paper for the workshop on Common Models and Patterns for Pervasive Computing to highlight the issues of deploying a WiFi-based location system such as CatchBob!. Besides the issues and challenges mentioned in this paper, I will add (among other things) the uniqueness of the capabilities of pervasive devices.