Pervasive Health Conference

Posted: April 30th, 2006 | 1 Comment »

Call for Papers for the Pervasive Health Conference 2006

“Pervasive healthcare is an emerging research discipline, focusing on the development and application of pervasive and ubiquitous computing technology for healthcare purposes. Pervasive healthcare seeks to accommodate the growing need for healthcare arising from a number of factors, including the increase in life-style and chronic diseases, the increased complexity of large healthcare organizations, providing healthcare services in rural and underserved areas worldwide, and enabling patients and relatives to engage more closely in self-care and treatment.”

Relation to my thesis: Besides security, health is a prime sector seeking solutions in ubicomp. I expect the growth of conferences mixing ubicomp with other fields such as sustainable development, urbanism, transport, entertainment, … to share solutions and implications.


Ubiquitous Companies

Posted: April 30th, 2006 | No Comments »

In his preparation for the “place” panel at the Networked Publics conference, Nicolas mentions, among many thoughts, something we have been talking about for a while: ubiquitous companies. Ubiquitous companies are part of our everyday lives, yet we do not notice them. They shape our physical world while remaining invisible. Two examples:
JCDecaux: a worldwide marketing company that offers outdoor advertising on so-called “street furniture”.
Geberit: a leader in sanitary technology for public spaces.

The ubiquitous and invisible presence of JCDecaux


Usage-Centered Design

Posted: April 28th, 2006 | No Comments »

Next week, I’ll be TAing a class on Usage-Centered Design. Usage-Centered Design is a methodology that provides a framework of easy-to-understand, flexible models for interface and interaction design. I understand it as a subset of User-Centered Design that emphasizes software engineering methods, such as the use of abstract models to define concrete problems and the use of a UML-inspired notation for collaborative work (among designers, developers and users). The formal process and systematic design mix with iterative and validation phases that introduce fast prototyping techniques at an early development stage. Usage-Centered Design therefore relies on and emphasizes thoughtful design to prevent the shortcomings of usability testing and user studies. The goal is not to confuse what users want with what they truly need.

In the context of the lab, abstract modeling of systems is a key skill to practice. Indeed, it is important to learn:

  • to represent the basic ideas underlying a system;
  • to get a global vision that then allows one to zoom in on specific details;
  • to keep the “doors open” to innovations and to multiple design/development alternatives.

The Usage-centered design models ask students to compile and generate ideas (brainstorm), organize them (simplify and generalize) and focus on the necessary details (elaborate on key points). They should also learn to link the clues collected from the analyses in Lab 2 (Questionnaire) and Lab 3 (Contextual design) to feed the Usage-centered design models.
Simplified global picture of the labs

Usage-centered design is driven by three closely related abstract models. The role model captures the characteristics of the roles that users play in relation to a system. The task model represents the structure of the work users need to accomplish in relation to a system. The content model represents the contents and organization of the user interface needed to support the identified tasks.

Process of the Usage-Centered Design methodology

User roles model
From Labs 2 and 3, students collected clues on the users’ basic needs for the system (who the users are, what they do, what they want to do, …) and detected potential design problems (what does not work well, what is not effective, …). Based on that, they can deduce the roles played by the users. A role is an abstract collection of needs, interests, expectations, behaviors and responsibilities that represents a relation between a type of user and the system (more or less similar to actors in UML use cases). This relation is what the user roles model intends to capture. Each role should be informally described and can be structured in terms of profiles (competence, proficiency, type of interaction, usability criteria). A User Roles Map is used to represent the relations (affinity, specialization, composition) between the roles.

Example of a very simple user roles model
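To make the abstraction concrete for students who think in code, here is a minimal sketch in Python (my own illustration, not part of the official notation) of how a user role and a User Roles Map could be represented; the role names, profile values and relations are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class UserRole:
        """Abstract collection of needs, interests, expectations and responsibilities."""
        name: str
        description: str                  # informal description of the relation to the system
        competence: str = "intermediate"  # profile: domain competence
        proficiency: str = "occasional"   # profile: proficiency with the system
        interaction: str = "direct"       # profile: type of interaction
        usability_criteria: list = field(default_factory=list)

    visitor = UserRole(
        name="casual visitor",
        description="Browses the catalogue without logging in; expects immediate access.",
        usability_criteria=["learnability", "low error rate"],
    )

    # A User Roles Map: relations (affinity, specialization, composition) between roles.
    roles_map = {
        ("casual visitor", "registered member"): "specialization",
        ("registered member", "moderator"): "specialization",
        ("moderator", "administrator"): "affinity",
    }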

Task model (Use cases)
Use cases are used to understand and model the nature of the work supported by each role. They help define what the users intend to do in their work using the system and what is needed to support it. A use case is a narrative description of what the system offers to the users (a black-box view), as well as of the interactions between the users (roles) and the system. The narration is divided into two parts: the user action model and the system response model.

The tasks emerge from the roles defined in the User Roles Model. They can be identified by asking what each role needs to accomplish, what they need to know to be capable of doing it, and how the system can support it. One can also ask what information the user examines, changes or creates, how the system can inform the user, and how the user needs to inform the system.

In this lab, we will only focus on essential use cases, that is, structured narratives with generalized and abstract descriptions (no technological constraints, no complete interaction). The structured narrative consists of three elements: a sentence describing the global intention, the user intentions, and the system responsibilities linked to them.

Example of the structured narrative of an essential use case
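As a rough illustration (again my own sketch, not the official notation), the structured narrative can be thought of as a global intention plus ordered pairs of user intentions and system responsibilities; the “checking out a document” example is invented:

    # A hypothetical essential use case: a global intention plus ordered pairs of
    # (user intention, system responsibility); None marks a step with a single column.
    essential_use_case = {
        "name": "checking out a document",
        "intention": "The user wants to borrow a document from the library.",
        "narrative": [
            ("identify self",            "verify identity"),
            ("indicate wanted document", "check availability and record the loan"),
            (None,                       "confirm the loan and the return date"),
        ],
    }

    def print_narrative(use_case):
        """Print the two columns of the structured narrative side by side."""
        print(use_case["intention"])
        for user_intention, system_responsibility in use_case["narrative"]:
            print(f"{user_intention or '':<28} | {system_responsibility}")

    print_narrative(essential_use_case)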

Use cases are also interrelated. A Use Case Map provides a global structure of the work supported by the system and the user interface. The relations can be of the following types: specialization, extension, composition and affinity.

Example of a specialization relationship among use cases in the Use Case Map

Content model
A good interface architecture implies specifying the tools, materials and spaces, and therefore distributing the content of the user interface among different spaces and interconnecting them. The content model provides an abstract representation of the content of the different interaction contexts of the system and the relations between them. The content of each interaction space represents the tools and materials needed to complete the previously defined use cases. In other words, it is a derivation of the task model. A navigation map is used to design the changes of interaction context imposed on each role to achieve its defined tasks.

In this lab, we plan to stick to low-tech, high-fidelity prototyping. An advantage of this approach is that it prevents students from preoccupying themselves with the visual appearance and behaviors of the interface. They also might learn to keep “doors open” to alternative paths (or roll-backs), which could prevent low-fidelity prototyping from carrying “pre-conceived” ideas. The way to build a content model is (a rough sketch in code follows the list):

  1. For each use case narrative, define the tools and materials the interaction space needs to offer
  2. Define the data and content needed and describe them on a Post-it
  3. For the identified data, extract the required functions and operations. For each tool, describe on the Post-it the function it is used for
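Here is the promised sketch of these three steps in Python (the interaction context, materials and tools are invented for illustration; in the lab the same thing is done with Post-its on paper):

    # Steps 1-2: for one use case, list the materials (data and content) of the
    # interaction space, each "Post-it" being a name plus a short description.
    checkout_context = {
        "use_case": "checking out a document",
        "materials": {
            "member record": "identity and current loans of the member",
            "document record": "title, reference and availability of the document",
        },
        # Step 3: for each tool, describe the function it is used for.
        "tools": {
            "identify member": "look up the member record from a card or a name",
            "register loan": "link the document to the member and set the return date",
        },
    }

    # A minimal navigation map: which interaction contexts a role can move between.
    navigation_map = {
        "search context": ["checkout context"],
        "checkout context": ["search context", "confirmation context"],
        "confirmation context": ["search context"],
    }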

Example of a content model for a specific interaction context

Simplified example of a navigation map


If Blogs Were Atoms and not Bits

Posted: April 23rd, 2006 | 1 Comment »

From left to right: Technorati, an RSS aggregator, empty RSS feeds, a WordPress upgrade


The Introduction of Location-Aware Systems

Posted: April 23rd, 2006 | No Comments »

Positioning technologies are a piece of the everyware puzzle. Stories about GPS navigation systems provide examples of how location-aware systems are introduced into transactions never before subject to technical intervention. Having once had to reach a specific building in the chaotic city of Milan (barely any road signs, plus the local driving habits), the car navigation system helped me focus almost completely on my task. However, such navigation systems are far from being designed to become invisible. Partially due to known system flaws (inaccuracy, outdated or missing geodata, fuzzy indications, …), the user questions the system’s reliability when there is a mismatch between the driver’s physical observations and the digital (graphic and voice) indications. A clear situation of uncertainty. Moreover, navigation systems seem intrusive outside of critical situations. As highlighted in a survey, “13% (of drivers with GPS units) would rely solely on their GPS to get them to their destination, rendering them completely oblivious to the world around them”. Street signs are bypassed by the navigation system (like I did in Milan… except that there were barely any road signs there). It is an interesting example of our judgments (making decisions according to road signs, the physical environment and a sense of direction) being supplanted by compliance with an external system (Marshall McLuhan). Interestingly, I stumbled upon an opposite approach to navigation systems in the comment “As a Taxi driver I used a GPS for awhile but it was too inaccurate and distracting, now I just use a downloaded city map.” An interesting appropriation technique: removing the burden of dealing with inaccurate information and relying on trusted data (the driver’s experience of the area, his observations and a basic map).

The introduction of GPS creates new types of incidents (the everyware equivalent of the Blue Screen of Death!), like what happened in the village of Luckington. Since a road closure, dozens of drivers have blithely followed directions from their satellite navigation systems, not realising that the recommended route goes through a ford. Other similar stories this month include the motorists sent to the edge of a 100ft drop on an unclassified road at Crackpot in North Yorkshire.

It is no surprise to see courses and training sessions that teach how to use them, and a popular Swiss TV show went into explaining how the technology works (probably in order to better understand how to use it) and warning potential buyers of the system’s shortcomings.

Relation to my thesis: Here, I try to understand how location-aware systems are introduced into our lives and the ways we deal with them and their imperfections. In what ways they introduce undue complications into ordinary operations (the taxi driver comment) and, by extension, how they can fail to default to harmless (motorists sent to the edge of a cliff).


Everyware

Posted: April 23rd, 2006 | 2 Comments »

Everyware, by user experience designer and critical futurist Adam Greenfield

Adam Greenfield entered my radar in the early CatchBob! days, as one of the first strong voices advocating for user experience and social studies in ubicomp. He had published a well-known post on Ethical Guidelines for Ubicomp. These guidelines have now been formalized and expanded, and are one of the outputs of Everyware.

I understand this book as a response to the dissonance in ubicomp between engineering endeavors and the skepticism (or moderate enthusiasm) coming from the HCI community. A way to set things straight and ground discussions within the noisy and eclectic ubicomp community. Adam’s intention is certainly not to prevent the movement towards the third age of computing, but to question the implications of the scaling up of ubicomp and, genuinely, how to improve what he has coined “everyware”. The definition of everyware: information processing embedded in everyday objects, dissolving in behavior.

The relevant parts for my thesis:

Thesis 19: Everyware is always situated in a particular context
Personal computing is largely independent of context. The shift toward the post-PC era raises the importance of the physical and social environments. This is what Paul Dourish captured in his theory of “embodied interaction” [1] [2] [3], which says that interactions derive their meaning from occurring in real time, in real space and between people. However, it is hard (if not impossible) to make context-aware systems fully grasp and adapt to physical and social settings. In my work on location awareness, I design systems based on various technical limitations and constraints (coverage, latency, accuracy, predictability, …). I try to find ways to manage the mismatches (discrepancies) between the real, measured and virtual spaces, and to have “some” understanding of their impacts on actions and interactions. This endeavor fits the challenge of managing everyware’s sensitivity to physical settings (location) in the context of interaction.

Thesis 31: Everyware is a strategy for the reduction of the cognitive overload
I have a hard time buying into the “calm technology” vision behind ubiquitous computing, and so, partially, does Adam Greenfield. He acknowledges that “Brown and Weiser were probably wrong as to just how strong an incentive it (encalming) would provide, they were correct that the specter of global information overload would prompt at least some developers to pursue less intrusive interfaces”. First of all, I doubt that the current industry has the development methodologies to produce gentle interfaces. Secondly, we barely know how to make them smart and deeply adaptive. Thirdly, our coevolution with technologies is about accommodation and appropriation. We play with noise and disorder. It is what allows us to be in command. Besides, we have lived in a world of information overload since the first 500 books were printed after the invention of movable type.

In my thesis, I might be looking for design rules to support collaborative geolocalized everyware experiences in the presence of location uncertainty, an uncertainty that I find inherent to ubiquitous environments. A question I am interested in, from a Stavros Antifakos paper, is the tradeoff between the cognitive load that displaying uncertainty information causes and the added value that it provides.

Thesis 34: Everyware insinuates itself into transactions never before subject to technical intervention
Quoting Mike Kuniavsky, endowing furniture and other everyday things with digital intelligence “can introduce all kinds of complexity and failure modes that don’t currently exist”. Paraphrasing Paul Robeson, “whatever marginal ‘improvement’ is enacted by overlaying daily life with digital mediation has to be balanced against the risk of screwing up something that already works, however gracelessly or inelegantly”.
Understanding the impact of the uncertainty generated by the intrusion of positioning systems into our lives, and designing location-aware systems for collaboration accordingly, is my research interest.

Thesis 37: Everyday life presents designers of everyware with a particularly difficult case because so very much about it is tacit, unspoken, or defined with insufficient precision
It is the problem of objectifying a context in engineering terms and therefore reducing it [1]. It is a core issue at the intersection of CSCW and ubicomp.

Thesis 38: Everyware is problematic because it is hard to see literally
Everyware is about dissembling technologies, while at the same time we need systems to be intelligible and accountable. How can uncertain spatial information be communicated (as feedback) in a non-intrusive way? What can designers sweep under the rug of non-intrusiveness?

Thesis 40: The discourse of seamlessness effaces or elides meaningful distinctions between systems
Phrases like “seamless interaction”, “seamless integration”, “seamless interconnection”, or “seamless interfaces” are still very much part of ubicomp’s rhetoric. I completely share Adam’s analysis of the flaws of seamlessness. First, as underlined by Weiser, it is a negatively homogenizing attribute that flattens out the perceptibility of a system’s boundaries. Second, it is dishonest, as heterogeneity is often held together with the digital equivalent of duct tape and chewing gum. Third, it carries paternalist values, as it deprives the user of meaningful participation in the decisions that affect his experience. Fourth, it can make it harder to foster the user appropriation and ownership critical to a positive experience of technology, as there is no “handle” or way to reach into a system.

Spatial positioning has accuracy limits and carries a representational scheme of finite scope (e.g. a room with a timestamp). I work on how much of these seams, and how, can be exposed to the users in a collaborative setting. How should this type of information be regulated?

Thesis 43: Everyware produces a wide belt of circumstances where human agency, judgment, and will are progressively supplanted by compliance with external standards and norms
Paraphrasing Marshall McLuhan, “when we rely on technical systems to ameliorate the burdens of everyday life, we invariably allow our organic faculties to atrophy to a corresponding degree”. It makes me think of Michel Serres’s description of exo-darwinism, which says that we progressively externalize our faculties, from memory to reasoning and imagination. We lose some functionalities to free us from their constraints and therefore evolve. Now, at a lower level (closer to my thesis), Nicolas already mentioned the design issues behind the automatic disclosure of location-awareness information in his research. I am wondering how much of that applies to my work. Maybe resolving or disambiguating uncertainty has an impact on the level of immersion. Maybe the user should be part of the disambiguation process…

Thesis 48: Those developing everyware may have little idea that this is in fact what they are doing
I am naturally pleased to see people like Adam taking the heat off the back of engineers. He mentions the material, economic and time constraints bounding the engineer’s work. However, he does not give us credit for having the ability (the time?) to contextualize our work. I think this is getting less and less true, as our work in everyware is getting closer to social constraints. Many engineers are now getting potty trained. Last year I tried to sketch the constraints in the context of ubicomp (Adam’s material constraints sit behind the technical and physical (environment) constraints, I think that time is part of the economic constraints, and as I stated above, the social constraints are now part of the boundaries as well)
Ubicomp Constraints

Then Adam goes on to list the high-level infrastructural problems everyware faces, such as balkanized developments and standards resulting in a lack of interoperability, the non-existent network infrastructure needed to have everyware everywhere, the lack of formalized ways “à la UML” to discuss everyware issues and, last but not least, the “why?” question (thesis 58): we have forgotten to ascertain whether or not everyware makes any sense to anyone outside the contours of our ubicomp consensual hallucination. To this list I would add the still primitive information retrieval techniques we have to face the amount of data generated by everyware. This is the type of “AI-hard” problem similar to computationally grasping a physical, social and cultural context.

Adam’s ethical guidelines for the design and deployment of ubiquitous technology come naturally in the last part of the book. Among them, the ones I relate to the most are:
- We’re not very good at doing “smart” yet (and smartness will probably never be achieved at a holistic level…)
- Everyware must default to harmless and go beyond the engineering principle of “graceful degradation”
- Everyware must be self-disclosing, and seamlessness could be an optional mode of presentation
- Ubiquitous systems must not introduce undue complication into ordinary operation
- Ubiquitous systems must offer users the ability to opt out, at any point

The book bibliography is online.


The Management of Human Errors in User-Centered Design

Posted: April 20th, 2006 | No Comments »

Rizzo, A., Parlangeli, O., Marchigiani, E., Bagnara, S. (1996). Guidelines for managing human error. SIGCHI Bulletin, 12, 312-320.

A paper on the role of error analysis in usability evaluation, providing guidelines aimed at supporting a qualitative analysis of human error.

Relation to my thesis: I am more interested in the management of system “errors” (due to limitations and constraints, not bad design). But it is also good to be aware of the user-centered design practices for evaluating and dealing with human errors. Ultimately the same goal is shared: minimize the incidence of errors, maximize error detection, and make error recovery easier. In the case of spatial awareness, an “error” is rather a subjective interpretation. Location accuracy is not binary, nor discrete, but continuous, as the sketch below suggests.
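A minimal sketch of that last point (my own illustration, with made-up thresholds): instead of flagging a position fix as valid or erroneous, a location-aware system can expose the continuous accuracy estimate and let the interface or the users interpret it.

    def describe_fix(accuracy_m: float) -> str:
        """Map a continuous accuracy estimate (in meters) to a qualitative description.

        The thresholds are arbitrary examples: the point is that there is no single
        cut-off separating a 'correct' position from an 'error'.
        """
        if accuracy_m <= 10:
            return "room-level position"
        if accuracy_m <= 50:
            return "building-level position"
        if accuracy_m <= 500:
            return "neighbourhood-level position"
        return "coarse position, treat with caution"

    for accuracy in (5, 35, 200, 1200):
        print(accuracy, "m ->", describe_fix(accuracy))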


Making Sense of Sensing Systems: Five Questions for Designers and Researchers

Posted: April 20th, 2006 | No Comments »

Bellotti, V., Back, M., Edwards, W. K., Grinter, R. E., Henderson, A., and Lopes, C. 2002. Making sense of sensing systems: five questions for designers and researchers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 – 25, 2002). CHI ’02. ACM Press, New York, NY, 415-422.

Bellotti et al. draw inspiration from human-human interaction (HHI) studies to inform the design of interaction with sensing systems (ubiquitous environments). Their claim is that since we are moving away from standard graphical user interfaces, we must reframe interaction. Norman’s gulfs of execution and evaluation must be tackled again in ubiquitous computing. Based on an emphasis on communication rather than cognition in Norman’s seven stages of action, they present five questions for designing interaction with sensing systems:

Bellotti Five Questions

Each question has a relevant aspect for my interest in spatial uncertainty (and more specifically in the uncertainty generated by the sensed context); a rough sketch after the list illustrates how some of these concerns could surface in a location-aware system:

  • Address: how to disambiguate the intended target among systems that are triggered by location
  • Attention: provide relevant feedback about the system’s attention (i.e. its accuracy, update rate, …); keep users aware of what their peers are learning about them (their location, its accuracy, …)
  • Action: diminish uncertainty about likely and acceptable actions
  • Alignment: determine and provide relevant locations to the users
  • Accident: without a GUI, ambiguity is a serious problem
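Here is the sketch, a hypothetical Python illustration of how the attention and accident questions could translate into a location-aware system’s feedback; the class, thresholds and messages are all invented:

    import time

    class LocationFeedback:
        """Toy illustration of disclosing a sensing system's attention and uncertainty."""

        def __init__(self, update_period_s: float, max_accuracy_m: float):
            self.update_period_s = update_period_s  # how often the system senses (attention)
            self.max_accuracy_m = max_accuracy_m    # accuracy beyond which we warn (accident)
            self.last_update = None

        def report(self, position, accuracy_m: float) -> str:
            """Build a feedback message that discloses staleness and accuracy."""
            now = time.time()
            age = 0.0 if self.last_update is None else now - self.last_update
            self.last_update = now

            message = f"Position {position} (±{accuracy_m:.0f} m, sensed {age:.0f} s ago)"
            if accuracy_m > self.max_accuracy_m:
                # Accident: surface the ambiguity instead of silently picking a location.
                message += " -- low confidence, please confirm your location"
            return message

    feedback = LocationFeedback(update_period_s=30, max_accuracy_m=100)
    print(feedback.report(position=(46.52, 6.57), accuracy_m=250))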

Relation to my thesis: Ubiquitous computing suffers from a contradiction. Ubicomp designers attempt to create “invisible interfaces” in which the UI “disappears” into the environment while still maintaining strong communication conventions. The challenge is to develop location-aware ubiquitous systems that can communicate more naturally and effectively with people, even when they reach states of uncertainty; that is, avoiding Norman’s gulfs of execution and evaluation. Getting inspiration from HHI is relevant for finding clues on how to manage and repair the communication between humans and sensing systems (such as dealing with ambiguity and uncertainty).

Reblogged from my own Making Sense of Sensing Systems: Five Questions for Designers and Researchers


BigDog

Posted: April 19th, 2006 | No Comments »

Humanoid-looking robots are boring. They rarely display life-like properties. Shapeless, ugly robots are definitely much more attractive because they give a better sense of autonomy and adaptation. BigDog, developed by Boston Dynamics, is powered by a pleasantly annoying gasoline engine that drives four articulated legs. It can handle many types of terrain and conditions, as demonstrated by the video:

In addition, I find Dario Floreano‘s work on bio-inspired bots using self-assembling, self-organising capabilities (such as the MicroFlyer) very inspiring for the design of pleasant “intelligent” environments.


Back from Asturias

Posted: April 19th, 2006 | No Comments »

Oil Vinegar