Intelligibility and Accountability: Human Considerations in Context-Aware Systems

Posted: February 18th, 2006 | 1 Comment »

Bellotti, V.; Edwards, W. K. Intelligibility and accountability: human considerations in context-aware systems. Human-Computer Interaction. 2001. 16(2-4): 193-212.

In this essay Bellotti and Edwards argue that there are human aspects of context that cannot be sensed or even inferred by technological means, so context-aware systems cannot be designed simply to act on our behalf. It is the human and social aspects of context that seem to raise the most vexing questions, because people, unlike systems and devices, make unpredictable judgments about context. In other words, they improvise (Suchman, 1987).

Although these are the very aspects of context that are difficult or impossible to codify or represent in a structured way, they are, in fact, crucial to making a context-aware system a benefit rather than a hindrance or—even worse—an annoyance.

This entails making certain contextual details and system inferences visible to users in a principled manner and providing effective means of controlling possible system actions.

Context-aware systems mediate between people; both the systems and their users must be accountable:

Users need to be able to understand how a system is interpreting the state of the world. Context-aware systems must be intelligible as to their states, “beliefs,” and “initiatives” if users are to be able to govern their behavior successfully (Dourish, Accounting for System Behaviour: Representation, Reflection and Resourceful Action, 1997). [...] context-aware systems must also provide mechanisms that enforce accountability of users to each other.

Bellotti and Edwards propose two crucial features to support users in making their own inferences:

Intelligibility: Context-aware systems that seek to act upon what they infer about the context must be able to represent to their user what they know, how they know it, and what they are doing about it.

Accountability: Context-aware systems must enforce user accountability when, based on their inferences about the social context, they seek to mediate user actions that impact others.

However, there are drawbacks in deferring power to the user:

  • If systems don’t do anything, there will be too many matters that users must deal with themselves, somewhat undermining the point of context-aware systems.
  • Even if the system is enabled to take action, it will constantly be annoying the user with warnings or queries if it can’t go ahead and do things on its own.

Therefore the authors present different design strategies (probably based on a probabilistic estimate of how likely the system's inference is to be correct) to keep control with the user while minimizing human effort:

  • If there is only slight doubt about what the desired outcome might be, the user must be offered an effective means to correct the system action.
  • If there is significant doubt about the desired outcome, the user must be able to confirm the action the system intends to take.
  • If there is no real basis for inferring the desired outcome, the user must be offered available choices for system action.
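
These three strategies amount to a dispatch on the system's confidence in its own inference. A minimal sketch in Python (the numeric thresholds and strategy names are my own illustrative assumptions, not from the paper):

```python
def mediate(confidence: float) -> str:
    """Map the system's confidence in its inferred outcome to a control strategy."""
    if confidence > 0.9:
        return "act-with-correction"  # slight doubt: act, but offer a means to correct
    elif confidence > 0.5:
        return "confirm"              # significant doubt: ask the user to confirm first
    else:
        return "offer-choices"        # no real basis: present the available choices
```

The interesting design question is where the thresholds sit, since they decide how often the system interrupts the user at all.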

Relation to my thesis: Another essay on the balance between visibility and control, and on empowering users of context-aware systems to reason for themselves about the nature of their systems and environment and to decide how best to proceed. This vision is supported by two key features of context-aware infrastructure: intelligibility and accountability. The authors discuss strategies to minimize the human effort. It would be interesting to analyze under what conditions these have positive and negative impacts on the individual and on group effort.


Pervasive Computing: Vision and Challenges

Posted: February 18th, 2006 | No Comments »

M. Satyanarayanan, “Pervasive Computing: Vision and Challenges,” IEEE Personal Communications, August 2001.

This paper sets out the challenges in computer systems posed by pervasive computing. To stay focused, it avoids digressions into areas important to pervasive computing such as human-computer interaction, expert systems, and software agents.

Two distinct earlier steps in the evolution of pervasive computing are distributed systems and mobile computing. Some of the technical problems in pervasive computing correspond to problems already identified and studied in those fields. In some cases, the demands of pervasive computing are sufficiently different that new solutions have to be sought.

The research agenda of pervasive computing subsumes that of mobile computing, but goes much further. Moreover, the whole of the technical challenges is much greater than the sum of its parts.

Satyanarayanan keeps a pragmatic approach to invisibility:

In practice, a reasonable approximation of this ideal (invisibility) is minimal user distraction

and talks about balancing proactivity and transparency towards the user:

Proactivity is a double-edged sword. Unless carefully designed, a proactive system can annoy a user and thus defeat the goal of invisibility. How does one design a system that strikes the proper balance at all times? Self-tuning can be an important tool in this effort. A mobile user’s need and tolerance for proactivity are likely to be closely related to his level of expertise on a task and his familiarity with his environment. A system that can infer these factors by observing user behavior and context is better positioned to strike the right balance.
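
Satyanarayanan's self-tuning idea can be caricatured as a feedback loop that raises or lowers the system's proactivity from observed user reactions. A minimal sketch, where the update rule, step size, and clamping are entirely my own assumptions:

```python
def update_proactivity(level: float, accepted: bool, step: float = 0.1) -> float:
    """Nudge the system's proactivity up when a suggestion is accepted,
    down when it is dismissed, clamped to the range [0, 1]."""
    level += step if accepted else -step
    return max(0.0, min(1.0, level))
```

A real system would infer expertise and familiarity from richer signals than accept/dismiss clicks, but the structure is the same: the balance point is learned, not fixed at design time.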

Taxonomy of Computer Systems Research Problems in Pervasive Computing (figure). Already blogged in Wireless Campus LBS.

Relations to my thesis: Invisibility is about minimal user distraction. I am not sure about “minimal”… I would replace it with relevant. I like the taxonomy that shows the new layers of complexity. The whole of all these layers is much greater than the sum of the parts. Understanding the balance between annoying proactivity and inscrutable transparency is at the heart of my work.


Coping with Uncertainty

Posted: February 17th, 2006 | No Comments »

M. Satyanarayanan. “Coping with Uncertainty,” IEEE Pervasive Computing, vol. 02, no. 3, p. 2, July-September, 2003.

The Editor in chief’s note of the IEEE Pervasive Computing issue focusing on systems that deal with uncertainty.

M. Satyanarayanan notes that digital computing allowed us to eliminate uncertainty in state representation and transformation, and that it is ironic that in today's all-digital world, uncertainty reappears as a major concern at a higher level of representation.

Dealing with uncertainty in pervasive environments might be addressed through subtle system-to-user and user-to-system communication. Both quantitative and qualitative approaches seem valid:

How does a system strike a happy medium at all times, even when the environment or user context changes?

How verbose should a system be in keeping its user informed about what is going on underneath?

Relation to my thesis: This issue of IEEE Pervasive Computing was one of the first to address uncertainty. Even with its quantitative approach (Bayesian filtering for location estimation, or modeling the user's patience), it acknowledges more qualitative approaches. Good, since I plan to mix the two approaches.


Awareness and Coordination in Shared Workspaces

Posted: February 16th, 2006 | No Comments »

Dourish, P. and Bellotti, V. (1992). Awareness and Coordination in Shared Workspaces. Proceedings of the ACM Conference on Computer-Supported Cooperative Work CSCW’92 (Toronto, Ontario), 107-114. New York: ACM.

Awareness is an understanding of the activities of others, which provides a context for your own activity. This context is used to ensure that individual contributions are relevant to the group’s activity as a whole, and to evaluate individual actions with respect to group goals and progress.

Awareness information can be explicitly generated, directed, and kept separate from the shared work object, or passively collected, distributed, and presented in the same shared workspace as the object of collaboration.

Dourish and Bellotti suggest that awareness information provided and exploited passively through the shared workspace allows users to move smoothly between close and loose collaboration.

Most awareness systems embody the assumption that simple awareness of others' activity needs to be augmented with explicit or restrictive mechanisms to ensure easy collaboration. However, there are three potential problems:

  • The price of heightened awareness for the group is clearly a restriction of the potential activities of individuals
  • Individuals will receive what the initiator of the information deems to be appropriate. However, appropriateness can only be determined in the context of the other individuals' activities
  • Delivery is controlled more by the sender than by the recipient

Relation to my thesis: I am interested in explicitly generated and passively collected information about uncertainty and uncertainty-awareness in collaborative environments in general.


On Uncertainty in Context-Aware Computing: Appealing to High-Level and Same-Level Context for Low-Level Context Verification

Posted: February 16th, 2006 | 1 Comment »

Padovitz A., Loke S. W., Zaslavsky A. On Uncertainty in Context-Aware Computing: Appealing to High-Level and Same-Level Context for Low-Level Context Verification. In S. K. Mostefaoui et al. (eds.), International Workshop on Ubiquitous Computing, 6th International Conference on Enterprise Information Systems (ICEIS), 2004, INSTICC Press, Portugal, pp. 62-72.

In context-aware systems, the factors that promote uncertainty are:

  • Unsatisfactory combination of attribute types to infer
  • Intrinsic ambiguity between two or more situations that impedes straightforward reasoning about the correct context
  • Inherent inaccuracy and unreliability of many types of low-level sensors, which may lead to contradictory or substantially different reasoning about context (the focus of this paper).

A context-aware system needs to resolve these discrepancies, as well as the high-level context ambiguities that result from contradictory sensor readings. Sensors sometimes yield different or contradictory results, since many low-level sensors are inherently inaccurate. The authors' approach is:

We suggest a general high-level, logical approach that makes use of existing context reasoning and acquisition techniques that enables a context-aware system to resolve context ambiguities and optimize sensor-reading values. Our approach is the following: in order to verify a given sensor reading (i.e. low-level contextual information) such as location or light, we use other sensor readings and inferences upon such sensor readings.

We present a system prototype that filters sensed location readings according to a logical scheme using high-level contextual situations. We also present a simulation, used for critically assessing the logical filtering approach.

Logical filtering generally reduces location error. However, the degree of success of such an approach depends on the suitability of the system's contextual configuration: the environment must somehow be controlled.
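
The logical-filtering idea can be sketched as rejecting low-level readings that are inconsistent with other readings or with an inferred high-level situation. The predicates and zone names below are illustrative assumptions of mine, not the authors' prototype:

```python
def filter_locations(candidates, checks):
    """Keep only the location candidates that pass every consistency check."""
    return [loc for loc in candidates if all(check(loc) for check in checks)]

# Example: a badge reader says the user is indoors and the light level is low,
# so GPS fixes placing the user outdoors or in bright sunlight are rejected.
indoors = lambda loc: loc["zone"] != "outdoor"
low_light_ok = lambda loc: loc["zone"] != "sunny-courtyard"

readings = [{"zone": "office"}, {"zone": "outdoor"}, {"zone": "corridor"}]
plausible = filter_locations(readings, [indoors, low_light_ok])
```

This also makes the limitation visible: the filter is only as good as the consistency checks, which is exactly the dependence on the contextual configuration noted above.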

Related to other positivist approaches, such as Towards Reasoning About Context in the Presence of Uncertainty.

References to read:
Glassey R., Ferguson I., Modeling Location for Pervasive Environments, First UK-UbiNet Workshop, London, UK, 2003

Mäntyjärvi J., Seppänen T., Adapting Applications in Mobile Terminals Using Fuzzy Context Information, Mobile Human-Computer Interaction: 4th International Symposium, Mobile HCI 2002, Pisa, Italy, September 18-20, 2002

M. Satyanarayanan. “Coping with Uncertainty,” IEEE Pervasive Computing, vol. 02, no. 3, p. 2, July-September, 2003.

Relation to my thesis: This paper provides a positivist (quantitative/engineering) approach to my research topic on how to handle the uncertainties that emerge when systems try to become aware at runtime and are indecisive in reasoning about the true situation. The difference with my approach is that they try to computationally (with a probability-based solution) decrease uncertainty, while I think this is not enough: communicating the system's state and its discrepancies is necessary to disambiguate uncertain situations (or at least to support disambiguation). Moreover, for them context is mostly about accurately detecting a location.


The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility

Posted: February 16th, 2006 | No Comments »

Mark S. Ackerman. “The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility.” John Carroll (ed.), HCI in the New Millennium, Addison-Wesley, 2001.

In CSCW, there is an inherent gap that divides what we know we must support socially and what we can support technically. Exploring, understanding, and ameliorating this gap is the central challenge of CSCW as a field and one of the central problems for HCI.

CSCW assumptions and findings:

  • Social activity is fluid and nuanced, and this makes systems technically difficult to construct properly and often awkward to use (Garfinkel 1967; Strauss 1993).
  • Members of organizations sometimes have different (and multiple) goals, and conflict may be as important as cooperation in obtaining issue resolutions (Kling, 1991)
  • Exceptions are normal in work processes (Suchman & Wynn, 1984)
  • People prefer to know who else is present in a shared space, and they use this awareness to guide their work (Erickson, et al., 1999)
  • Visibility of communication exchanges and of information enables learning and greater efficiencies (Hutchins, 1995)
  • The norms for using a CSCW system are often actively negotiated among users (Strauss, 1991)
  • There appears to be a critical mass problem for CSCW systems (Markus, 1990)
  • People not only adapt to their systems, they adapt their systems to their needs (co-evolution) (Orlikowski, 1993; O’Day, Bobrow, Shirley, 1996)
  • Incentives are critical

There are two major arguments against the importance of any social-technical gap:

1. Some new technology or software technique will shortly solve the gap (unlikely)
2. The gap is merely a historical circumstance and we will adapt to it in some form (co-evolution: we adapt resources in the environment to our needs; our culture will adapt itself to the limitations of the technology, so the technical limitations are not important). This goes against a central premise of HCI that we should not force users to adapt.

If the social-technical gap is real, important, and likely to remain, then we must

  1. ameliorate the effects of the gap
  2. further understand the gap

So far, CSCW has only been working on first-order approximations, that is, tractable solutions that partially solve specific problems with known trade-offs. CSCW shares the problems of generalizability from small groups to a general population (as do all social sciences), of predicting affordances (as does HCI), and of the applicability of new technological possibilities (as does the rest of computer science).

Study Design Construction Cycle

Relation to my thesis: This paper provides an overview of CSCW and the high-level challenge framing my thesis (I am less interested in the CSCW-as-a-science section). My thesis has a natural emphasis on “what we can support technically”, how to deal with the limitations when they are hardly manageable due to the complexity of the real world, and how this impacts the social. Ways to find a balance between technically working and organizationally workable systems in ubicomp. My work is linked to Greenberg and Marwood, CSCW technical researchers who demonstrated the social-technical gap (Marwood, B., & Greenberg, S. (1994). Real Time Groupware as a Distributed System: Concurrency Control and Its Effect on the Interface. Proceedings of the Computer Supported Cooperative Work: 207-217).

If concurrency control is not established, people may invoke conflicting actions. As a result, the group may become confused because displays are inconsistent, and the groupware document corrupted due to events being handled out of order. (p. 207)
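
Greenberg and Marwood's point can be illustrated with the simplest possible concurrency-control scheme: an optimistic version check that rejects edits made against a stale copy instead of silently applying them out of order. This sketch is my own illustration, not their groupware design:

```python
class SharedDocument:
    """Toy shared document with optimistic concurrency control: an edit is
    applied only if it was made against the current version, so conflicting
    or out-of-order edits are rejected rather than corrupting the state."""

    def __init__(self):
        self.version = 0
        self.content = ""

    def apply(self, base_version: int, new_content: str) -> bool:
        if base_version != self.version:
            return False          # conflict: the client must re-read and retry
        self.content = new_content
        self.version += 1
        return True
```

Even this toy shows the interface consequence the authors care about: the rejected user must somehow be told why their action failed, which is a social question as much as a technical one.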

My claim is that a technical solution is unlikely, and co-evolution does not solve everything, especially with the constant evolution of technologies and our techno-push world. However, I am wondering how to go beyond first-order approximations without merely contributing “cool toys”.


3GSM Gatherings

Posted: February 15th, 2006 | 2 Comments »

The informal Mobile Sunday Barcelona (set up by Stuart Mudie) and the 3GSM Gathering of the Mobilists (organized by Rudy de Waele and Gotomedia) were opportunities to meet practitioners and observers of the mobile industry, including:

Markus Angermeier, accessibility expert, creative design director for Aperto, consultant for Plazes (mobile phone demo and brainstorming on proximity-based scenarios) and the German Bundeswehr, initiator of the Web 2.0 mindmap, and self-proclaimed world's biggest Beatles fan.

Russell Buckley from MobHappy; discussion on going beyond preaching to the converted on LBS issues, and why the message and clues fail to be understood in a techno-push world.

Alex Kummerman, Clickmobile, LBS and social software enthusiast. Interesting concept of providing a location-based social network management platform. I would find this an interesting supporting tool for informal, elastic communities thriving on spontaneity that deal with mobility beyond an area or city level. One scenario involved the members of a sports team fan club. Alex mentioned Michel Simatic's research on multiplayer mobile games and its technical constraints.

Oliver Starr from MobileCrunch, who ranted about ubiquitous computing and how it does not work even when the technology is available. His luggage was loaded on a flight to Milan instead of BCN and he had to jump on that plane for security reasons (luggage cannot fly without its passenger). Unfortunately, during the transfer in Milan his luggage did not follow him and stayed in Italy. A great story about objects as first-class citizens and the malfunction of tracking technologies.

Josep Ganyet, well-traveled teacher at the UPF, with a focus on usability testing. Talks about Don Norman's emotional design, Spanish male multi-tasking, and BCN in general. He Flickr'd the cocktail.

Jaakko Villa, CEO of Idean Research. A few words about their user experience research methodologies…

David Mery, Symbian “evangelist” (a company that obviously needs one…); discussion on Symbian's recent efforts to support its developer community and its plan to open up to the lower-end smartphone market. No word about David's past as a suspected terrorist.


Relation to my thesis: Exchanging with people from the industry, on their own playground, is a healthy exercise. Oliver Starr's story supports my claim about the utopia of ubicomp spread by perfect “I want to track and keep control of my luggage” scenarios, whereas in the real world dealing with exceptions is the rule and heterogeneity emerges from complex physical, economic, and human constraints. 3GSM carries this seamless picture of a mobile and connected world. I still find it ironic that people talking about the wireless world get stuck on real-world constraints: Russell Beattie's “Dial up… Wow, it still works” and Stuart Mudie's “For the past five days, I've been living in the future – the mobile future”.


Lessons from Clumsy Automation

Posted: February 14th, 2006 | No Comments »

Woods, D. D. (1997). Human-centered software agents: Lessons from clumsy automation. In J. Flanagan, T. Huang, P. Jones, & S. Kasif, S. (Eds.), Human centered systems: Information, interactivity, and intelligence (pp. 288–293). Washington, DC: National Science Foundation.

This paper is about the operational complexity, difficulty, and new challenges generated by automated systems that are not human- and practice-centered. These kinds of systems become a burden instead of assisting us by reducing the user's work and information load.

The data shows that “strong, silent, difficult to direct automation is not a team player” (Woods, 1996)

Automation surprises begin with miscommunication and misassessments between the automation and users which lead to a gap between the user’s understanding of what the automated systems are set up to do, what they are doing, and what they are going to do.

Some of these systems are perceived as black boxes that do not provide any visibility into agent activities or any level of management. On the other hand, too much data and complete flexibility overwhelm users.

The key to research on human-centered software agents is to find levels and types of feedback and coordination that support team play between machine subordinates and human supervisors, helping the human user achieve their goals in context.

Reference to read:
Norman, D.A. (1990). The ‘problem’ of automation: Inappropriate feedback and interaction, not ‘over-automation.‘ Philosophical Transactions of the Royal Society of London, B 327:585–593.

The paper gives a list of references in other domains, such as medicine, aviation, and electronic troubleshooting.

Relation to my thesis: Ubiquitous environments must become team players and go beyond the techno-push vision of “if we build them, the benefits will come”. Our research already shows that automated location awareness created some sort of inertia in terms of communication and strategy planning. Ubicomp systems should have some level of manageability, “seamfulness”, and visibility (the visible disappearing computer?). My interest in uncertainty is tightly related to the “automation surprise” described:

Automation surprises begin with miscommunication and misassessments between the automation and users which lead to a gap between the user’s understanding of what the automated systems are set up to do, what they are doing, and what they are going to do.


Implications for Design

Posted: February 13th, 2006 | 1 Comment »

Dourish, P. 2006. Implications for Design. Proc. ACM Conf. Human Factors in Computing Systems CHI 2006 (Montreal, Canada).

Often ethnography is seen as an approach to field investigation that can generate requirements for systems developments. Dourish suggests that “implication for design” may not be the best metric for evaluation and may fail to capture the value of ethnographic investigations.

The term “ethnography,” indeed, is often used as shorthand for investigations that are, to some extent, in situ, qualitative, or open-ended. Similarly, the term is often used to encompass particular formulations of qualitative research methods such as Contextual Inquiry (Beyer, H. and Holtzblatt, K. 1997. Contextual Design: Defining Customer-Centered Systems. Morgan Kaufman.).

This view of ethnography as purely methodological and instrumental supports the idea that “implications for design” are the sole output of ethnographic investigation.

and argues this way

In this way, the domain of technology and the domain of everyday experience cannot be separated from each other; they are mutually constitutive. The role of ethnography, then, cannot be to mediate between these two domains, because ethnography does not accept their conceptual separation in the first place.

What I have tried to argue here is that a bullet list of design implications formulated by an ethnographer is not the most effective or appropriate method. Ethnography provides insight into the organization of social settings, but its goal is not simply to save the reader a trip; rather, it provides models for thinking about those settings and the work that goes on there.

More than the discussion on ethnography, I got really interested in the discussion of the social-technical gap introduced by Ackerman, M. 2000. The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility. Human-Computer Interaction, 15, 179-203, and in how people adapt to technologies.

Ackerman critiques the intuition that people adopt and adapt technologies because the technologies are poorly designed, and that better designed technologies would obviate the need for such adaptation and appropriation.

Certainly, though, what it does is to refigure “users” not as passive recipients of predefined technologies but as actors who collectively create the circumstances, contexts, and consequences of technology use. HCI research has, of course, long had an interest in aspects of the ways in which people might configure, adapt, and customize technologies

Reference to read:
Ackerman, M. 2000. The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility. Human-Computer Interaction, 15, 179-203.

Relation to my thesis: I am trying to understand in what ways ethnography can be applied (or not!) in my research, and how I can make the output of my thesis go beyond a bullet list of implications for the design of ubicomp environments. I discovered Ackerman's vision of people adapting to technologies, and the idea that even badly designed technologies get adapted and then appropriated. This goes in the direction in which I see the relation between technology and people: imperfect, non-flat, anti-seamless technology can be good for appropriation. The positive sides of imperfect technology.


3GSM World Congress…

Posted: February 13th, 2006 | No Comments »

… has started.