Intelligibility and Accountability: Human Considerations in Context-Aware Systems
Posted: February 18th, 2006

Bellotti, V. and Edwards, W. K. (2001). Intelligibility and accountability: Human considerations in context-aware systems. Human-Computer Interaction, 16(2-4), 193-212.
In this essay Bellotti and Edwards argue that there are human aspects of context that cannot be sensed or even inferred by technological means, so context-aware systems cannot be designed simply to act on our behalf. It is the human and social aspects of context that raise the most vexing questions, because people, unlike systems and devices, make unpredictable judgments about context; in other words, they improvise (Suchman, 1987).
Although these are the very aspects of context that are difficult or impossible to codify or represent in a structured way, they are, in fact, crucial to making a context-aware system a benefit rather than a hindrance or—even worse—an annoyance.
This entails making certain contextual details and system inferences visible to users in a principled manner and providing effective means of controlling possible system actions.
Context-aware systems mediate between people, so both the systems and their users must be accountable:
Users need to be able to understand how a system is interpreting the state of the world. Context-aware systems must be intelligible as to their states, “beliefs,” and “initiatives” if users are to be able to govern their behavior successfully (Dourish, Accounting for System Behaviour: Representation, Reflection and Resourceful Action, 1997). [...] context-aware systems must also provide mechanisms that enforce accountability of users to each other.
Bellotti and Edwards propose two crucial features to support users in making their own inferences:
Intelligibility: Context-aware systems that seek to act upon what they infer about the context must be able to represent to their user what they know, how they know it, and what they are doing about it.
Accountability: Context-aware systems must enforce user accountability when, based on their inferences about the social context, they seek to mediate user actions that impact others.
However, there are drawbacks to deferring power to the user:
- If systems don’t do anything, there will be too many matters that users must deal with themselves, somewhat undermining the point of context-aware systems.
- Even if the system is enabled to take action, it will constantly annoy the user with warnings or queries if it can’t go ahead and do things on its own.
Therefore, the authors present different design strategies (probably based on a probabilistic estimate of how likely the system’s inference is to be correct) to control and minimize the human effort:
- If there is only slight doubt about what the desired outcome might be, the user must be offered an effective means to correct the system action.
- If there is significant doubt about the desired outcome, the user must be able to confirm the action the system intends to take.
- If there is no real basis for inferring the desired outcome, the user must be offered available choices for system action.
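The three strategies above amount to a graded, confidence-based interaction policy. As a minimal sketch (the thresholds, function name, and the meeting example are my own illustration, not from the paper), the policy could look like this:

```python
# Hypothetical sketch of Bellotti & Edwards' three design strategies as a
# confidence-threshold policy. All thresholds and names are illustrative.

def choose_interaction(confidence, action, alternatives):
    """Decide how much to involve the user in a context-triggered action.

    confidence:   system's estimated probability (0..1) that `action`
                  matches the desired outcome
    action:       the single best-guess action, or None if no basis exists
    alternatives: candidate actions to offer when the system cannot infer
    """
    if action is not None and confidence >= 0.9:
        # Slight doubt: act, but give the user an effective means
        # to correct (undo) the system action.
        return ("act_with_undo", action)
    elif action is not None and confidence >= 0.5:
        # Significant doubt: ask the user to confirm before acting.
        return ("confirm", action)
    else:
        # No real basis for inferring the desired outcome:
        # present the available choices instead of acting.
        return ("offer_choices", alternatives)

# Example: a phone deciding whether to silence itself in a meeting.
print(choose_interaction(0.95, "silence_ringer",
                         ["silence", "vibrate", "leave_on"]))
# -> ('act_with_undo', 'silence_ringer')
```

The point of structuring it this way is that the user’s burden scales with the system’s doubt: a correctable action is cheaper for the user than a confirmation, which is cheaper than choosing from scratch.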
Relation to my thesis: Another essay on the balance between visibility and control, and on empowering users of context-aware systems to reason for themselves about the nature of their systems and environment and to decide how best to proceed. This vision is supported by two key features of context-aware infrastructure: intelligibility and accountability. The authors discuss strategies to minimize the human effort. It would be interesting to analyze under what conditions these strategies have a positive or negative impact on the individual and on group effort.
Check out this:
http://www.thefinalmile.net/blog/
The last 4 posts…
Some people appear to be making some MONEY here in context-aware applications… The question is how much Google bought them ( http://www.wi5d.net ) for, and what will happen with these context-aware search techniques. Interesting stuff.