Lessons from Clumsy Automation
Posted: February 14th, 2006

Woods, D. D. (1997). Human-centered software agents: Lessons from clumsy automation. In J. Flanagan, T. Huang, P. Jones, & S. Kasif (Eds.), Human centered systems: Information, interactivity, and intelligence (pp. 288–293). Washington, DC: National Science Foundation.
This paper is about the operational complexity, difficulties, and new challenges generated by automated systems that are not human- and practice-centered. Such systems become a burden instead of reducing the user’s workload and information load.
The data shows that “strong, silent, difficult to direct automation is not a team player” (Woods, 1996).
Automation surprises begin with miscommunication and misassessments between the automation and users, which lead to a gap between the user’s understanding of what the automated systems are set up to do, what they are doing, and what they are going to do.
Some of these systems are perceived as black boxes that provide neither visibility into agent activities nor a means of managing them. On the other hand, too much data and complete flexibility overwhelm users.
The key to research on human-centered software agents is to find the levels and types of feedback and coordination that support team play between machine subordinates and their human supervisor, helping the human user achieve their goals in context.
Reference to read:
Norman, D. A. (1990). The ‘problem’ of automation: Inappropriate feedback and interaction, not ‘over-automation’. Philosophical Transactions of the Royal Society of London, B 327, 585–593.
The paper gives a list of references from other domains such as medicine, aviation, and electronic troubleshooting.
Relation to my thesis: Ubiquitous environments must become team players and go beyond the techno-push vision of “if we build them, the benefits will come”. Our research already shows that automated location awareness created a form of inertia in communication and strategy planning. Ubicomp systems should have some level of manageability, “seamfulness”, and visibility (the visible disappearing computer?). My interest in uncertainty is tightly related to the “automation surprise” described:
“Automation surprises begin with miscommunication and misassessments between the automation and users, which lead to a gap between the user’s understanding of what the automated systems are set up to do, what they are doing, and what they are going to do.”