Prototypes in the Wild: Lessons from Three Ubicomp Systems
Posted: July 17th, 2006

Carter, S. and Mankoff, J. 2005. Prototypes in the Wild: Lessons from Three Ubicomp Systems. IEEE Pervasive Computing 4, 4 (Oct. 2005), 51-57.
Ubicomp research now tends to explore evaluation techniques, including field studies, that drive invention, early-stage requirements gathering, and prototyping iteration. The authors evaluated three ubicomp systems at multiple design stages to provide a better understanding of how ubicomp evaluation techniques should evolve. The designer must understand how to meet user needs (what is evaluated) within the limits of feasibility, which depend on the availability of network connectivity and data, of sensors and algorithms for interpreting the data they produce, and of tools with which to ease the building of applications.
The authors’ suggested implication for evaluating interactive prototypes:
Based on our experiences, we feel that field-based interactive prototypes provide invaluable feedback on a system’s use and co-evolution. However, they’re difficult and time consuming to deploy, and maintaining them unobtrusively is challenging. Designing for remote updates and using local champions and participatory design might mitigate these issues.
Relation to my thesis: I am considering evaluating my design (e.g. an intelligible system to cope with spatial uncertainty) in a field study to determine how well it performs. Based on my experience with CatchBob!, I am considering writing an article on the “Lessons learned from the design and deployment of a pervasive game”.