Posted: January 18th, 2006 | No Comments »
I met Rudy De Waele, co-founder of Random One, at the Hotel Omm (a nice lobby in which to set up a mobile office). Rudy commands respect because he entered the mobile industry with a real business model: SMS for TV and sport events. Nowadays he keeps this very refreshing “no fluff, just stuff” attitude.
Rudy is a member of the Carnival of the Mobilists and organizes, under the gotomobile umbrella, the 3GSM Gathering of the Mobilists on February 14 at the Hotel Palace in Barcelona. A can’t-miss BCN event.
Posted: January 17th, 2006 | 1 Comment »
fboweb.com offers a web-based flight tracking service (and more, such as maintaining a pilot’s logbook, a facilities locator, and weather). It is pretty interesting that it targets not only aviation professionals but also aviation enthusiasts, or just the casual user who needs to track a flight. I wonder what the usage scenarios of flight tracking as a hobby might be.
Track unlimited flights! Single aircraft or entire fleets; search for flights within a certain radius; Get flight alerts sent to your email, cellphone or pager!
Similar to Flight Aware.
Posted: January 17th, 2006 | No Comments »
For one of my doctoral school courses, I plan to make a very basic model and agent-based simulation of CatchBob! (or something similar but with more players) using RePast. RePast allows modeling directly in Java and includes a GIS interface that lets agents live on maps.
A few background words on computer models and multi-agent simulations:
The goal is to gain insight into the operation of systems. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. The common feature of all computer simulation is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible.
I am interested in discrete event simulation and, more specifically, in agent-based simulation. In agent-based simulation, the individual entities in the model are represented directly (rather than in aggregate), each with its own state and rules determining how the agent’s state is updated from one time-step to the next. Agent-based simulation has been used effectively in ecology, where it is often called individual-based modeling, in situations where individual variability among the agents cannot be neglected, such as the population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
The reliability of, and the trust people put in, computer simulations depend on the validity of the simulation model; therefore verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulation is the reproducibility of the results (except when humans are part of the simulation).
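To make the individual-based modeling idea concrete, here is a minimal sketch in plain Python (not RePast, whose Java API I won’t reproduce here) of an agent-based trout population: each agent carries its own state and growth rate, so aggregate outcomes emerge from individual variability rather than from a single population-level equation. All names, parameters, and rates are purely illustrative, not taken from any actual ecological model.

```python
import random

class Trout:
    """One agent: each trout carries its own state and behavioral parameters."""
    def __init__(self, rng):
        self.mass = rng.uniform(1.0, 3.0)      # initial mass (illustrative units)
        self.growth = rng.uniform(0.05, 0.20)  # individual growth rate: agents differ
        self.alive = True

    def step(self, food, rng):
        # Growth depends on the agent's own rate and the shared food supply
        self.mass += self.growth * min(food, 1.0)
        # Mortality risk is higher for smaller individuals
        if rng.random() < 0.02 / self.mass:
            self.alive = False

def simulate(n_agents=100, n_steps=50, seed=42):
    """Run a fixed number of time-steps; return survivor count and mean mass."""
    rng = random.Random(seed)  # seeding makes the run reproducible
    agents = [Trout(rng) for _ in range(n_agents)]
    for _ in range(n_steps):
        food = rng.uniform(0.5, 1.5)  # environmental variability shared by all agents
        for a in agents:
            if a.alive:
                a.step(food, rng)
    survivors = [a for a in agents if a.alive]
    mean_mass = sum(a.mass for a in survivors) / len(survivors) if survivors else 0.0
    return len(survivors), mean_mass

n, mean_mass = simulate()
print(n, round(mean_mass, 2))
```

Note how the seeded random generator gives the reproducibility mentioned above: the same seed yields the same trajectory, which is exactly what breaks down once humans are in the loop.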
Posted: January 16th, 2006 | No Comments »
In “The Human Experience” (Abowd, Gregory D., Elizabeth D. Mynatt, and Tom Rodden, IEEE Pervasive Computing, 2002), the authors focus on physical interaction, general application features, and theories of design and evaluation to match the goals of Weiser’s human-centered vision of ubiquitous computing:
- Everyday practices of people must be understood and supported
- The world must be augmented through the provisioning of heterogeneous devices offering different forms of interactive experience
- Networked devices must be orchestrated to provide for a holistic user experience.
Defining the appropriate physical interaction experience
Advances in sensing and recognition technologies allow us to move beyond the traditional notion of input as explicit communication. There is therefore a shift from explicit toward more implicit forms of human input:
In other words our natural interactions with the physical environment provide sufficient input to a variety of attendant services, without any further user intervention. For example, walking into a space is enough to announce your presence and identity in that location.
I would temper such statements. As we saw in CatchBob!, we must be careful with the balance between implicit and explicit communication. Explicit input carries an intention; it is an act of communication, while implicit input misses this kind of contextual information. Depending on the context, stepping into a space might not be enough, or might be too much, to announce a presence. Explicit input is the user’s way to master the measured world and to control his relation between the physical and virtual worlds.
The communication from the environment to the user – the output – has become highly distributed. The challenge is to coordinate across many output locations and modalities without overwhelming our limited attention spans.
Seamless integration of physical and virtual worlds
Of course, the noble goal of “seamless integration” raises my eyebrows. I have a tendency to be more pragmatic and not advertise the “S”(eamless) word in ubicomp. Somehow, I like to stick with Weiser’s “phase I” but have nothing against reading “post-phase I” thoughts and visions. Abowd et al. notice the emergence of three features in ubicomp applications:
- We must be able to use implicitly sensed context from physical and electronic environments to determine a given service’s correct behavior.
- We must provision automated services to easily capture and store memories of live experiences and serve them up for later use.
- As we move toward the infusion of ubicomp into our everyday lives, the services provided will need to become constantly available partners with their human users, easily interrupted and easily resumed.
I am wondering here how much implicitness, automation, and omnipresence is good for the user. Where is the balance? And how well can we master this balance in uncontrolled environments? An obvious challenge is to make context-aware computing truly ubiquitous.
Everyday computing
Everyday computing promotes informal and unstructured activities typical of much of our everyday lives. It focuses on activities rather than on tasks. [...] Of course, activities and tasks are not unrelated to each other. Often an activity will comprise several tasks, but the activity itself is more than these component parts. [...] The emphasis on designing for continuously available interaction requires addressing these features of informal daily life:
- They rarely have a clear beginning or end
- Interruption is expected as users switch attention
- Multiple activities operate concurrently and might be loosely coordinated
- Time is an important discriminator in characterizing the ongoing relationship between people and computers
- Associative models of information are needed, because information is reused from multiple perspectives
Theories of design and evaluation
The shift from a single machine with an individual to a broader set of organizational and social arrangements has seen the development of new models of interaction to support the design process in broader organizational settings. The ubicomp community is currently exploring three main models of cognition as guides for future design and evaluation:
Activity theory
Built on Lev Vygotsky’s work (The Instrumental Method in Psychology):
activity theory recognizes concepts such as goals (objects), actions, and operations. However, both goals and actions are fluid, based on the world’s changing physical state instead of more fixed, a priori plans. [...] The user’s behavior is shaped by the capabilities implicit in the tool itself. Ubicomp’s efforts informed by activity theory, therefore, focus on the transformational properties of artifacts and the fluid execution of actions and operations.
Situated action
In this model, knowledge in the world continually shapes the ongoing interpretation and execution of a task. [...] Ubicomp’s efforts informed by situated action also emphasize improvisational behavior and would not require, nor anticipate, the user to follow a predefined script.
Distributed cognition
This theory focuses on the collaborative process, where multiple people use multiple objects to achieve a larger system’s goal [...] Ubicomp efforts informed by distributed cognition focus on designing for a larger system goal in contrast to using an individual appliance. These efforts emphasize how information is encoded in objects and how different users translate or transcribe that information.
Richer understanding of settings
There is an obvious need to gain a rich understanding of the everyday world to inform IT development. The challenge for ubicomp designers is to uncover the very practices through which people live and to make these invisible practices visible and available to the developers of ubicomp environments (as already mentioned in Resonances and Everyday Life: Ubiquitous Computing and the City).
Assessment of use
We must also assess the utility of ubicomp solutions:
To understand ubicomp’s impact on everyday life, we navigate a delicate balance between predicting how novel technologies will serve a real human need and observing authentic use and subsequent coevolution of human activities and novel technologies. [...] there has been surprisingly little research published from an evaluation or end-user perspective in the ubicomp community.
The need for new measures
There is still the question of how to apply qualitative and quantitative evaluation methods and do empirical evaluation with the deployment of more living laboratories:
Evaluation in HCI reflects these roots and is often predicated on notions of task and the measurement of performance and efficiency in meeting these goals and tasks. However, it is not clear that these measures can apply universally across activities when we move away from structured and paid work to other activities. [...] This shift away from the world of work means that there is still the question of how to apply qualitative or quantitative evaluation methods[...] By pushing on the deployment of more living laboratories for ubicomp research, the science and practice of HCI evaluation will mature.
Posted: January 16th, 2006 | 1 Comment »
I know Adam Greenfield from his Ethical Guidelines for Ubicomp. Now that he is about to release his much-anticipated “Everyware”, I found out in an interview he gave to InternetActu that I share some of his views on ubicomp, including:
External constraints
It seems to me that poorly designed ubicomp is inevitable. The constraints are external rather than coming from bad designers or bad technologies: they are economic (budget pressure, schedule pressure) and political (lack of support for sound design practices). Adam puts it this way:
it is not generally the case that designers are not up to the task of providing good user experiences. It is, rather, either through time or budget pressure, or lack of a respected internal constituency for sound design practice, that users and their requirements are pushed to the periphery.
Scale-up
Users facing daily frustration, self-blame, and systems not working as advertised are nothing new (Don Norman, and others…). However, now that we are moving from the desktop into omnipresence, we face a terrible challenge of scale-up and might reach the level of “intolerable experience”. Adam says:
This is distressing enough at the scale we currently encounter, but, as we’ll see, as the ambit of technical intervention and interaction begins to migrate from the desktop out into broader realms of everyday life, and from theoretical to actual, the prospect of bad user experience becomes intolerable.
The utopia
Many people describe ubicomp as seamless and adaptive. Like Adam, I question this techno-optimism and ask in what ways we want this integration and balance of control to take place. Adam:
I’ve seen a great deal of techno-optimism and even -utopianism around ubicomp, including a fair amount from people who should know better. [...] there hasn’t really been much in the way of people pushing back against the idea of ubicomp, in a measured and knowledgeable way.
Adam’s vision of ubicomp
Ubicomp is far more than “smart” objects, which might be best regarded as a symptom of a deeper paradigm just now unfolding. For me, it’s fundamentally about the surfacing of information that has always been latent in our lives; pattern recognition and machine inference based on large amounts of such information; and about the domain and scale of technical mediation contemplated – both wider and narrower, higher and lower than has been the case previously.
Posted: January 16th, 2006 | No Comments »
New Advances in RFID Help Food Traceability talks about the current constraints of RFID tags (UHF RFID, active tags, …) and about the ongoing developments to go beyond these limitations.
The limitations of Active tags vs. Passive tags
True, longer range has always been available if there is a battery in the RFID tag, and this is a viable solution for vehicles and trailers. However, these so-called active tags have a limited life and are expensive, relatively large, and have more parts to go wrong. That has meant that UHF (ultra-high frequency) passive tags have been standardized for pallets and cases of food and other produce at the behest of leading US and European retailers and the US military.
The limitations of Passive tags
UHF RFID can behave very unpredictably when water or metal is nearby, let alone in the way. As Hong Kong Airport (tagging baggage) and Metro (trial tagging of food) have found, sometimes the proximity of water or metal can prevent any reads from taking place. At other times, things can be unexpectedly and annoyingly sensed 50 meters away, creating confusion about what one is sensing. In Europe, the problems of UHF are compounded by the military and other vested interests preventing UHF radio regulations from permitting higher power and wider bandwidth, and this greatly restricts range and the control of interference between readers in trials of pallet and case tagging of food.
Meanwhile, improvements to HF RFID include:
- New ways of extending the range of HF RFID, sometimes even to ten meters.
- Password protected HF tags of controllable range, more tolerant of water and metal than those at UHF.
- RFID devices that work well with difficult substances and RFID which is even sterilization tolerant.
- Use of Surface Acoustic Wave chips to replace the silicon chips in RFID tags
Posted: January 14th, 2006 | No Comments »
Posted: January 13th, 2006 | No Comments »
Ark, W.S. and Selker, T. (1999). “A look at human interaction with pervasive computers”. IBM Systems Journal, Vol. 38, No. 4, 504-508.
This old (1999) paper was one of the first to discuss pervasive computing from the HCI perspective. It identifies the four major aspects of pervasive computing that appeal to the general population:
- Computing is spread throughout the environment
- Users are mobile
- Information appliances are becoming increasingly available
- Communication is made easier – between individuals, between individuals and things, and between things
and presents among others:
Posted: January 13th, 2006 | 2 Comments »
Kellar, M. , Reilly, D., Hawkey, K., Rodgers, M., MacKay, B., Dearman, D., Ha, V., MacInnes, W.J., Nunes, M., Parker, K., Whalen, T. & Inkpen, K.M. (2005). It’s a Jungle Out There: Practical Considerations for Evaluation in the City. In Proceedings of the CHI 2005 Extended Abstracts, Portland, OR. 1533 – 1536. [poster]
This paper argues that although traditional methods of evaluating mobile and ubiquitous computing research may be difficult to apply in dynamic and unpredictable environments like cities, the challenges are surmountable and field research can be a crucial component of evaluation.
I am interested in the range of issues affecting experimental control and the ability to observe behaviors, because they highlight the external factors that impact both the research on and the adoption of mobile and ubiquitous computing technologies:
- Software: sessions interrupted due to bad Bluetooth connectivity; software failures prevented one participant pair from using the handheld at one point
- Hardware: battery power had to be carefully managed during the long study days.
- Weather: Bright sunlight made it difficult at times to view the handheld displays
- Audio and video: it was difficult to capture quality audio recordings due to background noise, which was in general far worse than that encountered during feasibility testing and pilots.
Posted: January 13th, 2006 | No Comments »
Walter van de Velde’s drawing of context: