Dan Saffer’s recent book, Designing Gestural Interfaces, makes you think anew about the hand dryers and faucets in public restrooms that respond to waving hands. In fact, Dan notes that gestural interfaces are currently found in specialized products paired to specialized activities in specialized environments. As he observes,
Public restrooms are currently a great example of this, but other spaces could easily take on this sort of “hothouse” environment. The next likely place for such experimentation is kitchens: they feature lots of activities, plus a contained environment with tons of specialized equipment (pp. 160-161).
Designing Gestural Interfaces is the first attempt I’ve seen to provide an in-depth discussion of the challenges in designing devices that people control through gesturing. Although it isn’t the central point of the book, Dan discusses restroom interfaces that wet hands, dry hands, flush toilets, and dispense SaniSeats. One of his example photographs is captioned, “Apparently, public restrooms are excellent places to find gestural interfaces.”
I remember the first time, a few years ago, that I tried to get water flowing from a faucet in a public restroom that used sensor detection. It was not obvious to me how the faucet worked, and judging from the photo to the left, which I took during a recent visit to a physician’s office, others continue to experience the same problem.
Gestures as Embodied Interaction
Dan draws on the work of Paul Dourish for his overall conception of how to design the interaction involved in gestural interfaces: as instances of embodied interaction, with all the resources and constraints that come with a body acting. It is the body’s presence or absence, engagement or disengagement, movement or stillness, that gestural interfaces must sense in order to respond to activity. Without going into detail here, I’ll note that Don Norman makes a similar point in his recent book, The Design of Future Things, in relation to automation vs. augmentation.
Dan’s book offers a range of insights into the process of designing gestural interfaces that many experienced designers can use. He offers two chapters on patterns, providing a taxonomy of gestures currently in use for touchscreens and interactive surfaces, as well as patterns for free-form gesture controls. Dan summarizes patterns well:
Patterns are, by their nature, tropes that appear throughout many products…A pattern has to show up in multiple products over a period of time before it becomes established.
A previous post noted the importance of context to the design of ubiquitous computing interfaces that attempt to go beyond the desktop metaphor. Dan offers additional insights into that challenge. He notes that metaphors allow designers to make abstract concepts concrete and offers the following insights:
You can say, “Make your hand flat and hold it up and move it back and forth, left and right,” but it is much easier to say, “Wave.” This is true not only for simple gestures but also for complex ones, perhaps even more so. “Move your hand like you are stirring a pot” is considerably easier to describe than it would be otherwise…gestures on their own do contain meaning, and that meaning can become mixed up with another metaphor that you try to lay on top of it. For instance, if your “stirring the pot” gesture has nothing to do with mixing something (e.g., images, sounds, etc.), users may be confused by it, and certainly might have difficulty remembering it.
Dan provides a chapter each on documenting, prototyping, and communicating interactive gestures, and each chapter links to useful tools for designers working on that topic.
In the chapter on documenting interactive gestures, I found the discussion of storyboarding, particularly the use of swimlanes, especially valuable. Dan points out that the swimlane technique of storyboarding fits well with the approach initially offered in Lucy Suchman’s study of people interacting with copy machines. He notes,
Suchman suggested a way to document interactions with devices that showed four things: actions not available to the machine (e.g., those performed by the user), actions available to the machine (i.e., what it can do in response to human action), effects available to the user (i.e., the system feedback and the next possible steps), and the design rationale for these items.
I highly recommend reading Dan’s book if you are in the least curious about how to approach experience design for interfaces controlled by gestures.