Experience Design and the Intelligibility of Interfaces

February 16, 2010

Created by Timo Arnall and Jack Schulze

As I noted in a post on Peter Morville’s Findability several years ago,

“Interfaces are not what they used to be. The computer-human interface is both more and less than it was a few years ago. Interfaces are not only, or even primarily, a screen anymore. Yet, screens remain important to most design efforts, even though interfaces are increasingly part of the environment itself. As John Thackara and Malcolm McCullough both recently pointed out, entire cities are developing into user interfaces as ubiquitous computing environments expand.”

Caleb, over at MobileBehavior, recently observed that mobile phones do not yet provide users with a graphic language for touch interactions. His post points to an early visualization of a standard graphic language offered by Timo Arnall of the Touch project, which researches near field communication. Caleb makes his point by discussing the confusion consumers experience when faced with a visual tag (v-tag), or 2D barcode, and does so with the following Weather Channel forecast, which offers viewers an opportunity to interact with a v-tag using their mobile phones (wait until about 45 seconds into the video). The forecast never indicates to viewers what the v-tag does.

The user experience team that developed the v-tag for that particular forecast must have assumed viewers would recognize it as an invitation to interact. However, a search of the Weather Channel website returns no information on the use of v-tags in their media programming.
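For context, the machine side of this transaction is the easy part. Below is a minimal sketch of decoding such a tag, assuming the pyzbar and Pillow Python libraries (with the system zbar library installed) and a hypothetical screenshot file, vtag.png, containing the on-screen symbol; none of this reflects the Weather Channel’s actual implementation.

```python
# Decode a 2D barcode (v-tag) from a screenshot of the broadcast.
# Assumes: pip install pyzbar pillow, plus the zbar shared library.
from PIL import Image
from pyzbar.pyzbar import decode

# "vtag.png" is a hypothetical screenshot containing the on-screen tag.
for symbol in decode(Image.open("vtag.png")):
    # The payload is machine-readable only; nothing in the printed
    # symbol tells a viewer what scanning it will actually do.
    print(symbol.type, symbol.data.decode("utf-8"))
```

The decoded payload is typically just a URL or a short string. The machine extracts it effortlessly; the viewer, who sees only an abstract pattern of squares, gets no such hint, which is exactly the intelligibility gap Caleb describes.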

In a previous discussion of Dan Saffer’s book, Designing Gestural Interfaces, I made a similar point about mundane gestural interfaces in public bathrooms, a setting with fairly established graphic language conventions. Yet, even such mundane gestural interfaces can pose difficulty for users. As I noted,

I remember the first time, a few years ago, when I tried to get water flowing through a faucet in a public restroom that used sensor detection. Initially, it was not obvious to me how the faucet worked, and I suspect others continue to experience the same problem based on the photo I took during a recent visit to a physician’s office.

[Photo: sensor-activated faucet, taken at a physician’s office]

Among other observations, it is important to note that these examples illustrate why experience design encompasses user experience. Specifically, people only experience a user interaction if the interactive capability of an artifact is intelligible, i.e., if they recognize the artifact as an instance of that kind of thing, an invitation to interact with media or machinery. Who knows how many people noticed the Adidas logo embedded in a v-tag on their running shorts, or shoes, and failed to see it as an invitation to a user experience?

People can’t use an interface if it is not recognizable as such or, to use the PalCom team’s term, palpable to their use. Otherwise, the invitation to experience, what Dan Saffer calls the attraction affordance, fails. Consider the more telling example of the symbol at the top of this post. It represents an RFID signal environment for devices using the Near Field Communication (NFC) standard. Indeed, Timo Arnall and Jack Schulze’s recent work for the Touch project demonstrates the spatial qualities of an RFID device’s signal, the shape of its readable volume.
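To make that invisible signal environment concrete, here is a minimal sketch of what a device does inside an NFC tag’s readable volume, assuming the nfcpy Python library and a contactless reader attached over USB; this illustrates the NFC standard in general, not the Touch project’s own tools.

```python
# Wait for an NFC tag to enter the reader's radio field and dump
# its NDEF records. Assumes: pip install nfcpy, plus a supported
# contactless reader on USB.
import nfc

def on_connect(tag):
    # This callback fires only once the tag physically crosses into
    # the signal's readable volume, the invisible shape that Arnall
    # and Schulze's work makes visible.
    if tag.ndef:
        for record in tag.ndef.records:
            print(record)
    return True  # keep the connection until the tag is removed

with nfc.ContactlessFrontend("usb") as clf:
    clf.connect(rdwr={"on-connect": on_connect})
```

Nothing in that exchange is perceivable by a person until a designer marks the readable volume with a symbol, which is the whole argument for a shared graphic language.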

Dan Saffer, in Designing Gestural Interfaces, touches on the fact that we are currently missing common symbols for indicating when an interactive system “is present in a space when it would otherwise be invisible,” or when we just wouldn’t recognize it as such. Adam Greenfield’s Everyware made a similar point a half decade ago.

Posted by Larry R. Irons


Future Home Interfaces: Beautiful Seams for Everyday Life

February 25, 2009

An earlier post, Metaphorical Refrigerators, Design, and Ubiquitous Computing, pointed to the need to go beyond the desktop metaphor in thinking about the design of interfaces in the connected home. The video below offers a clear example of the direction such a transformation in thinking about interfaces must take. It starts off slow, so give it some time to see the point. I particularly like the implicit control the user retains over most of the interactions, though the zany intelligent agent is a little far-fetched.


Posted by Larry R. Irons


Everyware, Findability, and AI (Part 3)

January 7, 2007

As Part 2 in this series indicated, my interest in ubiquitous computing started with the sort of issues raised by Lucy Suchman’s initial research on artificial intelligence applications, specifically expert systems. I’ve been waiting to read Lucy’s second edition of Plans and Situated Actions, titled Human-Machine Reconfigurations, before finishing this series of entries. It is an interesting read, and I think several themes introduced by Suchman’s most recent work nicely highlight the contributions of Adam Greenfield’s Everyware.

Everyware offers a number of interesting and provocative insights into the phenomenon of ubiquitous computing. The most sensible, and provocative, insight Greenfield offers concerns whether the design of ubiquitous computing should aim for seamless interaction with people using connected devices, or whether a rigorous focus is needed on making seamful interaction the guiding design practice.


Everyware, Findability, and AI (Part 2)

January 3, 2007

Part 1 promised that Part 2 would discuss Greenfield’s Everyware. Before we get to that discussion, however, a few considerations of Morville’s Ambient Findability are needed. The discussion of Morville’s book will make clear the contributions offered in Everyware.

Greenfield and Morville each express skepticism about the ability of artificial intelligence to solve basic problems related to ambient findability and Everyware, what Greenfield terms ambient informatics. As more and more ordinary devices become available for people to engage as they go about routine activities, the sheer challenge of finding the right device among those available to support an activity promises to become a significant hurdle. Both authors recognize the challenge. Yet, both fail to discuss straightforwardly the difficulties faced by attempts to manage relationships between connected devices.
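To give the findability problem a concrete shape, consider local service discovery, one current mechanism for locating connected devices on a network. The sketch below uses the python-zeroconf library to browse for devices advertising themselves over mDNS; it is my illustration, not a technique either author proposes.

```python
# Browse the local network for devices advertising HTTP services
# over mDNS/DNS-SD. Assumes: pip install zeroconf.
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class DeviceListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        addresses = info.parsed_addresses() if info else []
        print(f"Found device: {name} at {addresses}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"Device left: {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass  # required by the listener interface; nothing to do here

zc = Zeroconf()
browser = ServiceBrowser(zc, "_http._tcp.local.", DeviceListener())
input("Browsing for devices; press Enter to stop.\n")
zc.close()
```

Even so, discovery only lists whatever announces itself. Deciding which of those devices actually supports the activity at hand remains the harder problem both authors gesture toward without resolving.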


Everyware, Findability, and AI (Part 1)

December 17, 2006

I read Adam Greenfield’s Everyware in August of this year, but haven’t written anything about it yet. I like the book, a lot. It led me to think again about a number of issues that I kind of put to the side over the last two decades as I’ve made a living as a knowledge worker, i.e. methods analyst, technical writer, multimedia developer, Professor of Communication, web designer, human capital manager, e-Learning researcher, learning architect, customer experience designer. However, Adam’s book made an impression on me initially more because of things I experienced in the late 1980s and early 1990s than for its relevance today, though it is extremely relevant to today’s challenges in relating human experience to the ubiquitous nature of computing technology.