As I noted in a post on Peter Morville’s Findability several years ago,
“Interfaces are not what they used to be. The computer-human interface is both more and less than it was a few years ago. Interfaces are not only, or even primarily, a screen anymore. Yet, screens remain important to most design efforts, even though interfaces are increasingly part of the environment itself. As John Thackara and Malcolm McCullough both recently pointed out, entire cities are developing into user interfaces as ubiquitous computing environments expand.”
Caleb, over at MobileBehavior, recently observed that mobile phones do not yet provide users with a graphic language for touch interactions. His post points to an early visualization of a standard graphic language offered by Timo Arnall of the Touch project, which researches near field communication. Caleb makes his point by discussing the confusion consumers experience when faced with a visual tag (v-tag), or 2D barcode, and illustrates it with the following Weather Channel forecast, which offers viewers an opportunity to interact with a v-tag using their mobile phones (wait until about 45 seconds into the video). The forecast never indicates to viewers what the v-tag does.
The user experience team that developed the v-tag for that particular forecast must have assumed viewers would recognize it as an invitation to interact. However, a search of the Weather Channel website returns no information on the use of v-tags in its media programming.
In a previous discussion of Dan Saffer’s book, Designing Gestural Interfaces, I made a similar point about mundane gestural interfaces in public bathrooms, a setting with fairly well-established graphic language conventions. Yet even such mundane gestural interfaces can pose difficulties for users. As I noted,
I remember the first time, a few years ago, when I tried to get water flowing through a faucet in a public restroom that used sensor detection. Initially, it was not obvious to me how the faucet worked, and I suspect others continue to experience the same problem based on the photo I took during a recent visit to a physician’s office.
Among other observations, these examples clearly illustrate why experience design encompasses user experience. Specifically, people only experience a user interaction if the interactive capability of an artifact is intelligible, that is, if they recognize the artifact as an instance of that kind of thing: an invitation to interact with media or machinery. Who knows how many people noticed the Adidas logo embedded in a v-tag on their running shorts or shoes and failed to see it as an invitation to a user experience?
People can’t use an interface if it isn’t recognizable as such or, to use the PalCom team’s term, palpable to their use. Otherwise, the invitation to experience, what Dan Saffer calls the attraction affordance, fails. Consider the more telling example of the symbol at the top of this post. It represents an RFID signal environment for devices using the Near Field Communication (NFC) standard. Indeed, Timo Arnall and Jack Schulze’s recent work for the Touch project demonstrates the spatial qualities of an RFID device’s signal, the shape of its readable volume.
Dan Saffer, in Designing Gestural Interfaces, touches on the fact that we currently lack common symbols for indicating when an interactive system “is present in a space when it would otherwise be invisible,” or when we simply wouldn’t recognize it as such. Adam Greenfield’s Everyware made a similar point a half decade ago.
Posted by Larry R. Irons