Experience Design and the Intelligibility of Interfaces

February 16, 2010

[Image created by Timo Arnall and Jack Schulze]

As I noted in a post on Peter Morville’s Findability several years ago,

“Interfaces are not what they used to be. The computer-human interface is both more and less than it was a few years ago. Interfaces are not only, or even primarily, a screen anymore. Yet, screens remain important to most design efforts, even though interfaces are increasingly part of the environment itself. As John Thackara and Malcolm McCullough both recently pointed out, entire cities are developing into user interfaces as ubiquitous computing environments expand.”

Caleb, over at MobileBehavior, recently observed that mobile phones do not yet provide users with a graphic language for touch interactions. His post points to an early visualization of a standard graphic language offered by Timo Arnall of the Touch project, which researches near field communication. Caleb illustrates the point with the confusion consumers experience when faced with a visual tag (v-Tag), or 2D barcode, using a Weather Channel forecast that invites viewers to interact with a v-Tag on their mobile phones (wait until about 45 seconds into the video). The forecast fails to indicate to viewers what the v-Tag does.

The user experience team that developed the v-Tag for that forecast must have assumed viewers would know it represented an invitation to interact. However, a search of the Weather Channel website returns no information on the use of v-Tags in its media programming.

In a previous discussion of Dan Saffer’s book, Designing Gestural Interfaces, I made a similar point about mundane gestural interfaces in public bathrooms, a setting with fairly established graphic language conventions. Yet, even such mundane gestural interfaces can pose difficulty for users. As I noted,

I remember the first time, a few years ago, when I tried to get water flowing through a faucet in a public restroom that used sensor detection. Initially, it was not obvious to me how the faucet worked, and I suspect others continue to experience the same problem based on the photo I took during a recent visit to a physician’s office.

[Photo: sensor-activated faucet in a public restroom]

Among other observations, it is important to note that these examples illustrate why experience design encompasses user experience. Specifically, people only experience a user interaction if the interactive capability of an artifact is intelligible, i.e., if they recognize the artifact as an instance of that kind of thing, an invitation to interact with media or machinery. Who knows how many people noticed the adidas logo embedded in a v-Tag on their running shorts, or shoes, and failed to see it as an invitation to a user experience?

People can’t use an interface if it is not recognizable as such or, as the PalCom team coined it, palpable to their use. Otherwise, the invitation to experience, what Dan Saffer calls the attraction affordance, fails. Consider the more telling example of the symbol at the top of this post. It represents an RFID signal environment for devices using the Near Field Communication (NFC) standard. Indeed, Timo Arnall and Jack Schulze’s recent work for the Touch project demonstrates the spatial qualities of an RFID device’s signal, the shape of its readable volume.

Dan Saffer, in Designing Gestural Interfaces, touches on the fact that we are currently missing common symbols for indicating when an interactive system “is present in a space when it would otherwise be invisible,” or when we just wouldn’t recognize it as such. Adam Greenfield’s Everyware made a similar point a half decade ago.

Posted by Larry R. Irons



Visual Tags at DevLearn08

October 3, 2008

The eLearning industry is taking a step into visual tagging this fall with the eLearning Guild announcement that each attendee at DevLearn08 will receive a personalized QR code containing their contact information. As we noted in earlier posts,

Visual tagging is useful in creating social networks around products and events [such as DevLearn08], augmenting people’s experience with places, mobile learning, and transacting eCommerce at websites, among other potential uses.

DevLearn08 plans to use QR codes for its v-Tags; the conference’s video tutorials show how attendees will use them.
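A QR code carrying contact details is typically just a short structured string handed to a QR encoder. The sketch below builds such a payload in the MECARD convention, one common format for contact-info tags; the Guild’s actual payload format, and all names and values here, are assumptions for illustration.

```python
# Sketch: encoding attendee contact details as a MECARD string, one
# common convention for contact-info QR codes. The field values are
# hypothetical; a QR encoder would turn the payload into the tag image.

def mecard(name, phone, email):
    """Build a MECARD payload suitable for encoding in a QR code."""
    # MECARD fields are key:value pairs separated by semicolons,
    # with the whole record terminated by ";;".
    return f"MECARD:N:{name};TEL:{phone};EMAIL:{email};;"

payload = mecard("Doe,Jane", "+13145550100", "jane@example.com")
print(payload)
```

Each attendee’s badge would simply encode a different payload string.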



Call them Visual Tags (v-Tags), not 2D Barcodes

August 13, 2008

A v-Tag for Skilful Minds generated with the Google Chart API

If you think discussions of semantic value and meaning are pointless, with no relationship to technology adoption, you may want to skip this post.

We first discussed visual tags in 2006. Many people today refer to them as 2D barcodes. However, a crucial difference exists between what things are like and what they in fact are. Calling visual tags (v-Tags) 2D barcodes is like calling YouTube a video database, Flickr a photo database, or Del.icio.us a favorites list. Literally, the description is accurate. Functionally, it is meaningless.
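The caption above mentions that the Skilful Minds v-Tag was generated with the Google Chart API, which at the time produced QR images from a simple parameterized URL. The sketch below shows roughly how such a request URL is assembled; the endpoint and parameter names reflect that API as then documented, and the encoded URL is just an example.

```python
# Sketch: building a Google Chart API request for a QR-code v-Tag.
# Parameters: cht=qr selects the QR chart type, chs sets the image
# size in pixels, and chl carries the payload to encode.
from urllib.parse import urlencode

def vtag_url(data, size=150):
    params = {
        "cht": "qr",              # chart type: QR code
        "chs": f"{size}x{size}",  # image size in pixels
        "chl": data,              # the payload to encode
    }
    return "https://chart.apis.google.com/chart?" + urlencode(params)

print(vtag_url("http://skilfulminds.com"))
```

Requesting that URL returns a PNG of the tag, which can then be dropped into a page or printed.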


Is a Social Network on Your Foot?

August 7, 2008

The social networking capabilities of Web 2.0 technologies provide numerous opportunities for product and service providers to engage customers. Two interesting examples of companies reaching out to engage their customers come from the footwear industry, specifically Nike and adidas. Some of you may already know about these two examples. However, the difference in social networking strategy between the two is worth thinking about.


Mobile Learning and Visual Tags

August 4, 2008
eLearning Guild QR Visual Tag

We first discussed visual tags a couple of years ago as Web 2.0 technology initially emerged in mobile devices such as cell phones. Referring to two visual tagging techniques available at the time, we noted:

Shotcode and Semacode make mobile information seeking over the web work like scanning a bar code to determine the price of an item. They make offline media interactive. It is pure pull, unless you consider the offline advertising “pushy”. The metadata necessary for accessing relevant information is largely in the context, the embodied situation of the user.

Take a look at the following video for an overview of how visual tagging works; in this example, it is used for advertising services.

So, how does this relate to mobile learning, or m-Learning as the eLearning Guild refers to the practice?


Cell Phones, Semacodes, and Impulse Buying

September 26, 2007

Back in January 2006, in a discussion of Peter Morville’s Findability, we noted two innovative approaches to using the built-in digital cameras of mobile devices, like cell phones, to input URLs for locating web sites to retrieve information using offline visual tags. Specifically, we noted,

Shotcode and Semacode make mobile information seeking over the web work like scanning a bar code to determine the price of an item. They make offline media interactive. It is pure pull, unless you consider the offline advertising “pushy”. The metadata necessary for accessing relevant information is largely in the context, the embodied situation of the user. Consider the experience of walking down the sidewalk past a bus stop with large sign displays for a musical artist. You see the artist, you read the title to their new CD, pull out your mobile phone, and take a picture of a symbol on the sign to call up a rich media advertisement, or informational message, on the artist.

H&M has recently taken the technique to the next step in Europe. Impulse shoppers can use their cell phone to snap a picture of a semacode associated with a product, pull up a catalogue, and make the purchase by charging the item to their cell phone bill. The semacodes are used on posters and in magazine advertisements so the buyer does not need to provide information to the seller, in this case H&M.
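The "pure pull" flow underlying the H&M example can be sketched in a few lines: the phone decodes the tag to a URL and opens it, and no personal data flows to the seller until the shopper acts. Here `decode_semacode` merely stands in for the handset's tag reader, and the catalogue URL is illustrative, not H&M's.

```python
# Sketch of the "pure pull" flow: decode the photographed tag to a
# URL, then open that URL. The decoder is stubbed out; a real reader
# would locate and decode the Data Matrix symbol in the image.

def decode_semacode(image_bytes):
    """Stand-in for the handset's tag reader (returns a fixed URL)."""
    return "http://example.com/catalogue/summer-dress"

def on_snapshot(image_bytes):
    """What the handset does after the shopper snaps the tag."""
    url = decode_semacode(image_bytes)
    return url  # the phone's browser would then open this page

print(on_snapshot(b""))
```

The point of the design is that the context (the poster, the magazine page) supplies the metadata; the phone only pulls.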



Forget Tags and Folksonomies, Try Place-Based Stories

October 10, 2006

From the first time I heard the word folksonomy I really liked the concept. The idea of building metadata about places and things from the people who experience them really seems cool if you have an appreciation for sociability. However, I must say that the new twist on the concept offered by [murmur] provides people moving through places with thick descriptions rather than tagged information aggregated by collaborative filtering software. [murmur] is available in Toronto, San Jose, Vancouver, and Montreal. It is really an oral history project that allows you to access stories about places you pass through while you are there. But the basic concept is much more than oral history.


On Findability and Visual Tags

January 4, 2006

Interfaces are not what they used to be. The computer-human interface is both more and less than it was a few years ago. Interfaces are not only, or even primarily, a screen anymore. Yet, screens remain important to most design efforts, even though interfaces are increasingly part of the environment itself. As John Thackara and Malcolm McCullough both recently pointed out, entire cities are developing into user interfaces as ubiquitous computing environments expand.

Peter Morville has outlined one approach to the challenges posed by ubiquitous computing for people who need to go places or find things. He calls it “ambient findability”: “…a fast emerging world where we can find anyone or anything from anywhere at anytime” (p. 6).