On Findability and Visual Tags

Interfaces are not what they used to be. The computer-human interface is both more and less than it was a few years ago, and it is no longer only, or even primarily, a screen. Screens remain important to most design efforts, yet interfaces are increasingly part of the environment itself. As John Thackara and Malcolm McCullough both recently pointed out, entire cities are developing into user interfaces as ubiquitous computing environments expand.

Peter Morville has outlined one approach to the challenges ubiquitous computing poses for people who need to go places or find things. He calls it “ambient findability”: “…a fast emerging world where we can find anyone or anything from anywhere at anytime” (p. 6). Peter uses a “wayfinding” metaphor to develop his observations on the relationship between ubiquitous computing and user experience. He admits the limitations of the wayfinding, or navigational, metaphor, in the sense that the web is not in fact spatial. However, he contends that making things findable means classifying information using controlled vocabularies and developing ways to retrieve it. In other words, if there are no paths for retrieving what you want, or going where you want, then of course you cannot get lost. At my prompting, David Weinberger responded to a jibe by Peter in Findability about the limitations of folksonomies relative to taxonomies, or ontologies, and Peter appears to have softened his view a bit. Still, I think it is fair to say that the discussion in Findability strongly supports the use of controlled vocabularies of metadata for information retrieval.

Much of Peter’s discussion in Findability focuses on getting from here to there, or finding this or that, here or there. I was more interested in finding examples in Findability of information resources whose distribution methods make the criteria of ambient findability practicable. Google Earth and global positioning systems certainly provide access to information about who goes where and, potentially, what they do. Yet, to this point, neither solves the key issue of readily locating information about things in your immediate context or proximity. Until location awareness becomes a standard capability both in mobile devices and, perhaps using RFID, in physical locations, these services seem best suited to locating information about things in distant contexts. I’d appreciate hearing other people’s thoughts on this point.

I just received the most recent Springwise Newsletter, which provides a couple of interesting examples of approaches that work effectively in retrieving information about items in the user’s immediate context. Each approach uses the built-in digital camera of a mobile device, such as a cell phone, to capture an offline visual tag that supplies the URL of a web site where the relevant information can be retrieved. Shotcode and Semacode make mobile information seeking over the web work like scanning a bar code to determine the price of an item. They make offline media interactive. It is pure pull, unless you consider the offline advertising “pushy”. The metadata necessary for accessing relevant information is largely in the context, the embodied situation of the user. Consider the experience of walking down the sidewalk past a bus stop with large sign displays for a musical artist. You see the artist, you read the title of their new CD, pull out your mobile phone, and take a picture of a symbol on the sign to call up a rich media advertisement, or informational message, about the artist. Or consider a business card passed to you at a conference with the URL of the speaker’s blog printed on the back.
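To make the interaction concrete, here is a minimal sketch of that scan-to-URL flow. Shotcode and Semacode each use their own proprietary symbology, so this example substitutes an ordinary QR code decoded with the open-source pyzbar library; the file name and function are hypothetical, but the pattern is the same: the photographed symbol is the input, and the decoded payload is the URL.

```python
# Minimal sketch of the scan-to-URL flow: decode a visual tag captured by a
# phone camera and open the web resource it points to. A QR code stands in
# for the proprietary Shotcode/Semacode symbologies.
import webbrowser

from PIL import Image
from pyzbar.pyzbar import decode


def open_tagged_url(photo_path: str) -> None:
    """Decode the visual tag in a camera photo and open the encoded URL."""
    results = decode(Image.open(photo_path))   # list of decoded symbols, if any
    if not results:
        print("No visual tag found in the photo.")
        return
    url = results[0].data.decode("utf-8")      # the tag's payload is the URL itself
    print(f"Tag points to: {url}")
    webbrowser.open(url)                       # no typing needed to input the URL


if __name__ == "__main__":
    open_tagged_url("bus_stop_poster.jpg")     # hypothetical photo of the sign's symbol
```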

In each case, the symbol is a visual tag created for a specific URL, so the user does not have to find anything, and no typing is needed to input the URL. Think of a yellow pages book with a visual tag for each business: take a picture of the tag with your cell phone to retrieve the telephone number and to access additional marketing information, such as the menu of a restaurant.
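The publishing side of that yellow pages scenario is just as simple. The sketch below, again using a QR code as a stand-in for the commercial tag formats and the Python qrcode library, shows how a directory or business might generate a printable visual tag for its URL; the URL and file name are invented for illustration.

```python
# Sketch of the publishing side: encode a specific URL as a printable visual
# tag image. A QR code generated with the qrcode library stands in for the
# commercial tag formats mentioned in the post.
import qrcode


def make_visual_tag(url: str, out_path: str) -> None:
    """Encode a URL as a 2-D visual tag and save it as an image file."""
    img = qrcode.make(url)     # build the symbol that encodes the URL
    img.save(out_path)         # print this on the sign, card, or directory page
    print(f"Saved visual tag for {url} to {out_path}")


if __name__ == "__main__":
    # Hypothetical entry in a "yellow pages" of visual tags
    make_visual_tag("http://example-restaurant.example/menu", "restaurant_tag.png")
```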
