Wednesday, January 07, 2009

ADRIAN ON SPACE & SOUND

SOUND IN ONLINE SPACE

This past year a couple of programs on echo-location grabbed my attention. First was the story of a boy who lost his eyes very early yet taught himself to "see" (http://www.youtube.com/watch?v=YBv79LKfMt4) well enough to skateboard, play basketball and have a fairly average childhood by using "clicking" sounds. If you haven't seen this, I'd recommend sparing the time. It is absolutely awe-inspiring and very moving.

Second was the recent Nat Geo program on echo-location, which claimed that humans have echo-location ability equal to that of dolphins, or even better than that of bats!

Like any good marketer, I took this news of newly discovered and untapped human potential as a prompt to wonder how we could better exploit and integrate sound into what we do. Actually, I'm not quite that sad: my friend Rohini asked me to write a companion piece to hers and Adam's for her blog.

Quite a bit has been written about how companies are using mnemonic cues in their advertising and retail environments, but one still untapped opportunity is the Web. If you think of sound on the Web, it's probably in conjunction with unfortunately chosen intro music on some cheesy site. That, along with the various boings, chirps, rocket noises, door slams, swooshes and tweets of many modern applications, has probably forced your default mode to mute. However, I think the echo-location metaphor suggests a different way to think about sound on the Web: as a navigation tool.

For example, volume levels could be used to indicate relative depth or distance from you, which would be useful when working with multiple windows in multiple layers. Tones could guide mouse movements, subtly reinforcing desired actions or warning about potentially dangerous ones. In the same way that a car gives audio cues about its operation (when to shift, whether everything is working well), I think it's pretty easy to see how sound, and a standard for sound, could be quite useful in a next-generation navigation scheme.
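To make the depth-to-volume idea concrete, here is a minimal sketch of how it might work. Everything here is illustrative: the function names and the "halve the volume per layer" falloff are my own assumptions, not anything the post specifies.

```typescript
// Illustrative sketch: map a window's stacking depth to a volume level,
// so windows "further back" sound quieter. The exponential falloff is an
// assumption; a real scheme would tune this to perception.

/** Gain in (0, 1] for a window at `depth` (0 = frontmost window). */
function depthToGain(depth: number, falloff = 0.5): number {
  if (depth < 0) throw new RangeError("depth must be >= 0");
  // Each layer back multiplies the volume by `falloff` (default: halves it).
  return Math.pow(falloff, depth);
}

// In a browser, one could wire this to a Web Audio API GainNode, roughly:
//   gainNode.gain.value = depthToGain(windowDepth);

console.log(depthToGain(0)); // → 1     (frontmost window, full volume)
console.log(depthToGain(1)); // → 0.5   (one layer back)
console.log(depthToGain(3)); // → 0.125 (three layers back)
```

An exponential curve rather than a linear one is a deliberate choice in this sketch: loudness perception is roughly logarithmic, so equal ratios between layers sound like equal steps.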

It's interesting to speculate on how many of the current computing idioms evolved to compensate (either consciously or not) for the lack of sound. Now that our hardware is capable, can sound transform how we use technology?

[ This is a contribution from Adrian Ho, a partner at Zeus Jones. http://www.zeusjones.com/ ]

3 comments:

Guy said...

This is a very interesting idea. The main issue would be what you note: users muting the sound, since sound isn't considered a necessity when computing. Imagine you've created the perfect aural interface component for your application. It's all for naught if some other application that the user wants to run all the time has annoying sounds.

The CSS specification handles this in part by instructing programmers to assume that any or all of their styles, including aural styles, may be ignored.

But it’s the coexistence of multiple applications that creates the issue here. One application can’t (yet) squelch other applications. How often have you unmuted to watch an online video, and had a sound from another application interfere? (Or from another web page you happen to have open?)

What we want users to be able to do is tell their computers or devices, "I'm in Application X now; don't disturb me with sounds from other applications unless it's Y or Z."
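As a rough sketch of what such a policy might look like (all the names and the shape of the API here are invented for illustration; no OS exposes this today):

```typescript
// Illustrative sketch of a per-application sound policy: the focused app
// always plays; other apps play only if the user has whitelisted them
// (the "unless it's Y or Z" part). All names here are hypothetical.

interface SoundPolicy {
  focusedApp: string;      // the app the user declared they are "in"
  allowlist: Set<string>;  // apps allowed to interrupt anyway
}

function mayPlaySound(policy: SoundPolicy, app: string): boolean {
  return app === policy.focusedApp || policy.allowlist.has(app);
}

const policy: SoundPolicy = {
  focusedApp: "ApplicationX",
  allowlist: new Set(["AppY", "AppZ"]),
};

console.log(mayPlaySound(policy, "ApplicationX")); // → true  (has focus)
console.log(mayPlaySound(policy, "AppY"));         // → true  (whitelisted)
console.log(mayPlaySound(policy, "NoisyApp"));     // → false (squelched)
```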

That may take a while. Let’s invent it.

Adrian said...

Yeah, right. With 8 cores and multithreading it would be like having people talking at you all at once - it would have to be something you can trigger when needed or ignore at other times.

Mystic Brain said...

Yes, this needs some sort of "disintegration" technology. Seriously, let's invent it?