
Tuesday, May 22, 2007

multi-touch + sound + haptics = groupthink?

A few things occurred to me during Jeff Han's multi-touch interface demo (see it here).

1) Add sound. During the part where he's zooming in and out of a cloud of dots and talking about data modeling, I wanted to hear whooshing sounds and changes in the background hum to give me additional information about my relative location within the 3D space.

2) Use sound to simulate haptic (touch) information. Can you fool the brain with sound? As you drag your fingers around on a virtual surface, if you represent changes in surface texture as changes in sound, do you kind of feel it? There's also efficiency: you can type really fast if you can touch-type, because you don't have to look at the screen to see where your hands are. What if instead you could "hear/feel" where they are? That keeps the eyes free for detail work. (There's a rough sketch of this texture-to-sound mapping after the list.)

3) What about eye-tracking, too? Hands can do one thing while eyes look somewhere else.

4) Multi-user. Multiple people could look at a screen and create something together: images and music based on where each of them is looking, certain data highlighted, while you also track what each person is doing with their hands. If you had enough processing power (how about multiple autonomous CPUs sending synced OSC data to a server that runs the display?) you could track a whole crowd's responses in real time. Latency would be kind of okay because it would make people focus on one spot until the changes began - might be less chaotic, too. (A sketch of that OSC setup is below.)
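
Here's a very rough sketch of what I mean in 2), in Python. Everything in it - the names, the numbers, the mapping itself - is just me making things up to illustrate the idea, not anything from the demo: sample the texture's roughness under the fingertip, combine it with how fast the finger is moving, and turn that into the loudness and brightness of a noise source.

    # Hypothetical mapping from finger motion over a virtual texture to audio
    # parameters, so you "hear" the surface you're dragging across.

    def texture_to_audio(roughness, speed, max_speed=2000.0):
        # roughness: 0.0 (smooth) to 1.0 (rough), sampled under the fingertip
        # speed: finger speed in pixels per second
        # returns (amplitude, cutoff_hz) for a noise source through a low-pass filter
        speed_norm = min(speed / max_speed, 1.0)
        amplitude = roughness * speed_norm        # rough surface + fast drag = louder
        cutoff_hz = 200.0 + 8000.0 * roughness    # rougher surface = brighter noise
        return amplitude, cutoff_hz

    # e.g. dragging quickly across a fairly rough patch
    amp, cutoff = texture_to_audio(roughness=0.7, speed=1200.0)
    print(f"noise amplitude {amp:.2f}, low-pass cutoff {cutoff:.0f} Hz")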
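
And a rough sketch of the plumbing for 4), assuming Python and the python-osc library (my pick for the example - the addresses, ports, and message format are all made up): each participant's machine sends its touch positions as OSC messages, and one server keeps the latest position per person to drive the shared display.

    # Hypothetical multi-user OSC setup: many client machines send touch data,
    # one server aggregates it for the shared display. Client and server would
    # really be separate programs; they're shown together here for brevity.

    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    # --- on each participant's machine ---
    client = SimpleUDPClient("192.168.0.10", 9000)   # display server's address (made up)

    def send_touch(user_id, x, y):
        # x, y: this user's fingertip position, normalized to 0..1
        client.send_message("/touch", [user_id, float(x), float(y)])

    # --- on the display server ---
    crowd = {}  # user_id -> latest (x, y), read by the rendering loop

    def on_touch(address, user_id, x, y):
        crowd[user_id] = (x, y)

    def run_server():
        dispatcher = Dispatcher()
        dispatcher.map("/touch", on_touch)
        BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()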


