Wednesday, May 04, 2011

SXSW 2011: People as Peripherals: The Future of Gesture Interfaces

"People as Peripherals". The title of this speech conveys unease about a future where humans are little more than input devices for our computer overlords. Not surprisingly, presenter Lee Shupp segued from gesture interfaces to brain implants, and from there to technological Singularity.

At first glance, gesture interfaces like Microsoft's Kinect, a motion-sensing controller for the Xbox 360, offer neither much to fear nor much to hope for. Current interfaces suffer from a case of "gorilla arm": you have to wave your arms vigorously in big, sweeping gestures to make yourself understood by the machine. You are also confined to a small square of space where you must stand so that the computer can capture your gestures correctly. Even then, recognition is all too often inaccurate, if Kinect is any indication. It's a long way from here to detecting micro-gestures, such as subtle finger movements.

Lee Shupp speaks about gestural interfaces, brain interfaces, and the Singularity. More pictures from SXSW 2011 are in my photo gallery.

Not unlike at a science fiction convention, the audience pointed out plenty of other problems gesture interfaces will have to solve before they can be seamlessly integrated into our lives. How would a gesture-driven plane cockpit respond if the pilot sneezes? How would such interfaces adjust for body-language differences between cultures? For example, in many Asian cultures it's considered extremely rude to point your foot at anyone. Never mind the bugs -- the potential of well-implemented gesture interfaces can be equally disturbing. A guy in the audience expressed a wish for an interface that understands sign language: he can sign much faster than he can type, and he'd like to "text" while driving without lifting his hands from the wheel. (For the sake of humanity, I sure hope that wish won't come true.)

But before we can even build sensors that understand sign language, there are more basic problems to solve. As a person in the audience pointed out, current interfaces require that you come to them: you are supposed to stand in front of the machine and wave your arms at it. That doesn't integrate well with our daily lives. However, I saw a Technology Review article, Talking to the Wall, about an experimental technology that lets you turn any wall in a building into a touch-sensitive surface. Now that surely has a few killer apps in it.

Lee Shupp's vision of transhumans. More pictures from SXSW 2011 are in my photo gallery.

Brain interfaces are still at a rudimentary stage too, says Lee Shupp. So far, brain implants haven't done much more than allow people to control a cursor on a computer screen. There are serious obstacles to their adoption: to connect a brain to a machine you have to drill holes in the skull, and sending thought commands requires concentration, which is hard to sustain in a multitasking world. Finally, Shupp asks, if people can't read people, how can computers? For that matter, if computers can read our brain signals, does that mean we can't lie anymore? To the audience member who asked that last question, Shupp recommended "The Truth Machine" by James Halperin, an SF novel that addresses it.

Despite these nontrivial problems, Shupp believes brain implants will take off; some 80,000 people worldwide already have them. An informal survey of the room showed that most people there expect we'll be using brain interfaces in 50 years. At some point brain implants will likely augment our intelligence, and we'll be on the road to the Singularity. And then, if this slide correctly reflects Shupp's vision of transhumans, we will spend our time with our brains plugged directly into simulated medieval worlds. Swords: the original gestural interfaces. ;-)

(Tangentially related, here is another take on the Singularity, in which the original popularizer of the concept, Vernor Vinge, discusses it with several science fiction writers.)
