I have to admit, I'm charmed by the technology and demonstration of Perceptive Pixel's multi-point touch screen (shown earlier at TED and lately highlighted in Fast Company). I'm charmed, but I'd like to know a little more about the use cases. If you watch the TED video, or the new one produced for Fast Company (direct link here), you'll see that the applications being demoed are, in themselves, pretty neat: Google Earth, some 3D model editing, graph/tree layouts, and so on. In other parts of the demo, however, Jeff appears to be just shuffling stuff around on the screen (mostly pictures). I'd like to know how it performs with a reasonably complex interface (like SketchUp).
I think that Jeff Han is a visionary, and I'm excited to see where this innovation is heading. At the same time, I'd like to better understand the nature of the disruption it implies. Since it is a tactile interface (well, sort of), perhaps that question can only be answered through experience.